title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
Applying strptime function to pandas series | 38,797,854 | <p>I have a pandas Series that contains a string-formatted date in the form of:</p>
<p><code>2016-01-14 11:39:54</code></p>
<p>I would like to convert the string to a timestamp.</p>
<p>I am using the <code>apply</code> method to attempt to pass <code>datetime.strptime</code> to each element of the series:</p>
<p><code>date_series = date_string.apply(datetime.strptime, args=('%Y-%m-%d %H:%M:%S'))</code></p>
<p>When I run the code, I get the following error:</p>
<p><code>strptime() takes exactly 2 arguments (18 given)</code></p>
<p>My questions are: (1) am I taking the correct approach, and (2) why is <code>strptime</code> converting my args into 18 arguments?</p>
| 2 | 2016-08-05T21:15:20Z | 38,797,881 | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow"><code>pd.to_datetime</code></a>:</p>
<pre><code>date_series = pd.to_datetime(date_string)
</code></pre>
<p>In general it's best to have your dates as Pandas' <code>pd.Timestamp</code> instead of Python's <code>datetime.datetime</code> if you plan to do your work in Pandas. You may also want to review the <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html" rel="nofollow">Time Series / Date functionality documentation</a>.</p>
<p>As to why your <code>apply</code> isn't working, <code>args</code> isn't being read as a tuple, but rather as a string that's being broken up into 17 characters, each being interpreted as a separate argument. To make it be read as a tuple, add a comma: <code>args=('%Y-%m-%d %H:%M:%S',)</code>.</p>
<p>This is standard behaviour in Python. Consider the following example:</p>
<pre><code>x = ('a')   # parentheses alone do not make a tuple
y = ('a',)  # the trailing comma does
print('x info:', x, type(x))
print('y info:', y, type(y))

# Output:
# x info: a <class 'str'>
# y info: ('a',) <class 'tuple'>
</code></pre>
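<p>The "18 given" count can be reproduced with plain Python: without the trailing comma, the parenthesised format string is still a 17-character <code>str</code>, and unpacking it (which is effectively what happens when <code>args</code> is iterated) yields 17 extra arguments on top of the date string itself:</p>

```python
from datetime import datetime

fmt = ('%Y-%m-%d %H:%M:%S')   # no trailing comma: still a plain string
print(type(fmt).__name__, len(fmt))  # str 17

try:
    # the string unpacks into 17 one-character arguments,
    # plus the date string: 18 arguments in total
    datetime.strptime('2016-01-14 11:39:54', *fmt)
except TypeError as e:
    print('TypeError raised:', e)
```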
| 3 | 2016-08-05T21:18:00Z | [
"python",
"pandas",
"strptime"
] |
django - factory_boy AttributeError: 'NoneType' object has no attribute '_meta' | 38,797,868 | <p>I'm writing unit tests for my <code>Django REST Framework</code> app, and I'm creating my fake testing data using <a href="https://factoryboy.readthedocs.io/en/latest/index.html" rel="nofollow"><code>factory_boy</code></a>. I've come across the following error message when I try to run my tests:</p>
<pre><code>File "/Users/thomasheatwole/osf-meetings/meetings/conferences/tests.py", line 69, in setUp
contributor = UserFactory()
File "/Users/thomasheatwole/.virtualenvs/django/lib/python2.7/site-packages/factory/base.py", line 67, in __call__
return cls.create(**kwargs)
File "/Users/thomasheatwole/.virtualenvs/django/lib/python2.7/site-packages/factory/base.py", line 594, in create
return cls._generate(True, attrs)
File "/Users/thomasheatwole/.virtualenvs/django/lib/python2.7/site-packages/factory/base.py", line 519, in _generate
obj = cls._prepare(create, **attrs)
File "/Users/thomasheatwole/.virtualenvs/django/lib/python2.7/site-packages/factory/base.py", line 494, in _prepare
return cls._create(model_class, *args, **kwargs)
File "/Users/thomasheatwole/.virtualenvs/django/lib/python2.7/site-packages/factory/django.py", line 181, in _create
return manager.create(*args, **kwargs)
File "/Users/thomasheatwole/.virtualenvs/django/lib/python2.7/site-packages/django/db/models/manager.py", line 122, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/Users/thomasheatwole/.virtualenvs/django/lib/python2.7/site-packages/django/db/models/query.py", line 401, in create
obj.save(force_insert=True, using=self.db)
File "/Users/thomasheatwole/.virtualenvs/django/lib/python2.7/site-packages/django/db/models/base.py", line 708, in save
force_update=force_update, update_fields=update_fields)
File "/Users/thomasheatwole/.virtualenvs/django/lib/python2.7/site-packages/django/db/models/base.py", line 745, in save_base
update_fields=update_fields, raw=raw, using=using)
File "/Users/thomasheatwole/.virtualenvs/django/lib/python2.7/site-packages/django/dispatch/dispatcher.py", line 192, in send
response = receiver(signal=self, sender=sender, **named)
File "/Users/thomasheatwole/osf-meetings/meetings/submissions/signals.py", line 19, in add_permissions_on_submission_save
submission, submission_contributor, conference_admin, approval)
File "/Users/thomasheatwole/osf-meetings/meetings/submissions/permissions.py", line 167, in set_unapproved_submission_permissions
approval, submission_contributor)
File "/Users/thomasheatwole/osf-meetings/meetings/approvals/permissions.py", line 62, in add_approval_permissions_to_submission_contributor
assign_perm("approvals.delete_approval", submission_contributor, approval)
File "/Users/thomasheatwole/.virtualenvs/django/lib/python2.7/site-packages/guardian/shortcuts.py", line 92, in assign_perm
return model.objects.assign_perm(perm, user, obj)
File "/Users/thomasheatwole/.virtualenvs/django/lib/python2.7/site-packages/guardian/managers.py", line 43, in assign_perm
obj_perm, created = self.get_or_create(**kwargs)
File "/Users/thomasheatwole/.virtualenvs/django/lib/python2.7/site-packages/django/db/models/manager.py", line 122, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/Users/thomasheatwole/.virtualenvs/django/lib/python2.7/site-packages/django/db/models/query.py", line 467, in get_or_create
return self._create_object_from_params(lookup, params)
File "/Users/thomasheatwole/.virtualenvs/django/lib/python2.7/site-packages/django/db/models/query.py", line 499, in _create_object_from_params
obj = self.create(**params)
File "/Users/thomasheatwole/.virtualenvs/django/lib/python2.7/site-packages/django/db/models/query.py", line 401, in create
obj.save(force_insert=True, using=self.db)
File "/Users/thomasheatwole/.virtualenvs/django/lib/python2.7/site-packages/guardian/models.py", line 39, in save
content_type = ContentType.objects.get_for_model(self.content_object)
File "/Users/thomasheatwole/.virtualenvs/django/lib/python2.7/site-packages/django/contrib/contenttypes/models.py", line 55, in get_for_model
opts = self._get_opts(model, for_concrete_model)
File "/Users/thomasheatwole/.virtualenvs/django/lib/python2.7/site-packages/django/contrib/contenttypes/models.py", line 32, in _get_opts
model = model._meta.concrete_model
AttributeError: 'NoneType' object has no attribute '_meta'
</code></pre>
<p>I pretty much have no clue what's going on since my understanding of backend structure isn't great. Here are the factories:</p>
<pre><code>class UserFactory(factory.DjangoModelFactory):
    class Meta:
        model = User

class ConferenceFactory(factory.DjangoModelFactory):
    class Meta:
        model = Conference

class ApprovalFactory(factory.DjangoModelFactory):
    class Meta:
        model = approvalModels.Approval

class SubmissionFactory(factory.DjangoModelFactory):
    class Meta:
        model = submissionModels.Submission
</code></pre>
<p>And here's where I call them:</p>
<pre><code>def setUp(self):
    self.user1 = UserFactory(
        username = 'Leo',
        id = '99'
    )
    self.user2 = UserFactory(
        username = 'LeoLeo'
    )
    self.conference = ConferenceFactory(
        admin = self.user1
    )
    self.submission1 = SubmissionFactory(
        conference = self.conference,
        contributor = UserFactory()
    )
    self.submission2 = SubmissionFactory(
        conference = self.conference,
        contributor = UserFactory()
    )
</code></pre>
<p>If you look through the error message, it's specifically complaining about <code>contributor = UserFactory()</code></p>
<p>Let me know if there's an easy fix, or even some explanation of what's going on would be nice.</p>
<p>Thanks so much!</p>
<p>Here's the file:</p>
<p><a href="https://github.com/TomHeatwole/osf-meetings/blob/feature/conference-tests/meetings/conferences/tests.py" rel="nofollow">tests.py</a></p>
| 0 | 2016-08-05T21:16:27Z | 38,800,936 | <p>You need to add the correct relations (<code>SubFactory</code>) to the factory definitions first.</p>
<p>Please read this part carefully:</p>
<p><a href="http://factoryboy.readthedocs.io/en/latest/recipes.html#copying-fields-to-a-subfactory" rel="nofollow">http://factoryboy.readthedocs.io/en/latest/recipes.html#copying-fields-to-a-subfactory</a></p>
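<p>A sketch of what that could look like against the factories from the question (not runnable outside the asker's Django project; the <code>username</code> default is an illustrative assumption). With <code>SubFactory</code>, calling <code>SubmissionFactory()</code> builds its own related <code>User</code> and <code>Conference</code> unless you pass them in explicitly, so no related field is left as <code>None</code> when the save-signal handlers fire:</p>

```python
import factory

# Hypothetical sketch only -- assumes the asker's models are importable.
class UserFactory(factory.DjangoModelFactory):
    class Meta:
        model = User
    # illustrative default so each generated user is unique
    username = factory.Sequence(lambda n: 'user%d' % n)

class ConferenceFactory(factory.DjangoModelFactory):
    class Meta:
        model = Conference
    # if no admin is passed, build one via UserFactory
    admin = factory.SubFactory(UserFactory)

class SubmissionFactory(factory.DjangoModelFactory):
    class Meta:
        model = submissionModels.Submission
    conference = factory.SubFactory(ConferenceFactory)
    contributor = factory.SubFactory(UserFactory)
```

<p>Explicitly passed values (like <code>admin = self.user1</code> in the question's <code>setUp</code>) still override the <code>SubFactory</code> defaults.</p>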
| 0 | 2016-08-06T05:40:14Z | [
"python",
"django",
"django-rest-framework"
] |
Concurrently parse in-memory xml tree with lxml | 38,797,933 | <p>Say I have a program that looks like this:</p>
<pre><code>from lxml import etree

class ParseXmlFile(object):
    def __init__(self, xml_to_parse):
        self.xml = etree.parse(xml_to_parse)

    def a(self):
        return self.xml.xpath('//something')

    def b(self):
        return self.xml.xpath('//something-else')
</code></pre>
<p>lxml frees the GIL, so it should be possible to run <code>a</code> and <code>b</code> concurrently in separate threads or processes.</p>
<p>From the lxml docs:</p>
<blockquote>
<p>lxml frees the GIL (Python's global interpreter lock) internally when parsing from disk and memory...The global interpreter lock (GIL) in Python serializes access to the
interpreter, so if the majority of your processing is done in Python
code (walking trees, modifying elements, etc.), your gain will be
close to zero. The more of your XML processing moves into lxml,
however, the higher your gain. If your application is bound by XML
parsing and serialisation, or by very selective XPath expressions and
complex XSLTs, your speedup on multi-processor machines can be
substantial.</p>
</blockquote>
<p>I have done little to no work with multithreading.</p>
<p>Your run-of-the-mill multiprocessing implementation would use something like <code>multiprocessing.Pool().map()</code>, which seems to be of no use here since I have a list of functions and a single argument rather than a single function and a list of arguments. Attempting to wrap each function in another function and then multiprocess as described in one of the answers raises the following exception:</p>
<pre><code>cPickle.PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed
</code></pre>
<p>Is it possible to do what I'm describing? If so, how?</p>
| 0 | 2016-08-05T21:21:50Z | 38,797,995 | <p>Functions are data, so you can do something like this:</p>
<pre><code>from multiprocessing import Pool

def f1(xml):
    print "applying f1 to xml"

def f2(xml):
    print "applying f2 to xml"

if __name__ == '__main__':
    xml = "the xml"

    def applyf(f):
        f(xml)

    p = Pool(5)
    print(p.map(applyf, [f1, f2]))
</code></pre>
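<p>An alternative sketch (not from the answer above): since lxml frees the GIL during parsing and XPath evaluation, as the quoted docs say, plain threads can also give a real speedup here, and they sidestep pickling entirely because bound methods never leave the process. <code>Parser</code> below is a stand-in for the question's <code>ParseXmlFile</code>:</p>

```python
from multiprocessing.pool import ThreadPool

class Parser(object):
    # stand-in for the ParseXmlFile class in the question
    def a(self):
        return 'result-a'
    def b(self):
        return 'result-b'

def run_concurrently(parser):
    pool = ThreadPool(2)
    try:
        # threads share memory, so nothing needs to be pickled
        return pool.map(lambda f: f(), [parser.a, parser.b])
    finally:
        pool.close()
        pool.join()

print(run_concurrently(Parser()))  # ['result-a', 'result-b']
```

<p>With <code>multiprocessing</code> the bound methods and the lambda would both fail to pickle, which is exactly the error in the question.</p>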
| 1 | 2016-08-05T21:28:03Z | [
"python",
"multithreading",
"multiprocessing",
"lxml"
] |
Get the amplitude data from an mp3 audio file using Python | 38,797,934 | <p>I have an mp3 file and I want to plot the amplitude spectrum present in that audio sample.
I know that we can do this very easily if we have a wav file. There are a lot of Python packages available for handling the wav file format. However, I do not want to convert the file into wav format, store it, and then use it.
What I am trying to achieve is to get the amplitude of an mp3 file directly, and even if I have to convert it into wav format, the script should do it on the fly at runtime without actually storing the file in the database.
I know we can convert the file as follows:</p>
<pre><code>from pydub import AudioSegment
sound = AudioSegment.from_mp3("test.mp3")
sound.export("temp.wav", format="wav")
</code></pre>
<p>and it creates the temp.wav which it is supposed to, but can we just use the content without storing the actual file?</p>
| 0 | 2016-08-05T21:21:56Z | 38,798,166 | <p>MP3 is encoded audio (+ tags and other stuff). All you need to do is decode it using an MP3 decoder. The decoder will give you the whole audio data you need for further processing.</p>
<p>How to decode mp3? I am shocked there are so few available tools for Python. Although I found a good one in <a href="http://stackoverflow.com/a/18830427/2811109">this</a> question. It's called <a href="https://github.com/jiaaro/pydub" rel="nofollow">pydub</a> and I hope I can use a sample snippet from author (I updated it with more info from wiki):</p>
<pre><code>from pydub import AudioSegment
sound = AudioSegment.from_mp3("test.mp3")
# get raw audio data as a bytestring
raw_data = sound.raw_data
# get the frame rate
sample_rate = sound.frame_rate
# get amount of bytes contained in one sample
sample_size = sound.sample_width
# get channels
channels = sound.channels
</code></pre>
<p>Note that <code>raw_data</code> is already in memory at this point ;). Now it's up to you how you want to use the gathered data, but this module seems to give you everything you need.</p>
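<p>Once you have <code>raw_data</code>, <code>sample_width</code> tells you how to unpack it; for the common 16-bit case the stdlib <code>array</code> module is enough. A sketch with synthetic bytes standing in for <code>sound.raw_data</code>:</p>

```python
import array

# synthetic stand-in for sound.raw_data (sample_width == 2, i.e. signed 16-bit)
raw_data = array.array('h', [0, 1000, -1000, 32767]).tobytes()

samples = array.array('h', raw_data)   # decode bytes -> integer samples
peak = max(abs(s) for s in samples)    # simple amplitude measure
print(list(samples), peak)             # [0, 1000, -1000, 32767] 32767
```

<p>For stereo data the samples are interleaved per channel; <code>numpy.frombuffer</code> plus a reshape to <code>(-1, channels)</code> is the usual next step before plotting.</p>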
| 0 | 2016-08-05T21:45:02Z | [
"python",
"audio",
"matplotlib",
"mp3",
"pyaudio"
] |
Multiprocessing, what does pool.ready do? | 38,797,938 | <p>Suppose I have a pool with a few processes inside of a class that I use to do some processing, like this:</p>
<pre><code>class MyClass:
    def __init__(self):
        self.pool = Pool(processes = NUM_PROCESSES)
        self.pop = []
        self.finished = []

    def gen_pop(self):
        self.pop = [ self.pool.apply_async(Item.test, (Item(),)) for _ in range(NUM_PROCESSES) ]
        while (not self.check()):
            continue
        # Do some other stuff

    def check(self):
        self.finished = filter(lambda t: self.pop[t].ready(), range(NUM_PROCESSES))
        new_pop = []
        for f in self.finished:
            new_pop.append(self.pop[f].get(timeout = 1))
            self.pop[f] = None
        # Do some other stuff
</code></pre>
<p>When I run this code I get a <code>cPickle.PicklingError</code> which states that a <code><type 'function'></code> can't be pickled. What this tells me is that one of the <code>apply_async</code> functions has not returned yet so I am attempting to append a running function to another list. But this shouldn't be happening because all running calls should have been filtered out using the <code>ready()</code> function.</p>
<p>On a related note, the actual nature of the <code>Item</code> class is unimportant, but what is important is that at the top of my <code>Item.test</code> function I have a print statement which is supposed to fire for debugging purposes. However, that does not occur. This tells me that the function has been initiated but has not actually started execution.</p>
<p>So then, it appears that <code>ready()</code> does not actually tell me whether or not a call has finished execution or not. What exactly does <code>ready()</code> do and how should I edit my code so that I can filter out the processes that are still running?</p>
| 2 | 2016-08-05T21:22:22Z | 38,798,333 | <p>Multiprocessing uses the <code>pickle</code> module internally to pass data between processes,
so your data must be <em>picklable</em>. See <a href="https://docs.python.org/2/library/pickle.html#what-can-be-pickled-and-unpickled" rel="nofollow">the list of what is considered picklable</a>; an object's method is not in that list.<br>
To solve this quickly just use a wrapper function around the method:</p>
<pre><code>def wrap_item_test(item):
    return item.test()   # return the result so AsyncResult.get() receives it

class MyClass:
    def gen_pop(self):
        self.pop = [ self.pool.apply_async(wrap_item_test, (Item(),)) for _ in range(NUM_PROCESSES) ]
        while (not self.check()):
            continue
</code></pre>
| 3 | 2016-08-05T22:02:34Z | [
"python",
"multiprocessing",
"pool"
] |
Multiprocessing, what does pool.ready do? | 38,797,938 | <p>Suppose I have a pool with a few processes inside of a class that I use to do some processing, like this:</p>
<pre><code>class MyClass:
    def __init__(self):
        self.pool = Pool(processes = NUM_PROCESSES)
        self.pop = []
        self.finished = []

    def gen_pop(self):
        self.pop = [ self.pool.apply_async(Item.test, (Item(),)) for _ in range(NUM_PROCESSES) ]
        while (not self.check()):
            continue
        # Do some other stuff

    def check(self):
        self.finished = filter(lambda t: self.pop[t].ready(), range(NUM_PROCESSES))
        new_pop = []
        for f in self.finished:
            new_pop.append(self.pop[f].get(timeout = 1))
            self.pop[f] = None
        # Do some other stuff
</code></pre>
<p>When I run this code I get a <code>cPickle.PicklingError</code> which states that a <code><type 'function'></code> can't be pickled. What this tells me is that one of the <code>apply_async</code> functions has not returned yet so I am attempting to append a running function to another list. But this shouldn't be happening because all running calls should have been filtered out using the <code>ready()</code> function.</p>
<p>On a related note, the actual nature of the <code>Item</code> class is unimportant, but what is important is that at the top of my <code>Item.test</code> function I have a print statement which is supposed to fire for debugging purposes. However, that does not occur. This tells me that the function has been initiated but has not actually started execution.</p>
<p>So then, it appears that <code>ready()</code> does not actually tell me whether or not a call has finished execution or not. What exactly does <code>ready()</code> do and how should I edit my code so that I can filter out the processes that are still running?</p>
| 2 | 2016-08-05T21:22:22Z | 38,799,262 | <p>To answer the question you asked, <code>.ready()</code> is really telling you whether <code>.get()</code> <em>may</em> block: if <code>.ready()</code> returns <code>True</code>, <code>.get()</code> will <em>not</em> block, but if <code>.ready()</code> returns <code>False</code>, <code>.get()</code> <em>may</em> block (or it may not: quite possible the async call will complete before you get around to calling <code>.get()</code>).</p>
<p>So, e.g., the <code>timeout = 1</code> in your <code>.get()</code> serves no purpose: since you only call <code>.get()</code> if <code>.ready()</code> returned <code>True</code>, you already know for a fact that <code>.get()</code> won't block.</p>
<p>But <code>.get()</code> not blocking does <em>not</em> imply the async call was successful, or even that a worker process even started working on an async call: as the docs say,</p>
<blockquote>
<p>If the remote call raised an exception then that exception will be reraised by <code>get()</code>.</p>
</blockquote>
<p>That is, e.g., if the async call couldn't be performed <em>at all</em>, <code>.ready()</code> will return <code>True</code> and <code>.get()</code> will (re)raise the exception that prevented the attempt from working.</p>
<p>That appears to be what's happening in your case, although we have to guess because you didn't post runnable code, and didn't include the traceback.</p>
<p>Note that if what you really want to know is whether the async call completed normally, after already getting <code>True</code> back from <code>.ready()</code>, then <code>.successful()</code> is the method to call.</p>
<p>It's pretty clear that, whatever <code>Item.test</code> may be, it's flatly impossible to pass it as a callable to <code>.apply_async()</code>, due to pickle restrictions. That explains why <code>Item.test</code> never prints anything (it's never actually called!), why <code>.ready()</code> returns <code>True</code> (the <code>.apply_async()</code> call failed), and why <code>.get()</code> raises an exception (because <code>.apply_async()</code> encountered an exception while trying to pickle one of its arguments - probably <code>Item.test</code>).</p>
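<p>The <code>ready()</code>/<code>get()</code>/<code>successful()</code> lifecycle described above can be seen in a small self-contained sketch (a <code>ThreadPool</code> is used here so the callable needs no pickling; <code>work</code> is a stand-in task):</p>

```python
from multiprocessing.pool import ThreadPool
import time

def work(x):
    time.sleep(0.1)
    return x * 2

pool = ThreadPool(1)
res = pool.apply_async(work, (21,))
print(res.ready())   # almost certainly False: the call is still running
print(res.get())     # blocks until the call completes, then returns 42
print(res.ready(), res.successful())  # both True once the call has finished
pool.close()
pool.join()
```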
| 2 | 2016-08-06T00:00:57Z | [
"python",
"multiprocessing",
"pool"
] |
Creating pairs of objects in a list | 38,797,998 | <p>I have a list of objects:</p>
<pre><code>object_list = ['distance', 'alpha1', 'alpha2', 'gamma']
</code></pre>
<p>I want to obtain a new list with a pair combination of those objects, such as:</p>
<pre><code>new_list = [ ['distance', 'alpha1'], ['distance', 'alpha2'], ['distance', 'gamma'],['alpha1', 'alpha2'], [ 'alpha1', 'gamma'] ... ]
</code></pre>
<p>In general I will obtain 24 sublists (cases).</p>
| 0 | 2016-08-05T21:28:21Z | 38,798,051 | <p>Have a look at <a href="https://docs.python.org/3/library/itertools.html" rel="nofollow">itertools</a> - <a href="https://docs.python.org/3/library/itertools.html#itertools.combinations" rel="nofollow">itertools.combinations</a> seems to be the thing you are looking for. use like</p>
<pre><code>import itertools
object_list = ['distance', 'alpha1', 'alpha2', 'gamma']
new_list = list(itertools.combinations(object_list, 2))
</code></pre>
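<p>One small detail, since the question shows sublists rather than tuples: <code>itertools.combinations</code> yields tuples, so wrap each pair in <code>list</code> if the sublist form matters:</p>

```python
import itertools

object_list = ['distance', 'alpha1', 'alpha2', 'gamma']
new_list = [list(pair) for pair in itertools.combinations(object_list, 2)]
print(new_list[0])    # ['distance', 'alpha1']
print(len(new_list))  # C(4, 2) = 6 pairs for a 4-item list
```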
| 0 | 2016-08-05T21:34:03Z | [
"python"
] |
Creating pairs of objects in a list | 38,797,998 | <p>I have a list of objects:</p>
<pre><code>object_list = ['distance', 'alpha1', 'alpha2', 'gamma']
</code></pre>
<p>I want to obtain a new list with a pair combination of those objects, such as:</p>
<pre><code>new_list = [ ['distance', 'alpha1'], ['distance', 'alpha2'], ['distance', 'gamma'],['alpha1', 'alpha2'], [ 'alpha1', 'gamma'] ... ]
</code></pre>
<p>In general I will obtain 24 sublists (cases).</p>
| 0 | 2016-08-05T21:28:21Z | 38,798,085 | <p><a href="https://docs.python.org/dev/library/itertools.html#itertools.combinations" rel="nofollow">itertools.combinations</a> if order isn't important or <a href="https://docs.python.org/dev/library/itertools.html#itertools.permutations" rel="nofollow">itertools.permutations</a> if order matters</p>
<h2>itertools.combinations</h2>
<pre><code>>>> import itertools
>>> a = ['a', 'b', 'c']
>>> list(itertools.combinations(a, 2))
[('a', 'b'), ('a', 'c'), ('b', 'c')]  # Order isn't important
</code></pre>
<h2>itertools.permutations</h2>
<pre><code>>>> import itertools
>>> a = ['a', 'b', 'c']
>>> list(itertools.permutations(a, 2))
[('a', 'b'), ('a', 'c'), ('b', 'a'), ('b', 'c'), ('c', 'a'), ('c', 'b')]  # Order matters
</code></pre>
| 1 | 2016-08-05T21:37:51Z | [
"python"
] |
Translate growing recursion to iteration? | 38,798,004 | <p>I am writing a program for a process in Minecraft; it's supposed to edit the world and "clean it up" by replacing the blocks you can't see.</p>
<p>So the situation is a 3D world of cubic blocks. The program needs to identify a body of air and propagate in all directions from each block to see if there's more air touching it. I wrote a recursive function in Jython used in conjunction with MCEdit (Jython is basically a bridge between Java and Python).</p>
<p>The source of the problem is that each call creates 5 new ones.
An example function:</p>
<pre><code>def checkAir(coordinate):
    # check if there's air and, if so, add to a list
    for direction in directions:
        nextCoordinate = direction.increment(coordinate)
        checkAir(nextCoordinate)
</code></pre>
<p>The function is much more complicated in reality. Among other things, before moving on, it will make sure it doesn't go back to the coordinate it just came from, and it checks a list containing the coordinates of the air body to see if it's already there. If so, it will not make more recursive calls.</p>
<p>So the source of the problem is a RuntimeError: maximum recursion depth exceeded. AKA StackOverflow.</p>
<p>I want to know how I could write this program with a more iterative approach, to prevent the stack overflow error. If you don't know Python, I don't mind Java at all. I can translate it myself. Thanks for the help in advance!</p>
| 0 | 2016-08-05T21:28:52Z | 38,798,534 | <p>You could use data structures such as stacks to convert recursive algorithms into iterative ones:</p>
<pre><code>Stack<Object> stack = new Stack<>();
stack.push(first_object);
while( !stack.isEmpty() ) {
    my_object = stack.pop();
    // Do something with my_object.
    // Push other objects on the stack.
}
</code></pre>
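<p>Applied to the air-propagation problem from the question, the same idea in Python: an explicit stack plus a <code>seen</code> set replaces both the recursion and the "don't go back where you came from" bookkeeping. The world details here (<code>is_air</code>, the toy <code>air</code> set) are stand-ins:</p>

```python
# six axis-aligned neighbour offsets in a cubic block world
DIRECTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def collect_air(start, is_air):
    """Return every air block connected to `start`, without recursion."""
    seen = set()
    stack = [start]
    while stack:
        x, y, z = stack.pop()
        if (x, y, z) in seen or not is_air((x, y, z)):
            continue
        seen.add((x, y, z))
        for dx, dy, dz in DIRECTIONS:
            stack.append((x + dx, y + dy, z + dz))
    return seen

# toy world: three connected air blocks
air = {(0, 0, 0), (1, 0, 0), (0, 1, 0)}
print(collect_air((0, 0, 0), air.__contains__))  # all three connected air blocks
```

<p>The stack can grow large for big air bodies, but it lives on the heap rather than the call stack, so there is no recursion-depth limit to hit.</p>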
| 0 | 2016-08-05T22:26:33Z | [
"java",
"python",
"recursion"
] |
Fast query in formatted data | 38,798,033 | <p>In my program I need to query through metadata.</p>
<p>I read data into a <code>numpy</code> record array <code>A</code> from a CSV-like text file <strong>without duplicate rows</strong>.</p>
<pre><code>var1|var2|var3|var4|var5|var6
'a1'|'b1'|'c1'|1.2|2.2|3.4
'a1'|'b1'|'c4'|3.2|6.2|3.2
'a2'|''|'c1'|1.4|5.7|3.8
'a2'|'b1'|'c2'|1.2|2.2|3.4
'a3'|''|'c2'|1.2|2.2|3.4
'a1'|'b2'|'c4'|7.2|6.2|3.2
...
</code></pre>
<p>There are <strong>millions</strong> of rows and the query in nested loops can run up to a <strong>billion</strong> times (mostly matching the first 3 columns), so efficiency becomes critical.</p>
<p>There are 3 types of queries, and the first one is the most frequent.</p>
<ul>
<li><p>Get rows matching one or more of the first 3 columns with given strings, e.g.,</p>
<ul>
<li><p>To match a record where <code>var1='a2'</code> and <code>var2='b1'</code>, </p>
<pre><code> ind = np.logical_and(A['var1']=='a2', A['var2']=='b1')
</code></pre></li>
<li><p>To match a record where <code>var1='a2'</code>, <code>var2='b1'</code> and <code>var3='c1'</code>,</p>
<pre><code> ind = np.logical_and(np.logical_and(A['var1']=='a2', A['var2']=='b1'), A['var3']=='c1')
</code></pre></li>
</ul></li>
</ul>
<p>As one can see, each time we compare all elements of the columns with the given strings.</p>
<p>I thought mapping could be a more efficient way for indexing, so I converted the recarray <code>A</code> to a dict <code>D = {'var1_var2_var3': [var4, var5, var6], ...}</code>, and search through the keys by <code>fnmatch(keys, pat)</code>. I'm not sure it's a better way.</p>
<p>Or I can make a hierarchical dict <code>{'var1':{'var2':{'var3':[],...},...},...}</code> or in-memory hdf5 <code>/var1/var2/var3</code> and just try to get the item if it exists. This looks like the fastest way?</p>
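<p>For the frequent exact-match queries on the first three columns, the flat-dict idea can be sketched with tuple keys instead of joined strings, which avoids <code>fnmatch</code> entirely: build the index once, then each lookup is O(1) instead of a column scan (partial matches on fewer columns would still need secondary indexes or a scan):</p>

```python
from collections import defaultdict

# stand-in rows in the shape of the file above
rows = [
    ('a1', 'b1', 'c1', 1.2, 2.2, 3.4),
    ('a1', 'b1', 'c4', 3.2, 6.2, 3.2),
    ('a2', '',   'c1', 1.4, 5.7, 3.8),
]

index = defaultdict(list)
for row in rows:
    index[row[:3]].append(row[3:])   # key on the (var1, var2, var3) triple

print(index[('a1', 'b1', 'c1')])     # [(1.2, 2.2, 3.4)]
```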
<p>The latter two types of queries are not very frequent and I can accept the way of numpy recarray comparison.</p>
<ul>
<li><p>Get all rows where the numeric values in the latter columns are in a specific range, e.g.,</p>
<ul>
<li><p>get rows where <code>1 < var4 < 3</code> and <code>0 < var5 < 3</code>,</p>
<pre><code>ind = np.logical_and(np.logical_and(1 < A['var4'], A['var4'] < 3), np.logical_and(0 < A['var5'], A['var5'] < 3))
</code></pre></li>
</ul></li>
<li><p>A combination of the above two, e.g.,</p>
<ul>
<li><p>get rows where <code>var2='b1'</code>, <code>1 < var4 < 3</code> and <code>0 < var5 < 3</code>,</p>
<pre><code>ind = np.logical_and(A['var2']=='b1', np.logical_and(np.logical_and(1 < A['var4'], A['var4'] < 3), np.logical_and(0 < A['var5'], A['var5'] < 3)))
</code></pre></li>
</ul></li>
</ul>
<p><code>SQL</code> could be a good way but it looks too heavy to use database for this small task. And I don't have authority to install database support everywhere.</p>
<p>Any suggestions for data structures for fast in-memory queries? (If it is hard to have a simple custom implementation, <code>sqlite</code> and <code>pandas.DataFrame</code> seem to be possible solutions, as suggested.)</p>
| 2 | 2016-08-05T21:31:51Z | 38,799,154 | <p>Use <a href="http://pandas.pydata.org/" rel="nofollow">Pandas</a>, it is built for tasks like this:</p>
<pre><code># Import
import pandas as pd
# Read CSV
df = pd.read_csv('/path/to/file.csv')
# Selection criteria
# using `.query` method:
df.query('(var1 == "a2") & (var3 == "c1")')
df.query('(var2 == "b1") & (1 < var4 < 3) & (0 < var5 < 3)')
# using indexing:
df[(df['var1'] == 'a2') & (df['var3'] == 'c1')]
df[(df['var2'] == 'b1') & df['var4'].between(1,3) & df['var5'].between(0,3)]
# using `.where` method:
df.where((df['var1'] == 'a2') & (df['var3'] == 'c1'))
df.where((df['var2'] == 'b1') & df['var4'].between(1,3) & df['var5'].between(0,3))
</code></pre>
<p><a href="http://pandas.pydata.org/pandas-docs/version/0.18.1/indexing.html" rel="nofollow">More indexing and selecting information</a></p>
| 0 | 2016-08-05T23:42:17Z | [
"python",
"database",
"numpy",
"indexing",
"mapping"
] |
Fast query in formatted data | 38,798,033 | <p>In my program I need to query through metadata.</p>
<p>I read data into a <code>numpy</code> record array <code>A</code> from a CSV-like text file <strong>without duplicate rows</strong>.</p>
<pre><code>var1|var2|var3|var4|var5|var6
'a1'|'b1'|'c1'|1.2|2.2|3.4
'a1'|'b1'|'c4'|3.2|6.2|3.2
'a2'|''|'c1'|1.4|5.7|3.8
'a2'|'b1'|'c2'|1.2|2.2|3.4
'a3'|''|'c2'|1.2|2.2|3.4
'a1'|'b2'|'c4'|7.2|6.2|3.2
...
</code></pre>
<p>There are <strong>millions</strong> of rows and the query in nested loops can run up to a <strong>billion</strong> times (mostly matching the first 3 columns), so efficiency becomes critical.</p>
<p>There are 3 types of queries, and the first one is the most frequent.</p>
<ul>
<li><p>Get rows matching one or more of the first 3 columns with given strings, e.g.,</p>
<ul>
<li><p>To match a record where <code>var1='a2'</code> and <code>var2='b1'</code>, </p>
<pre><code> ind = np.logical_and(A['var1']=='a2', A['var2']=='b1')
</code></pre></li>
<li><p>To match a record where <code>var1='a2'</code>, <code>var2='b1'</code> and <code>var3='c1'</code>,</p>
<pre><code> ind = np.logical_and(np.logical_and(A['var1']=='a2', A['var2']=='b1'), A['var3']=='c1')
</code></pre></li>
</ul></li>
</ul>
<p>As one can see, each time we compare all elements of the columns with the given strings.</p>
<p>I thought mapping could be a more efficient way for indexing, so I converted the recarray <code>A</code> to a dict <code>D = {'var1_var2_var3': [var4, var5, var6], ...}</code>, and search through the keys by <code>fnmatch(keys, pat)</code>. I'm not sure it's a better way.</p>
<p>Or I can make a hierarchical dict <code>{'var1':{'var2':{'var3':[],...},...},...}</code> or in-memory hdf5 <code>/var1/var2/var3</code> and just try to get the item if it exists. This looks like the fastest way?</p>
<p>The latter two types of queries are not very frequent and I can accept the way of numpy recarray comparison.</p>
<ul>
<li><p>Get all rows where the numeric values in the latter columns are in a specific range, e.g.,</p>
<ul>
<li><p>get rows where <code>1 < var4 < 3</code> and <code>0 < var5 < 3</code>,</p>
<pre><code>ind = np.logical_and(np.logical_and(1 < A['var4'], A['var4'] < 3), np.logical_and(0 < A['var5'], A['var5'] < 3))
</code></pre></li>
</ul></li>
<li><p>A combination of the above two, e.g.,</p>
<ul>
<li><p>get rows where <code>var2='b1'</code>, <code>1 < var4 < 3</code> and <code>0 < var5 < 3</code>,</p>
<pre><code>ind = np.logical_and(A['var2']=='b1', np.logical_and(np.logical_and(1 < A['var4'], A['var4'] < 3), np.logical_and(0 < A['var5'], A['var5'] < 3)))
</code></pre></li>
</ul></li>
</ul>
<p><code>SQL</code> could be a good way but it looks too heavy to use database for this small task. And I don't have authority to install database support everywhere.</p>
<p>Any suggestions for data structures for fast in-memory queries? (If it is hard to have a simple custom implementation, <code>sqlite</code> and <code>pandas.DataFrame</code> seem to be possible solutions, as suggested.)</p>
| 2 | 2016-08-05T21:31:51Z | 38,808,983 | <p>With your file sample ('b' for py3)</p>
<pre><code>In [51]: txt=b"""var1|var2|var3|var4|var5|var6
...: 'a1'|'b1'|'c1'|1.2|2.2|3.4
...: 'a1'|'b1'|'c4'|3.2|6.2|3.2
...: 'a2'|''|'c1'|1.4|5.7|3.8
...: 'a2'|'b1'|'c2'|1.2|2.2|3.4
...: 'a3'|''|'c2'|1.2|2.2|3.4
...: 'a1'|'b2'|'c4'|7.2|6.2|3.2"""
</code></pre>
<p>A simple read leaves me with the double layer of quoting </p>
<pre><code>data = np.genfromtxt(txt.splitlines(), names=True, delimiter='|', dtype=None)
array([(b"'a1'", b"'b1'", b"'c1'", 1.2, 2.2, 3.4), ...
dtype=[('var1', 'S4'), ('var2', 'S4'), ('var3', 'S4'), ('var4', '<f8'), ('var5', '<f8'), ('var6', '<f8')])
</code></pre>
<p>So I'll define a converter to strip those (a <code>csv</code> reader might do it as well):</p>
<pre><code>def foo(astr):
    return eval(astr)
In [55]: A = np.genfromtxt(txt.splitlines(), names=True, delimiter='|', dtype='U3,U3,U3,f8,f8,f8', converters={0:foo,1:foo,2:foo})
In [56]: A
Out[56]:
array([('a1', 'b1', 'c1', 1.2, 2.2, 3.4),
('a1', 'b1', 'c4', 3.2, 6.2, 3.2),
('a2', '', 'c1', 1.4, 5.7, 3.8),
('a2', 'b1', 'c2', 1.2, 2.2, 3.4),
('a3', '', 'c2', 1.2, 2.2, 3.4),
('a1', 'b2', 'c4', 7.2, 6.2, 3.2)],
dtype=[('var1', '<U3'), ('var2', '<U3'), ('var3', '<U3'), ('var4', '<f8'), ('var5', '<f8'), ('var6', '<f8')])
</code></pre>
<p>and I can write tests like</p>
<pre><code>In [57]: (A['var1']=='a2')&(A['var2']=='b1')
Out[57]: array([False, False, False, True, False, False], dtype=bool)
In [58]: (1<A['var4'])&(A['var4']<3)
Out[58]: array([ True, False, True, True, True, False], dtype=bool)
</code></pre>
<p>The tests over all records of <code>A</code> are being done in compile <code>numpy</code> code, so they shouldn't be that slow. </p>
<p>This data could also be viewed as 2 multicolumn fields</p>
<pre><code>In [59]: dt = np.dtype([('labels', '<U3', (3,)), ('data', '<f8', (3,))])
In [60]: A1 = A.view(dt)
In [61]: A1
Out[61]:
array([(['a1', 'b1', 'c1'], [1.2, 2.2, 3.4]),
(['a1', 'b1', 'c4'], [3.2, 6.2, 3.2]),
(['a2', '', 'c1'], [1.4, 5.7, 3.8]),
(['a2', 'b1', 'c2'], [1.2, 2.2, 3.4]),
(['a3', '', 'c2'], [1.2, 2.2, 3.4]),
(['a1', 'b2', 'c4'], [7.2, 6.2, 3.2])],
dtype=[('labels', '<U3', (3,)), ('data', '<f8', (3,))])
</code></pre>
<p>Or loaded directly with</p>
<pre><code>A = np.genfromtxt(txt.splitlines(), skip_header=1, delimiter='|', dtype='(3)U3,(3)f8', converters={0:foo,1:foo,2:foo})
</code></pre>
<p>Then tests could be written as:</p>
<pre><code>In [64]: (A1['labels'][:,0]=='a1') & (A1['labels'][:,1]=='b2') & ((A1['data']<6).any(axis=1))
Out[64]: array([False, False, False, False, False, True], dtype=bool)
In [65]: (A1['labels'][:,[0,1]]==['a1','b2']).all(axis=1)
Out[65]: array([False, False, False, False, False, True], dtype=bool)
</code></pre>
<p>Sometimes it might be clearer to give individual columns their own id:</p>
<pre><code>var1 = A1['labels'][:,0] # or A['var1']
....
(var1=='a1')&(var2='b1')&...
</code></pre>
<p>Repeated queries, or combinations of them, could be saved and reused.</p>
<p>I believe <code>pandas</code> stores its series in <code>numpy</code> arrays, with a different dtype for each column (and object dtype if types vary within a column). But I haven't seen discussion of <code>pandas</code> speed and speed tricks. I don't expect much speed improvement unless it provides for some sort of indexing.</p>
<p>I can imagine writing this data to a database. <code>sqlite3</code> is builtin and has a <code>memory</code> mode so you don't need file access. But I'm sufficiently out of practice with that code that I'll pass on demonstrating it. Nor do I have a sense of how easy or fast it is to do these sorts of queries.</p>
<p><a href="https://mail.scipy.org/pipermail/scipy-user/2007-August/013350.html" rel="nofollow">https://mail.scipy.org/pipermail/scipy-user/2007-August/013350.html</a> has some code that can save a structured array to a sqlite3 database. It includes a function that converts a <code>dtype</code> into a table creation statement.</p>
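<p>A minimal in-memory sketch of that route (the table and column names are mine, and the structured array is trimmed to three fields for brevity):</p>

```python
import sqlite3
import numpy as np

# trimmed version of the structured array built above
A = np.array([('a1', 'b1', 1.2), ('a2', '', 1.4), ('a1', 'b2', 7.2)],
             dtype=[('var1', 'U3'), ('var2', 'U3'), ('var4', 'f8')])

con = sqlite3.connect(':memory:')                 # memory mode: no file access needed
con.execute('CREATE TABLE data (var1 TEXT, var2 TEXT, var4 REAL)')
# tolist() converts the records to native Python tuples, which sqlite3 accepts
con.executemany('INSERT INTO data VALUES (?, ?, ?)', A.tolist())

rows = con.execute("SELECT var4 FROM data WHERE var1 = 'a1' ORDER BY rowid").fetchall()
print(rows)  # -> [(1.2,), (7.2,)]
con.close()
```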
<p>====================</p>
<p>I've got that <code>pipermail</code> example working with <code>python3</code>. The test example has 11 fields. With 5000 records, </p>
<pre><code>data[np.where(data['id']=='id2000')]
</code></pre>
<p>is 6x faster than a corresponding <code>sqlite3</code> query (with an existing <code>cursor</code>):</p>
<pre><code>cursor.execute('select * from data where id=?',('id2000',))
cursor.fetchone()
</code></pre>
| 0 | 2016-08-06T21:45:17Z | [
"python",
"database",
"numpy",
"indexing",
"mapping"
] |
drop first and last row from within each group | 38,798,058 | <p>This is a follow up question to <a href="http://stackoverflow.com/q/38797271/2336654">get first and last values in a groupby</a></p>
<p>How do I drop first and last rows within each group?</p>
<p>I have this <code>df</code></p>
<pre><code>df = pd.DataFrame(np.arange(20).reshape(10, -1),
[['a', 'a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'd'],
['a', 'a', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']],
['X', 'Y'])
df
</code></pre>
<p>I intentionally made the second row have the same index value as the first row. I won't have control over the uniqueness of the index.</p>
<pre><code> X Y
a a 0 1
a 2 3
c 4 5
d 6 7
b e 8 9
f 10 11
g 12 13
c h 14 15
i 16 17
d j 18 19
</code></pre>
<p>I want this</p>
<pre><code> X Y
a b 2.0 3
c 4.0 5
b f 10.0 11
</code></pre>
<p>Because both groups at level 0 equal to 'c' and 'd' have less than 3 rows, all rows should be dropped.</p>
| 3 | 2016-08-05T21:34:37Z | 38,798,076 | <p>I'd apply a similar technique to what I did for the other question:</p>
<pre><code>def first_last(df):
return df.ix[1:-1]
df.groupby(level=0, group_keys=False).apply(first_last)
</code></pre>
<p><a href="http://i.stack.imgur.com/AJZLH.png" rel="nofollow"><img src="http://i.stack.imgur.com/AJZLH.png" alt="enter image description here"></a></p>
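<p>For later pandas versions, where <code>.ix</code> is removed, the same idea works with positional <code>.iloc</code>; a minimal runnable sketch (the function name is mine):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(20).reshape(10, -1),
                  [['a', 'a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'd'],
                   ['a', 'a', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']],
                  ['X', 'Y'])

def drop_first_last(g):
    # purely positional, so duplicate index labels within a group are harmless;
    # groups with fewer than 3 rows slice down to empty and drop out entirely
    return g.iloc[1:-1]

result = df.groupby(level=0, group_keys=False).apply(drop_first_last)
print(result)
```

<p>Groups <code>c</code> and <code>d</code> have fewer than 3 rows, so nothing from them survives.</p>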
| 3 | 2016-08-05T21:36:45Z | [
"python",
"pandas",
"group-by"
] |
Rasterising a TTF font | 38,798,080 | <p>I'm on the Raspberry Pi with a screen attached.</p>
<p>Rather than using X, I'm writing pixel data directly to the frame buffer. I've been able to draw images and primitive shapes, blend, use double buffering, etc...</p>
<p>Where I'm hitting a problem is drawing text. The screen is just a byte array from this level, so I need a way to take the font, size, text, etc. and convert it into a bitmap (actually, a <code>bool[]</code> and <code>width</code>/<code>height</code> would be preferable as it saves additional read/writes.</p>
<p>I have no idea how to approach this.</p>
<p>Things I've considered so far...</p>
<ul>
<li>Using a fixed-width font and an atlas/spritemap. Should work, I can already read images, however monospaced fonts have limited visual appeal. Also means adding more fonts is arduous.</li>
<li>Using a fixed-width font, an atlas and a mask to indicate where each character is. Would support variable-width fonts, however, scaling would be lossy and it seems like a maintenance nightmare unless I can automate the atlas/mask generation.</li>
</ul>
<p>Has anyone managed to do anything like this before?</p>
<p>If a library is required, I can live with that but as this is more an exercise in understanding my Pi than it is a serious project, I'd prefer an explanation/tutorial.</p>
| 0 | 2016-08-05T21:37:09Z | 38,798,177 | <p>Consider using the <a href="https://cairographics.org/" rel="nofollow">Cairo</a> graphics library, either for all your graphics, or as a tool to generate the font atlases. Cairo has extensive support for rendering fonts using TTF fonts, as well as for other useful graphics operations.</p>
<p>At a lower level, you could also use the <a href="https://www.freetype.org/" rel="nofollow">Freetype</a> library to load fonts and render characters from them directly. It's more difficult to work with, though.</p>
| 0 | 2016-08-05T21:46:15Z | [
"python",
"python-3.x",
"fonts",
"true-type-fonts",
"rasterizing"
] |
Python/Pandas - creating new variable based on several variables and if/elif/else function | 38,798,115 | <p>I am trying to create a new variable that is conditional based on values from several other values. I'm writing here because I've tried writing this as a nested ifelse() statement in R, but it had too many nested ifelse's so it threw an error, and I think there should be an easier way to sort this out in Python. </p>
<p>I have a dataframe (called df) that looks roughly like this (although in reality it's much bigger with many more month/year variables) that I've read in as a pandas DataFrame: </p>
<pre><code> ID Sept_2015 Oct_2015 Nov_2015 Dec_2015 Jan_2016 Feb_2016 Mar_2016 \
0 1 0 0 0 0 1 1 1
1 2 0 0 0 0 0 0 0
2 3 0 0 0 0 1 1 1
3 4 0 0 0 0 0 0 0
4 5 1 1 1 1 1 1 1
grad_time
0 240
1 218
2 236
3 0
4 206
</code></pre>
<p>I'm trying to create a new variable that depends on values from all these variables, but values from "earlier" variables need to have precedent, so the if/elif/else condition would like something like this:</p>
<pre><code>if df['Sept_2015'] > 0 & df['grad_time'] <= 236:
return 236
elif df['Oct_2015'] > 0 & df['grad_time'] <= 237:
return 237
elif df['Nov_2015'] > 0 & df['grad_time'] <= 238:
return 238
elif df['Dec_2015'] > 0 & df['grad_time'] <= 239:
return 239
elif df['Jan_2016'] > 0 & df['grad_time'] <= 240:
return 240
elif df['Feb_2016'] > 0 & df['grad_time'] <= 241:
return 241
elif df['Mar_2016'] > 0 & df['grad_time'] <= 242:
return 242
else:
return 0
</code></pre>
<p>And based on this, I'd like it to return a new variable that looks like this:</p>
<pre><code> trisk
0 240
1 0
2 240
3 0
4 236
</code></pre>
<p>I've tried writing a function like this:</p>
<pre><code>def test_func(df):
""" Test Function for generating new value"""
if df['Sept_2015'] > 0 & df['grad_time'] <= 236:
return 236
elif df['Oct_2015'] > 0 & df['grad_time'] <= 237:
return 237
...
else:
return 0
</code></pre>
<p>and mapping it to the dataframe to create new variable like this:</p>
<pre><code>new_df = pd.DataFrame(map(test_func, df))
</code></pre>
<p>However, when I run it, I get the following TypeError</p>
<pre><code> Traceback (most recent call last):
File "<ipython-input-83-19b45bcda45a>", line 1, in <module>
new_df = pd.DataFrame(map(new_func, test_df))
File "<ipython-input-82-a2eb6f9d7a3a>", line 3, in new_func
if df['Sept_2015'] > 0 & df['grad_time'] <= 236:
TypeError: string indices must be integers, not str
</code></pre>
<p>So I can see it's not wanting the column name here. But I've tried this a number of other ways and can't get it to work. Also, I understand this might not be the best way to write this (mapping the function) so I am open to new ways to attempt to solve the problem of generating the trisk variable. Thanks in advance and apologies if I haven't provided something. </p>
| 3 | 2016-08-05T21:40:20Z | 38,798,255 | <h3>Setup</h3>
<pre><code>df = pd.DataFrame([[0, 0, 0, 0, 1, 1, 1, 240],
[0, 0, 0, 0, 0, 0, 0, 218],
[0, 0, 0, 0, 1, 1, 1, 236],
[0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 206]],
pd.Index(range(1, 6), name='ID'),
['Sept_2015', 'Oct_2015', 'Nov_2015', 'Dec_2015',
'Jan_2016', 'Feb_2016', 'Mar_2016', 'grad_time'])
</code></pre>
<p>I used mostly numpy for this</p>
<pre><code>a = np.array([236, 237, 238, 239, 240, 241, 242])
b = df.values[:, :-1]
g = df.values[:, -1][:, None] <= a
a[(b & g).argmax(1)] * (b & g).any(1)
</code></pre>
<p>Assigning it to new column</p>
<pre><code>df['trisk'] = a[(b != 0).argmax(1)] * (b != 0).any(1)
df
</code></pre>
<p><a href="http://i.stack.imgur.com/io5ha.png" rel="nofollow"><img src="http://i.stack.imgur.com/io5ha.png" alt="enter image description here"></a></p>
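<p>The same logic can be checked end to end without pandas; a standalone numpy sketch of the month/threshold masking, with the sample data copied from the question:</p>

```python
import numpy as np

# month indicator columns Sept_2015 .. Mar_2016 and grad_time, per the question
b = np.array([[0, 0, 0, 0, 1, 1, 1],
              [0, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1, 1],
              [0, 0, 0, 0, 0, 0, 0],
              [1, 1, 1, 1, 1, 1, 1]])
grad_time = np.array([240, 218, 236, 0, 206])

a = np.array([236, 237, 238, 239, 240, 241, 242])  # per-month return values
g = grad_time[:, None] <= a                        # grad_time condition per month
hit = b.astype(bool) & g                           # month flag AND threshold met
trisk = a[hit.argmax(1)] * hit.any(1)              # first hit per row, else 0
print(trisk.tolist())  # -> [240, 0, 240, 0, 236]
```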
| 3 | 2016-08-05T21:54:42Z | [
"python",
"pandas",
"numpy"
] |
Python/Pandas - creating new variable based on several variables and if/elif/else function | 38,798,115 | <p>I am trying to create a new variable that is conditional based on values from several other values. I'm writing here because I've tried writing this as a nested ifelse() statement in R, but it had too many nested ifelse's so it threw an error, and I think there should be an easier way to sort this out in Python. </p>
<p>I have a dataframe (called df) that looks roughly like this (although in reality it's much bigger with many more month/year variables) that I've read in as a pandas DataFrame: </p>
<pre><code> ID Sept_2015 Oct_2015 Nov_2015 Dec_2015 Jan_2016 Feb_2016 Mar_2016 \
0 1 0 0 0 0 1 1 1
1 2 0 0 0 0 0 0 0
2 3 0 0 0 0 1 1 1
3 4 0 0 0 0 0 0 0
4 5 1 1 1 1 1 1 1
grad_time
0 240
1 218
2 236
3 0
4 206
</code></pre>
<p>I'm trying to create a new variable that depends on values from all these variables, but values from "earlier" variables need to have precedent, so the if/elif/else condition would like something like this:</p>
<pre><code>if df['Sept_2015'] > 0 & df['grad_time'] <= 236:
return 236
elif df['Oct_2015'] > 0 & df['grad_time'] <= 237:
return 237
elif df['Nov_2015'] > 0 & df['grad_time'] <= 238:
return 238
elif df['Dec_2015'] > 0 & df['grad_time'] <= 239:
return 239
elif df['Jan_2016'] > 0 & df['grad_time'] <= 240:
return 240
elif df['Feb_2016'] > 0 & df['grad_time'] <= 241:
return 241
elif df['Mar_2016'] > 0 & df['grad_time'] <= 242:
return 242
else:
return 0
</code></pre>
<p>And based on this, I'd like it to return a new variable that looks like this:</p>
<pre><code> trisk
0 240
1 0
2 240
3 0
4 236
</code></pre>
<p>I've tried writing a function like this:</p>
<pre><code>def test_func(df):
""" Test Function for generating new value"""
if df['Sept_2015'] > 0 & df['grad_time'] <= 236:
return 236
elif df['Oct_2015'] > 0 & df['grad_time'] <= 237:
return 237
...
else:
return 0
</code></pre>
<p>and mapping it to the dataframe to create new variable like this:</p>
<pre><code>new_df = pd.DataFrame(map(test_func, df))
</code></pre>
<p>However, when I run it, I get the following TypeError</p>
<pre><code> Traceback (most recent call last):
File "<ipython-input-83-19b45bcda45a>", line 1, in <module>
new_df = pd.DataFrame(map(new_func, test_df))
File "<ipython-input-82-a2eb6f9d7a3a>", line 3, in new_func
if df['Sept_2015'] > 0 & df['grad_time'] <= 236:
TypeError: string indices must be integers, not str
</code></pre>
<p>So I can see it's not wanting the column name here. But I've tried this a number of other ways and can't get it to work. Also, I understand this might not be the best way to write this (mapping the function) so I am open to new ways to attempt to solve the problem of generating the trisk variable. Thanks in advance and apologies if I haven't provided something. </p>
| 3 | 2016-08-05T21:40:20Z | 38,798,340 | <p>Without getting into streamlining your logic (which @piRSquared gets into): you can apply your <code>test_func</code> to the rows by issuing <code>.apply(test_func, axis=1)</code> to your dataframe.</p>
<pre><code>import io
import pandas as pd
data = io.StringIO('''\
ID Sept_2015 Oct_2015 Nov_2015 Dec_2015 Jan_2016 Feb_2016 Mar_2016 grad_time
0 1 0 0 0 0 1 1 1 240
1 2 0 0 0 0 0 0 0 218
2 3 0 0 0 0 1 1 1 236
3 4 0 0 0 0 0 0 0 0
4 5 1 1 1 1 1 1 1 206
''')
df = pd.read_csv(data, delim_whitespace=True)
def test_func(df):
""" Test Function for generating new value"""
    # parentheses are required here: & binds more tightly than the comparisons
    if (df['Sept_2015'] > 0) & (df['grad_time'] <= 236):
        return 236
    elif (df['Oct_2015'] > 0) & (df['grad_time'] <= 237):
        return 237
    elif (df['Nov_2015'] > 0) & (df['grad_time'] <= 238):
        return 238
    elif (df['Dec_2015'] > 0) & (df['grad_time'] <= 239):
        return 239
    elif (df['Jan_2016'] > 0) & (df['grad_time'] <= 240):
        return 240
    elif (df['Feb_2016'] > 0) & (df['grad_time'] <= 241):
        return 241
    elif (df['Mar_2016'] > 0) & (df['grad_time'] <= 242):
        return 242
    else:
        return 0
trisk = df.apply(test_func, axis=1)
trisk.name = 'trisk'
print(trisk)
</code></pre>
<p>Output:</p>
<pre><code>0 240
1 0
2 240
3 0
4 236
Name: trisk, dtype: int64
</code></pre>
| 2 | 2016-08-05T22:03:15Z | [
"python",
"pandas",
"numpy"
] |
Python packet sniffer using ctypes crashes when copying socket buffer | 38,798,135 | <p>I'm trying to capture the first 20 bytes (full packet minus the options) of an IP packet, populate a <code>struct</code> of <code>ctype</code> members, and print the information I want to the screen (protocol, source and destination address). My IP class is as follows:</p>
<pre><code>import socket
import os
import struct
from ctypes import *
# host to listen on
host = "192.168.0.187"
# our IP header
class IP(Structure):
_fields_ = [
("ihl", c_ubyte, 4),
("version", c_ubyte, 4),
("tos", c_ubyte),
("len", c_ushort),
("id", c_ushort),
("offset", c_ushort),
("ttl", c_ubyte),
("protocol_num", c_ubyte),
("sum", c_ushort),
("src", c_ulong),
("dst", c_ulong)
]
def __new__(self, socket_buffer=None):
return self.from_buffer_copy(socket_buffer)
def __init__(self, socket_buffer=None):
# map protocol constants to their names
self.protocol_map = {1:"ICMP", 6:"TCP", 17:"UDP"}
# human readable IP addresses
self.src_address = socket.inet_ntoa(struct.pack("<L",self.src))
self.dst_address = socket.inet_ntoa(struct.pack("<L",self.dst))
# human readable protocol
try:
self.protocol = self.protocol_map[self.protocol_num]
except:
self.protocol = str(self.protocol_num)
</code></pre>
<p>Now, I create the socket, bind it to the host, and loop to get the packets:</p>
<pre><code># create socket and bind to public interface (os dependent)
if os.name == "nt":
socket_protocol = socket.IPPROTO_IP
else:
socket_protocol = socket.IPPROTO_ICMP
# create raw socket
sniffer = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket_protocol)
sniffer.bind((host, 0))
# include header information
sniffer.setsockopt(socket.IPPROTO_IP, socket.IP_HDRINCL, 1)
if os.name == "nt":
sniffer.ioctl(socket.SIO_RCVALL, socket.RCVALL_ON)
try:
while True:
# read in a packet
raw_buffer = sniffer.recvfrom(65565)[0]
# create an IP header from the first 20 bytes of the buffer
ip_header = IP(raw_buffer[0:20])
# print out the protocol that was detected and the hosts
        print "Protocol: %s %s -> %s" % (ip_header.protocol, ip_header.src_address, ip_header.dst_address)
# handle CTRL-C
except KeyboardInterrupt:
# if we're using Windows, turn off promiscuous mode
if os.name == "nt":
sniffer.ioctl(socket.SIO_RCVALL, socket.RCVALL_OFF)
</code></pre>
<p>When run using <code>c_ulong</code> as data type for the <code>"src"</code> and <code>"dst"</code> <code>_fields_</code> <code>struct</code> members, I get the following error (running in one terminal while pinging nostarch.com in another):</p>
<p><a href="http://i.stack.imgur.com/oMsC9.png" rel="nofollow"><img src="http://i.stack.imgur.com/oMsC9.png" alt="enter image description here"></a></p>
<p>I postulated that perhaps the size of the <code>c_ulong</code> was larger than a byte, thus throwing off my requirement for the first 20 bytes (I'm very new to python). I then changed the <code>c_ulong</code> to <code>c_ushort</code> and ran it again:</p>
<p><a href="http://i.stack.imgur.com/q39Op.png" rel="nofollow"><img src="http://i.stack.imgur.com/q39Op.png" alt="enter image description here"></a></p>
<p>Actual ping path:</p>
<p><a href="http://i.stack.imgur.com/w0Ptd.png" rel="nofollow"><img src="http://i.stack.imgur.com/w0Ptd.png" alt="enter image description here"></a></p>
<p>So, while the script ran without error, it's cutting off the <code>src</code> and <code>dst</code> addresses.</p>
<p>Why is it asking for at least 32 bytes when I'm telling it I only want the first 20?</p>
<p>(I'm in Kali64 VBox VM, running on Win7 host, using Python 2.7)</p>
<p>Any help is appreciated.</p>
| 0 | 2016-08-05T21:42:26Z | 38,801,790 | <p>The size of <code>IP</code> should be verified, i.e.</p>
<pre><code>print(sizeof(IP))
</code></pre>
<p>should return 20 bytes. Since <code>ctypes.c_ulong</code> is 8 bytes on 64-bit Linux, the size here will be 32 bytes (4 bytes extra due to padding, 8 bytes due to integer sizes). Either use <code>ctypes.c_uint</code> or explicit sizes as follows:</p>
<pre><code>from ctypes import *
class IP(Structure):
    _fields_ = [ ("ihl", c_uint8, 4),      # first declared bit field takes the low-order bits on little-endian
                 ("version", c_uint8, 4),
("tos", c_uint8),
("len", c_uint16),
("id", c_uint16),
("offset", c_uint16),
("ttl", c_uint8),
("protocol_num", c_uint8),
("sum", c_uint16),
("src", c_uint32),
("dst", c_uint32) ]
print(sizeof(IP))
</code></pre>
| 1 | 2016-08-06T07:39:32Z | [
"python",
"sockets",
"buffer",
"ctypes",
"packet-sniffers"
] |
iPython- Is there a way to answer "y" to (or ignore) all y/n prompts? | 38,798,181 | <p>So I have an ipython notebook that has a lot of large variables, and at one point I want to get rid of all the ones I'm done with. I'm using %reset_selective variablename to clear each one, but there's 60 of these variables and when I run the block that has all 60 prompts, it asks me to enter y/n for every clear.</p>
<p>"Once deleted, variables cannot be recovered. Proceed (y/[n])?"</p>
<p>Is there a way I can answer "y" for all of them at once, or to skip the prompt altogether?</p>
| 2 | 2016-08-05T21:46:45Z | 38,798,497 | <p>Reading from <a href="https://ipython.org/ipython-doc/2/api/generated/IPython.core.magics.namespace.html#IPython.core.magics.namespace.NamespaceMagics.reset_selective" rel="nofollow">here</a>:</p>
<blockquote>
<p>%reset_selective [-f] regex</p>
<p>No action is taken if regex is not included</p>
<p>Options
<strong>-f</strong> : force reset without asking for confirmation.</p>
</blockquote>
<p>Seems you're able to pass <code>-f</code> flag to <code>%reset_selective</code> to have it force without asking.</p>
| 3 | 2016-08-05T22:21:29Z | [
"python",
"ipython",
"command-prompt",
"jupyter-notebook"
] |
How to do Python, PostgreSQL integration? | 38,798,238 | <p>I want to execute PostgreSQL queries and return the results to Python callers.
Basically, I want to set up Python and PostgreSQL integration/connectivity.
So, for specific Python API calls, I want to execute the queries and return the results.</p>
<p>Also, want to achieve abstraction of PostgreSQL DB.</p>
<p>Thanks.</p>
| -3 | 2016-08-05T21:53:10Z | 38,798,357 | <p>To add to klin's comment:</p>
<p><a href="https://pypi.python.org/pypi/psycopg2" rel="nofollow">psycopg2</a> -
This is the most popular psql adapter for python. It was build to address heavy concurrency issues with psql database usage. Several extensions are available for added functionality with the DB API.</p>
<p><a href="http://magic.io/blog/asyncpg-1m-rows-from-postgres-to-python/" rel="nofollow">asyncpg</a> -
More recent psql adapter which seeks to address shortfalls in functionality and performance that exist with psycopg2. Doubles the speed of psycopg's text based data exchange protocol by using binary I/O (which adds generic support for container types). A Major plus is that it has zero dependencies. No personal experience with this adapter but will test soon.</p>
| 0 | 2016-08-05T22:05:03Z | [
"python",
"postgresql"
] |
Is python smart enough to replace function calls with constant result? | 38,798,244 | <p>Coming from the beautiful world of <a href="/questions/tagged/c" class="post-tag" title="show questions tagged 'c'" rel="tag">c</a>, I am trying understand this behavior:</p>
<pre><code>In [1]: dataset = sqlContext.read.parquet('indir')
In [2]: sizes = dataset.mapPartitions(lambda x: [len(list(x))]).collect()
In [3]: for item in sizes:
...: if(item == min(sizes)):
...: count = count + 1
...:
</code></pre>
<p>would <em>not</em> even finish after 20 <em>minutes</em>, and I know that the list <code>sizes</code> is not that big, less than 205k in length. However this executed <em>instantly</em>:</p>
<pre><code>In [8]: min_item = min(sizes)
In [9]: for item in sizes:
if(item == min_item):
count = count + 1
...:
</code></pre>
<p>So what happened?</p>
<p><sub> My guess: <a href="/questions/tagged/python" class="post-tag" title="show questions tagged 'python'" rel="tag">python</a> could not understand that <code>min(sizes)</code> will always be constant, and thus could not replace the call with its return value after the first few calls, since Python is interpreted.</p>
<hr>
<p>Ref of <a href="http://spark.apache.org/docs/latest/api/python/pyspark.html?highlight=min#pyspark.RDD.min" rel="nofollow">min()</a> doesn't say anything that would explain the matter to me, but what I was thinking is that it may be that it needs to look at the partitions to do that, but that shouldn't be the case, since <code>sizes</code> is a <em><code>list</code></em>, not an <em><code>RDD</code></em>!</sub></p>
<hr>
<p>Edit:</p>
<p>Here is the source of my confusion, I wrote a similar program in C:</p>
<pre><code>for(i = 0; i < SIZE; ++i)
if(i == mymin(array, SIZE))
++count;
</code></pre>
<p>and got these timings:</p>
<pre><code>C02QT2UBFVH6-lm:~ gsamaras$ gcc -Wall main.c
C02QT2UBFVH6-lm:~ gsamaras$ ./a.out
That took 98.679177000 seconds wall clock time.
C02QT2UBFVH6-lm:~ gsamaras$ gcc -O3 -Wall main.c
C02QT2UBFVH6-lm:~ gsamaras$ ./a.out
That took 0.000000000 seconds wall clock time.
</code></pre>
<p>and for timings, I used Nomimal Animal's approach from my <a href="https://gsamaras.wordpress.com/code/1651-2/" rel="nofollow">Time measurements</a>.</p>
| 6 | 2016-08-05T21:53:43Z | 38,798,786 | <p>I'm by no means an expert on the inner workings of python, but from my understanding thus far you'd like to compare the speed of</p>
<pre><code>for item in sizes:
if(item == min(sizes)):
count = count + 1
</code></pre>
<p>and</p>
<pre><code>min_item = min(sizes)
for item in sizes:
if(item == min_item):
count = count + 1
</code></pre>
<p>Now someone correct me if I have any of this wrong but,</p>
<p><strong>In python lists are mutable and do not have a fixed length</strong>, and are treated as such, while in C an array has a fixed size. From <a href="http://stackoverflow.com/questions/176011/python-list-vs-array-when-to-use">this question</a>: </p>
<blockquote>
<p>Python lists are very flexible and can hold completely heterogeneous, arbitrary data, and they can be appended to very efficiently, in amortized constant time. If you need to shrink and grow your array time-efficiently and without hassle, they are the way to go. But they use a lot more space than C arrays.</p>
</blockquote>
<p>Now take this example</p>
<pre><code>for item in sizes:
if(item == min(sizes)):
new_item = item - 1
sizes.append(new_item)
</code></pre>
<p>Then the value of <code>item == min(sizes)</code> would be different on the next iteration. Python doesn't cache the resulting value of <code>min(sizes)</code> since it would break the above example, or require some logic to check if the list has been changed. Instead it leaves that up to you. By defining <code>min_item = min(sizes)</code> you are essentially caching the result yourself.</p>
<p>Now since the array is a fixed size in C, it can find the min value with less overhead than a python list, thus why I <em>think</em> it has no problem in C (as well as C being a much lower level language).</p>
<p>Again, I don't fully understand the underlying code and compilation for python, and I'm certain if you analyzed the process of the loops in python, you'd see python repeatedly computing <code>min(sizes)</code>, causing the extreme amount of lag. I'd love to learn more about the inner workings of python (for example, are any methods cached in a loop for python, or is everything computed again for each iteration?) so if anyone has more info and/or corrections, let me know!</p>
| 5 | 2016-08-05T22:55:51Z | [
"python",
"c",
"performance",
"optimization",
"apache-spark"
] |
Django Rest Framework does not decode JSON fields in Multipart requests like it does with standard POST requests | 38,798,251 | <p>When I'm submitting a POST request with a <code>Content-Type</code> of <code>application/json</code>, my data at the server is decoded into native Python as it should - JSON objects are presented as dicts, arrays as arrays etc. That's great.<br>
However, when I'm doing a MultiPart post request to my API, which of course also contains a file, any field containing JSON/objects is not decoded at the server and I'm left with strings I need to decode myself. The nature of my app means that I cannot always know which fields I'm about to get. </p>
<p>My question is - how can I submit a multipart request with a file and yet retain DRF's ability to decode JSON data in some of the fields?
I've tried using all 3 major parsers together but that didn't work (by setting the API view's parser_classes to them:
<code>
parser_classes = (MultiPartParser, JSONParser, FormParser)
</code></p>
<p>Here are some example requests sent (via Chrome's dev tools):</p>
<p>Standard Post (not multipart, no file):
<code>
{"first_name":"Test","last_name":"Shmest","email":[{"channel_type":"Email","value":"test@example.com","name":null,"default":false}],"company":{"position":"Manager","id":"735d2b5f-e032-4ca8-93e4-c7773872d0cc","name":"The Compapa"},"access":{"private":true,"users":[10,1]},"description":"Nice guy!!","address":{"city":"San Fanfanfo","zip":"39292","country":"United States of America","state":"CA","map_url":null,"country_code":"US","address":"123 This street"},"phone":[{"default":false,"type":"Phone","id":"70e2b437-6841-4536-9acf-f6a55cc372f6","value":"+141512312345","name":null}],"position":"","department":"","supervisor":"","assistant":"","referred_by":"","status":"","source":"","category":"Lead","do_not_call":false,"do_not_text":false,"do_not_email":false,"birthday":null,"identifier":""}
</code>
This payload gets read by DRF just fine, and all values are set to their native equivalent. </p>
<p>Multipart with file and one of the fields is JSON encoded object:</p>
<pre><code>------WebKitFormBoundaryPfKUrmBd9vRwp5Rb
Content-Disposition: form-data; name="file"; filename="image.png"
Content-Type: image/png
------WebKitFormBoundaryPfKUrmBd9vRwp5Rb
Content-Disposition: form-data; name="first_name"
Test
------WebKitFormBoundaryPfKUrmBd9vRwp5Rb
Content-Disposition: form-data; name="last_name"
Shmest
------WebKitFormBoundaryPfKUrmBd9vRwp5Rb
Content-Disposition: form-data; name="email"
[{"channel_type":"Email","value":"test@example.com","name":null,"default":false}]
------WebKitFormBoundaryPfKUrmBd9vRwp5Rb
Content-Disposition: form-data; name="company"
{"position":"Manager","id":"735d2b5f-e032-4ca8-93e4-c7773872d0cc","name":"The Compapa"}
------WebKitFormBoundaryPfKUrmBd9vRwp5Rb
Content-Disposition: form-data; name="access"
{"private":true,"users":[10,1]}
------WebKitFormBoundaryPfKUrmBd9vRwp5Rb
Content-Disposition: form-data; name="description"
Nice guy!!
------WebKitFormBoundaryPfKUrmBd9vRwp5Rb
Content-Disposition: form-data; name="address"
</code></pre>
<p>What I'm looking for is to see if there is some way to have the JSON decoding be automatic in multipart request like it is in regular POST, before I dive into manually decoding all fields to check if they are JSON or not. Most fields I'll be getting will be unknown to me until the request is made as each user might have a different combination of fields. </p>
| 0 | 2016-08-05T21:54:09Z | 38,808,608 | <p>I created a new Parser object to deal with this scenario of MultiPart file upload with fields containing JSON. Code is below if anyone ever needs it. </p>
<pre><code>import json
from rest_framework.parsers import BaseParser, DataAndFiles
from django.conf import settings
from django.http.multipartparser import MultiPartParser as DjangoMultiPartParser, MultiPartParserError
from django.utils import six
from rest_framework.exceptions import ParseError
class MultiPartJSONParser(BaseParser):
"""
Parser for multipart form data which might contain JSON values
in some fields as well as file data.
    This is a variation of MultiPartParser, which goes through submitted fields
and attempts to decode them as JSON where a value exists. It is not to be used as a replacement
for MultiPartParser, only in cases where MultiPart AND JSON data are expected.
"""
media_type = 'multipart/form-data'
def parse(self, stream, media_type=None, parser_context=None):
"""
Parses the incoming bytestream as a multipart encoded form,
and returns a DataAndFiles object.
`.data` will be a `QueryDict` containing all the form parameters, and JSON decoded where available.
`.files` will be a `QueryDict` containing all the form files.
"""
parser_context = parser_context or {}
request = parser_context['request']
encoding = parser_context.get('encoding', settings.DEFAULT_CHARSET)
meta = request.META.copy()
meta['CONTENT_TYPE'] = media_type
upload_handlers = request.upload_handlers
try:
parser = DjangoMultiPartParser(meta, stream, upload_handlers, encoding)
data, files = parser.parse()
for key in data:
if data[key]:
try:
data[key] = json.loads(data[key])
except ValueError:
pass
return DataAndFiles(data, files)
except MultiPartParserError as exc:
raise ParseError('Multipart form parse error - %s' % six.text_type(exc))
</code></pre>
<p>The parser can be used within the API view like any other:</p>
<p><code>
parser_classes = (MultiPartJSONParser, JSONParser , FormParser)
</code></p>
| 0 | 2016-08-06T20:54:00Z | [
"python",
"json",
"django",
"django-rest-framework"
] |
python multiprocessing.Array: huge temporary memory overhead | 38,798,330 | <p>If I use python's multiprocessing.Array to create a 1G shared array, I find that the python process uses around 30G of memory during the call to multiprocessing.Array and then decreases memory usage after that. I'd appreciate any help to figure out why this is happening and to work around it. </p>
<p>Here is code to reproduce it on Linux, with memory monitored by smem:</p>
<pre><code>import multiprocessing
import ctypes
import numpy
import time
import subprocess
import sys
def get_smem(secs,by):
for t in range(secs):
print subprocess.check_output("smem")
sys.stdout.flush()
time.sleep(by)
def allocate_shared_array(n):
data=multiprocessing.Array(ctypes.c_ubyte,range(n))
print "finished allocating"
sys.stdout.flush()
n=10**9
secs=30
by=5
p1=multiprocessing.Process(target=get_smem,args=(secs,by))
p2=multiprocessing.Process(target=allocate_shared_array,args=(n,))
p1.start()
p2.start()
print "pid of allocation process is",p2.pid
p1.join()
p2.join()
p1.terminate()
p2.terminate()
</code></pre>
<p>Here is output:</p>
<pre><code>pid of allocation process is 2285
PID User Command Swap USS PSS RSS
2116 ubuntu top 0 700 773 1044
1442 ubuntu -bash 0 2020 2020 2024
1751 ubuntu -bash 0 2492 2528 2700
2284 ubuntu python test.py 0 1080 4566 11924
2286 ubuntu /usr/bin/python /usr/bin/sm 0 4688 5573 7152
2276 ubuntu python test.py 0 4000 8163 16304
2285 ubuntu python test.py 0 137948 141431 148700
PID User Command Swap USS PSS RSS
2116 ubuntu top 0 700 773 1044
1442 ubuntu -bash 0 2020 2020 2024
1751 ubuntu -bash 0 2492 2528 2700
2284 ubuntu python test.py 0 1188 4682 12052
2287 ubuntu /usr/bin/python /usr/bin/sm 0 4696 5560 7160
2276 ubuntu python test.py 0 4016 8174 16304
2285 ubuntu python test.py 0 13260064 13263536 13270752
PID User Command Swap USS PSS RSS
2116 ubuntu top 0 700 773 1044
1442 ubuntu -bash 0 2020 2020 2024
1751 ubuntu -bash 0 2492 2528 2700
2284 ubuntu python test.py 0 1188 4682 12052
2288 ubuntu /usr/bin/python /usr/bin/sm 0 4692 5556 7156
2276 ubuntu python test.py 0 4016 8174 16304
2285 ubuntu python test.py 0 21692488 21695960 21703176
PID User Command Swap USS PSS RSS
2116 ubuntu top 0 700 773 1044
1442 ubuntu -bash 0 2020 2020 2024
1751 ubuntu -bash 0 2492 2528 2700
2284 ubuntu python test.py 0 1188 4682 12052
2289 ubuntu /usr/bin/python /usr/bin/sm 0 4696 5560 7160
2276 ubuntu python test.py 0 4016 8174 16304
2285 ubuntu python test.py 0 30115144 30118616 30125832
PID User Command Swap USS PSS RSS
2116 ubuntu top 0 700 771 1044
1442 ubuntu -bash 0 2020 2020 2024
1751 ubuntu -bash 0 2492 2527 2700
2284 ubuntu python test.py 0 1192 4808 12052
2290 ubuntu /usr/bin/python /usr/bin/sm 0 4700 5481 7164
2276 ubuntu python test.py 0 4092 8267 16304
2285 ubuntu python test.py 0 31823696 31827043 31834136
PID User Command Swap USS PSS RSS
2116 ubuntu top 0 700 771 1044
1442 ubuntu -bash 0 2020 2020 2024
1751 ubuntu -bash 0 2492 2527 2700
2284 ubuntu python test.py 0 1192 4808 12052
2291 ubuntu /usr/bin/python /usr/bin/sm 0 4700 5481 7164
2276 ubuntu python test.py 0 4092 8267 16304
2285 ubuntu python test.py 0 31823696 31827043 31834136
Process Process-2:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "test.py", line 17, in allocate_shared_array
data=multiprocessing.Array(ctypes.c_ubyte,range(n))
File "/usr/lib/python2.7/multiprocessing/__init__.py", line 260, in Array
return Array(typecode_or_type, size_or_initializer, **kwds)
File "/usr/lib/python2.7/multiprocessing/sharedctypes.py", line 115, in Array
obj = RawArray(typecode_or_type, size_or_initializer)
File "/usr/lib/python2.7/multiprocessing/sharedctypes.py", line 88, in RawArray
result = _new_value(type_)
File "/usr/lib/python2.7/multiprocessing/sharedctypes.py", line 63, in _new_value
wrapper = heap.BufferWrapper(size)
File "/usr/lib/python2.7/multiprocessing/heap.py", line 243, in __init__
block = BufferWrapper._heap.malloc(size)
File "/usr/lib/python2.7/multiprocessing/heap.py", line 223, in malloc
(arena, start, stop) = self._malloc(size)
File "/usr/lib/python2.7/multiprocessing/heap.py", line 120, in _malloc
arena = Arena(length)
File "/usr/lib/python2.7/multiprocessing/heap.py", line 82, in __init__
self.buffer = mmap.mmap(-1, size)
error: [Errno 12] Cannot allocate memory
</code></pre>
| 1 | 2016-08-05T22:02:13Z | 38,798,376 | <p>From the format of your print statements, you are using python 2</p>
<p>Replace <code>range(n)</code> by <code>xrange(n)</code> to save some memory.</p>
<pre><code>data=multiprocessing.Array(ctypes.c_ubyte,xrange(n))
</code></pre>
<p>(or use python 3)</p>
<p>1 billion range takes roughly 8GB (well I just tried that on my windows PC and it froze: just don't do that!)</p>
<p>Tried with 10**7 instead just to be sure:</p>
<pre><code>>>> z=range(int(10**7))
>>> sys.getsizeof(z)
80000064 => 80 Megs! you do the math for 10**9
</code></pre>
<p>A lazy object like <code>xrange</code> takes almost no memory, since it produces the values one by one as they are iterated over.</p>
<p>In Python 3, they must have been fed up with those problems, figured out that most people used <code>range</code> because they wanted it to be lazy, killed <code>xrange</code> and made <code>range</code> lazy by default. Now if you really want to allocate all the numbers, you have to do <code>list(range(n))</code>. At least you don't allocate one terabyte by mistake!</p>
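<p>In Python 3 terms the difference is easy to check (a small sketch, not part of the original test program): <code>range</code> is a lazy sequence whose footprint does not depend on its length, while <code>list(range(n))</code> really materialises every element.</p>

```python
import sys

lazy = range(10**9)          # constant-size object, nothing allocated yet
small = list(range(10**4))   # this one really allocates 10**4 ints

# the lazy range stays tiny no matter how long it is
print(sys.getsizeof(lazy))
print(sys.getsizeof(small) > sys.getsizeof(lazy))  # -> True
```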
<p>Edit:</p>
<p>The OP's comment indicates that my explanation does not solve the problem. I have made some simple tests on my Windows box:</p>
<pre><code>import multiprocessing,sys,ctypes
n=10**7
a=multiprocessing.RawArray(ctypes.c_ubyte,range(n)) # or xrange
z=input("hello")
</code></pre>
<p>Ramps up to 500Mb then stays at 250Mb with python 2
Ramps up to 500Mb then stays at 7Mb with python 3 (which is strange since it should at least be 10Mb...)</p>
<p>Conclusion: ok, it peaks at 500Mb, so it is not certain this will help, but can you try your program on Python 3 and see if you have fewer memory peaks overall?</p>
| 3 | 2016-08-05T22:06:54Z | [
"python",
"arrays",
"memory",
"multiprocessing",
"overhead"
] |
python multiprocessing.Array: huge temporary memory overhead | 38,798,330 | <p>If I use python's multiprocessing.Array to create a 1G shared array, I find that the python process uses around 30G of memory during the call to multiprocessing.Array and then decreases memory usage after that. I'd appreciate any help to figure out why this is happening and to work around it. </p>
<p>Here is code to reproduce it on Linux, with memory monitored by smem:</p>
<pre><code>import multiprocessing
import ctypes
import numpy
import time
import subprocess
import sys
def get_smem(secs,by):
    for t in range(secs):
        print subprocess.check_output("smem")
        sys.stdout.flush()
        time.sleep(by)

def allocate_shared_array(n):
    data=multiprocessing.Array(ctypes.c_ubyte,range(n))
    print "finished allocating"
    sys.stdout.flush()
n=10**9
secs=30
by=5
p1=multiprocessing.Process(target=get_smem,args=(secs,by))
p2=multiprocessing.Process(target=allocate_shared_array,args=(n,))
p1.start()
p2.start()
print "pid of allocation process is",p2.pid
p1.join()
p2.join()
p1.terminate()
p2.terminate()
</code></pre>
<p>Here is output:</p>
<pre><code>pid of allocation process is 2285
PID User Command Swap USS PSS RSS
2116 ubuntu top 0 700 773 1044
1442 ubuntu -bash 0 2020 2020 2024
1751 ubuntu -bash 0 2492 2528 2700
2284 ubuntu python test.py 0 1080 4566 11924
2286 ubuntu /usr/bin/python /usr/bin/sm 0 4688 5573 7152
2276 ubuntu python test.py 0 4000 8163 16304
2285 ubuntu python test.py 0 137948 141431 148700
PID User Command Swap USS PSS RSS
2116 ubuntu top 0 700 773 1044
1442 ubuntu -bash 0 2020 2020 2024
1751 ubuntu -bash 0 2492 2528 2700
2284 ubuntu python test.py 0 1188 4682 12052
2287 ubuntu /usr/bin/python /usr/bin/sm 0 4696 5560 7160
2276 ubuntu python test.py 0 4016 8174 16304
2285 ubuntu python test.py 0 13260064 13263536 13270752
PID User Command Swap USS PSS RSS
2116 ubuntu top 0 700 773 1044
1442 ubuntu -bash 0 2020 2020 2024
1751 ubuntu -bash 0 2492 2528 2700
2284 ubuntu python test.py 0 1188 4682 12052
2288 ubuntu /usr/bin/python /usr/bin/sm 0 4692 5556 7156
2276 ubuntu python test.py 0 4016 8174 16304
2285 ubuntu python test.py 0 21692488 21695960 21703176
PID User Command Swap USS PSS RSS
2116 ubuntu top 0 700 773 1044
1442 ubuntu -bash 0 2020 2020 2024
1751 ubuntu -bash 0 2492 2528 2700
2284 ubuntu python test.py 0 1188 4682 12052
2289 ubuntu /usr/bin/python /usr/bin/sm 0 4696 5560 7160
2276 ubuntu python test.py 0 4016 8174 16304
2285 ubuntu python test.py 0 30115144 30118616 30125832
PID User Command Swap USS PSS RSS
2116 ubuntu top 0 700 771 1044
1442 ubuntu -bash 0 2020 2020 2024
1751 ubuntu -bash 0 2492 2527 2700
2284 ubuntu python test.py 0 1192 4808 12052
2290 ubuntu /usr/bin/python /usr/bin/sm 0 4700 5481 7164
2276 ubuntu python test.py 0 4092 8267 16304
2285 ubuntu python test.py 0 31823696 31827043 31834136
PID User Command Swap USS PSS RSS
2116 ubuntu top 0 700 771 1044
1442 ubuntu -bash 0 2020 2020 2024
1751 ubuntu -bash 0 2492 2527 2700
2284 ubuntu python test.py 0 1192 4808 12052
2291 ubuntu /usr/bin/python /usr/bin/sm 0 4700 5481 7164
2276 ubuntu python test.py 0 4092 8267 16304
2285 ubuntu python test.py 0 31823696 31827043 31834136
Process Process-2:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "test.py", line 17, in allocate_shared_array
data=multiprocessing.Array(ctypes.c_ubyte,range(n))
File "/usr/lib/python2.7/multiprocessing/__init__.py", line 260, in Array
return Array(typecode_or_type, size_or_initializer, **kwds)
File "/usr/lib/python2.7/multiprocessing/sharedctypes.py", line 115, in Array
obj = RawArray(typecode_or_type, size_or_initializer)
File "/usr/lib/python2.7/multiprocessing/sharedctypes.py", line 88, in RawArray
result = _new_value(type_)
File "/usr/lib/python2.7/multiprocessing/sharedctypes.py", line 63, in _new_value
wrapper = heap.BufferWrapper(size)
File "/usr/lib/python2.7/multiprocessing/heap.py", line 243, in __init__
block = BufferWrapper._heap.malloc(size)
File "/usr/lib/python2.7/multiprocessing/heap.py", line 223, in malloc
(arena, start, stop) = self._malloc(size)
File "/usr/lib/python2.7/multiprocessing/heap.py", line 120, in _malloc
arena = Arena(length)
File "/usr/lib/python2.7/multiprocessing/heap.py", line 82, in __init__
self.buffer = mmap.mmap(-1, size)
error: [Errno 12] Cannot allocate memory
</code></pre>
| 1 | 2016-08-05T22:02:13Z | 38,809,592 | <p>Unfortunately, the problem is not so much with the range, as I just put that in as a simple illustration. In reality, that data will be read from disk. I could also use n*["a"] and specify c_char in multiprocessing.Array as another example. That still uses around 16G when I only have 1G of data in the list I'm passing to multiprocessing.Array. I'm wondering if there is some inefficient pickling going on or something like that.</p>
<p>I seem to have found a workaround for what I need by using tempfile.SpooledTemporaryFile and numpy.memmap. I can open a memory map to a temp file in memory, which is spooled to disk when necessary, and share that among different processes by passing it as an argument to multiprocessing.Process.</p>
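<p>For anyone curious about the memmap route, here is a minimal single-process sketch of the idea (it assumes numpy is available and leaves out the multiprocessing wiring): the data lives in a file-backed mapping rather than on the Python heap, and a second mapping of the same file sees the written bytes.</p>

```python
import tempfile
import numpy as np

with tempfile.NamedTemporaryFile() as f:
    # file-backed array: the 1000 bytes live in the mapped file, not the heap
    mm = np.memmap(f.name, dtype=np.uint8, mode="w+", shape=(1000,))
    mm[:] = 7
    mm.flush()

    # an independent read-only view of the same file sees the data
    view = np.memmap(f.name, dtype=np.uint8, mode="r", shape=(1000,))
    first, last = int(view[0]), int(view[-1])

print(first, last)  # -> 7 7
```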
<p>I'm still wondering what is going on with multiprocessing.Array though. I don't know why it would use 16G for a 1G array of data.</p>
| 1 | 2016-08-06T23:28:10Z | [
"python",
"arrays",
"memory",
"multiprocessing",
"overhead"
] |
Unable to replace python string using the Template module | 38,798,351 | <p>I am trying to replace the following using the python Template module.</p>
<p><code>start_date BETWEEN DATEADD(days,-14,'${DATE_YYYY-MM-DD}') AND '${DATE_YYYY-MM-DD}'</code></p>
<p>I am using the ConfigParser to read the values and store them in a dictionary (config_params), which I am successfully able to do. Then I am doing a safe_substitute into the above template, but it doesn't seem to do anything. It replaces lots of other parameters in the file but just does not seem to replace ${DATE_YYYY-MM-DD}.</p>
<p>The code that I am using is below:</p>
<pre><code>with open(templateFile, 'r+') as f:
    temp = Template(f.read())
    resultFile = temp.safe_substitute(config_params)
</code></pre>
<p>Any help on why this is happening? Is it the () that it does not like?</p>
<p>TemplateFile:</p>
<pre><code>SELECT
x
,y
,z
,a
,b
INTO ${od}.${tab}
FROM
mphd.${pd} as h
WHERE
a BETWEEN DATEADD(month,-12,'${DATE_YYYY-MM-DD}') AND '${DATE_YYYY-MM-DD}'
;
</code></pre>
<p>Config File:</p>
<pre><code>[GeneralParams]
od = sandbox
tab = abcd
pd = hierarchy_expanded
DATE_YYYY-MM-DD = 2016-08-05
</code></pre>
<p>config_params:</p>
<pre><code>{'od': 'sandbox', 'DATE_YYYY-MM-DD': '2016-08-05', 'pd': 'hierarchy_expanded', 'tab': 'abcd'}
</code></pre>
<p>Result File:</p>
<pre><code>SELECT
x
,y
,z
,a
,b
INTO sandbox.abcd
FROM
mphd.hierarchy_expanded as h
WHERE
a BETWEEN DATEADD(month,-12,'${DATE_YYYY-MM-DD}') AND '${DATE_YYYY-MM-DD}'
;
</code></pre>
| 0 | 2016-08-05T22:04:33Z | 38,799,116 | <p>According to <a href="https://docs.python.org/3.4/library/string.html#template-strings" rel="nofollow">the doc</a>, the default pattern that identifiers should follow is: <code>[_a-z][_a-z0-9]*</code>. Using <code>substitute</code> instead of <code>safe_substitute</code> warns about this.</p>
<p>However, you can fix that, by creating a new template class:</p>
<pre><code>class MyTemplate(Template):
idpattern = '[_a-z][_a-z0-9-]*' # Note the extra -
</code></pre>
<p>Then use <code>MyTemplate</code> (or whatever you want to name it) in your code instead of <code>Template</code>.</p>
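<p>A quick sketch of the subclass in action, using the key from the question:</p>

```python
from string import Template

class MyTemplate(Template):
    idpattern = '[_a-z][_a-z0-9-]*'  # note the extra - (matching is case-insensitive)

sql = "a BETWEEN DATEADD(month,-12,'${DATE_YYYY-MM-DD}') AND '${DATE_YYYY-MM-DD}'"
result = MyTemplate(sql).safe_substitute({'DATE_YYYY-MM-DD': '2016-08-05'})
print(result)  # -> a BETWEEN DATEADD(month,-12,'2016-08-05') AND '2016-08-05'
```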
| 1 | 2016-08-05T23:37:16Z | [
"python"
] |
Call xcopy with parameters? | 38,798,404 | <p>I'm trying to call <code>xcopy</code> with <code>subprocess</code>, equivalent to the bat command below.</p>
<p>Every time I get errors: "invalid number of parameters" or "file not found".</p>
<p>How can I do this?</p>
<p><strong>PYTHON</strong></p>
<pre><code>subprocess.call([
    "xcopy",
    str(C:\appFolder\appFile.txt),
    str(F:\appFolder\appFile.txt),
    "/s /y /q"
])
</code></pre>
<p><strong>CMD</strong></p>
<pre><code>xcopy "C:\appFolder\appFile.txt" "F:\appFolder\appFile.txt" /s /y /q >nul
</code></pre>
| 1 | 2016-08-05T22:09:23Z | 38,798,528 | <p>Can you do something similar to this, using os and environment variables instead?</p>
<pre><code>import os

PATHTOFILE1 = "Some_path"
PATHTOFILE2 = "some other path"

os.environ['PATHTOFILE1'] = PATHTOFILE1
os.environ['PATHTOFILE2'] = PATHTOFILE2

# cmd.exe expands %VAR%, not $VAR
os.system('xcopy "%PATHTOFILE1%" "%PATHTOFILE2%" /s /y /q >nul')
</code></pre>
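<p>If you would rather stay with <code>subprocess</code> as in the question, the usual fix for "invalid number of parameters" is to pass every switch as its own list element instead of one <code>"/s /y /q"</code> string. A sketch using the (hypothetical) paths from the question:</p>

```python
import subprocess

src = r"C:\appFolder\appFile.txt"
dst = r"F:\appFolder\appFile.txt"

# each switch is a separate argument; "/s /y /q" as a single string would be
# handed to xcopy as one malformed parameter
cmd = ["xcopy", src, dst, "/s", "/y", "/q"]
print(cmd)
# subprocess.call(cmd)  # run this part on Windows, where xcopy exists
```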
| 0 | 2016-08-05T22:25:39Z | [
"python",
"windows",
"xcopy"
] |
How do I make a primary key without auto-increment? | 38,798,411 | <p>I've been trying to create a model that has a primary key, but I don't want that primary key to auto increment.</p>
<p>I know I can specify the value each time, but I want the field to be required that I specify it (hopefully enforced by the database and django), and fail fast if I forget.</p>
<p>It seemed logical that I would be able to say <code>auto_increment=False</code> on my field, but that isn't supported by the field :(</p>
| 1 | 2016-08-05T22:10:40Z | 38,798,432 | <p>Just create <code>id</code> field with <code>primary_key=True</code> explicitly in your model:</p>
<pre><code>class SomeModel(models.Model):
    id = models.IntegerField(primary_key=True)
</code></pre>
<p>That way it won't be auto-incremented, but it will still be an primary key.</p>
| 3 | 2016-08-05T22:13:29Z | [
"python",
"django",
"auto-increment"
] |
Machine Learning - Stratified K-Fold CV | 38,798,415 | <p>I have unbalanced binary classification data and want to use Stratified K-Fold CV. I'm getting the below error:</p>
<pre><code>data = DataFrame(df,columns=names)
train,test = cross_validation.train_test_split(df,test_size=0.20)
train_data,test_data = pd.DataFrame(train,columns=names),pd.DataFrame(test,columns=names)
y = test_data['Classifier'].values
k_fold = StratifiedKFold(y, n_folds=3, shuffle=False, random_state=None)
scores = []
for train_indices, test_indices in k_fold:
    print(train_indices)
    print(test_indices)
    train_text = train.iloc[train_indices]
    train_y = train.iloc[train_indices]
    print(train_y)
    test_text = test.iloc[test_indices]
    test_y = test.iloc[test_indices]
    pipeline.fit(train_text, train_y)
</code></pre>
<p>Here, pipeline is:</p>
<pre><code>pipeline = Pipeline([
    ('count_vectorizer', CountVectorizer(ngram_range=(1, 2))),
    ('tfidf_transformer', TfidfTransformer()),
    ('classifier', MultinomialNB())])
</code></pre>
<p>The error is occurring in the pipeline. Below is the error:</p>
<pre><code>C:\SMS\Anaconda32bit\lib\site-packages\sklearn\utils\validation.pyc in column_or_1d(y, warn)
549 return np.ravel(y)
--> 551 raise ValueError("bad input shape {0}".format(shape))
ValueError: bad input shape (54, 3)
</code></pre>
| 0 | 2016-08-05T22:11:24Z | 38,809,881 | <p>You are not passing valid <strong>labels</strong>; in fact, in your code the labels and the data are the same thing:</p>
<pre><code>train_text = train.iloc[train_indices]
train_y = train.iloc[train_indices]
</code></pre>
<p>while probably you wanted something among the lines of</p>
<pre><code>train_y = y[train_indices]
</code></pre>
<p>and the same for test.</p>
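<p>A small shape-only sketch of why this matters (plain numpy, with made-up numbers): slicing the whole table yields 2-D "labels", which is exactly the <code>bad input shape (54, 3)</code> kind of error, while indexing the 1-D label vector gives the shape the classifier expects.</p>

```python
import numpy as np

table = np.arange(12).reshape(4, 3)   # stand-in for the 3-column DataFrame
labels = np.array([0, 1, 0, 1])       # the 1-D Classifier column
train_indices = np.array([0, 1, 2])

wrong_y = table[train_indices]        # shape (3, 3) -> "bad input shape"
right_y = labels[train_indices]       # shape (3,)   -> valid label vector

print(wrong_y.shape, right_y.shape)
```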
| 0 | 2016-08-07T00:37:12Z | [
"python",
"machine-learning"
] |
Hex to signed Decimal | 38,798,427 | <p>If I have a log file named log1 with hex values in the format below: </p>
<pre><code>D8 D4 D4 D2
D6 D4 D4 D2
D6 D4 D4 D2
D6 D4 D4 D1
...............etc
</code></pre>
<p>how can I convert these values to signed decimal (see format below) and then save them to another file named log2?</p>
<pre><code>-40 -44 -44 -46
-42 -44 -44 -46
-42 -44 -44 -46
-42 -44 -44 -47
....................etc
</code></pre>
<pre><code>with open("log1.log","r") as f:
    data = f.read()

def s16(value):
    return -(value & 0x80) | (value & 0x7f)

new_data = s16(int(data[0:2], 16)), s16(int(data[3:5], 16)), s16(int(data[6:8], 16)), s16(int(data[9:11], 16))

with open("log2.log","w") as f:
    f.write(new_data)
</code></pre>
<p>That's what I have so far; with this code I am able to print the first line:</p>
<pre><code>(-40, -44, -44, -46)
</code></pre>
<p>but I am not sure how to make it print all the lines and not just the first line. Thank you.</p>
| 0 | 2016-08-05T22:12:46Z | 38,798,520 | <p>Assuming those are two's complement bytes:</p>
<pre><code>return value - 256 if value > 127 else value
</code></pre>
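<p>To apply that to every line rather than only the first, a minimal sketch (assuming whitespace-separated hex bytes as in the question; note the values are really 8-bit, despite the <code>s16</code> name used in the question):</p>

```python
def to_signed_byte(value):
    # two's complement interpretation of one byte
    return value - 256 if value > 127 else value

# sample data from the question, written out so the sketch is self-contained
with open("log1.log", "w") as f:
    f.write("D8 D4 D4 D2\nD6 D4 D4 D2\n")

with open("log1.log") as src, open("log2.log", "w") as dst:
    for line in src:
        values = [to_signed_byte(int(tok, 16)) for tok in line.split()]
        dst.write(" ".join(str(v) for v in values) + "\n")

print(open("log2.log").read())  # -> -40 -44 -44 -46 / -42 -44 -44 -46
```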
| 0 | 2016-08-05T22:24:05Z | [
"python",
"python-2.7"
] |
Do I need to install the Appium client in the python virtualenv for deployment to Amazon Device Farm (ADF)? | 38,798,526 | <p>The instructions at <a href="http://docs.aws.amazon.com/es_es/devicefarm/latest/developerguide/test-types-android-appium-python.html" rel="nofollow">http://docs.aws.amazon.com/es_es/devicefarm/latest/developerguide/test-types-android-appium-python.html</a> do not say anything about adding appium (the appium wheel) to the virtualenv needed to build the test_bundle.zip.</p>
<p>If it's not added, "py.test --collect-only tests/" run from the virtualenv will obviously fail, and a test_bundle.zip built without appium will fail on ADF.</p>
<p>So, first, I want to double check that after we install py.test in the virtualenv ("pip install pytest"), we also need to install the Appium client ("pip install Appium-Python-Client").</p>
<p>Then the tests will run in ADF, but they take an amazing amount of time even for a single basic test that runs in seconds on a physical device. With ADF I need to wait about 20 minutes for the test to complete, and then it shows 5 "Total minutes" for the test run. Does that look right?</p>
<p>Thanks.</p>
| 0 | 2016-08-05T22:25:13Z | 38,959,105 | <p>I work for the AWS Device Farm team.</p>
<p>Short Answer to main question in subject line: Yes</p>
<p><strong>Explanation:</strong></p>
<p><strong>Python virtualenv usage</strong></p>
<p>The confusion seems to be about the idea that the virtual environment is needed "just" for packaging. Our recommendation is to actually make sure that your tests run in the virtualenv rather than using it only to package the tests.</p>
<p>This way you will always have all the needed dependencies in your virtualenv and do not have to track dependencies individually. </p>
<p>From our documentation,
"We strongly recommend that you set up Python virtualenv for <strong>developing and packaging tests</strong> so that unnecessary dependencies are not included in your app package."</p>
<p>I will try to highlight this fact in a better way if this was not clear. </p>
<p><strong>Test execution timings</strong></p>
<p>On Device Farm we set up the device and make sure you get a device that is completely clean. We also run a new instance of the Appium server for each test. This can add time to the execution, especially if the tests themselves are really small and take only a few seconds, so the setup time dominates. If you average out the timings for such tests it can seem to make a big difference, although you are not charged for the time when we do the cleanup. The device minutes are accounted only after the app is installed and the tests are ready to start.</p>
| 0 | 2016-08-15T16:33:23Z | [
"python",
"amazon-web-services",
"python-appium"
] |
Can VirtualEnv create symlinks in the current directory? | 38,798,568 | <p>I'm trying to use VirtualEnv more and more, and I'm coming across a few projects that invoke python via something like <code>popen()</code>, and for whatever reason the scripts aren't finding the site packages correctly.</p>
<p>For such projects, symlinking the site package in a similar manner as</p>
<pre><code>ln -s env/lib/python2.7/site-packages/<package> <package>
</code></pre>
<p>in the root of the directory seems to work.</p>
<p>Is there a way to have VirtualEnv do this for me, or am I going to have to wrap a script for that?</p>
| 0 | 2016-08-05T22:30:56Z | 38,801,779 | <p>I am not 100% sure that I am answering your exact question, because it is not explicit about what type of packages. However, having been working in VirtualEnv all week, I have had to use a number of modules that live elsewhere (for me in anaconda), so I will give it a go.</p>
<p>What I have been doing is, when I create an environment and activate it from the command line, I then pip install the packages into the virtual environment. It is much quicker than the original install because you are just making your environment aware of a module you want to use that already exists on your machine.</p>
<p>And I started a text file that has all of the modules I need in it so I can re-run it from cmd in a new environment and it installs everything at once.</p>
<p>Within Anaconda & VirtualEnv, my modules remain after deactivation and reactivation. But I am not sure how VirtualEnv works in a free-standing version of python 2.7.</p>
| 0 | 2016-08-06T07:38:18Z | [
"python",
"virtualenv"
] |
Need to get timed input to break out of loop | 38,798,645 | <p>I have a loop that times user input; when the loop is done I want to break out of it:</p>
<pre><code>import sys
import time
import msvcrt
from random import randint
import random
def time_input(caption, timeout=3):
    start_time = time.time()
    sys.stdout.write('%s: ' % (caption))
    input = ''
    while True:
        if msvcrt.kbhit():
            chr = msvcrt.getche()
            if ord(chr) == 13:
                break
            elif ord(chr) >= 32:
                input += chr
        if len(input) == 0 and (time.time() - start_time) > timeout:
            break
    return input
</code></pre>
<p>Where this loop is called, is here:</p>
<pre><code>def battle(animal, weapon, health):
    print "To try to kill the {};" \
          " you must press the correct key" \
          " when it appears on the screen." \
          " Press enter when ready".format(animal)
    raw_input()
    keys = 'abcdefghijklmnopqrstuvwxyz'
    animal_health = 100
    while health > 0:
        while True:
            if animal == 'mountain lion':
                animal_damage = randint(27, 97)
                animal_output = "The mountain lion slashes at you doing {} damage!".format(animal_damage)
            else:
                animal_damage = randint(10, 25)
                animal_output = "The {} slashes at you doing {} damage!".format(animal, animal_damage)
            correct_key = random.choice(keys)
            print "Enter: {}".format(correct_key)
            to_eval = time_input('> ')
            if to_eval == correct_key:
                damage = weapon_info(weapon, animal_health)
                print damage
                animal_health -= damage
                print "The {} has {} health remaining".format(animal, animal_health)
                if animal_health <= 0:
                    win_battle(animal, weapon)
            else:
                print animal_output
                health -= animal_damage
                print "You have {} health remaining".format(health)
                if health <= 0:
                    lose_battle(animal)
                    break

battle('mountain lion', 'knife', 100)
</code></pre>
<p>How can I get this loop to break out when all the correct keys have been pressed and the health is 0 or less?</p>
<p>As of now it does this:</p>
<p>To try to kill the mountain lion; you must press the correct key when it appears
on the screen. Press enter when ready</p>
<pre><code>Enter: h
h: h
You slash at them with your knife doing 20
20
The mountain lion has 80 health remaining
Enter: a
a: a
You slash at them with your knife doing 24
24
The mountain lion has 56 health remaining
Enter: j
j: j
You slash at them with your knife doing 23
23
The mountain lion has 33 health remaining
Enter: p
p: p
You slash at them with your knife doing 10
10
The mountain lion has 23 health remaining
Enter: k
k: k
You slash at them with your knife doing 26
26
The mountain lion has -3 health remaining
You pick up the dead mountain lion and start walking back to camp.
You found extra meat!
Enter: h # Just keeps going
h: Traceback (most recent call last):
File "battle.py", line 97, in <module>
battle('mountain lion', 'knife', 100)
File "battle.py", line 31, in battle
to_eval = time_input(correct_key)
File "C:\Users\thomas_j_perkins\bin\python\game\settings.py", line 36, in time
_input
if msvcrt.kbhit():
KeyboardInterrupt
</code></pre>
| 0 | 2016-08-05T22:38:14Z | 38,798,722 | <p>Use <code>return</code> to escape the <code>battle</code>-function. My guess is right here</p>
<pre><code>if animal_health <= 0:
    win_battle(animal, weapon)
    return <------------------------------
</code></pre>
<p>Maybe you want to return your current hp if the battle was successful? That is easy: simply return the health instead of nothing, i.e. <code>return health</code> instead of just <code>return</code>.</p>
<p>You probably want to do the same thing after losing a battle as well, but without returning any health, i.e. <code>return 0</code>?</p>
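<p>To make the difference concrete — <code>break</code> only leaves the innermost loop, while <code>return</code> leaves the whole function, which is why it works from inside the nested battle loops (toy functions below, not the game code):</p>

```python
def with_break():
    for i in range(3):
        while True:
            break              # leaves only the inner while
    return "outer loop ran %d times" % (i + 1)

def with_return():
    for i in range(3):
        while True:
            return "left the function on iteration %d" % i

print(with_break())   # -> outer loop ran 3 times
print(with_return())  # -> left the function on iteration 0
```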
| 0 | 2016-08-05T22:47:10Z | [
"python",
"loops",
"timeout"
] |
web2py form with list:reference not working with {{=form}} | 38,798,741 | <p>Using web2py, I am having trouble submitting (?) a form based on a table using list:reference. I have a table <em>db.game</em> that references <em>db.game_events</em> in one of its columns, called <em>game_event</em>.</p>
<p>The form for <em>db.game</em> is accepted, but when I try to reach the data in the column <em>game_event</em>, which uses list:reference referring to db.game_events, the column is empty according to the built-in web2py grid.</p>
<p><strong>I can see that the information is correctly posted to the database,</strong> showing the items in brackets in the supposedly empty column. Since I am using the built in web2py grid, I am assuming the collection of the rows is correct and that the problem lies elsewhere.</p>
<p>If I use the "Add record to database"-titled button in the web2py console (the black button with the plus-sign), and use the form there, the game_events column shows the items. </p>
<p><strong>So: if I try to use {{=form}} in the application, the <em>game_event</em> column is treated as empty, but if I use the built-in "Add record to database", the information is there. The question is simply: why can I not use {{=form}} for <em>db.game</em> anywhere in the application, when the built-in form works fine?</strong> I have tried to simply use {{=form}} and not custom.</p>
<p>To make it even more confusing, if I edit any game in <em>db.game</em> in the web2py grid, and press "submit" without altering any information, the <em>game_event</em> column in the rows for <em>db.game</em> correctly show the <em>game_events</em>.</p>
<p>I have been stuck on this forever, I really would appreciate help! Thanks.</p>
<p>Code in db.py </p>
<pre><code>db.define_table(
    'game',
    Field('name', label='Tävlingsnamn'),
    Field('status', requires=IS_IN_SET(define_game_status),default='started'),
    Field('registration_start_date', 'date', requires = IS_DATE(format=('%Y-%m-%d')),label=T('Registrering öppnar')),
    Field('registration_end_date', 'date', requires = IS_DATE(format=('%Y-%m-%d')),label=T('Registrering stänger')),
    Field('start_date','date',requires = IS_DATE(format=('%Y-%m-%d')),label=T('Start date')),
    Field('end_date','date',requires = IS_DATE(format=('%Y-%m-%d')),label=T('End date')),
    Field('tracs_available','integer', requires=IS_IN_SET(define_track_amount), widget=div_radio_widget, label=T('Tracks')),
    Field('tracs_available_sprint','integer', requires=IS_IN_SET(define_track_amount), widget=div_radio_widget, label=T('Sprint tracks')),
    Field('game_type', requires=IS_IN_SET(define_game_type),default='Inactive', label=T('Type of event')),
    Field('description','text',label=T('Description')),
    Field('game_event',type='list:reference db.game_events', label='Tävlingsgren'),
    format = '%(name)s')
db.game.game_event.requires=IS_IN_DB(db,'game_events.id','%(name)s',multiple=True)
db.define_table(
    'event_class',
    Field('name'),
    format = '%(name)s')
db.define_table(
    'game_events',
    Field('name'),
    Field('class_name', requires=IS_IN_DB(db,db.event_class.name,'%(name)s')),
    Field('event_type', requires=IS_IN_SET(define_game_event_types)),
    format ='%(id)s')
</code></pre>
<p>Code in the controller registration.py</p>
<pre><code>#FORM GAMES
def create_game():
    #Form handling
    #FORM
    form = SQLFORM(db.game)
    request.vars._formname = 'game'
    form.custom.widget.name.update(_placeholder="ex Skelleftespelen")
    #Registration of results in view
    if form.accepts(request.vars, session, formname='game'):
        print("accepted")
        response.flash = 'Tävlingen har skapats!'
        #game_rows = db(db.game).select(orderby=db.game.name)
        return dict(form=form)
    elif form.errors:
        response.flash = 'form has errors'
    return dict(form=form)
</code></pre>
<p>Code in the view create_game.html</p>
<pre><code> <div class="game_name">
<h4>
Tävling
</h4>
{{=form.custom.begin}}
Namn <div>{{=form.custom.widget.name}}</div>
Första anmälningsdag <div>{{=form.custom.widget.registration_start_date}}</div>
Sista anmälningsdag <div>{{=form.custom.widget.registration_end_date}}</div>
Första tävlingsdag <div>{{=form.custom.widget.start_date}}</div>
Sista tävlingsdag <div>{{=form.custom.widget.end_date}}</div>
Sort <div>{{=form.custom.widget.game_type}}</div>
Sort <div>{{=form.custom.widget.status}}</div>
Löparbanor <div>{{=form.custom.widget.tracs_available}}</div>
Sprintbanor <div>{{=form.custom.widget.tracs_available_sprint}}</div>
Beskrivning och/eller information <div>{{=form.custom.widget.description}}</div>
Grenar</br></br>
<p style="background:#FFE066; font-weight:bold;">
Notera: för att välja grenar måste samtliga önskade grenar att markeras med ctrl + musklick.
</p>
<div>{{=form.custom.widget.game_event}}</div>
<span id="submit_result">{{=form.custom.submit}}</span>
{{=form.custom.end}}
</div>
</code></pre>
| 0 | 2016-08-05T22:50:06Z | 38,804,216 | <p>When you define a <code>reference</code> or <code>list:reference</code> field, if you don't specify the <code>requires</code> attribute within the call to <code>Field()</code>, you will get a default <code>requires</code> attribute (i.e., validator) as well as a default <code>represent</code> attribute that controls how the field is displayed in forms and the grid. However, in order to get the default <code>requires</code> and <code>represent</code> attributes, you must define the referenced table <em>before</em> defining the referencing field (otherwise, the referencing field does not have enough information to create the validator and <code>represent</code> attributes, both of which incorporate the <code>format</code> attribute of the referenced table).</p>
<p>So, just move the definition of <code>db.game_events</code> so it comes before the definition of <code>db.game</code>. Also, in that case, there is no need to explicitly set the value of <code>db.game.game_event.requires</code>, as it will be assigned a default value exactly the same as the one you are assigning.</p>
<p>More generally, if you don't like the default representation of a field's values in forms/grids, you can always set the field's <code>represent</code> attribute to control the display.</p>
<p>As an aside, prefer <code>type='list:reference game_events'</code> over <code>type='list:reference db.game_events'</code> (the latter works but is not officially supported).</p>
| 0 | 2016-08-06T12:16:35Z | [
"python",
"forms",
"web2py",
"foreign-key-relationship",
"web2py-modules"
] |
Issue with split function | 38,798,763 | <p>I have a list variable <code>name</code>:</p>
<pre><code>name = ['Ny-site-1-145237890-service']
</code></pre>
<p>I want to split this list in a way so I can get <code>name = ['Ny-site-1']</code>.
To do this I am using the below code:</p>
<pre><code>import re
name = ['Ny-site-1-145237890-service']
site_name = re.split('(-[0-9]-service)')[0]
</code></pre>
<p>But the above code does not give me the output I am looking for. How do I get the desired result?</p>
| 0 | 2016-08-05T22:52:39Z | 38,798,783 | <p>First of all, <a href="https://docs.python.org/2/library/re.html#re.split" rel="nofollow"><code>re.split()</code></a> requires <em>2 arguments</em>, you are providing a single one.</p>
<p>Also, you need to add <code>+</code> quantifier (means "1 or more") for the <code>[0-9]</code> set of characters:</p>
<pre><code>>>> import re
>>>
>>> name = ['Ny-site-1-145237890-service']
>>> re.split(r'-[0-9]+-service', name[0])[0]
'Ny-site-1'
</code></pre>
<p>I would also add the <code>maxsplit=1</code> argument to avoid unnecessary splits:</p>
<pre><code>>>> re.split(r'-[0-9]+-service', name[0], maxsplit=1)[0]
'Ny-site-1'
</code></pre>
<hr>
<p>You may also add an <em>end of a string check</em> to make the expression more reliable:</p>
<pre><code>-[0-9]+-service$
</code></pre>
<p>And, you can also solve it with <a href="https://docs.python.org/2/library/re.html#re.sub" rel="nofollow"><code>re.sub()</code></a>:</p>
<pre><code>>>> re.sub(r'-[0-9]+-service$', '', name[0])
'Ny-site-1'
</code></pre>
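<p>If the tail is always a hyphen, a numeric id, and <code>service</code>, a regex isn't strictly required; plain string methods work too (a small sketch):</p>

```python
name = ['Ny-site-1-145237890-service']

# Drop the last two '-'-separated fields (the numeric id and 'service')
site_name = name[0].rsplit('-', 2)[0]
print(site_name)  # Ny-site-1
```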
| 2 | 2016-08-05T22:55:31Z | [
"python"
] |
Issue with split function | 38,798,763 | <p>I have a list variable <code>name</code>:</p>
<pre><code>name = ['Ny-site-1-145237890-service']
</code></pre>
<p>I want to split this list in a way so I can get <code>name = ['Ny-site-1']</code>.
To do this I am using the below code:</p>
<pre><code>import re
name = ['Ny-site-1-145237890-service']
site_name = re.split('(-[0-9]-service)')[0]
</code></pre>
<p>But the above code does not give me the output I am looking for. How do I get the desired result?</p>
| 0 | 2016-08-05T22:52:39Z | 38,798,785 | <p>Try adding a <code>+</code> in your regex to match multiple numbers.</p>
<pre><code>name = 'Ny-site-1-145237890-service'
site_name = re.split('(-[0-9]+-service)', name)[0]
</code></pre>
<p>If you want to use an array to store your name(s), you can use</p>
<pre><code>name = ['Ny-site-1-145237890-service']
site_name = re.split('(-[0-9]+-service)', name[0])[0]
</code></pre>
<p>If you have multiple names and you want to print them all, you can use </p>
<pre><code>for i in names:
print(re.split('(-[0-9]+-service)', i)[0])
</code></pre>
| 0 | 2016-08-05T22:55:33Z | [
"python"
] |
Pyspark command not recognised | 38,798,816 | <p>I have anaconda installed and I have also downloaded Spark 1.6.2. I am using the instructions from this answer to configure Spark for Jupyter: <a href="http://stackoverflow.com/questions/33064031/link-spark-with-ipython-notebook">Link Spark with IPython Notebook</a></p>
<p>I have downloaded and unzipped the spark directory as </p>
<pre><code>~/spark
</code></pre>
<p>Now when I cd into this directory and into bin I see the following</p>
<pre><code>SFOM00618927A:spark $ cd bin
SFOM00618927A:bin $ ls
beeline pyspark run-example.cmd spark-class2.cmd spark-sql sparkR
beeline.cmd pyspark.cmd run-example2.cmd spark-shell spark-submit sparkR.cmd
load-spark-env.cmd pyspark2.cmd spark-class spark-shell.cmd spark-submit.cmd sparkR2.cmd
load-spark-env.sh run-example spark-class.cmd spark-shell2.cmd spark-submit2.cmd
</code></pre>
<p>I have also added the environment variables as mentioned in the above answer to my .bash_profile and .profile </p>
<p>Now in the spark/bin directory first thing I want to check is if pyspark command works on shell first. </p>
<p>So I do this after doing cd spark/bin</p>
<pre><code>SFOM00618927A:bin $ pyspark
-bash: pyspark: command not found
</code></pre>
<p>As per the answer after following all the steps I can just do </p>
<pre><code>pyspark
</code></pre>
<p>in the terminal in any directory, and it should start a Jupyter notebook with the Spark engine. But even the pyspark command within the shell is not working, let alone running it in a Jupyter notebook.</p>
<p>Please advise what is going wrong here. </p>
<p>Edit: </p>
<p>I did </p>
<pre><code>open .profile
</code></pre>
<p>at home directory and this is what is stored in the path. </p>
<pre><code>export PATH=/Users/854319/anaconda/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin:/Users/854319/spark/bin
export PYSPARK_DRIVER_PYTHON=ipython
export PYSPARK_DRIVER_PYTHON_OPTS='notebook' pyspark
</code></pre>
| 1 | 2016-08-05T22:58:37Z | 38,827,548 | <p>1- You need to set <code>JAVA_HOME</code> and spark paths for the shell to find them. After setting them in your <code>.profile</code> you may want to</p>
<pre><code>source ~/.profile
</code></pre>
<p>to activate the setting in the current session. From your comment I can see you're already having the <code>JAVA_HOME</code> issue.</p>
<p>Note if you have <code>.bash_profile</code> or <code>.bash_login</code>, <code>.profile</code> will not work as described <a href="http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_03_01.html" rel="nofollow">here</a></p>
<p>2- When you are in <code>spark/bin</code> you need to run </p>
<pre><code>./pyspark
</code></pre>
<p>to tell the shell that the target is in the current folder.</p>
| 2 | 2016-08-08T11:02:34Z | [
"python",
"apache-spark",
"pyspark"
] |
Pyspark command not recognised | 38,798,816 | <p>I have anaconda installed and I have also downloaded Spark 1.6.2. I am using the instructions from this answer to configure Spark for Jupyter: <a href="http://stackoverflow.com/questions/33064031/link-spark-with-ipython-notebook">Link Spark with IPython Notebook</a></p>
<p>I have downloaded and unzipped the spark directory as </p>
<pre><code>~/spark
</code></pre>
<p>Now when I cd into this directory and into bin I see the following</p>
<pre><code>SFOM00618927A:spark $ cd bin
SFOM00618927A:bin $ ls
beeline pyspark run-example.cmd spark-class2.cmd spark-sql sparkR
beeline.cmd pyspark.cmd run-example2.cmd spark-shell spark-submit sparkR.cmd
load-spark-env.cmd pyspark2.cmd spark-class spark-shell.cmd spark-submit.cmd sparkR2.cmd
load-spark-env.sh run-example spark-class.cmd spark-shell2.cmd spark-submit2.cmd
</code></pre>
<p>I have also added the environment variables as mentioned in the above answer to my .bash_profile and .profile </p>
<p>Now in the spark/bin directory first thing I want to check is if pyspark command works on shell first. </p>
<p>So I do this after doing cd spark/bin</p>
<pre><code>SFOM00618927A:bin $ pyspark
-bash: pyspark: command not found
</code></pre>
<p>As per the answer after following all the steps I can just do </p>
<pre><code>pyspark
</code></pre>
<p>in the terminal in any directory, and it should start a Jupyter notebook with the Spark engine. But even the pyspark command within the shell is not working, let alone running it in a Jupyter notebook.</p>
<p>Please advise what is going wrong here. </p>
<p>Edit: </p>
<p>I did </p>
<pre><code>open .profile
</code></pre>
<p>at home directory and this is what is stored in the path. </p>
<pre><code>export PATH=/Users/854319/anaconda/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin:/Users/854319/spark/bin
export PYSPARK_DRIVER_PYTHON=ipython
export PYSPARK_DRIVER_PYTHON_OPTS='notebook' pyspark
</code></pre>
| 1 | 2016-08-05T22:58:37Z | 39,070,669 | <p>Here are my environment vars; I hope they help you:</p>
<pre><code># path to JAVA_HOME
export JAVA_HOME=$(/usr/libexec/java_home)
#Spark
export SPARK_HOME="/usr/local/spark" #version 1.6
export PATH=$PATH:$SPARK_HOME/bin
export PYSPARK_SUBMIT_ARGS="--master local[2]"
export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.9-src.zip:$PYTHONPATH
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS='notebook'
</code></pre>
<p>^^ Remove the Pyspark_driver_python_opts option if you don't want the notebook to launch, otherwise you can leave this out entirely and use it on your command line when you need it.</p>
<p>I have anaconda vars in another line to append to the PATH.</p>
| 0 | 2016-08-22T02:37:13Z | [
"python",
"apache-spark",
"pyspark"
] |
parallel program in python using Threads | 38,798,910 | <p>I am generating the sum of the integers from 1 up to n, where n = 2000, which is given by the formula n(n+1)/2.
So far I have done it serially. I need help on how to make it compute in parallel such that it adaptively makes use of all the available processors/cores on the host computer.</p>
<pre><code>#!/usr/bin/env python3
from datetime import datetime
n=1
v=0
start_time = datetime.now()
while n<=10:
(n*(n+1)/2)
b=(n*(n+1)/2)
n = n+1
end_time =datetime.now()
print (b)
print('Time taken : {}'. format(end_time-start_time))
</code></pre>
| 0 | 2016-08-05T23:11:52Z | 38,799,424 | <p>To do this, you need to use <code>multiprocessing</code>, which lets you create processes and assign procedures to them. Here's a code snippet that does part of what you want:</p>
<pre><code>#!/usr/bin/env python3
from datetime import datetime
MAX_NUM = 10000000
NUMPROCS = 1
# LINEAR VERSION
start_time = datetime.now()
my_sum = 0
counter = 1
while counter <= MAX_NUM:
my_sum += counter
counter += 1
end_time =datetime.now()
print (my_sum)
print('Time taken : {}'. format(end_time-start_time))
# MULTIPROCESSING VERSION
from multiprocessing import Process, Queue
start_time = datetime.now()
def sum_range(start,stop,out_q):
i = start
counter = 0
while i < stop:
counter += i
i += 1
out_q.put(counter)
mysums = Queue()
mybounds = [1+i for i in range(0,MAX_NUM+1,int(MAX_NUM/NUMPROCS))]
myprocs = []
for i in range(NUMPROCS):
p = Process(target=sum_range, args=(mybounds[i],mybounds[i+1],mysums))
p.start()
myprocs.append(p)
mytotal = 0
for i in range(NUMPROCS):
mytotal += mysums.get()
for i in range(NUMPROCS):
myprocs[i].join()
print(mytotal)
end_time =datetime.now()
print('Time taken : {}'. format(end_time-start_time))
</code></pre>
<p>Although the code doesn't adaptively use processors, it does divide the task into a prespecified number of processes.</p>
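<p>To make the "adaptively use all cores" part concrete, here is a rough sketch using the standard library's <code>multiprocessing.Pool</code>, which can size itself from <code>os.cpu_count()</code>; the function names here are my own, not from the snippet above:</p>

```python
import os
from multiprocessing import Pool

def chunk_sum(bounds):
    start, stop = bounds
    return sum(range(start, stop))

def parallel_sum(n, procs=None):
    # One contiguous chunk of 1..n per worker process; assumes n >= procs.
    procs = procs or os.cpu_count() or 1
    step = n // procs
    bounds = [(i * step + 1, (i + 1) * step + 1) for i in range(procs)]
    bounds[-1] = (bounds[-1][0], n + 1)  # last chunk absorbs any remainder
    with Pool(procs) as pool:
        return sum(pool.map(chunk_sum, bounds))

if __name__ == "__main__":
    print(parallel_sum(2000))  # 2001000, i.e. n*(n+1)//2
```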
| 0 | 2016-08-06T00:30:50Z | [
"python"
] |
Write Nested Dictionary to CSV | 38,798,987 | <p>I am trying to write a nested dictionary to CSV in Python. I looked at the csv.DictWriter documentation on python.org and some of the examples here on stackoverflow but I can't figure out the last part. Here is a representative data set: </p>
<pre><code>data = {u'feeds': [{u'feed_code': u'free', u'feed_name': u'Free'}, {u'feed_code': u'paid', u'feed_name': u'Paid'}, {u'feed_code': u'grossing', u'feed_name': u'Grossing'}], u'code': 200}
ColTitle = ['code','feed_code','feed_name']
with open('test.csv','wb') as f:
w = csv.DictWriter(f, ColTitle)
w.writeheader()
for item in data:
w.writerow({field: data[item]}) ## Part I am stuck on
</code></pre>
<p>This is what I would like to write to my CSV file </p>
<pre><code>code feed_code feed_name
200 free Free
200 paid Paid
200 grossing Grossing
</code></pre>
| 1 | 2016-08-05T23:20:48Z | 38,799,131 | <p>The problem in your code is the loop.
You want to loop over all the feeds, but you were actually looping over the data.
Your loop was:</p>
<pre><code>for item in data:
print item
w.writerow({field: data[item]}) ## Part I am stuck on
</code></pre>
<p>This would give you</p>
<pre><code>feeds
code
</code></pre>
<p>What you want is to loop over the feeds, like so:</p>
<pre><code>for feed in data[u'feeds']:
w.writerow(feed)
</code></pre>
<p>Yet this isn't enough, because <code>code</code> isn't in every feed dict but is declared only once in the data, so you should also include the code in every row written:</p>
<pre><code>for feed in data[u'feeds']:
w.writerow(dict(feed, code=data[u'code']))
</code></pre>
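<p>A self-contained Python 3 version of the above (using <code>io.StringIO</code> so it runs without touching disk; the original snippets target Python 2's <code>'wb'</code> mode):</p>

```python
import csv
import io

data = {'feeds': [{'feed_code': 'free', 'feed_name': 'Free'},
                  {'feed_code': 'paid', 'feed_name': 'Paid'},
                  {'feed_code': 'grossing', 'feed_name': 'Grossing'}],
        'code': 200}

buf = io.StringIO()
w = csv.DictWriter(buf, ['code', 'feed_code', 'feed_name'])
w.writeheader()
for feed in data['feeds']:
    # dict(feed, code=...) builds a new row dict with the shared code added
    w.writerow(dict(feed, code=data['code']))

print(buf.getvalue())
```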
| 0 | 2016-08-05T23:39:14Z | [
"python",
"csv",
"dictionary"
] |
Write Nested Dictionary to CSV | 38,798,987 | <p>I am trying to write a nested dictionary to CSV in Python. I looked at the csv.DictWriter documentation on python.org and some of the examples here on stackoverflow but I can't figure out the last part. Here is a representative data set: </p>
<pre><code>data = {u'feeds': [{u'feed_code': u'free', u'feed_name': u'Free'}, {u'feed_code': u'paid', u'feed_name': u'Paid'}, {u'feed_code': u'grossing', u'feed_name': u'Grossing'}], u'code': 200}
ColTitle = ['code','feed_code','feed_name']
with open('test.csv','wb') as f:
w = csv.DictWriter(f, ColTitle)
w.writeheader()
for item in data:
w.writerow({field: data[item]}) ## Part I am stuck on
</code></pre>
<p>This is what I would like to write to my CSV file </p>
<pre><code>code feed_code feed_name
200 free Free
200 paid Paid
200 grossing Grossing
</code></pre>
| 1 | 2016-08-05T23:20:48Z | 38,799,941 | <p>The tricky part about what you want to do is that the dictionaries in the list of <code>u'feeds'</code> in your data structure do not each have the <code>u'code'</code> value. This can be easily remedied by updating each of them, which allows you to write all of them out at one time (although it does change the data structure):</p>
<pre><code>import csv
data = {u'code': 200,
u'feeds': [{u'feed_code': u'free', u'feed_name': u'Free'},
{u'feed_code': u'paid', u'feed_name': u'Paid'},
{u'feed_code': u'grossing', u'feed_name': u'Grossing'}]}
COL_TITLES = ['code', 'feed_code', 'feed_name']
with open('test.csv', 'wb') as f:
w = csv.DictWriter(f, COL_TITLES, delimiter=' ')
w.writeheader()
code = data['code']
for feed in data['feeds']:
feed.update(code=code)
w.writerows(data['feeds'])
</code></pre>
| 0 | 2016-08-06T02:06:14Z | [
"python",
"csv",
"dictionary"
] |
Executing Terminal Command from Python (cURL) | 38,799,007 | <p>I know this has been asked before, but I'm not finding the answer I'm looking for. Here's what I'm trying to do.. and before you respond, <em>full disclosure</em>: this is my first python script.</p>
<p><strong>Big picture:</strong>
Concatenate 2 text strings with clipboard text, and run this concatenated string as a command in Terminal in OSX. </p>
<p>Down the line, I'd like the pick apart the results of the command into a file, but first things first. </p>
<p>My current script has no problems concatenating the strings, and copying the concatenation to the clipboard. I'm not successful in having the terminal execute this command. Here's what I've got:</p>
<hr>
<pre><code>import pyperclip
import os
reqPayload= pyperclip.paste()
fullstring=('curl -HreqPayload:')+reqPayload+(' http://howdy.com/decrypt')
print(fullstring)
pyperclip.copy(fullstring)
os.system(fullstring)
</code></pre>
<hr>
<p>ps. There might be a much smarter way of doing a curl command, so please advise if I should rethink my approach. </p>
<p>Thanks!</p>
| 0 | 2016-08-05T23:22:38Z | 38,799,032 | <p>Use the Python <code>requests</code> library:</p>
<pre><code>import requests
x = requests.post('http://something', json={})
</code></pre>
<p><a href="http://docs.python-requests.org/en/master/" rel="nofollow">http://docs.python-requests.org/en/master/</a></p>
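<p>For the specific <code>curl</code> command in the question, the custom header maps onto a <code>headers</code> dict. Here is a sketch that builds (but does not send) the request, with a stand-in for the clipboard value; the URL and header name are taken from the question, and the POST verb from this answer:</p>

```python
import requests

payload = "clipboard-contents"  # stand-in for pyperclip.paste()

# equivalent of: curl -H 'reqPayload: <payload>' http://howdy.com/decrypt
req = requests.Request(
    "POST",
    "http://howdy.com/decrypt",
    headers={"reqPayload": payload},
).prepare()
print(req.method, req.url)
# Sending it would be: requests.Session().send(req)
```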
| 0 | 2016-08-05T23:25:18Z | [
"python",
"osx",
"curl"
] |
utf-8 encoding and getting a string slice | 38,799,045 | <p>I'm parsing Twitter and need to encode the text, because without encoding I get an exception. But when I use 'utf-8' it not only adds a b prefix to the console output, it also makes it impossible to access parts of the string. What can I do to fix it, or what other encoding should I try?</p>
<p>Here is an example of what happens.</p>
<pre><code>>>> a="newyear"
>>> b=a.encode("utf-8")
>>> a
'newyear'
>>> b
b'newyear'
>>> a[0]
'n'
>>> b[0]
110
</code></pre>
<p>My parser code is the following:</p>
<pre><code>tweets=soup.findAll("p", {"class":"TweetTextSize"})
n=0
for tweet in tweets:
n+=1;
print(n)
a=tweet.text
b=a.encode("utf-8")
print(b) #works fine, but returns bytestring, extra b character,
#and I can't get b[0]
print(b.decode("utf-8")) #doesn't work -
    #UnicodeEncodeError: 'charmap' codec can't encode character '\u2026'
    #the commented-out try section below works, but it replaces "bad" tweets with OPS,
#which I'd rather avoid
# try:
# print(tweet.text)
# except:
# print("OPS")
</code></pre>
<p>So I can handle the exception with try, but I was wondering if there is some other way.</p>
<p>I'm using Python 3.</p>
| 0 | 2016-08-05T23:27:44Z | 38,799,101 | <p>You are confused about when to <code>encode</code> and when to <code>decode</code>.</p>
<p>if you have a bytestring then you <code>decode</code> it into unicode</p>
<pre><code>a = b"a string"
b = a.decode('utf8')
# b is now a str (Unicode text)
</code></pre>
<p>if you have unicode you <code>encode</code> it to an encoded bytestring</p>
<pre><code>a=u"\u00b0C"
b = a.encode('utf8')
# b is now a UTF-8 encoded byte string
</code></pre>
<p>I suspect you are getting a bytestring back from twitter so you probably need</p>
<pre><code>b = a.decode('utf8')
</code></pre>
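<p>A quick Python 3 check of the behaviour described in the question (<code>encode</code> goes from <code>str</code> to <code>bytes</code>, <code>decode</code> goes from <code>bytes</code> to <code>str</code>):</p>

```python
s = "new\u2026year"            # a str; Python 3 strings are already Unicode
b = s.encode("utf-8")          # str -> bytes: this is where the b'...' prefix comes from

assert b[0] == 110             # indexing bytes yields ints, hence the 110 in the question
assert b.decode("utf-8") == s  # bytes -> str restores the original, '\u2026' included
```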
| 1 | 2016-08-05T23:35:14Z | [
"python",
"encoding",
"utf-8",
"beautifulsoup"
] |
In pygame/python, is there a way to detect which keys are pressed earlier or later? | 38,799,175 | <p>For example, if I hold down w then hold down a, is there a way for pygame to tell me which key I held down first? </p>
<p>I need to know this because the character in the game I'm trying to make using pygame doesn't have smooth movements. The example movement code is below.</p>
<p>I first detect the pressed keys and set the direction, then set the xchange and ychange for each direction. </p>
<p>Then I add it to the x and y of the player which is then blit to the screen. </p>
<p>The problem is, if I hold down(s) then hold right(d), I want the character to move down and then move right, but I have to release the down(s) button for that to happen. This is because in my code, the if keys[K_s] check is placed at the bottom of the four directions and evaluated last, which replaces the direction value with down. The movement is smooth, however, if I hold right(d) then down(s) to change direction, for the same reason.</p>
<p>Thanks for the help!</p>
<pre><code> keys = pygame.key.get_pressed()
if keys[K_a] or keys[K_d] or keys[K_w] or keys[K_s]:
if keys[K_d] and keys[K_a]:
direction = "none"
if keys[K_w] and keys[K_s]:
direction = "none"
else:
#if direction == "none":
if keys[K_a]:
direction = "left"
if keys[K_d]:
direction = "right"
if keys[K_w]:
direction = "up"
if keys[K_s]:
direction = "down"
else:
direction = "none"
currentarea.putbackground()
currentarea.putdoor()
if direction == "none":
ychange = 0
xchange = 0
elif direction == "up":
xchange = 0
ychange = -3
elif direction == "down":
xchange = 0
ychange = 3
elif direction == "left":
xchange = -3
ychange = 0
elif direction == "right":
xchange = 3
ychange = 0
</code></pre>
| 0 | 2016-08-05T23:46:31Z | 38,799,498 | <p>If you loop through the event loop you'll receive the events in the order they were created. What you then could do is create a list that queues the keys you press and removes them when you release them. The first element in the list will always be the first key you've pressed and not yet released. </p>
<pre><code>pressed_keys = []
while True:
for event in pygame.event.get():
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_a:
pressed_keys.append("left")
elif event.key == pygame.K_d:
pressed_keys.append("right")
elif event.key == pygame.K_w:
pressed_keys.append("up")
elif event.key == pygame.K_s:
pressed_keys.append("down")
if event.type == pygame.KEYUP:
if event.key == pygame.K_a:
pressed_keys.remove("left")
elif event.key == pygame.K_d:
pressed_keys.remove("right")
elif event.key == pygame.K_w:
pressed_keys.remove("up")
elif event.key == pygame.K_s:
pressed_keys.remove("down")
try:
print(pressed_keys[0]) # Will give IndexError if list is empty
# print(pressed_keys) # Uncomment to see it in action
except IndexError:
pass
</code></pre>
| 1 | 2016-08-06T00:44:09Z | [
"python",
"pygame"
] |
Why are my manual PCA reconstructions not matching python's sklearn's reconstructions? | 38,799,205 | <p>I was trying to check my implementation of PCA to see if I understood it and I tried to do PCA with 12 components on the MNIST data set (which I got using the tensorflow interface that normalized it for me). I obtained the principal components given by sklearn and then made reconstructions as follow:</p>
<pre><code>pca = PCA(n_components=k)
pca = pca.fit(X_train)
X_pca = pca.transform(X_train)
# do manual PCA
U = pca.components_
my_reconstruct = np.dot( U.T , np.dot(U, X_train.T) ).T
</code></pre>
<p>then I used the reconstruction interface given by sklearn to try to reconstruct as follow:</p>
<pre><code>pca = PCA(n_components=k)
pca = pca.fit(X_train)
X_pca = pca.transform(X_train)
X_reconstruct = pca.inverse_transform(X_pca)
</code></pre>
<p>and then checked the error as follow (since the rows are a data point and columns features):</p>
<pre><code>print 'X_recon - X_my_reconstruct', (1.0/X_my_reconstruct.shape[0])*LA.norm(X_my_reconstruct - X_reconstruct)**2
#X_recon - X_my_reconstruct 1.47252586279
</code></pre>
<p>the error as you can see is non-zero and actually quite noticeable. Why is it? How is their reconstruction different from mine?</p>
| 3 | 2016-08-05T23:50:51Z | 38,805,942 | <p>I see a couple of issues:</p>
<ol>
<li><p>The dot product should be <code>X_pca.dot(pca.components_)</code>. <code>PCA</code> factorizes your <code>X_train</code> matrix using SVD:</p>
<p><em>X<sub>train</sub> = U·S·V<sup>T</sup></em>.</p>
<p>Here, <code>pca.components_</code> corresponds to <em>V<sup>T</sup></em> (a <code>(k, n_features)</code> matrix), not <em>U</em> (an <code>(n_datapoints, k)</code> matrix).</p>
<p>The sklearn implementation of PCA is quite readable, and can be found <a href="https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/pca.py#L353" rel="nofollow">here</a>. I also wrote a pure numpy example in <a href="http://stackoverflow.com/a/12273032/1461210">this previous answer</a>.</p></li>
<li><p>Did you center <code>X_train</code> by subtracting the mean value for each column before doing the fitting?</p>
<p>The <code>PCA</code> class automatically centers your data and stores the original mean vector in its <code>.mean_</code> attribute. If the mean vector for your input features was nonzero then you would need to add the mean to your reconstructions, i.e. <code>my_reconstruct += pca.mean_</code>.</p></li>
</ol>
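<p>Both points can be checked with a small synthetic example (a sketch, not the OP's MNIST setup; it relies on <code>inverse_transform</code> doing exactly <code>X_pca.dot(components_) + mean_</code>):</p>

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.rand(50, 10) + 5.0          # deliberately non-zero column means

pca = PCA(n_components=3).fit(X)    # fit() centers the data internally
X_pca = pca.transform(X)

# Manual reconstruction: project back with components_ (V^T) and re-add the mean
manual = X_pca.dot(pca.components_) + pca.mean_

print(np.allclose(manual, pca.inverse_transform(X_pca)))  # True
```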
| 0 | 2016-08-06T15:38:27Z | [
"python",
"machine-learning",
"scipy",
"scikit-learn",
"pca"
] |
Python for loop appending to every key in dictionary | 38,799,212 | <p>I'm iterating over a list of tuples and a list of strings. The strings are identifiers for the items in the list. I have a dictionary that has the strings identifiers as keys and has an initially empty list for each value. I want to append something from the tuple list to each key. A simplified version of what I'm doing is:</p>
<pre><code>tupleList = [("A","a"),("B","b")]
stringList = ["Alpha", "Beta"]
dictionary = dict.fromkeys(stringList, []) # dictionary = {'Alpha': [], 'Beta': []}
for (uppercase, lowercase), string in zip(tupleList, stringList):
dictionary[string].append(lowercase)
</code></pre>
<p>I would expect this to give <code>dictionary = {'Alpha': ['a'], 'Beta': ['b']}</code>, but instead I find that <code>{'Alpha': ['a', 'b'], 'Beta': ['a', 'b']}</code>. Does anyone have any idea what I'm doing wrong?</p>
| 3 | 2016-08-05T23:52:17Z | 38,799,301 | <p>The problem is that when you call <code>dict.fromkeys</code> and pass it a list as the default value, Python uses the <em>same</em> list object for every key. Lists are mutable, so a change made through one key is visible through every other key that references that list. To get around this, call <code>dict.fromkeys</code> without a default, which sets every value to <code>None</code>; then check for <code>None</code>, create a fresh list for the first item, and append on subsequent items:</p>
<pre><code>tupleList = [("A","a"),("B","b")]
stringList = ["Alpha", "Beta"]
dictionary = dict.fromkeys(stringList) # dictionary = {'Alpha': None, 'Beta': None}
for (uppercase, lowercase), string in zip(tupleList, stringList):
#print(id(dictionary[string])) uncomment this with your previous code
if dictionary[string] is None:
dictionary[string] = [lowercase]
else:
dictionary[string].append(lowercase)
</code></pre>
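<p>Alternatively, a dict comprehension gives each key its <em>own</em> empty list up front, avoiding the shared-reference problem entirely:</p>

```python
tupleList = [("A", "a"), ("B", "b")]
stringList = ["Alpha", "Beta"]

# Unlike dict.fromkeys(stringList, []), each key gets a distinct list object
dictionary = {s: [] for s in stringList}
for (uppercase, lowercase), string in zip(tupleList, stringList):
    dictionary[string].append(lowercase)

print(dictionary)  # {'Alpha': ['a'], 'Beta': ['b']}
```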
| 2 | 2016-08-06T00:07:49Z | [
"python",
"list",
"dictionary",
"iteration"
] |
Python for loop appending to every key in dictionary | 38,799,212 | <p>I'm iterating over a list of tuples and a list of strings. The strings are identifiers for the items in the list. I have a dictionary that has the strings identifiers as keys and has an initially empty list for each value. I want to append something from the tuple list to each key. A simplified version of what I'm doing is:</p>
<pre><code>tupleList = [("A","a"),("B","b")]
stringList = ["Alpha", "Beta"]
dictionary = dict.fromkeys(stringList, []) # dictionary = {'Alpha': [], 'Beta': []}
for (uppercase, lowercase), string in zip(tupleList, stringList):
dictionary[string].append(lowercase)
</code></pre>
<p>I would expect this to give <code>dictionary = {'Alpha': ['a'], 'Beta': ['b']}</code>, but instead I find that <code>{'Alpha': ['a', 'b'], 'Beta': ['a', 'b']}</code>. Does anyone have any idea what I'm doing wrong?</p>
| 3 | 2016-08-05T23:52:17Z | 38,799,314 | <p>Your problem is that you share the list between the two keys by reference.</p>
<p>What happens is that <code>dict.fromkeys</code> doesn't create a new list for each key, but gives the reference to the same list to all the keys. The rest of your code looks correct :)</p>
<p>Instead, you should use a <a href="https://docs.python.org/2/library/collections.html#collections.defaultdict" rel="nofollow">defaultdict</a>: it is a dict that creates new values on first access and retrieves them afterwards, removing the need for an if/else check when inserting an item. It's really useful in these kinds of situations:</p>
<pre><code>from collections import defaultdict
tupleList = [("A","a"),("B","b")]
stringList = ["Alpha", "Beta"]
dictionary = defaultdict(list) # Changed line
for (uppercase, lowercase), string in zip(tupleList, stringList):
dictionary[string].append(lowercase)
</code></pre>
| 4 | 2016-08-06T00:09:18Z | [
"python",
"list",
"dictionary",
"iteration"
] |
Parse string to identify kwargs and args | 38,799,223 | <p>I am new to Python and looking for an elegant way to do the job below.</p>
<p>I have a string say:</p>
<pre><code>s = u'(éå="AAA", last = "BBB", abcd)'
</code></pre>
<p>I am thinking of a function which can parse the above string and give output in the following format.</p>
<pre><code>arg, kwarg = foo(s)
def foo():
# the implementation I dont know.
</code></pre>
<p>How shall I perform this in python?</p>
| -3 | 2016-08-05T23:54:01Z | 38,799,275 | <pre><code>def foo(s):
    data = {"kwargs": [], "args": []}
    for item in s.split(','):            # split on commas; iterating a string yields characters
        item = item.strip()
        if "=" in item: data['kwargs'].append(item)
        else: data['args'].append(item)
return data
s = u'(éå="AAA", last = "BBB", abcd)'
s = s[1:-1] # get rid of the parentheses
print foo(s)
</code></pre>
| 0 | 2016-08-06T00:02:56Z | [
"python",
"string",
"parsing"
] |
Parse string to identify kwargs and args | 38,799,223 | <p>I am new to Python and looking for an elegant way to do the job below.</p>
<p>I have a string say:</p>
<pre><code>s = u'(éå="AAA", last = "BBB", abcd)'
</code></pre>
<p>I am thinking of a function which can parse the above string and give output in the following format.</p>
<pre><code>arg, kwarg = foo(s)
def foo():
# the implementation I dont know.
</code></pre>
<p>How shall I perform this in python?</p>
| -3 | 2016-08-05T23:54:01Z | 38,799,618 | <p>A nice way to parse a string that follows some grammar rules is the 3rd party <a href="http://pyparsing.wikispaces.com/" rel="nofollow">pyparsing</a> library. Lacking a formal grammar definition of the allowed user input, this is kept very generic:</p>
<pre><code>#coding:utf8
from pyparsing import *
# Names for symbols
_lparen = Suppress('(')
_rparen = Suppress(')')
_quote = Suppress('"')
_eq = Suppress('=')
# Parsing grammar definition
data = (_lparen + # left parenthesis
delimitedList( # Zero or more comma-separated items
Group( # Group the contained unsuppressed tokens in a list
Regex(u'[^=,)\s]+') + # Grab everything up to an equal, comma, endparen or whitespace as a token
Optional( # Optionally...
_eq + # match an =
_quote + # a quote
Regex(u'[^"]*') + # Grab everything up to another quote as a token
_quote) # a quote
) # EndGroup - will have one or two items.
) + # EndList
_rparen) # right parenthesis
def process(s):
items = data.parseString(s).asList()
args = [i[0] for i in items if len(i) == 1]
kwargs = {i[0]:i[1] for i in items if len(i) == 2}
return args,kwargs
s = u'(éå="AAA", last = "BBB", abcd)'
args,kwargs = process(s)
for a in args:
print a
for k,v in kwargs.items():
print k,v
</code></pre>
<p>Output:</p>
<pre><code>abcd
éå AAA
last BBB
</code></pre>
| 2 | 2016-08-06T01:04:31Z | [
"python",
"string",
"parsing"
] |
Change default Django REST Framework home page title | 38,799,315 | <p>I am following the <a href="http://www.django-rest-framework.org/tutorial/quickstart/" rel="nofollow">article</a> to set up a new Django REST framework project. I got it working, but I would like to change the default home page title from <code>Django REST Framework v3.3.2</code> to my own. I am sure it's just a setting somewhere, but it didn't seem obvious which one; any insights will be appreciated. Thanks. </p>
<p><strong>UPDATE</strong>
Based on the hints from @macro and this <a href="http://stackoverflow.com/questions/25991081/cant-modify-django-rest-framework-base-html-file">article</a>, I got it to work with <code>api.html</code>. Thanks. </p>
| 0 | 2016-08-06T00:09:25Z | 38,799,809 | <p>From <a href="https://github.com/tomchristie/django-rest-framework/blob/3.3.2/rest_framework/templates/rest_framework/base.html#L38" rel="nofollow">the code</a>, it looks like it's actually not a setting. You'll need to override the 'branding' block in the base template with your own content.</p>
<p>Basically you will need to make a copy of Django REST Framework's 'base.html' template file in your project's <a href="https://docs.djangoproject.com/en/1.10/ref/settings/#dirs" rel="nofollow">template directory</a> with the same relative path, which will cause it to be loaded instead of DRF's template, and replace the content of that <a href="https://docs.djangoproject.com/es/1.10/ref/templates/language/#template-inheritance" rel="nofollow">block template tag</a> with your branding.</p>
| 2 | 2016-08-06T01:40:44Z | [
"python",
"django",
"django-rest-framework"
] |
buffering of process output causing truncation? | 38,799,347 | <p>I have a sample file with 100 lines which I am reading using subprocess with <code>cat</code>. However, the output in a queue is always truncated. I suspect it might be due to <code>cat</code> buffering its output because it detects a pipe.</p>
<pre><code>p = subprocess.Popen("cat file.txt",
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
stdin=subprocess.PIPE,
shell=True,
bufsize=0)
</code></pre>
<p>I am using separate threads to read from the stdout and stderr pipes of <code>cat</code>:</p>
<pre><code>def StdOutThread():
while not p.stdout.closed and running:
line = ""
while not line or line[-1] != "\n":
r = p.stdout.read(1)
if not r:
break
line += r
pending_line["line"] = line
if line and line[-1] == "\n":
line = line[:-1]
if line:
queue.put(("out", line))
</code></pre>
<p>These threads are started and dump what they read into a queue. The main thread reads from this queue while <code>cat</code> is alive.</p>
<pre><code>with CancelFunction(p.kill):
try:
stdout_thread = threading.Thread(target=StdOutThread)
stdout_thread.start()
while p.poll() is None:
ReadFromQueue()
while not queue.empty():
ReadFromQueue()
finally:
running = False
stdout_thread.join()
</code></pre>
<p>I have considered using pexpect to overcome this issue but at the same time also want to distinguish stdout and stderr which does not seem possible with pexpect. Help would be much appreciated.</p>
| 0 | 2016-08-06T00:16:01Z | 38,799,637 | <p>I'm sure your main thread is exiting the try block before all of the output from <code>cat</code> has been read and placed on the queue.</p>
<p>Note that <code>cat</code> can exit even if you haven't read all of its output.
Consider this sequence of events:</p>
<ol>
<li><code>cat</code> writes out its last line</li>
<li><code>cat</code> exits</li>
<li>Before the reader threads have a chance to read the last bit of output from <code>cat</code>, the main thread detects that <code>cat</code> has exited (via <code>p.poll()</code>)</li>
<li>The main thread then exits the try block and sets <code>running</code> to false</li>
<li>The reader threads exit because <code>running</code> is false, but before that
last input has been read.</li>
</ol>
<p>Below is a simpler approach which uses <em>sentinel</em> values in the queue
to inform the main thread that a reader thread has exited.</p>
<p>When <code>cat</code> exits, each reader thread will eventually reach EOF on
the pipe it is monitoring. When that happens, it places None onto the queue
to inform the main thread that it is finished. Once both reader threads have
finished, the main thread can safely stop monitoring the queue and
join the threads.</p>
<pre><code>import threading
import subprocess
import Queue
def pipe_thread(queue, name, handle):
print "in handlehandle"
for line in handle:
if line[-1] == "\n":
line = line[:-1]
queue.put( (name, line) )
queue.put(None)
def main():
p = subprocess.Popen("cat file.txt",
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
stdin=subprocess.PIPE,
shell=True,
bufsize=0)
queue = Queue.Queue()
t1 = threading.Thread(target = pipe_thread,
args = [queue, "stdout", p.stdout])
t2 = threading.Thread(target = pipe_thread,
args = [queue, "stderr", p.stderr])
t1.start()
t2.start()
alive = 2
count = 0
while alive > 0:
item = queue.get()
        if item is None:
alive = alive - 1
else:
(which, line) = item
count += 1
print count, "got from", which, ":", line
print "joining..."
t1.join()
t2.join()
main()
</code></pre>
| 1 | 2016-08-06T01:06:42Z | [
"python",
"multithreading",
"multiprocessing",
"cat"
] |
Printing indices within two dimensional array relative to specific array coordinates? | 38,799,430 | <p>Let's say I have an array consisting of multiple lists of <code>.</code>'s, that represent something like a world map.
<code>world_array = ['.' * 50 for _ in xrange(50)]</code> </p>
<p>Inside of this array, at index <code>(0, 0)</code>, a player variable exists -- a letter <code>p</code>. </p>
<p>How can I go about displaying specific parts of this array to the user based on where their current location is inside of the map? For instance, if I want to display the tile that the player is on, plus the 5-10 tiles around him / her, how could I go about doing that? </p>
<pre><code>P . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
</code></pre>
<p>Also, how do I go about print the array in a fashion that removes the brackets and apostrophes associated with lists, like shown above? </p>
<p>Thanks for any answers. </p>
| 0 | 2016-08-06T00:31:59Z | 38,799,505 | <p>Break down your problem. Take a 2D array <code>array[x][y]</code>. If you're wanting to display the tiles up to 5 positions around the player then you must iterate over and print all of those tiles. So, starting from 5 before the player's position and ending 5 afterwards. Logically, it would go as follows (x is first index of player's position, y is second index):</p>
<pre><code>for i = (x - 5) to (x + 5):
for j = (y - 5) to (y + 5):
print value of array[i][j]
end
end
</code></pre>
<p>Now, this isn't complete because, as you may have noticed, it has the potential to go out of bounds. If x - 5 < 0 you'll want to start at 0 instead, and if x + 5 > the array's first dimension's size, you'll want to stop at that size. There are built-in functions for this - max and min. Your logic then ends up like this:</p>
<pre><code>for i = max(0, (x - 5)) to min(1st dimension's size, (x + 5)):
for j = max(0, (y - 5)) to min(2nd dimension's size, (y + 5)):
print value of array[i][j]
end
end
</code></pre>
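<p>For reference, here is a minimal runnable version of that pseudocode (Python 3 syntax; the grid size, player position, and radius below are just illustrative assumptions):</p>

```python
def window_lines(grid, x, y, radius=5):
    # Clamp the window to the grid bounds with max/min, as described above.
    rows, cols = len(grid), len(grid[0])
    lines = []
    for i in range(max(0, x - radius), min(rows, x + radius + 1)):
        row = grid[i][max(0, y - radius):min(cols, y + radius + 1)]
        # Joining with spaces also answers the formatting part of the question.
        lines.append(' '.join(row))
    return lines

world = [['.'] * 10 for _ in range(10)]
world[0][0] = 'P'  # player at the top-left corner
for line in window_lines(world, 0, 0):
    print(line)
```

<p>With the player at (0, 0), only the in-bounds quarter of the 11x11 neighbourhood survives the clamping, so this prints a 6x6 window whose first line is <code>P . . . . .</code>.</p>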
| 0 | 2016-08-06T00:45:12Z | [
"python",
"arrays"
] |
Printing indices within two dimensional array relative to specific array coordinates? | 38,799,430 | <p>Let's say I have an array consisting of multiple lists of <code>.</code>'s, that represent something like a world map.
<code>world_array = ['.' * 50 for _ in xrange(50)]</code> </p>
<p>Inside of this array, at index <code>(0, 0)</code>, a player variable exists -- a letter <code>p</code>. </p>
<p>How can I go about displaying specific parts of this array to the user based on where their current location is inside of the map? For instance, if I want to display the tile that the player is on, plus the 5-10 tiles around him / her, how could I go about doing that? </p>
<pre><code>P . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
</code></pre>
<p>Also, how do I go about print the array in a fashion that removes the brackets and apostrophes associated with lists, like shown above? </p>
<p>Thanks for any answers. </p>
| 0 | 2016-08-06T00:31:59Z | 38,799,626 | <p>First, be careful: if you want to modify your map, you should not use strings for the rows because they're immutable. You won't be able to modify your map - unless you rewrite each line entirely.</p>
<p>So I suggest you use this: (A 2D array of chars)</p>
<pre><code>world_map = [['.']*n_cols for _ in xrange(n_rows)]
</code></pre>
<p>To print the array as it is:</p>
<pre><code>for row in world_map:
print ''.join(row)
</code></pre>
<p>Now the exploration. If you want to hide the map with dots, then it's not the dots you should store in the 2D array, it is the content of the map.</p>
<p>So let's say I create this map (and these variables):</p>
<pre><code>n_rows = 3
n_cols = 5
world_array = [
['0', '0', '1', '1', '1'],
['0', '2', '0', '1', '0'],
['1', '1', '0', '0', '0']
]
exp_radius = 1 #the player can see 1 square from him
x,y = 1,0 #position of the player
</code></pre>
<p>To display the whole map with a visible circle around the player - at (x,y) - and dots elsewhere, it's like this:</p>
<pre><code>for r in xrange(n_rows):
row=''
for c in xrange(n_cols):
        if c==x and r==y:
row += 'P'
elif abs(c-x)<=exp_radius and abs(r-y)<=exp_radius:
row += world_array[r][c]
else:
row += '.'
print row
</code></pre>
<p>This would give you:</p>
<pre><code>0P1..
020..
.....
</code></pre>
<p>Note that if you prefer a diamond shape rather than a square:</p>
<pre><code>0P1..
.2...
.....
</code></pre>
<p>So for the sake of clarity:</p>
<pre><code>..0..
.000.
00P00
.000.
..0..
</code></pre>
<p>You should replace the condition:</p>
<pre><code>elif abs(c-x)<=exp_radius and abs(r-y)<=exp_radius:
</code></pre>
<p>by:</p>
<pre><code>elif abs(c-x)+abs(r-y)<=exp_radius:
</code></pre>
<p>You've got all the clues now, have fun! ;)</p>
<p>EDIT :</p>
<p>If you want to display only a window of given width and height around the player, just modify the ranges of the for loops like this:</p>
<pre><code>width = 5 # half of the width of the displayed map
height = 3 # half of the height of the displayed map
for r in xrange(max(0,y-height), min(n_rows, y+height)):
row=''
for c in xrange(max(0,x-width), min(n_cols, x+width)):
        if c==x and r==y:
row += 'P'
elif abs(c-x)<=exp_radius and abs(r-y)<=exp_radius:
row += world_array[r][c]
else:
row += '.'
print row
</code></pre>
<p>So the lines and columns printed will go from the position (x-width, y-height) for top left corner to the position (x+width, y+height) for the bottom right corner and crop if it goes beyond the map. The displayed area is therefore 2*width * 2*height if not cropped.</p>
| 1 | 2016-08-06T01:05:28Z | [
"python",
"arrays"
] |
how to replace the uncommon characters between two strings with 'x'? | 38,799,440 | <p>I have a dictionary of string counts: <code>{"abcd12efgh":1,"abcd23efgh":1,"abcd567efgh":1,"abcdkljefgh":1, "dog":1, "cat":1}</code></p>
<p>I need to group together similar strings and aggregate the counts to get something like: <code>{"abcdxxxefgh":4,"dog":1,"cat":1}.</code></p>
<p>Which is the most elegant way to accomplish this in Python?</p>
| -2 | 2016-08-06T00:34:55Z | 38,799,572 | <p>The answer depends on how you decide two keys match; you can have a separate function decide that. I have written one that might be what you are looking for: it checks whether the key has a certain prefix and suffix. You can add more constraints, e.g. that the substring in between has a certain length or matches another pattern.</p>
<pre><code>def transform(key):
prefix, suffix = 'abcd', 'efgh'
transformed = key
if key.startswith(prefix) and key.endswith(suffix):
transformed = prefix + 'X' + suffix
return transformed
new_d = {}
for k in d:
new_d[transform(k)] = new_d.get(transform(k), 0) + d[k]
#{'abcdXefgh': 4, 'cat': 1, 'dog': 1}
</code></pre>
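<p>For what it's worth, the accumulation loop can also be written with <code>collections.Counter</code>, which supplies the missing-key default for you (the transform function is repeated here, with the same assumed prefix/suffix rule, so the sketch is self-contained):</p>

```python
from collections import Counter

def transform(key, prefix='abcd', suffix='efgh'):
    # Same matching rule as above: collapse the middle of keys that
    # share the given prefix and suffix.
    if key.startswith(prefix) and key.endswith(suffix):
        return prefix + 'X' + suffix
    return key

d = {"abcd12efgh": 1, "abcd23efgh": 1, "abcd567efgh": 1,
     "abcdkljefgh": 1, "dog": 1, "cat": 1}

new_d = Counter()
for k, count in d.items():
    new_d[transform(k)] += count

print(dict(new_d))  # {'abcdXefgh': 4, 'dog': 1, 'cat': 1}
```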
| 0 | 2016-08-06T00:56:27Z | [
"python",
"string",
"pattern-matching"
] |
Flip left-right Plotly Horizontal Histogram | 38,799,469 | <p>I've made a horizontal histogram as shown <a href="https://plot.ly/python/histograms/#horizontal-histogram" rel="nofollow">here</a>. Is it possible to flip the x-axis, such that the base of the bars is on the right, and the bars extend out to the left, like the blue histogram <a href="http://blogs.sas.com/content/graphicallyspeaking/files/2013/11/MirrorHistogramHorz2.png" rel="nofollow">here</a>. </p>
| 0 | 2016-08-06T00:40:43Z | 38,800,469 | <p>Look at the xanchor (options are 'left', 'right' 'center') of layout.</p>
<pre><code>https://plot.ly/python/reference/#bar
</code></pre>
<p>By setting the xanchor on a horizontal bar chart to right, you should be able to get that effect.</p>
| 0 | 2016-08-06T04:12:47Z | [
"python",
"plotly"
] |
How to do a search using sqlalchemy from chinese character Column? | 38,799,483 | <p>code </p>
<pre><code>result=Minicomputer.query.filter_by(u'名称'='CC670a').first()
</code></pre>
<p>error</p>
<pre><code>Traceback (most recent call last):
File "manage.py", line 15, in <module>
app = create_app(os.getenv('FLASK_CONFIG') or 'default')
File "/mnt/hgfs/python/flask/Project/__init__.py", line 34, in create_app
from .main import main as main_blueprint
File "/mnt/hgfs/python/flask/Project/main/__init__.py", line 3, in <module>
from . import views, errors
File "/mnt/hgfs/python/flask/Project/main/views.py", line 95
    result=Minicomputer.query.filter_by(u'名称'='CC670a').first()
SyntaxError: keyword can't be an expression
</code></pre>
<p><strong>minicomputer</strong> is a table with a Chinese-character column name</p>
<pre><code>engine = create_engine('mysql://root:1qaz2wsx@localhost/chhai?charset=utf8', convert_unicode=True, echo=False)
Base = declarative_base()
Base.metadata.reflect(engine)
db_session = scoped_session(sessionmaker(bind=engine))
Base.query = db_session.query_property()
class Storage(Base):
__table__ = Base.metadata.tables['storage']
def __repr__(self):
return '<Storage %r>' % self.Storage_Name
class Minicomputer(Base):
__table__ = Base.metadata.tables['minicomputer']
def __repr__(self):
        name = u'名称'
return '<Minicomputer %r>' % self.ID
</code></pre>
| 0 | 2016-08-06T00:42:09Z | 38,799,749 | <p><code>filter_by</code> is a function taking in keyword arguments. <code>filter</code> is a function that takes in an expression. So, either</p>
<pre><code>Minicomputer.query.filter_by(名称='CC670a')
</code></pre>
<p>or</p>
<pre><code>Minicomputer.query.filter(Minicomputer.名称 == 'CC670a')
</code></pre>
<p>You'll need to be on Python 3 for this to work, by the way, since Python 2 does not allow non-ASCII identifiers.</p>
| 0 | 2016-08-06T01:30:30Z | [
"python",
"flask",
"sqlalchemy"
] |
How to do a search using sqlalchemy from chinese character Column? | 38,799,483 | <p>code </p>
<pre><code>result=Minicomputer.query.filter_by(u'名称'='CC670a').first()
</code></pre>
<p>error</p>
<pre><code>Traceback (most recent call last):
File "manage.py", line 15, in <module>
app = create_app(os.getenv('FLASK_CONFIG') or 'default')
File "/mnt/hgfs/python/flask/Project/__init__.py", line 34, in create_app
from .main import main as main_blueprint
File "/mnt/hgfs/python/flask/Project/main/__init__.py", line 3, in <module>
from . import views, errors
File "/mnt/hgfs/python/flask/Project/main/views.py", line 95
    result=Minicomputer.query.filter_by(u'名称'='CC670a').first()
SyntaxError: keyword can't be an expression
</code></pre>
<p><strong>minicomputer</strong> is a table with a Chinese-character column name</p>
<pre><code>engine = create_engine('mysql://root:1qaz2wsx@localhost/chhai?charset=utf8', convert_unicode=True, echo=False)
Base = declarative_base()
Base.metadata.reflect(engine)
db_session = scoped_session(sessionmaker(bind=engine))
Base.query = db_session.query_property()
class Storage(Base):
__table__ = Base.metadata.tables['storage']
def __repr__(self):
return '<Storage %r>' % self.Storage_Name
class Minicomputer(Base):
__table__ = Base.metadata.tables['minicomputer']
def __repr__(self):
        name = u'名称'
return '<Minicomputer %r>' % self.ID
</code></pre>
| 0 | 2016-08-06T00:42:09Z | 38,800,311 | <p>Doing a column mapping solved this error:</p>
<pre><code>class Minicomputer(Base):
__table__ = Base.metadata.tables['minicomputer']
    name = __table__.c[u'名称']
def __repr__(self):
return '<Minicomputer %r>' % self.name
</code></pre>
| 0 | 2016-08-06T03:35:38Z | [
"python",
"flask",
"sqlalchemy"
] |
Find parent with certain combination of child rows - SQLite with Python | 38,799,560 | <p>There are several parts to this question. I am working with sqlite3 in Python 2.7, but I am less concerned with the exact syntax, and more with the methods I need to use. I think the best way to ask this question is to describe my current database design, and what I am trying to accomplish. I am new to databases in general, so I apologize if I don't always use correct nomenclature.</p>
<p>I am modeling refrigeration systems (using Modelica--not really important to know), and I am using the database to manage input data, results data, and models used for that data.</p>
<p>My top parent table is <code>Model</code>, which contains the columns: </p>
<pre><code>id, name, version, date_created
</code></pre>
<p>My child table under <code>Model</code> is called <code>Design</code>. It is used to create a unique id for each combination of design input parameters and the model used. the columns it contains are: </p>
<pre><code>id, model_id, date_created
</code></pre>
<p>I then have two child tables under <code>Design</code>, one called <code>Input</code>, and the other called <code>Result</code>. We can just look at Input for now, since one example should be enough. The columns for input are: </p>
<pre><code>id, value, design_id, parameter_id, component_id
</code></pre>
<p><code>parameter_id</code> and <code>component_id</code> are foreign keys to their own tables. The <code>Parameter</code> table has the following columns:</p>
<pre><code>id, name, units
</code></pre>
<p>Some example rows for <code>Parameter</code> under name are: length, width, speed, temperature, pressure (there are many dozens more). The Component table has the following columns: </p>
<pre><code>id, name
</code></pre>
<p>Some example rows for <code>Component</code> under name are: compressor, heat_exchanger, valve.</p>
<p>Ultimately, in my program I want to search the database for a specific design. I want to be able to search a specific design to be able to grab specific results for that design, or to know whether or not a model simulation with that design has already been run previously, to avoid re-running the same data point. </p>
<p>I also want to be able to grab all the parameters for a given design, and insert it into a class I have created in Python, which is then used to provide inputs to my models. In case it helps for solving the problem, the classes I have created are based on the components. So, for example, I have a compressor class, with attributes like compressor.speed, compressor.stroke, compressor.piston_size. Each of these attributes should have their own row in the Parameter table.</p>
<p>So, how would I query this database efficiently to find if there is a design that matches a long list (let's assume 100+) of parameters with specific values? Just as a side note, my friend helped me design this database. He knows databases, but not my application super well. It is possible that I designed it poorly for what I want to accomplish.</p>
<p>Here is a simple picture trying to map a certain combination of parameters with certain values to a design_id, where I have taken out component_id for simplicity:</p>
<p><a href="http://i.stack.imgur.com/M33DW.png" rel="nofollow">Picture of simplified tables</a></p>
| 0 | 2016-08-06T00:54:52Z | 38,800,413 | <p>Simply join the necessary tables. Your schema properly reflects normalization (separating tables into logical groupings) and can scale for one-to-many relationships. Specifically, to answer your question --<em>So, how would I query this database efficiently to find if there is a design that matches a long list (let's assume 100+) of parameters with specific values?</em>-- consider below approaches:</p>
<p><strong>Inner Join with Where Clause</strong> </p>
<p>For a handful of parameters, use an inner join with a <code>WHERE...IN()</code> clause. Below returns <em>design</em> fields joined to the <em>input</em> and <em>parameters</em> tables, filtered for specific parameter names, which Python can pass in as parameterized values, even iteratively in a loop:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT d.id, d.model_id, d.date_created
FROM design d
INNER JOIN input i ON d.id = i.design_id
INNER JOIN parameters p ON p.id = i.parameter_id
WHERE p.name IN ('param1', 'param2', 'param3', 'param4', 'param5', ...)
</code></pre>
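<p>From Python, the <code>IN (...)</code> placeholder list can be built to match however many parameter names you have. Below is a self-contained sketch against an in-memory SQLite database (the inserted rows are made-up sample data, not from your dataset):</p>

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
# Minimal versions of the tables described in the question.
cur.executescript("""
    CREATE TABLE design (id INTEGER PRIMARY KEY, model_id INTEGER, date_created TEXT);
    CREATE TABLE parameters (id INTEGER PRIMARY KEY, name TEXT, units TEXT);
    CREATE TABLE input (id INTEGER PRIMARY KEY, value REAL,
                        design_id INTEGER, parameter_id INTEGER, component_id INTEGER);
    INSERT INTO design VALUES (1, 10, '2016-08-06');
    INSERT INTO parameters VALUES (1, 'length', 'm'), (2, 'speed', 'rpm');
    INSERT INTO input VALUES (1, 0.5, 1, 1, NULL), (2, 3600, 1, 2, NULL);
""")

params = ['length', 'speed']                 # the parameter names to match on
placeholders = ', '.join('?' * len(params))  # builds '?, ?'
sql = ("SELECT DISTINCT d.id, d.model_id, d.date_created "
       "FROM design d "
       "INNER JOIN input i ON d.id = i.design_id "
       "INNER JOIN parameters p ON p.id = i.parameter_id "
       "WHERE p.name IN ({})".format(placeholders))
rows = cur.execute(sql, params).fetchall()
print(rows)  # [(1, 10, '2016-08-06')]
```

<p>Building the placeholder string (rather than interpolating the values themselves) keeps the query parameterized, so you avoid quoting and SQL-injection issues.</p>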
<p><strong>Inner Join with Temp Table</strong></p>
<p>Should the list run to 100+ values, consider a temp table that filters the <em>parameters</em> table down to the specific parameter values:</p>
<pre><code># CREATE EMPTY TABLE (SAME STRUCTURE AS parameters)
sql = "CREATE TABLE tempparams AS SELECT id, name, units FROM parameters WHERE 0;"
cur.execute(sql)
db.commit()
# ITERATIVELY APPEND TO TEMP
for i in paramslist: # LIST OF 100+ ITEMS
sql = "INSERT INTO tempparams (id, name, units) \
SELECT p.id, p.name, p.units \
FROM parameters p \
WHERE p.name = ?;"
    cur.execute(sql, (i,)) # CURSOR OBJECT COMMAND PASSING PARAM AS A 1-TUPLE
db.commit() # DB OBJECT COMMIT ACTION
</code></pre>
<p>Then, join main <em>design</em> and <em>input</em> tables with new temp table holding specific parameters:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT d.id, d.model_id, d.date_created
FROM design d
INNER JOIN input i ON d.id = i.design_id
INNER JOIN tempparams t ON t.id = i.parameter_id
</code></pre>
<p>The same process works with the <em>components</em> table as well.</p>
<p>*Moved picture to question section</p>
| 1 | 2016-08-06T04:01:10Z | [
"python",
"sql",
"database",
"sqlite"
] |
Can I asynchronously duplicate a webapp2.RequestHandler Request to a different url? | 38,799,566 | <p>For a percentage of production traffic, I want to <strong>duplicate</strong> the received request to a different version of my application. This needs to happen asynchronously so I don't double service time to the client.</p>
<p>The reason for doing this is so I can compare the responses generated by the prod version and a production candidate version. If their results are appropriately similar, I can be confident that the new version hasn't broken anything. (If I've made a functional change to the application, I'd filter out the necessary part of the response from this comparison.)</p>
<p>So I'm looking for an equivalent to:</p>
<pre><code>class Foo(webapp2.RequestHandler):
def post(self):
handle = make_async_call_to('http://other_service_endpoint.com/', self.request)
# process the user's request in the usual way
test_response = handle.get_response()
# compare the locally-prepared response and the remote one, and log
# the diffs
# return the locally-prepared response to the caller
</code></pre>
<p><strong><em>UPDATE</em></strong>
google.appengine.api.urlfetch was suggested as a potential solution to my problem, but it's synchronous <em>in the dev_appserver, though it behaves the way I wanted in production</em> (the request doesn't go out until get_response() is called, and it blocks):</p>
<pre><code> start_time = time.time()
rpcs = []
print 'creating rpcs:'
for _ in xrange(3):
rpcs.append(urlfetch.create_rpc())
print time.time() - start_time
print 'making fetch calls:'
for rpc in rpcs:
urlfetch.make_fetch_call(rpc, 'http://httpbin.org/delay/3')
print time.time() - start_time
print 'getting results:'
for rpc in rpcs:
rpc.get_result()
print time.time() - start_time
creating rpcs:
9.51290130615e-05
0.000154972076416
0.000189065933228
making fetch calls:
0.00029993057251
0.000356912612915
0.000473976135254
getting results:
3.15417003632
6.31326603889
9.46627306938
</code></pre>
<p><strong><em>UPDATE2</em></strong></p>
<p>So, after playing with some other options, I found a way to make completely non-blocking requests:</p>
<pre><code>start_time = time.time()
rpcs = []
logging.info('creating rpcs:')
for i in xrange(10):
rpc = urlfetch.create_rpc(deadline=30.0)
url = 'http://httpbin.org/delay/{}'.format(i)
urlfetch.make_fetch_call(rpc, url)
rpc.callback = create_callback(rpc, url)
rpcs.append(rpc)
logging.info(time.time() - start_time)
logging.info('getting results:')
while rpcs:
rpc = apiproxy_stub_map.UserRPC.wait_any(rpcs)
rpcs.remove(rpc)
logging.info(time.time() - start_time)
</code></pre>
<p>...but the important point to note is that <em>none of the async fetch options in urlfetch work in the dev_appserver.</em> Having discovered this, I went back to try @DanCornilescu's solution and found that it only works properly in production, but not in the dev_appserver.</p>
| 1 | 2016-08-06T00:55:39Z | 38,800,284 | <p>The URL Fetch service supports asynchronous requests. From <a href="https://cloud.google.com/appengine/docs/python/issue-requests#issuing_an_asynchronous_request" rel="nofollow">Issuing an asynchronous request</a>:</p>
<blockquote>
<p>HTTP(S) requests are synchronous by default. To issue an asynchronous
request, your application must:</p>
<ol>
<li>Create a new RPC object using <a href="https://cloud.google.com/appengine/docs/python/refdocs/google.appengine.api.urlfetch#google.appengine.api.urlfetch.create_rpc" rel="nofollow">urlfetch.create_rpc()</a>. This object represents your asynchronous call in subsequent method calls.</li>
<li>Call <a href="https://cloud.google.com/appengine/docs/python/refdocs/google.appengine.api.urlfetch#google.appengine.api.urlfetch.make_fetch_call" rel="nofollow">urlfetch.make_fetch_call()</a> to make the request. This method takes your RPC object and the request target's URL as parameters.</li>
<li>Call the RPC object's <a href="https://cloud.google.com/appengine/docs/python/refdocs/google.appengine.api.urlfetch#google.appengine.api.urlfetch.get_result" rel="nofollow">get_result()</a> method. This method returns the result object if the request is successful, and raises an exception if
an error occurred during the request.</li>
</ol>
<p>The following snippets demonstrate how to make a basic asynchronous
request from a Python application. First, import the urlfetch library
from the App Engine SDK:</p>
<pre><code>from google.appengine.api import urlfetch
</code></pre>
<p>Next, use urlfetch to make the asynchronous request:</p>
<pre><code>rpc = urlfetch.create_rpc()
urlfetch.make_fetch_call(rpc, "http://www.google.com/")
# ... do other things ...
try:
result = rpc.get_result()
if result.status_code == 200:
text = result.content
self.response.write(text)
else:
self.response.status_code = result.status_code
logging.error("Error making RPC request")
except urlfetch.DownloadError:
logging.error("Error fetching URL0")
</code></pre>
</blockquote>
<p><strong>Note:</strong> As per Sniggerfardimungus's experiment mentioned in the question's update the async calls might not work as expected on the development server - being serialized instead of concurrent, but they do so when deployed on GAE. Personally I didn't use the async calls yet, so I can't really say.</p>
<p>If the intent is to not block <strong>at all</strong> waiting for the response from the production candidate app, you could push a copy of the original request and the production-prepared response onto a task queue, then answer the original request - with negligible delay (that of enqueueing the task).</p>
<p>The handler for the respective task queue would, outside of the original request's critical path, make the request to the staging app using the copy of the original request (async or not, doesn't really matter from the point of view of impacting the production app's response time), get its response and compare it with the production-prepared response, log the deltas, etc. This can be nicely wrapped in a separate module for minimal changes to the production app and deployed/deleted as needed.</p>
| 1 | 2016-08-06T03:29:10Z | [
"python",
"google-app-engine",
"asynchronous",
"webapp2"
] |
Efficient merge for many huge csv files | 38,799,704 | <p>I have a script that takes all the csv files in a directory and merges them side-by-side, using an outer join. The problem is that my computer chokes (MemoryError) when I try to use it on the files I need to join (about two dozen files 6-12 Gb each). I am aware that itertools can be used to make loops more efficient, but I am unclear as to whether or how it could be applied to this situation. The other alternative I can think of is to install mySQL, learn the basics, and do this there. Obviously I'd rather do this in Python if possible because I'm already learning it. An R-based solution would also be acceptable.</p>
<p>Here is my code:</p>
<pre><code>import os
import glob
import pandas as pd
os.chdir("\\path\\containing\\files")
files = glob.glob("*.csv")
sdf = pd.read_csv(files[0], sep=',')
for filename in files[1:]:
df = pd.read_csv(filename, sep=',')
sdf = pd.merge(sdf, df, how='outer', on=['Factor1', 'Factor2'])
</code></pre>
<p>Any advice for how to do this with files too big for my computer's memory would be greatly appreciated. </p>
| 1 | 2016-08-06T01:21:32Z | 38,799,711 | <p>There is a chance <a href="http://dask.pydata.org" rel="nofollow">dask</a> will be well-suited to your use case. It might depend on what you want to do after the merge.</p>
| 0 | 2016-08-06T01:24:26Z | [
"python",
"pandas",
"merge",
"large-files",
"itertools"
] |
Efficient merge for many huge csv files | 38,799,704 | <p>I have a script that takes all the csv files in a directory and merges them side-by-side, using an outer join. The problem is that my computer chokes (MemoryError) when I try to use it on the files I need to join (about two dozen files 6-12 Gb each). I am aware that itertools can be used to make loops more efficient, but I am unclear as to whether or how it could be applied to this situation. The other alternative I can think of is to install mySQL, learn the basics, and do this there. Obviously I'd rather do this in Python if possible because I'm already learning it. An R-based solution would also be acceptable.</p>
<p>Here is my code:</p>
<pre><code>import os
import glob
import pandas as pd
os.chdir("\\path\\containing\\files")
files = glob.glob("*.csv")
sdf = pd.read_csv(files[0], sep=',')
for filename in files[1:]:
df = pd.read_csv(filename, sep=',')
sdf = pd.merge(sdf, df, how='outer', on=['Factor1', 'Factor2'])
</code></pre>
<p>Any advice for how to do this with files too big for my computer's memory would be greatly appreciated. </p>
| 1 | 2016-08-06T01:21:32Z | 38,799,737 | <p>You should be able to do this with python but i don't think reading the csv's at once will be the most efficient use of your memory.</p>
<p><a href="http://stackoverflow.com/questions/6556078/how-to-read-a-csv-file-from-a-stream-and-process-each-line-as-it-is-written">How to read a CSV file from a stream and process each line as it is written?</a></p>
| 0 | 2016-08-06T01:28:21Z | [
"python",
"pandas",
"merge",
"large-files",
"itertools"
] |
Efficient merge for many huge csv files | 38,799,704 | <p>I have a script that takes all the csv files in a directory and merges them side-by-side, using an outer join. The problem is that my computer chokes (MemoryError) when I try to use it on the files I need to join (about two dozen files 6-12 Gb each). I am aware that itertools can be used to make loops more efficient, but I am unclear as to whether or how it could be applied to this situation. The other alternative I can think of is to install mySQL, learn the basics, and do this there. Obviously I'd rather do this in Python if possible because I'm already learning it. An R-based solution would also be acceptable.</p>
<p>Here is my code:</p>
<pre><code>import os
import glob
import pandas as pd
os.chdir("\\path\\containing\\files")
files = glob.glob("*.csv")
sdf = pd.read_csv(files[0], sep=',')
for filename in files[1:]:
df = pd.read_csv(filename, sep=',')
sdf = pd.merge(sdf, df, how='outer', on=['Factor1', 'Factor2'])
</code></pre>
<p>Any advice for how to do this with files too big for my computer's memory would be greatly appreciated. </p>
| 1 | 2016-08-06T01:21:32Z | 38,799,791 | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/io.html#io-hdf5" rel="nofollow">HDF5</a>, that in my opinion would suit your needs very well. It also handles <a href="http://pandas.pydata.org/pandas-docs/stable/io.html#querying" rel="nofollow">out-of-core queries</a>, so you won't have to face <code>MemoryError</code>.</p>
<pre><code>import os
import glob
import pandas as pd
os.chdir("\\path\\containing\\files")
files = glob.glob("*.csv")
hdf_path = 'my_concatenated_file.h5'
with pd.HDFStore(hdf_path, mode='w', complevel=5, complib='blosc') as store:
# This compresses the final file by 5 using blosc. You can avoid that or
# change it as per your needs.
for filename in files:
store.append('table_name', pd.read_csv(filename, sep=','), index=False)
# Then create the indexes, if you need it
store.create_table_index('table_name', columns=['Factor1', 'Factor2'], optlevel=9, kind='full')
</code></pre>
| 2 | 2016-08-06T01:37:56Z | [
"python",
"pandas",
"merge",
"large-files",
"itertools"
] |
Mocking a function with a certain argument in Python | 38,799,714 | <p>I have a module a.py:</p>
<pre><code>def add(x, y):
return x + y
def do_math(x):
sum1 = add(x, 1)
sum2 = add(x, 2)
sum3 = add(x, 3)
return sum1 + sum2 + sum3
</code></pre>
<p>Running the do_math function in a test results in the following:</p>
<pre><code>print a.do_math(1)
9
</code></pre>
<p>I want to mock the add function when y is 2. However, the following results in infinite recursion:</p>
<pre><code>def mock_add(*args, **kwargs):
x = args[0]
y = args[1]
if y == 2:
return 4
else:
a.add(x, y)
with patch('a.add', side_effect=mock_add):
a.do_math(1)
</code></pre>
<p>Here is a portion of my error message:</p>
<pre><code> File "E:\somepath\mock-1.3.0\mock\mock.py", line 1062, in __call__
return _mock_self._mock_call(*args, **kwargs)
File "E:\somepath\mock-1.3.0\mock\mock.py", line 1067, in _mock_call
self.called = True
RuntimeError: maximum recursion depth exceeded while calling a Python object
</code></pre>
<p>I should have known better. I was already mocking add with mock_add. Any ideas on how to do this?</p>
| 0 | 2016-08-06T01:25:08Z | 38,799,967 | <p>Why not just save the function before the call?</p>
<pre><code>from mock.mock import patch
import a
original = a.add
def mock_add(*args, **kwargs):
x = args[0]
y = args[1]
if y == 2:
return 4
else:
return original(x, y)
with patch('a.add', side_effect=mock_add):
a.do_math(1)
</code></pre>
<p>If you want to keep the object-oriented nature, you can store all of these inside a MagicMock object, but I think this answers your question.</p>
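<p>For reference, the same save-then-delegate idea can be tried in a single namespace, without a separate module <code>a</code> (everything below is a self-contained toy, not the exact module layout from the question):</p>

```python
def add(x, y):
    return x + y

def do_math(x):
    return add(x, 1) + add(x, 2) + add(x, 3)

original_add = add  # keep a reference to the real function *before* replacing it

def selective_add(x, y):
    if y == 2:
        return 4                   # stubbed result, only for y == 2
    return original_add(x, y)      # delegate every other call to the original

add = selective_add                # crude module-level stand-in for patch()
try:
    print(do_math(1))              # add(1,1) + 4 + add(1,3) = 2 + 4 + 4 = 10
finally:
    add = original_add             # always restore the real function
```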
| 0 | 2016-08-06T02:12:49Z | [
"python",
"mocking"
] |
No module named 'model_utils' | 38,799,718 | <p>I'm using Python 3.4.3 and django 1.9.8.</p>
<p>In my models.py, I have</p>
<blockquote>
<p>from model_utils.managers import InheritanceManager</p>
</blockquote>
<p>But this error occurs:</p>
<blockquote>
<p>ImportError: No module named 'model_utils'</p>
</blockquote>
| 0 | 2016-08-06T01:25:35Z | 38,799,833 | <p>You need to install django-model-utils:</p>
<pre><code>pip install django-model-utils
</code></pre>
<p>(<a href="https://django-model-utils.readthedocs.io/en/latest/index.html" rel="nofollow">documentation</a>)</p>
| 0 | 2016-08-06T01:45:20Z | [
"python",
"django",
"python-3.x",
"django-model-utils"
] |
What is the correct way to use .apply with pandas? | 38,799,723 | <p>I'm working with a million-row CSV dataset that includes columns "latitude" and "longitude", and I want to create a new column based on that called "state", which is the US state that contains those coordinates.</p>
<pre><code>import pandas as pd
import numpy as np
import os
from uszipcode import ZipcodeSearchEngine
def convert_to_state(coord):
lat, lon = coord["latitude"], coord["longitude"]
res = search.by_coordinate(lat, lon, radius=1, returns=1)
state = res.State
return state
def get_state(path):
with open(path + "USA_downloads.csv", 'r+') as f:
data = pd.read_csv(f)
data["state"] = data.loc[:, ["latitude", "longitude"]].apply(convert_to_state, axis=1)
get_state(path)
</code></pre>
<p>I keep getting an error "DtypeWarning: Columns (4,5) have mixed types. Specify dtype option on import or set low_memory=False." Columns 4 and 5 correspond to the latitude and longitude. I don't understand how I would use .apply to complete this task, or if .apply is even the right method for the job. How should I proceed?</p>
| 2 | 2016-08-06T01:26:27Z | 38,799,846 | <p>I believe this will be a faster implementation of your program:</p>
<pre><code>import pandas as pd
import numpy as np
import os
from uszipcode import ZipcodeSearchEngine
def convert_to_state(lat, lon):
lat, lon = round(lat, 7), round(lon, 7)
res = search.by_coordinate(lat, lon, radius=1, returns=1)
state = res.State
return state
def get_state(path):
with open(path + "USA_downloads.csv", 'r+') as f:
data = pd.read_csv(f)
data["state"] = np.vectorize(convert_to_state)(data["latitude"].values, data["longitude"].values)
get_state(path)
</code></pre>
<p>It uses <code>numpy.vectorize</code> to speed things up a little (although it is still a loop), and then calls the function with the values obtained from the <code>'latitude'</code> and <code>'longitude'</code> columns of your DataFrame, converted to <code>numpy.ndarray</code> (the <code>.values</code> attribute does that).</p>
<hr>
<p>If you want to keep using <code>.apply()</code>, you can do:</p>
<pre><code>state = data.apply(lambda x: convert_to_state(x['latitude'], x['longitude']), axis=1)
data["state"] = state
</code></pre>
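<p>A runnable toy version of the <code>apply(..., axis=1)</code> pattern, with the <code>uszipcode</code> lookup replaced by a dummy function (the shape of the call is the point here, not the geography):</p>

```python
import pandas as pd

def fake_lookup(lat, lon):
    # stand-in for search.by_coordinate(); not the real uszipcode call
    return "east" if lon > -100 else "west"

data = pd.DataFrame({"latitude": [40.7, 34.1], "longitude": [-74.0, -118.2]})
data["state"] = data.apply(lambda x: fake_lookup(x["latitude"], x["longitude"]), axis=1)
print(data["state"].tolist())  # ['east', 'west']
```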
<hr>
<h1>Edit</h1>
<p>To avoid <code>uszipcode</code> from raising a <code>TypeError</code>, use this:</p>
<pre><code>def convert_to_state(lat, lon):
try:
res = search.by_coordinate(lat, lon, radius=1, returns=1)
state = res.State
except TypeError as TE:
state = None
return state
</code></pre>
<p>If you want to further debug <code>uszipcode</code>, and the reason it is causing the error, I recommend you ask another question, with the appropriate tags, and someone will help you. I have no experience with this package, so I may not be of much help.</p>
| 3 | 2016-08-06T01:49:28Z | [
"python",
"pandas"
] |
How to balance my data across the partitions? | 38,799,753 | <p><em>Edit</em>: The answer helps, but I described my solution in: <a href="https://gsamaras.wordpress.com/code/memoryoverhead-issue-in-spark/" rel="nofollow">memoryOverhead issue in Spark</a>.</p>
<hr>
<p>I have an RDD with 202092 partitions, which reads a dataset created by others. I can manually see that the data is not balanced across the partitions; for example, some of them have 0 images and others have 4k, while the mean is 432. When processing the data, I got this error:</p>
<pre><code>Container killed by YARN for exceeding memory limits. 16.9 GB of 16 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
</code></pre>
<p>while memoryOverhead is already boosted. I suspect that memory spikes are occurring which make Yarn kill my container, because such a spike exceeds the specified limits.</p>
<p><strong><em>So what should I do make sure that my data are</em></strong> (roughly) <strong><em>balanced across partitions?</em></strong></p>
<hr>
<p>My idea was that <a href="http://spark.apache.org/docs/latest/api/python/pyspark.html?highlight=repartition#pyspark.RDD.repartition" rel="nofollow">repartition()</a> would work, it invokes shuffling:</p>
<pre><code>dataset = dataset.repartition(202092)
</code></pre>
<p>but I just got the very same error, despite the <a href="http://spark.apache.org/docs/latest/programming-guide.html" rel="nofollow">programming-guide</a>'s instructions:</p>
<blockquote>
<p><strong>repartition(numPartitions)</strong> </p>
<p>Reshuffle the data in the RDD randomly to create either more or fewer
partitions and <strong><em>balance it across them</em></strong>. This always shuffles all data
over the network.</p>
</blockquote>
<hr>
<p>Check my toy example though:</p>
<pre><code>data = sc.parallelize([0,1,2], 3).mapPartitions(lambda x: range((x.next() + 1) * 1000))
d = data.glom().collect()
len(d[0]) # 1000
len(d[1]) # 2000
len(d[2]) # 3000
repartitioned_data = data.repartition(3)
re_d = repartitioned_data.glom().collect()
len(re_d[0]) # 1854
len(re_d[1]) # 1754
len(re_d[2]) # 2392
repartitioned_data = data.repartition(6)
re_d = repartitioned_data.glom().collect()
len(re_d[0]) # 422
len(re_d[1]) # 845
len(re_d[2]) # 1643
len(re_d[3]) # 1332
len(re_d[4]) # 1547
len(re_d[5]) # 211
repartitioned_data = data.repartition(12)
re_d = repartitioned_data.glom().collect()
len(re_d[0]) # 132
len(re_d[1]) # 265
len(re_d[2]) # 530
len(re_d[3]) # 1060
len(re_d[4]) # 1025
len(re_d[5]) # 145
len(re_d[6]) # 290
len(re_d[7]) # 580
len(re_d[8]) # 1113
len(re_d[9]) # 272
len(re_d[10]) # 522
len(re_d[11]) # 66
</code></pre>
| 6 | 2016-08-06T01:31:37Z | 38,803,350 | <p>I think the memory-overhead-limit-exceeded issue is due to DirectMemory buffers used during fetch. I think it's fixed in 2.0.0. (We had the same issue, but stopped digging deeper when we found that upgrading to 2.0.0 resolved it. Unfortunately I don't have Spark issue numbers to back me up.)</p>
<hr>
<p>The uneven partitions after <code>repartition</code> are surprising. Contrast with <a href="https://github.com/apache/spark/blob/v2.0.0/core/src/main/scala/org/apache/spark/rdd/RDD.scala#L443" rel="nofollow">https://github.com/apache/spark/blob/v2.0.0/core/src/main/scala/org/apache/spark/rdd/RDD.scala#L443</a>. Spark even generates random keys in <code>repartition</code>, so it is not done with a hash that could be biased.</p>
<p>I tried your example and get the <em>exact</em> same results with Spark 1.6.2 and Spark 2.0.0. But not from Scala <code>spark-shell</code>:</p>
<pre><code>scala> val data = sc.parallelize(1 to 3, 3).mapPartitions { it => (1 to it.next * 1000).iterator }
data: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[6] at mapPartitions at <console>:24
scala> data.mapPartitions { it => Iterator(it.toSeq.size) }.collect.toSeq
res1: Seq[Int] = WrappedArray(1000, 2000, 3000)
scala> data.repartition(3).mapPartitions { it => Iterator(it.toSeq.size) }.collect.toSeq
res2: Seq[Int] = WrappedArray(1999, 2001, 2000)
scala> data.repartition(6).mapPartitions { it => Iterator(it.toSeq.size) }.collect.toSeq
res3: Seq[Int] = WrappedArray(999, 1000, 1000, 1000, 1001, 1000)
scala> data.repartition(12).mapPartitions { it => Iterator(it.toSeq.size) }.collect.toSeq
res4: Seq[Int] = WrappedArray(500, 501, 501, 501, 501, 500, 499, 499, 499, 499, 500, 500)
</code></pre>
<p>Such beautiful partitions!</p>
<hr>
<p><sub><em>(Sorry this is not a full answer. I just wanted to share my findings so far.)</em></sub></p>
| 3 | 2016-08-06T10:39:39Z | [
"python",
"hadoop",
"apache-spark",
"bigdata",
"distributed-computing"
] |
Ansible + 10.11.6 | 38,799,807 | <p>I'm having a weird issue with Ansible on a (very) clean install of 10.11.6. I've installed brew, zsh, oh-my-zsh, Lil' snitch and 1password (and literally nothing else). I installed ansible with...</p>
<p><code>brew install ansible</code></p>
<p>... which was successful. I then went to a preexisting (and crazy simple) Ansible project and did an...</p>
<p><code>ansible -m ping all</code></p>
<p>It then asked me to enter my SSH passphrase. I've reinstated the keys from my previous install but I hadn't previously ssh'd into the server. I entered the passphrase and ansible returned...</p>
<pre><code>$ ansible -m ping all
host1 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
</code></pre>
<p>I then ssh'd into the server to check all was well, and it connected without any problems.</p>
<p>I then re-ran...</p>
<p><code>$ ansible -m ping all</code></p>
<p>and it returned...</p>
<pre><code>host1 | FAILED! => {
"changed": false,
"failed": true,
"module_stderr": "",
"module_stdout": "/bin/sh: 1: /usr/bin/python: not found\r\n",
"msg": "MODULE FAILURE",
"parsed": false
}
</code></pre>
<p>... which is a bit weird? It seems to be saying it can't find python anymore, despite it finding it first time around?</p>
<p><code>$ which python</code> returns <code>/usr/bin/python</code></p>
<p><code>$ python --version</code> returns <code>Python 2.7.10</code></p>
<p><code>$ which ansible</code> returns <code>/usr/local/bin/ansible</code></p>
<p><code>$ ansible --version</code> returns </p>
<pre><code>ansible 2.1.1.0
config file = /pathtoproject/ansible.cfg
configured module search path = Default w/o overrides
</code></pre>
<p>I've deliberately not installed pyenv, virtualenv etc. </p>
<p><code>/usr/bin/python</code> is definitely there, and I can run python without any problems.</p>
<p>Help?! :-) I'm a Ruby dev and I can't help but think I'm missing something obvious, but as I understand it all the versions check out and everything <em>should</em> be working. I tried changing my shell to sh and rerunning <code>ansible -m ping all</code> but it fails in the same way.</p>
<p>Any ideas?</p>
| 0 | 2016-08-06T01:40:33Z | 39,583,152 | <p>I had a similar issue, but the "fix" was stranger yet: remove Python3 from the host, as Ansible currently supports up to Python 2.7.</p>
| 1 | 2016-09-19T22:41:25Z | [
"python",
"osx",
"ansible",
"ansible-playbook"
] |
assign variables to an array | 38,799,855 | <p>I have the following line (note: the convert function returns an array):</p>
<pre><code>question, answer = convert(snippet, phrase)
</code></pre>
<p>Does this assign the first two values in the array to <code>question</code> and <code>answer</code> variables respectively?</p>
| 2 | 2016-08-06T01:51:20Z | 38,799,874 | <p>If the function returns a list of at least two values, you can do:</p>
<pre><code>question, answer = convert(snippet, phrase)[:2]
#or
question, answer, *_ = convert(snippet, phrase)
</code></pre>
<p>For example:</p>
<pre><code># valid multiple assignment/unpacking
x,y = 1, 2
x,y = [1,2,3][:2]
x, y, *z = [1, 2, 3, 4]   # * collects the rest into the list z
x, y, *_z = [1, 2, 3, 4]  # same as above, but uses a 'throwaway' name _z
#invalid
x, y = 1, 2, 3 #ValueError: too many values to unpack (expected 2)
</code></pre>
| 0 | 2016-08-06T01:55:22Z | [
"python",
"list"
] |
assign variables to an array | 38,799,855 | <p>I have the following line (note: the convert function returns an array):</p>
<pre><code>question, answer = convert(snippet, phrase)
</code></pre>
<p>Does this assign the first two values in the array to <code>question</code> and <code>answer</code> variables respectively?</p>
| 2 | 2016-08-06T01:51:20Z | 38,799,912 | <p>This is referred to as <em><a href="https://www.python.org/dev/peps/pep-0448/" rel="nofollow">unpacking</a></em> in Python.</p>
<pre><code>a, b, c = 1, 2, 3
# a -> 1
# b -> 2
# c -> 3
</code></pre>
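<p>Applied to the question's situation — a <code>convert</code>-style function returning a sequence (toy implementation below) — plain and extended unpacking look like this in Python 3:</p>

```python
def convert(snippet, phrase):
    # toy stand-in: any function returning a sequence unpacks the same way
    return [snippet.upper(), phrase.upper(), len(snippet), len(phrase)]

question, answer, *rest = convert("what?", "that!")
print(question)  # WHAT?
print(answer)    # THAT!
print(rest)      # [5, 5]
```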
| 1 | 2016-08-06T02:01:34Z | [
"python",
"list"
] |
Creation of basic lists? | 38,799,901 | <p>I am trying to create a list in Python from a text file. I would like to open the file, read the lines, use the <code>split</code> method, append them to a list. This is what I have so far. All it does is print the text file:</p>
<pre><code>lines = []
folder = open("test.txt")
word = folder.readlines()
for line in word:
var = ()
for line in word:
lines.append(line.strip().split(","))
print (word)
</code></pre>
<p>My file looks like this: <code>fat cat hat mat sat bat lap</code> </p>
<p>I want this to come out: <code>['fat', 'cat', 'hat', 'mat', 'sat', 'bat', 'lap']</code> </p>
| -1 | 2016-08-06T02:00:16Z | 38,799,985 | <p>As other commenters have observed, variable naming should provide the <strong>context</strong> of what your variable is <em>assigned</em> to. Even though you can name a variable a multitude of names, they <em>should</em> be relevant! </p>
<p>You can use the <a href="https://docs.python.org/3/tutorial/inputoutput.html" rel="nofollow"><code>with</code></a> statement to open and close a file within the same scope, ensuring that the file object is <strong>closed</strong> (<em>generally good practice</em>). From then on, you can print the <code>lines</code> returned from the <code>readlines()</code> function as a <code>list</code> that is <code>split()</code> based on a <code>' '</code> delimiter. </p>
<pre><code>with open("test.txt") as file:
lines = file.readlines()
for line in lines:
        print(line.strip().split(' '))
</code></pre>
<p><strong>Sample output:</strong></p>
<p>File: <code>fat cat hat mat sat bat lap</code></p>
<pre><code>>>> ['fat', 'cat', 'hat', 'mat', 'sat', 'bat', 'lap']
</code></pre>
| 4 | 2016-08-06T02:16:18Z | [
"python",
"python-3.x"
] |
Creation of basic lists? | 38,799,901 | <p>I am trying to create a list in Python from a text file. I would like to open the file, read the lines, use the <code>split</code> method, append them to a list. This is what I have so far. All it does is print the text file:</p>
<pre><code>lines = []
folder = open("test.txt")
word = folder.readlines()
for line in word:
var = ()
for line in word:
lines.append(line.strip().split(","))
print (word)
</code></pre>
<p>My file looks like this: <code>fat cat hat mat sat bat lap</code> </p>
<p>I want this to come out: <code>['fat', 'cat', 'hat', 'mat', 'sat', 'bat', 'lap']</code> </p>
| -1 | 2016-08-06T02:00:16Z | 38,800,050 | <p>If your file only consists of one line then you don't need to do nearly as much work as you seem to think.</p>
<p><code>str.split</code> returns a list, so there is no need to <code>append</code> the individual elements. When you call <code>.split()</code> without any arguments it will split by any whitespace (spaces, tabs, newlines etc) so to do what you want to do would literally just be:</p>
<pre><code>with open("test.txt","r") as f:
mywords = f.read().split()
print(mywords)
</code></pre>
<p>open the file, read the contents, split it up by whitespace, store the resulting list in a variable called <code>mywords</code>. (or whatever you want to call it)</p>
<p>Note that splitting by any whitespace means it will treat newlines the same as spaces, just another separation of words.</p>
| 1 | 2016-08-06T02:35:38Z | [
"python",
"python-3.x"
] |
From JSON file to CSV file | 38,799,928 | <p>I have a snippet of my json file below. Is there any way to use Python and transform this to a nice CSV file? So things like text and sentiment would have its own columns?</p>
<pre><code>{
"status":"OK",
"totalTransactions":"1",
"language":"english",
"url":"http://well.com",
"results":[
{
"text":"food",
"sentiment":{
"score":"0.456369",
"type":"positive"
}
}
]
}{
"status":"OK",
"totalTransactions":"1",
"language":"english",
"warningMessage":"truncated-oversized-text-content",
"url":"http://www.times.com",
"results":[
{
"text":"food",
"sentiment":{
"score":"0.678684",
"type":"positive"
}
}
]
}
</code></pre>
<p>If not I would like to pull specific info from it. I've tried this code but keep getting the following error. I suspect it has something to do with the brackets/formatting of the json? How can I fix this best?</p>
<pre><code>import json
from pprint import pprint
with open('data.json') as data_file:
data = json.load(data_file)
pprint(data)
ValueError: Extra data: line 15 column 2 - line 30 column 2 (char 367 - 780)
</code></pre>
| -1 | 2016-08-06T02:04:45Z | 38,799,986 | <p>The JSON is indeed not valid, notice in line 15: <code>}{</code></p>
<p>You are ending the first / outer object and beginning a new one, basically concatenating 2 distinct JSON strings.</p>
<p>To fix this you could create an array out of them by surrounding them with square brackets and add a comma between the two objects:</p>
<pre><code>[{
"status":"OK",
"totalTransactions":"1",
"language":"english",
"url":"http://well.com",
"results":[
{
"text":"food",
"sentiment":{
"score":"0.456369",
"type":"positive"
}
}
]
},{
"status":"OK",
"totalTransactions":"1",
"language":"english",
"warningMessage":"truncated-oversized-text-content",
"url":"http://www.times.com",
"results":[
{
"text":"food",
"sentiment":{
"score":"0.678684",
"type":"positive"
}
}
]
}]
</code></pre>
<p>You can then iterate through this array:</p>
<pre><code>import json
from pprint import pprint
with open('data.json') as data_file:
data = json.load(data_file)
for entry in data:
print "%s;%s" % (entry['url'], entry['status'])
</code></pre>
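<p>For the CSV half, the stdlib <code>csv</code> module handles delimiters and quoting for you; here is a Python 3 sketch with the sample data inlined instead of read from <code>data.json</code>:</p>

```python
import csv
import io
import json

raw = ('[{"status": "OK", "url": "http://well.com"},'
       ' {"status": "OK", "url": "http://www.times.com"}]')
entries = json.loads(raw)

buf = io.StringIO()                      # a real file opened with newline='' works too
writer = csv.writer(buf, delimiter=';')
writer.writerow(["url", "status"])
for entry in entries:
    writer.writerow([entry["url"], entry["status"]])

print(buf.getvalue().splitlines())
# ['url;status', 'http://well.com;OK', 'http://www.times.com;OK']
```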
| 2 | 2016-08-06T02:16:58Z | [
"python",
"json",
"csv"
] |
From JSON file to CSV file | 38,799,928 | <p>I have a snippet of my json file below. Is there any way to use Python and transform this to a nice CSV file? So things like text and sentiment would have its own columns?</p>
<pre><code>{
"status":"OK",
"totalTransactions":"1",
"language":"english",
"url":"http://well.com",
"results":[
{
"text":"food",
"sentiment":{
"score":"0.456369",
"type":"positive"
}
}
]
}{
"status":"OK",
"totalTransactions":"1",
"language":"english",
"warningMessage":"truncated-oversized-text-content",
"url":"http://www.times.com",
"results":[
{
"text":"food",
"sentiment":{
"score":"0.678684",
"type":"positive"
}
}
]
}
</code></pre>
<p>If not I would like to pull specific info from it. I've tried this code but keep getting the following error. I suspect it has something to do with the brackets/formatting of the json? How can I fix this best?</p>
<pre><code>import json
from pprint import pprint
with open('data.json') as data_file:
data = json.load(data_file)
pprint(data)
ValueError: Extra data: line 15 column 2 - line 30 column 2 (char 367 - 780)
</code></pre>
| -1 | 2016-08-06T02:04:45Z | 38,800,039 | <p>With the corrected JSON, as chrki pointed out, the full Python source would be:</p>
<pre><code>#!/usr/bin/python
import json
with open('data.json') as data_file:
data = json.load(data_file)
out = open('out.csv', 'w+')
out.write("url;totalTransactions;language;status;text;score;type\n")
for entry in data:
out.write("%s;%s;%s;%s;%s;%s;%s\n" % (
entry['url'],
entry['totalTransactions'],
entry['language'],
entry['status'],
entry['results'][0]['text'],
entry['results'][0]['sentiment']['score'],
entry['results'][0]['sentiment']['type'])
)
</code></pre>
| 0 | 2016-08-06T02:32:58Z | [
"python",
"json",
"csv"
] |
From JSON file to CSV file | 38,799,928 | <p>I have a snippet of my json file below. Is there any way to use Python and transform this to a nice CSV file? So things like text and sentiment would have its own columns?</p>
<pre><code>{
"status":"OK",
"totalTransactions":"1",
"language":"english",
"url":"http://well.com",
"results":[
{
"text":"food",
"sentiment":{
"score":"0.456369",
"type":"positive"
}
}
]
}{
"status":"OK",
"totalTransactions":"1",
"language":"english",
"warningMessage":"truncated-oversized-text-content",
"url":"http://www.times.com",
"results":[
{
"text":"food",
"sentiment":{
"score":"0.678684",
"type":"positive"
}
}
]
}
</code></pre>
<p>If not I would like to pull specific info from it. I've tried this code but keep getting the following error. I suspect it has something to do with the brackets/formatting of the json? How can I fix this best?</p>
<pre><code>import json
from pprint import pprint
with open('data.json') as data_file:
data = json.load(data_file)
pprint(data)
ValueError: Extra data: line 15 column 2 - line 30 column 2 (char 367 - 780)
</code></pre>
| -1 | 2016-08-06T02:04:45Z | 38,800,393 | <p>You are on the right track with loading the json into a python dictionary.
Consider using the python csv module to write the csv output file. </p>
<pre><code>https://docs.python.org/2/library/csv.html
</code></pre>
<p>The keys will become the header row, and then for each item you can create a row and write the values into it.</p>
| 0 | 2016-08-06T03:55:59Z | [
"python",
"json",
"csv"
] |
From JSON file to CSV file | 38,799,928 | <p>I have a snippet of my json file below. Is there any way to use Python and transform this to a nice CSV file? So things like text and sentiment would have its own columns?</p>
<pre><code>{
"status":"OK",
"totalTransactions":"1",
"language":"english",
"url":"http://well.com",
"results":[
{
"text":"food",
"sentiment":{
"score":"0.456369",
"type":"positive"
}
}
]
}{
"status":"OK",
"totalTransactions":"1",
"language":"english",
"warningMessage":"truncated-oversized-text-content",
"url":"http://www.times.com",
"results":[
{
"text":"food",
"sentiment":{
"score":"0.678684",
"type":"positive"
}
}
]
}
</code></pre>
<p>If not I would like to pull specific info from it. I've tried this code but keep getting the following error. I suspect it has something to do with the brackets/formatting of the json? How can I fix this best?</p>
<pre><code>import json
from pprint import pprint
with open('data.json') as data_file:
data = json.load(data_file)
pprint(data)
ValueError: Extra data: line 15 column 2 - line 30 column 2 (char 367 - 780)
</code></pre>
| -1 | 2016-08-06T02:04:45Z | 38,800,443 | <p>Here's an answer to the first part of your question that addresses the improperly formatted JSON file problem. It attempts to convert what's in the file into list of objects (dictionaries) and names the result <code>data</code>:</p>
<pre><code>import json
import tempfile
# Fix the invalid json data and load it.
with tempfile.TemporaryFile() as temp_file:
temp_file.write('[\n')
with open('json_to_csv.json', 'rb') as data_file:
for line in data_file:
temp_file.write(line.replace('}{', '},{'))
temp_file.write(']\n')
temp_file.seek(0) # rewind
data = json.load(temp_file)
print(json.dumps(data, indent=4)) # show result
</code></pre>
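<p>If the file fits in memory, the same repair can be sketched with a plain string replace — with the caveat that a literal <code>}{</code> occurring inside a string value would be corrupted too:</p>

```python
import json

# assumes the objects abut as '}{' on one line, like line 15 in the question
raw = ('{"status": "OK", "url": "http://well.com"}'
       '{"status": "OK", "url": "http://www.times.com"}')
# naive fix: wrap in [] and put a comma between back-to-back objects
fixed = '[' + raw.replace('}{', '},{') + ']'
data = json.loads(fixed)
print([entry["url"] for entry in data])
# ['http://well.com', 'http://www.times.com']
```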
| 0 | 2016-08-06T04:06:55Z | [
"python",
"json",
"csv"
] |
Getting Error when trying to run python code on zapier | 38,799,935 | <p>I have the following address:</p>
<pre><code>556_StreetName_Ave_CityName_11111
</code></pre>
<p>I want to use code to trim off the last part, <code>_Ave_CityName_11111</code></p>
<p>so basically I want to remove everything from the third-from-last underscore onward.</p>
<p>Here is my code:</p>
<pre><code>output = "_".join(input['street_name'].split("_")[:-3])
</code></pre>
<p>but I get an error:</p>
<pre class="lang-none prettyprint-override"><code>Bargle. We hit an error creating a run python. :-( Error:
'unicode' object has no attribute 'copy'
</code></pre>
<p>Here is what my setup looks like in Zapier:</p>
<p><a href="https://i.imgur.com/wg7RPqq.png" rel="nofollow"><img src="https://i.imgur.com/wg7RPqq.png" alt="screenshot"></a></p>
| 1 | 2016-08-06T02:05:27Z | 38,809,886 | <pre><code>output = {'street_name': "_".join(input['street_name'].split("_")[:-3])}
</code></pre>
<p>This is the code that made it work, in case anyone else needs it.</p>
<p>Thanks to nedbat on the IRC Python channel for the help!!</p>
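<p>The transformation itself can be checked locally, outside Zapier — the key point being that Zapier wants <code>output</code> to be a mapping (or list of mappings), not a bare string:</p>

```python
# 'input' here mimics Zapier's input-data dict (and shadows the builtin, as Zapier does)
input = {'street_name': '556_StreetName_Ave_CityName_11111'}
output = {'street_name': "_".join(input['street_name'].split("_")[:-3])}
print(output)  # {'street_name': '556_StreetName'}
```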
| 0 | 2016-08-07T00:38:15Z | [
"python",
"zapier"
] |
pyparsing removing some text and how to capture text with whitespace | 38,799,948 | <p>I am new to using pyparsing (python 2.7) and have a couple of questions about this code:</p>
<pre><code>import pyparsing as pp
openBrace = pp.Suppress(pp.Literal("{"))
closeBrace = pp.Suppress(pp.Literal("}"))
ident = pp.Word(pp.alphanums + "_" + ".")
otherStuff = pp.Suppress(pp.Word(pp.alphanums + "_" + "." + "-" + "+"))
comment = pp.Literal("//") + pp.restOfLine
messageName = ident
messageKw = pp.Suppress("msg")
messageExpr = pp.Forward()
messageExpr << (messageKw + messageName + openBrace +
pp.Optional(otherStuff) + pp.ZeroOrMore(messageExpr) +
pp.Optional(otherStuff) + closeBrace).ignore(comment)
print messageExpr.parseString("msg msgName1 { msg msgName2 { some text } }")
</code></pre>
<p>I don't really understand why it removes the text "msg" in the inner msgName2. The output is:
['msgName1', 'Name2']
but I expected:
['msgName1', 'msgName2']</p>
<p>In addition, I was wondering how to capture all other text ("some text") including whitespace between the braces.</p>
<p>Thanks in advance</p>
| 1 | 2016-08-06T02:07:53Z | 38,800,083 | <p>To answer your first query:</p>
<pre><code>>>> import pyparsing as pp
>>>
>>> openBrace = pp.Suppress(pp.Literal("{"))
>>> closeBrace = pp.Suppress(pp.Literal("}"))
>>> ident = pp.Word(pp.alphanums + "_" + ".")
>>> otherStuff = pp.Suppress(pp.Word(pp.alphanums + "_" + "." + "-" + "+"))
>>> comment = pp.Literal("//") + pp.restOfLine
>>> messageName = ident
>>> messageKw = pp.Suppress("msg")
>>> messageExpr = pp.Forward()
>>> messageExpr << (messageKw + messageName + openBrace +
... pp.ZeroOrMore(messageExpr) + pp.ZeroOrMore(otherStuff) +
... closeBrace).ignore(comment)
Forward: ...
>>>
>>> print messageExpr.parseString("msg msgName1 { msg msgName2 { some text } }")
['msgName1', 'msgName2']
</code></pre>
| 2 | 2016-08-06T02:46:22Z | [
"python",
"python-2.7",
"parsing",
"pyparsing"
] |
pyparsing removing some text and how to capture text with whitespace | 38,799,948 | <p>I am new to using pyparsing (python 2.7) and have a couple of questions about this code:</p>
<pre><code>import pyparsing as pp
openBrace = pp.Suppress(pp.Literal("{"))
closeBrace = pp.Suppress(pp.Literal("}"))
ident = pp.Word(pp.alphanums + "_" + ".")
otherStuff = pp.Suppress(pp.Word(pp.alphanums + "_" + "." + "-" + "+"))
comment = pp.Literal("//") + pp.restOfLine
messageName = ident
messageKw = pp.Suppress("msg")
messageExpr = pp.Forward()
messageExpr << (messageKw + messageName + openBrace +
pp.Optional(otherStuff) + pp.ZeroOrMore(messageExpr) +
pp.Optional(otherStuff) + closeBrace).ignore(comment)
print messageExpr.parseString("msg msgName1 { msg msgName2 { some text } }")
</code></pre>
<p>I don't really understand why it removes the text "msg" in the inner msgName2. The output is:
['msgName1', 'Name2']
but I expected:
['msgName1', 'msgName2']</p>
<p>In addition, I was wondering how to capture all other text ("some text") including whitespace between the braces.</p>
<p>Thanks in advance</p>
| 1 | 2016-08-06T02:07:53Z | 38,806,036 | <p>A couple of points:</p>
<ol>
<li><p><code>messageKw</code> should be defined using the pyparsing Keyword class. Right now you are just matching the literal "msg", so even when that is the leading part of "msgName2", it will match. Change this to:</p>
<pre><code>messageKw = pp.Suppress(pp.Keyword("msg"))
</code></pre></li>
<li><p><code>otherStuff</code> is a very greedy matcher, and will even match the leading "msg" keyword, which screws up your nested matching. All you need to add is a lookahead in <code>otherStuff</code> to make sure that what you are about to match is not the 'msg' keyword:</p>
<pre><code>otherStuff = ~messageKw + pp.Suppress(pp.Word(pp.alphanums + "_" + "." + "-" + "+"))
</code></pre></li>
</ol>
<p>I think with these changes, you should be able to make further progress.</p>
<p>Congratulations, btw, on writing a recursive parser (using the Forward class). This is generally a more advanced parsing topic.</p>
| 2 | 2016-08-06T15:47:21Z | [
"python",
"python-2.7",
"parsing",
"pyparsing"
] |
Why does matplotlib choose the wrong range in y using log scale? | 38,799,968 | <p>Using matplotlib version 1.5.1 and python 2.7.11 I noticed that I need to specify the limits in y manually or else only the largest y-value point is plotted. Arrays behave the same way.</p>
<p>If I remove the first point, I get a few more points, but not all of them.</p>
<p>I don't recall ever having to manually set limits like this before - why here?</p>
<p><a href="http://i.stack.imgur.com/uggRq.png" rel="nofollow"><img src="http://i.stack.imgur.com/uggRq.png" alt="enter image description here"></a></p>
<pre><code>import matplotlib.pyplot as plt
X = [0.997, 2.643, 0.354, 0.075, 1.0, 0.03, 2.39, 0.364, 0.221, 0.437]
Y = [15.487507, 2.320735, 0.085742, 0.303032, 1.0, 0.025435, 4.436435,
0.025435, 0.000503, 2.320735]
plt.figure()
plt.subplot(1,2,1)
plt.scatter(X, Y)
plt.xscale('log')
plt.yscale('log')
plt.subplot(1,2,2)
plt.scatter(X, Y)
plt.xscale('log')
plt.yscale('log')
plt.ylim(0.5*min(Y), 2.0*max(Y)) # why is this line necessary?
plt.title('added plt.ylim()')
plt.show()
</code></pre>
| 0 | 2016-08-06T02:12:56Z | 38,800,011 | <p>The problem arises because you first drew the scatter plot and then set the scales to logarithmic, which results in a zooming-in effect. Setting the scales before plotting removes the problem:</p>
<pre><code>plt.xscale('log')
plt.yscale('log')
plt.scatter(X, Y)
</code></pre>
<p>This produces the intended result. (2nd subplot in your question.)</p>
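<p>The fix can be checked headlessly; the behaviour of the clipped variant depends on the matplotlib version, so only the scale-first axes are asserted in this sketch:</p>

```python
import matplotlib
matplotlib.use("Agg")  # no display required
import matplotlib.pyplot as plt

X = [0.997, 2.643, 0.354, 0.075, 1.0, 0.03, 2.39, 0.364, 0.221, 0.437]
Y = [15.487507, 2.320735, 0.085742, 0.303032, 1.0, 0.025435, 4.436435,
     0.025435, 0.000503, 2.320735]

fig, ax = plt.subplots()
ax.set_xscale('log')   # set the scales first...
ax.set_yscale('log')
ax.scatter(X, Y)       # ...then plot, so autoscaling runs in log space

lo, hi = ax.get_ylim()
print(lo <= min(Y) <= max(Y) <= hi)  # all points fall inside the limits
```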
| 3 | 2016-08-06T02:25:09Z | [
"python",
"python-2.7",
"matplotlib"
] |
Why does matplotlib choose the wrong range in y using log scale? | 38,799,968 | <p>Using matplotlib version 1.5.1 and python 2.7.11 I noticed that I need to specify the limits in y manually or else only the largest y-value point is plotted. Arrays behave the same way.</p>
<p>If I remove the first point, I get a few more points, but not all of them.</p>
<p>I don't recall ever having to manually set limits like this before - why here?</p>
<p><a href="http://i.stack.imgur.com/uggRq.png" rel="nofollow"><img src="http://i.stack.imgur.com/uggRq.png" alt="enter image description here"></a></p>
<pre><code>import matplotlib.pyplot as plt
X = [0.997, 2.643, 0.354, 0.075, 1.0, 0.03, 2.39, 0.364, 0.221, 0.437]
Y = [15.487507, 2.320735, 0.085742, 0.303032, 1.0, 0.025435, 4.436435,
0.025435, 0.000503, 2.320735]
plt.figure()
plt.subplot(1,2,1)
plt.scatter(X, Y)
plt.xscale('log')
plt.yscale('log')
plt.subplot(1,2,2)
plt.scatter(X, Y)
plt.xscale('log')
plt.yscale('log')
plt.ylim(0.5*min(Y), 2.0*max(Y)) # why is this line necessary?
plt.title('added plt.ylim()')
plt.show()
</code></pre>
| 0 | 2016-08-06T02:12:56Z | 38,800,032 | <p>It seems like <code>matplotlib</code> is creating the y-axis ticks before converting to a log scale, and then not recreating the ticks based on the change. The y-axis on your first subplot starts at 10e1, not 10e-3. So change the scales before you plot.</p>
<pre><code>plt.xscale('log')
plt.yscale('log')
plt.scatter(X, Y)
</code></pre>
<p>I think if you plot the original scale beside the log scale, you might be able to figure out the answer to the partial treatment of the axes by <code>matplotlib</code>. In a log scale, there is no true 0 -- because log(0) is undefined. So the coordinate has to start somewhere above 0, and that causes the problems. Your x axis ranges from 0 to 3, but y from 0 to 16. When converted to log, <code>matplotlib</code> rescales the x axis correctly, but the y data spans several orders of magnitude, and the limits computed on the linear scale leave most points outside the visible range.</p>
| 1 | 2016-08-06T02:30:34Z | [
"python",
"python-2.7",
"matplotlib"
] |
Django assert failure: assertInHTML('hello', '<html>hello</html>') | 38,800,089 | <p>In Django shell:</p>
<pre><code>from django.test import SimpleTestCase
c = SimpleTestCase()
haystack = '<html><b>contribution</b></html>'
c.assertInHTML('<b>contribution</b>', haystack)
c.assertInHTML('contribution', haystack)
</code></pre>
<p>I don't understand why the first assertion passes, but the second one doesn't:</p>
<pre><code>AssertionError Traceback (most recent call last)
<ipython-input-15-20da22474686> in <module>()
5
6 c.assertInHTML('<b>contribution</b>', haystack)
----> 7 c.assertInHTML('contribution', haystack)
c:\...\lib\site-packages\django\test\testcases.py in assertInHTML(self, needle, haystack, count, msg_prefix)
680 else:
681 self.assertTrue(real_count != 0,
--> 682 msg_prefix + "Couldn't find '%s' in response" % needle)
683
684 def assertJSONEqual(self, raw, expected_data, msg=None):
C:\...\Programs\Python\Python35-32\lib\unittest\case.py in assertTrue(self, expr, msg)
675 if not expr:
676 msg = self._formatMessage(msg, "%s is not true" % safe_repr(expr))
--> 677 raise self.failureException(msg)
678
679 def _formatMessage(self, msg, standardMsg):
AssertionError: False is not true : Couldn't find 'contribution' in response
</code></pre>
<p>The Django <a href="https://docs.djangoproject.com/en/1.8/topics/testing/tools/#django.test.SimpleTestCase.assertInHTML" rel="nofollow">docs</a> just say "The passed-in arguments must be valid HTML." I don't think that is the problem, because the call to <code>assert_and_parse_html</code> on the first line doesn't raise:</p>
<pre><code>def assertInHTML(self, needle, haystack, count=None, msg_prefix=''):
needle = assert_and_parse_html(self, needle, None,
'First argument is not valid HTML:')
haystack = assert_and_parse_html(self, haystack, None,
'Second argument is not valid HTML:')
real_count = haystack.count(needle)
if count is not None:
self.assertEqual(real_count, count,
msg_prefix + "Found %d instances of '%s' in response"
" (expected %d)" % (real_count, needle, count))
else:
self.assertTrue(real_count != 0,
msg_prefix + "Couldn't find '%s' in response" % needle)
</code></pre>
<p>I'm using Python 3.5.1 and Django 1.8.8.</p>
| 4 | 2016-08-06T02:49:20Z | 38,801,320 | <p>This is a <a href="https://code.djangoproject.com/ticket/24112" rel="nofollow">bug in Django</a>:</p>
<blockquote>
<p><code>assertInHTML(needle, haystack)</code> has the following behaviour</p>
<p><code>assertInHTML('<p>a</p>', '<div><p>a</p><p>b</p></div>')</code> passes: clearly correct</p>
<p><code>assertInHTML('<p>a</p><p>b</p>', '<p>a</p><p>b</p>')</code> passes: possibly correct</p>
<p><code>assertInHTML('<p>a</p><p>b</p>', '<div><p>a</p><p>b</p></div>')</code> fails with an assertion error.</p>
</blockquote>
<p>The problem occurs when the needle does not have a unique root element that wraps everything else. </p>
<p>The <a href="https://github.com/django/django/pull/4041/files" rel="nofollow">proposed fix</a> (which has been languishing for some time!) is to raise an exception if you try to do this - i.e., the needle must have an HTML tag that wraps everything inside it.</p>
| 3 | 2016-08-06T06:37:03Z | [
"python",
"django",
"assert"
] |
The difference between these 2 strings? | 38,800,125 | <p>I have recently started to learn Python and I am hoping that you will be able to help me with a question that has been bothering me. I have been learning Python online with <a href="http://learnpythonthehardway.org/book/" rel="nofollow">Learn Python The Hard Way</a>. In Exercise 6, I came across a problem where I was using the <code>%r</code> string formatting operation and it was resulting in two different strings. When I printed one string, I got the string with the single quotes (<code>' '</code>). With another I was getting double quotes (<code>" "</code>). </p>
<p>Here is the code:</p>
<pre><code>x = "There are %d types of people." % 10
binary = "binary"
do_not = "don't"
y = "Those who know %s and those who %s." % (binary, do_not)
print "I said: %r." % x
print "I also said: %r." % y
</code></pre>
<p>The result from the first print statement:</p>
<p><code>I said: 'There are 10 types of people.'.</code></p>
<p>The result from the second print statement:</p>
<p><code>I also said: "Those who know binary and those who don't.".</code></p>
<p>I want to know why one of the statements had a result with the single quotes (<code>' '</code>) and another with (<code>" "</code>).
P.S. I am using Python 2.7.</p>
| 1 | 2016-08-06T02:57:19Z | 38,800,146 | <p><code>%r</code> is getting the <code>repr</code> version of the string:</p>
<pre><code>>>> x = 'here'
>>> print repr(x)
'here'
</code></pre>
<p>You see, single quotes are what are normally used. In the case of <code>y</code>, however, you have a single quote (apostrophe) inside the string. Well, the <code>repr</code> of an object is often defined so that evaluating it as code is equal to the original object. If Python were to use single quotes, that would result in an error:</p>
<pre class="lang-none prettyprint-override"><code>>>> x = 'those who don't'
File "<stdin>", line 1
x = 'those who don't'
^
SyntaxError: invalid syntax
</code></pre>
<p>so it uses double quotes instead.</p>
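The quote-selection rule can be checked directly with <code>repr</code> (a quick illustration, not part of the original answer):

```python
# repr prefers single quotes by default
plain = repr("here")          # 'here'
# it switches to double quotes when the string contains an apostrophe
apostrophe = repr("don't")    # "don't"
# and falls back to single quotes with escaping when both kinds appear
both = repr("both \" and '")  # 'both " and \''
```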
| 1 | 2016-08-06T03:03:33Z | [
"python",
"string",
"printing",
"string-formatting"
] |
The difference between these 2 strings? | 38,800,125 | <p>I have recently started to learn Python and I am hoping that you will be able to help me with a question that has been bothering me. I have been learning Python online with <a href="http://learnpythonthehardway.org/book/" rel="nofollow">Learn Python The Hard Way</a>. In Exercise 6, I came across a problem where I was using the <code>%r</code> string formatting operation and it was resulting in two different strings. When I printed one string, I got the string with the single quotes (<code>' '</code>). With another I was getting double quotes (<code>" "</code>). </p>
<p>Here is the code:</p>
<pre><code>x = "There are %d types of people." % 10
binary = "binary"
do_not = "don't"
y = "Those who know %s and those who %s." % (binary, do_not)
print "I said: %r." % x
print "I also said: %r." % y
</code></pre>
<p>The result from the first print statement:</p>
<p><code>I said: 'There are 10 types of people.'.</code></p>
<p>The result from the second print statement:</p>
<p><code>I also said: "Those who know binary and those who don't.".</code></p>
<p>I want to know why one of the statements had a result with the single quotes (<code>' '</code>) and another with (<code>" "</code>).
P.S. I am using Python 2.7.</p>
| 1 | 2016-08-06T02:57:19Z | 38,800,147 | <p>Notice this line -> <code>do_not = "don't"</code>. There is a single quote in this string, which means that quote would need to be escaped; otherwise how would the <strong>interpreter</strong> know where the string began and ended? Python knows to use <code>""</code> to represent this string literal. </p>
<p>If we remove the <code>'</code>, then we can expect a single quote surrounding the string:</p>
<p><code>do_not = "dont"</code></p>
<p><code>>>> I also said: 'Those who know binary and those who dont.'.</code></p>
<p><a href="https://docs.python.org/2.0/ref/strings.html" rel="nofollow">Single vs. double quotes in python.</a></p>
| 2 | 2016-08-06T03:03:34Z | [
"python",
"string",
"printing",
"string-formatting"
] |
List comprehension with list and list of tuples | 38,800,133 | <p>In my Python 2.7.5 code I have the following data structures:</p>
<p>A simple list...</p>
<pre><code>>>> data["parts"]
['com', 'google', 'www']
</code></pre>
<p>...and a list of tuples...</p>
<pre><code>>>> data["glue"]
[(1L, 'com'), (3L, 'google')]
</code></pre>
<p>When entering the code where these structures exist I will always know what is in <code>data["parts"]</code>; <code>data["glue"]</code>, at best, will contain "matching" tuples with what is in <code>data["parts"]</code> - worst case <code>data["glue"]</code> can be empty. What I need is to know is the parts that are missing from glue. So with the example data above, I need to know that 'www' is missing, meaning it is not in any of the tuples that may exist in <code>data["glue"]</code>.</p>
<p>I first tried to produce a list of the missing pieces by way of various for loops coupled with if statements but it was very messy at best. I have tried list comprehensions and failed. Maybe list comprehension is not the way to handle this either.</p>
<p>Your help is much appreciated, thanks.</p>
| 1 | 2016-08-06T02:59:21Z | 38,800,208 | <p>You can use list comprehensions here. Maybe the simplest thing would be to build a set of the indices that appear in <code>glue</code>, then collect the parts whose index is missing. Note this answer will give you all the missing components, even if there are duplicates in the parts array (for example, if "www" appeared twice in parts); that would not be the case with a set comprehension.</p>
<pre><code># set of 0-based indices extracted from the 1-based tuples
indices = set(glue_tuple[0] - 1 for glue_tuple in data['glue'])
# array of missing parts, in order
missing_parts = [part for i, part in enumerate(data["parts"]) if i not in indices]
</code></pre>
| 0 | 2016-08-06T03:15:54Z | [
"python",
"list",
"tuples",
"list-comprehension"
] |
List comprehension with list and list of tuples | 38,800,133 | <p>In my Python 2.7.5 code I have the following data structures:</p>
<p>A simple list...</p>
<pre><code>>>> data["parts"]
['com', 'google', 'www']
</code></pre>
<p>...and a list of tuples...</p>
<pre><code>>>> data["glue"]
[(1L, 'com'), (3L, 'google')]
</code></pre>
<p>When entering the code where these structures exist I will always know what is in <code>data["parts"]</code>; <code>data["glue"]</code>, at best, will contain "matching" tuples with what is in <code>data["parts"]</code> - worst case <code>data["glue"]</code> can be empty. What I need is to know is the parts that are missing from glue. So with the example data above, I need to know that 'www' is missing, meaning it is not in any of the tuples that may exist in <code>data["glue"]</code>.</p>
<p>I first tried to produce a list of the missing pieces by way of various for loops coupled with if statements but it was very messy at best. I have tried list comprehensions and failed. Maybe list comprehension is not the way to handle this either.</p>
<p>Your help is much appreciated, thanks.</p>
| 1 | 2016-08-06T02:59:21Z | 38,800,219 | <p>You can use <a href="http://www.linuxtopia.org/online_books/programming_books/python_programming/python_ch16s03.html" rel="nofollow">set difference</a> operations.</p>
<pre><code>print set(data['parts'])-set(i[1] for i in data['glue'])
>>> set(['www'])
</code></pre>
<p>or, more simply, using <a href="https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions" rel="nofollow">list comprehensions</a>:</p>
<pre><code>print [i for i in data['parts'] if i not in (j[1] for j in data['glue'])]
>>> ['www']
</code></pre>
<p>The set operation wins in the speed department: running the operation <em>10,000,000 times</em>, we can see that the list comprehension takes over <strong>16s longer</strong>:</p>
<pre><code>import timeit
print timeit.timeit(lambda : set(data['parts'])-set(i[1] for i in data['glue']), number=10000000)
>>> 16.8089739356
print timeit.timeit(lambda : [i for i in data['parts'] if i not in (j[1] for j in data['glue'])], number=10000000)
>>> 33.5426096522
</code></pre>
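For completeness, the two ideas combine well: build the lookup set once, then filter with a comprehension, which keeps the set's speed while preserving the original ordering of <code>parts</code> (a sketch, not part of the original answer):

```python
data = {"parts": ['com', 'google', 'www'],
        "glue": [(1, 'com'), (3, 'google')]}

# Build the membership set once instead of re-scanning glue per element
glued = {name for _, name in data['glue']}
missing = [part for part in data['parts'] if part not in glued]
# missing == ['www']
```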
| 5 | 2016-08-06T03:17:02Z | [
"python",
"list",
"tuples",
"list-comprehension"
] |
Django app server hangs / won't start in Docker Compose | 38,800,238 | <p>I am trying to launch a straightforward Django app server in Docker Compose, paired with a Postgres container. It goes through as I would expect, launching the entrypoint script, but it never seems to actually run the Django app server (which should be the last step, and remain running).</p>
<p>I know it runs the entrypoint script, because the migrate step is run. The app server never outputs any of the expected output, and port 8000 never responds.</p>
<p>I am using Docker for Mac (stable), if it matters.</p>
<p>Dockerfile for my Django app container:</p>
<pre><code>FROM ubuntu:16.04
COPY my_app /my_app
RUN apt-get update \
&& apt-get install -y python3 python3-psycopg2 python3-pip
RUN apt-get install -y nodejs npm
WORKDIR /my_app
RUN pip3 install -r requirements.txt
RUN npm install bower
RUN python3 manage.py bower install
RUN python3 manage.py collectstatic --no-input
EXPOSE 8000
COPY entrypoint.sh /
RUN chmod 755 /entrypoint.sh
CMD python3 manage.py runserver 0.0.0.0:8000
ENTRYPOINT ["/entrypoint.sh"]
</code></pre>
<p>Django entrypoint script:</p>
<pre><code>#!/bin/sh
# Allow database container to start up or recover from a crash
sleep 10
cd /my_app
# Run any pending migrations
python3 manage.py migrate
exec $@
</code></pre>
<p>docker-compose.yml:</p>
<pre><code>version: '2'
services:
db:
image: postgres:9.6
volumes:
- ./db/pgdata:/pgdata
environment:
- POSTGRES_USER=my_user
- POSTGRES_PASSWORD=my_password
- PGDATA=/pgdata
- POSTGRES_DB=my_database
appserver:
image: my-image
command: python3 manage.py runserver 0.0.0.0:8000
ports:
- '8000:8000'
environment:
- POSTGRES_USER=my_user
- POSTGRES_PASSWORD=my_password
- POSTGRES_DB=my_database
links:
- db
depends_on:
- db
</code></pre>
| 0 | 2016-08-06T03:20:34Z | 38,800,574 | <p>Use the exec form for <code>CMD</code> in your Dockerfile</p>
<pre><code>CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
</code></pre>
<p>The <code>entrypoint.sh</code> script's <code>exec</code> is currently trying to run:</p>
<pre><code>/bin/sh -c python3 manage.py runserver 0.0.0.0:8000
</code></pre>
<p>This doesn't work as intended: the shell treats only the first word, <code>python3</code>, as its command string, so it just runs <code>python3</code> and the remaining words become positional parameters. </p>
<p>You should <a href="http://stackoverflow.com/a/3990540/1318694">quote the positional parameters variable</a> so the shell maintains each parameter, even if there are spaces.</p>
<pre><code>exec "$@"
</code></pre>
<p>But it's best not to have <code>sh</code> in between docker and your app, so always use the exec form for a <code>CMD</code>.</p>
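To see why the shell form misbehaves here, note how <code>sh -c</code> consumes its arguments: only the word immediately after <code>-c</code> is the command string, and any following words fill the positional parameters <code>$0</code>, <code>$1</code>, and so on. A small illustration, run via <code>subprocess</code> so it is self-contained (not part of the original answer):

```python
import subprocess

# Only the first word after -c is the script; the rest become $0, $1, ...
out = subprocess.check_output(
    ["sh", "-c", "echo command=$0 first=$1", "manage.py", "runserver"]
).decode().strip()
# out == 'command=manage.py first=runserver'
```

So `sh -c python3 manage.py runserver 0.0.0.0:8000` simply starts `python3`, with the remaining words parked in unused positional parameters.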
| 1 | 2016-08-06T04:35:21Z | [
"python",
"django",
"docker",
"docker-compose"
] |
Python currying with any number of variables | 38,800,245 | <p>I am trying to use currying to make a simple functional add in Python. I found this curry decorator <a href="https://gist.github.com/JulienPalard/021f1c7332507d6a494b" rel="nofollow">here</a>.</p>
<pre><code>def curry(func):
def curried(*args, **kwargs):
if len(args) + len(kwargs) >= func.__code__.co_argcount:
return func(*args, **kwargs)
return (lambda *args2, **kwargs2:
curried(*(args + args2), **dict(kwargs, **kwargs2)))
return curried
@curry
def foo(a, b, c):
return a + b + c
</code></pre>
<p>Now this is great because I can do some simple currying:</p>
<pre><code>>>> foo(1)(2, 3)
6
>>> foo(1)(2)(3)
6
</code></pre>
<p>But this only works for exactly three variables. How do I write the function foo so that it can accept any number of variables and still be able to curry the result? I've tried the simple solution of using *args but it didn't work.</p>
<p>Edit: I've looked at the answers but still can't figure out how to write a function that can perform as shown below:</p>
<pre><code>>>> foo(1)(2, 3)
6
>>> foo(1)(2)(3)
6
>>> foo(1)(2)
3
>>> foo(1)(2)(3)(4)
10
</code></pre>
| 2 | 2016-08-06T03:22:46Z | 38,800,354 | <p>Arguably, <code>explicit is better than implicit</code>:</p>
<pre><code>from functools import partial
def example(*args):
print("This is an example function that was passed:", args)
one_bound = partial(example, 1)
two_bound = partial(one_bound, 2)
two_bound(3)
</code></pre>
<p>@JohnKugelman explained the design problem with what you're trying to do - a call to the curried function would be ambiguous between "add more curried arguments" and "invoke the logic". The reason this isn't a problem in Haskell (where the concept comes from) is that the language evaluates <em>everything</em> lazily, so there <em>isn't a distinction</em> you can meaningfully make between "a function named <code>x</code> that accepts no arguments and simply returns 3" and "a call to the aforementioned function", or even between those and "the integer 3". Python isn't like that. (You could, for example, use a zero-argument call to signify "invoke the logic now"; but that would break <code>special cases aren't special enough</code>, and require an extra pair of parentheses for simple cases where you don't actually want to do any currying.)</p>
<p><code>functools.partial</code> is an out-of-box solution for partial application of functions in Python. Unfortunately, repeatedly calling <code>partial</code> to add more "curried" arguments isn't quite as efficient (there will be nested <code>partial</code> objects under the hood). However, it's much more flexible; in particular, you can use it with existing functions that don't have any special decoration.</p>
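As an illustration of the zero-argument-call convention mentioned above (the names and API here are mine, not the questioner's), an accumulate-until-empty-call adder could look like:

```python
def curried_add(*args):
    # Accumulate arguments until an empty call, which triggers the sum
    total = list(args)
    def step(*more):
        if not more:
            return sum(total)
        total.extend(more)
        return step
    return step

result = curried_add(1)(2, 3)()     # 6
longer = curried_add(1)(2)(3)(4)()  # 10
```

Note the trade-off: each chain shares one accumulator, so a partially applied step can't safely be reused; a production version would return fresh closures instead of mutating <code>total</code>.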
| 3 | 2016-08-06T03:47:42Z | [
"python",
"python-2.7",
"currying"
] |
Python currying with any number of variables | 38,800,245 | <p>I am trying to use currying to make a simple functional add in Python. I found this curry decorator <a href="https://gist.github.com/JulienPalard/021f1c7332507d6a494b" rel="nofollow">here</a>.</p>
<pre><code>def curry(func):
def curried(*args, **kwargs):
if len(args) + len(kwargs) >= func.__code__.co_argcount:
return func(*args, **kwargs)
return (lambda *args2, **kwargs2:
curried(*(args + args2), **dict(kwargs, **kwargs2)))
return curried
@curry
def foo(a, b, c):
return a + b + c
</code></pre>
<p>Now this is great because I can do some simple currying:</p>
<pre><code>>>> foo(1)(2, 3)
6
>>> foo(1)(2)(3)
6
</code></pre>
<p>But this only works for exactly three variables. How do I write the function foo so that it can accept any number of variables and still be able to curry the result? I've tried the simple solution of using *args but it didn't work.</p>
<p>Edit: I've looked at the answers but still can't figure out how to write a function that can perform as shown below:</p>
<pre><code>>>> foo(1)(2, 3)
6
>>> foo(1)(2)(3)
6
>>> foo(1)(2)
3
>>> foo(1)(2)(3)(4)
10
</code></pre>
| 2 | 2016-08-06T03:22:46Z | 38,800,519 | <p>You can implement the same thing as the <code>functools.partial</code> example for yourself like this:</p>
<pre><code>def curry (prior, *additional):
def curried(*args):
return prior(*(args + additional))
return curried
def add(*args):
return sum(args)
x = curry(add, 3,4,5)
y = curry(x, 100)
print y(200)
# 312
</code></pre>
<p>It may be easier to think of <code>curry</code> as a function factory rather than a decorator; technically that's all a decorator does but the decorator usage pattern is static where a factory is something you expect to be invoking as part of a chain of operations. </p>
<p>You can see here that I'm starting with <code>add</code> as an argument to curry and not <code>add(1)</code> or something: the factory signature is <code><callable>, *<args></code> . That gets around the problem in the comments to the original post.</p>
| 0 | 2016-08-06T04:22:01Z | [
"python",
"python-2.7",
"currying"
] |
Improve performance of constraint-adding in Gurobi (Python-Interface) | 38,800,280 | <p>i got this decision variable:</p>
<pre><code>x={}
for j in range(10):
for i in range(500000):
x[i,j] = m.addVar(vtype=GRB.BINARY, name="x%d%d" %(i,j))
</code></pre>
<p>so I need to add constraints for each x[i,j] variable like this:</p>
<pre><code>for p in range(10):
for u in range(500000):
m.addConstr(x[u,p-1]<=x[u,p])
</code></pre>
<p>This is taking so much time, more than 12 hrs, and then an out-of-memory pop-up appears on my computer.
Can someone help me improve this constraint-addition problem?</p>
| 0 | 2016-08-06T03:29:05Z | 38,804,906 | <h2>General Remark:</h2>
<ul>
<li>It looks quite costly to add 5 million constraints in general</li>
</ul>
<h2>Specific Remark:</h2>
<h3>Approach</h3>
<ul>
<li>You are wasting time and space by using <em>dictionaries</em>
<ul>
<li>Despite offering constant-time access, the constant factors are large</li>
<li>Dictionaries also waste memory</li>
</ul></li>
<li>In a simple 2-dimensional case like this: stick to arrays!</li>
</ul>
<h3>Validity</h3>
<ul>
<li>Your indexing is missing the border-case of the first element, so indexing breaks!</li>
</ul>
<p>Try this (much more efficient approach; using numpy's arrays):</p>
<pre><code>import numpy as np
from gurobipy import *
N = 10
M = 500000
m = Model("Testmodel")
x = np.empty((N, M), dtype=object)
for i in range(N):
for j in range(M):
x[i,j] = m.addVar(vtype=GRB.BINARY, name="x%d%d" %(i,j))
m.update()
for u in range(M): # i switched the loop-order
for p in range(1,N): # i'm handling the border-case
m.addConstr(x[p-1,u] <= x[p,u])
</code></pre>
<p><strong>Result:</strong></p>
<ul>
<li><em>~2 minutes</em></li>
<li><em>~2.5GB</em> memory (complete program incl. Gurobi's internals)</li>
</ul>
| 1 | 2016-08-06T13:42:38Z | [
"python",
"constraints",
"linear-programming",
"gurobi"
] |
Improve performance of constraint-adding in Gurobi (Python-Interface) | 38,800,280 | <p>i got this decision variable:</p>
<pre><code>x={}
for j in range(10):
for i in range(500000):
x[i,j] = m.addVar(vtype=GRB.BINARY, name="x%d%d" %(i,j))
</code></pre>
<p>so I need to add constraints for each x[i,j] variable like this:</p>
<pre><code>for p in range(10):
for u in range(500000):
m.addConstr(x[u,p-1]<=x[u,p])
</code></pre>
<p>This is taking so much time, more than 12 hrs, and then an out-of-memory pop-up appears on my computer.
Can someone help me improve this constraint-addition problem?</p>
| 0 | 2016-08-06T03:29:05Z | 38,816,516 | <p>Most likely, you are running out of physical memory and using virtual (swap) memory. This would not cause your computer to report an out-of-memory warning or error.</p>
<p>I rewrote your code as follows:</p>
<pre><code>from gurobipy import *
m = Model()
x={}
for j in range(10):
for i in range(500000):
x[i,j] = m.addVar(vtype=GRB.BINARY, name="x%d%d" %(i,j))
m.update()
for p in range(10):
for u in range(500000):
try:
m.addConstr(x[u,p-1]<=x[u,p])
except:
pass
m.update()
</code></pre>
<p>I tested this using Gurobi Optimizer 6.5.2 on a computer with an Intel Xeon E3-1240 processor (3.40 GHz) and 32 GB of physical memory. It was able to formulate the variables and constraints in 1 minute 14 seconds. You might be able to save a small amount of memory using a list, but I believe that Gurobi Var and Constr objects require far more memory than a Python dict or list.</p>
| 1 | 2016-08-07T17:04:45Z | [
"python",
"constraints",
"linear-programming",
"gurobi"
] |
BeautifulSoup <span>foo</span> result | 38,800,307 | <p>HTML Code:
<div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><div id="descProdotto"> <img alt="Mont Blanc Eyewear" class="logoDesc" src="http://151.9.39.27/nfs/Immagini/Loghi_Linee/BA.png"/> <p><span>Nome:</span> BA0055</p> <p><span>Occh.:</span> Metallo</p> <p><span>Forma:</span> Geometrico</p> <p><span>Tipo:</span> Cerchiato</p> <p><span>Asta Flex:</span> No</p> <p><span>Fitting:</span> Caucasian</p> <img class="separatore" height="11" src="../../Grafica/Icone/separator_S.png" width="197"/>
<div class="glassDes">
<p class="scroll"></p>
</div></code></pre>
</div>
</div>
</p>
<p>Python Code:</p>
<pre><code>from bs4 import BeautifulSoup
def get_description(sorgente):
    soup = BeautifulSoup(sorgente, 'html.parser')
    list = soup.find_all("p")
    for a in list:
        print(a.find('Nome:'))
code = driver.page_source
get_description(code)
</code></pre>
<p>I cannot extract the value into a variable: <code>Nome</code>: <code>BA0055</code></p>
<p>How can I do that?</p>
<p>Thank you</p>
| 0 | 2016-08-06T03:34:17Z | 38,800,387 | <p>Find the <code>span</code> element <em>by text</em> and get the <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#next-sibling-and-previous-sibling" rel="nofollow">next sibling</a>:</p>
<pre><code>soup.find("span", text="Nome:").next_sibling
</code></pre>
<p>This is also covered in the <a class='doc-link' href="http://stackoverflow.com/documentation/beautifulsoup/1940/locating-elements/6339/locate-a-text-after-an-element-in-beautifulsoup#t=201608060356509896745">example snippet in the SO Documentation</a>.</p>
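A self-contained sketch of that lookup against a fragment of the question's markup (assuming <code>bs4</code> is installed):

```python
from bs4 import BeautifulSoup

html = '<p><span>Nome:</span> BA0055</p>'
soup = BeautifulSoup(html, 'html.parser')
# next_sibling is the text node right after the span; strip the leading space
value = soup.find('span', text='Nome:').next_sibling.strip()
# value == 'BA0055'
```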
| 0 | 2016-08-06T03:54:50Z | [
"python",
"html",
"beautifulsoup"
] |
Cannot validate boolean field if value = false (missing field) | 38,800,338 | <p>I can't validate POST data when the boolean field value is "false"; it works fine with "true". I have already tried forcing '0', 'False' and 'false', but the is_valid() method returns False with the error "This field is required". </p>
<p>My post data dict is : </p>
<pre><code><QueryDict: {u'vote': [u'false'], u'csrfmiddlewaretoken': [u'l3RlmeHDnv7Y5aQiNJoadLUZDsiOFUI'], u'id': [u'40']}>
</code></pre>
<p>If I post 'true' value for this field, the QueryDict is exactly the same, just 'false' is changing. </p>
<p>I tried converting to a str dict instead of a unicode dict, but it's not working. </p>
<p>Thanks !</p>
| 0 | 2016-08-06T03:42:54Z | 38,800,383 | <p>If I understand your question correctly, you would need to parse the input from a string to a bool value. Both <code>bool('false')</code> and <code>bool('true')</code> will always return <code>True</code>, because any non-empty string is truthy under Python's <a href="https://docs.python.org/2/library/stdtypes.html#truth-value-testing" rel="nofollow">truth-value testing</a> rules. </p>
<pre><code>def to_bool(s):
return s.lower() == 'true'
</code></pre>
<p><strong>Sample output:</strong></p>
<pre><code>d = {u'vote': [u'false']}
print to_bool(d['vote'][0])
>>> False
</code></pre>
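If the client may also send '1'/'0', as the question tried, a slightly more permissive variant could look like this (an assumption about the allowed inputs, not part of the original answer):

```python
def to_bool(s):
    # Treat 'true' and '1' (any case, surrounding whitespace ignored) as True
    return s.strip().lower() in ('true', '1')
```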
| 2 | 2016-08-06T03:54:35Z | [
"python",
"django",
"django-forms"
] |
Cannot validate boolean field if value = false (missing field) | 38,800,338 | <p>I can't validate POST data when the boolean field value is "false"; it works fine with "true". I have already tried forcing '0', 'False' and 'false', but the is_valid() method returns False with the error "This field is required". </p>
<p>My post data dict is : </p>
<pre><code><QueryDict: {u'vote': [u'false'], u'csrfmiddlewaretoken': [u'l3RlmeHDnv7Y5aQiNJoadLUZDsiOFUI'], u'id': [u'40']}>
</code></pre>
<p>If I post 'true' value for this field, the QueryDict is exactly the same, just 'false' is changing. </p>
<p>I tried converting to a str dict instead of a unicode dict, but it's not working. </p>
<p>Thanks !</p>
| 0 | 2016-08-06T03:42:54Z | 38,804,363 | <p><strong>Solved</strong> :
Using NullBooleanField instead of BooleanField... </p>
| 0 | 2016-08-06T12:35:11Z | [
"python",
"django",
"django-forms"
] |
'WSGIRequest' object has no attribute 'flash' | 38,800,379 | <p>I badly need your help. I am currently trying to pass a string value using flash, but I am not sure if I am doing it correctly. </p>
<p>This is my code:</p>
<pre><code>def first_view(request):
request.flash['message'] = 'Operation succeeded!'
return HttpResponseRedirect(reverse(second_view))
def second_view(request):
print request.flash['message']
request.flash.keep('message')
return HttpResponseRedirect(reverse(third_view))
</code></pre>
<p>I'd like to pass the message "Operation Succeeded" to second_view() through HttpResponseRedirect; however, I got this error message. I am new to Python and Django, so this doesn't really make sense to me. Your help is much appreciated. Thanks</p>
| 0 | 2016-08-06T03:53:12Z | 38,800,858 | <p>By default the django HttpRequest object doesn't have an attribute named flash. That's why you are getting this error. You can see available attributes here: <a href="https://docs.djangoproject.com/en/1.9/ref/request-response/#httprequest-objects" rel="nofollow">https://docs.djangoproject.com/en/1.9/ref/request-response/#httprequest-objects</a></p>
<p>But there's no reason why you can't add one.</p>
<pre><code>def first_view(request):
    request.flash = {'message': 'Operation succeeded!'}
return HttpResponseRedirect(reverse(second_view))
def second_view(request):
try:
print request.flash['message']
request.flash.keep('message')
except:
pass
return HttpResponseRedirect(reverse(third_view))
</code></pre>
<p>But where your <code>flash.keep</code> comes from, I have no idea! As pointed out by <a href="http://stackoverflow.com/users/940098/wtower">wtower</a>, it's more usual to rely on the <a href="https://docs.djangoproject.com/en/1.9/ref/contrib/messages/" rel="nofollow">django messages framework</a> for this sort of thing.</p>
| 1 | 2016-08-06T05:25:55Z | [
"python",
"django"
] |
Python: Loop Prompt Back Around | 38,800,385 | <p>A lot of the errors in this Python file have been fixed, but there is one last thing that's not working. I need the else statement to loop back and ask the question again if neither Yes nor No is entered. You can probably see from the code what I was going for, but I'm probably not even on the right track. Can someone help me with this one last thing?</p>
<pre><code>#These are my variables which are just strings entered by the user.
friend = raw_input("Who is your friend? ")
my_name = raw_input("Enter Your Name: ")
game_system = raw_input("What's your favorite game system? ")
game_name = raw_input("What's your favorite game for that system? ")
game_status = raw_input("Do you have the game? (Yes or No) ")
game_store = raw_input("What is your favorite game store? ")
game_price = raw_input("What's the price of the game today? Enter a whole number. ")
#This is what is printed after all the prompts. There is no condition for this print.
print "I went outside today and my friend " + friend + " was outside waiting for me. He said \"" + my_name + ", did you get the new " + game_system + " game yet? You were getting " + game_name + " today, right?\""
#If the condition on the Yes or No question is yes, then this code runs.
if game_status == "YES":
print "\"You know I got it, man!\" I said. \"Awesome! Let's play it,\" he said. \"I heard " + game_name + " is supposed to be amazing!\" We went back inside and took turns playing until " + friend + " had to go home. Today was a fun day."
#If the condition is No, then this code runs.
elif game_status == "No":
    print "\"Well let's go get it today and we can come back here and play it!\" We went down to " + game_store + " and bought " + game_name + " for $" + str(game_price) + " and we went back to my house to take turns playing until " + friend + " went home. Today was a good day. (Now try again with the No option!)"
#If the condition meets neither Yes or No, then this code runs, sending the user back to the same question again. This repeats until a condition is met.
else:
raw_input("That answer didn't work. Try again? Do you have the game? (Yes or No) ")
</code></pre>
| -1 | 2016-08-06T03:54:45Z | 38,800,426 | <p>I would encourage you to always break to a new line after a conditional...</p>
<pre><code>if game_status == "YES":
print "\"You know I got it, man!\" I said. \"Awesome! Let's play it,\" he said. \"I heard " + game_name + " is supposed to be amazing!\" We went back inside and took turns playing until " + friend + " had to go home. Today was a fun day."
</code></pre>
<p>anything that is indented after the "if game_status:" will get run. And it reads better.</p>
<p><em>edit</em>: if you use single quotes for all strings then you don't need to escape the double quotes...</p>
<pre><code> print '"You know I got it, man!" I said. "Awesome! Let\'s play it," he said. "I heard ' + game_name + ' is supposed to be amazing!" We went back inside and took turns playing until ' + friend + ' had to go home. Today was a fun day.'
</code></pre>
<p>it's a matter of preference...but may look less cluttered.</p>
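As for the re-prompting the question actually asks about, that is usually written as a <code>while</code> loop around the input call. A sketch (Python 3 style, with the reader injected so it can run without a console; not part of the original answer):

```python
def ask_yes_no(prompt, read=input):
    # Keep asking until the reader answers "yes" or "no" (any case)
    while True:
        answer = read(prompt).strip().lower()
        if answer in ("yes", "no"):
            return answer
        prompt = "That answer didn't work. Try again? Do you have the game? (Yes or No) "

replies = iter(["maybe", "NO"])
result = ask_yes_no("Do you have the game? (Yes or No) ",
                    read=lambda p: next(replies))
# result == "no"
```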
| 2 | 2016-08-06T04:04:02Z | [
"python",
"if-statement"
] |
Python: Loop Prompt Back Around | 38,800,385 | <p>A lot of the errors in this Python file have been fixed, but there is one last thing that's not working. I need the else statement to loop back and ask the question again if neither Yes nor No is entered. You can probably see from the code what I was going for, but I'm probably not even on the right track. Can someone help me with this one last thing?</p>
<pre><code>#These are my variables which are just strings entered by the user.
friend = raw_input("Who is your friend? ")
my_name = raw_input("Enter Your Name: ")
game_system = raw_input("What's your favorite game system? ")
game_name = raw_input("What's your favorite game for that system? ")
game_status = raw_input("Do you have the game? (Yes or No) ")
game_store = raw_input("What is your favorite game store? ")
game_price = raw_input("What's the price of the game today? Enter a whole number. ")
#This is what is printed after all the prompts. There is no condition for this print.
print "I went outside today and my friend " + friend + " was outside waiting for me. He said \"" + my_name + ", did you get the new " + game_system + " game yet? You were getting " + game_name + " today, right?\""
#If the condition on the Yes or No question is yes, then this code runs.
if game_status == "YES":
print "\"You know I got it, man!\" I said. \"Awesome! Let's play it,\" he said. \"I heard " + game_name + " is supposed to be amazing!\" We went back inside and took turns playing until " + friend + " had to go home. Today was a fun day."
#If the condition is No, then this code runs.
elif game_status == "No":
print "\"Well let's go get it today and we can come back here and play it!\" We went down to " + game_store + " and bought " + game_name + " for $" + str(game_price) + " and we went back to my house to take turns playing until " + friend + " went home. Today was a good day. (Now try again with the No option!)"
#If the condition meets neither Yes or No, then this code runs, sending the user back to the same question again. This repeats until a condition is met.
else:
raw_input("That answer didn't work. Try again? Do you have the game? (Yes or No) ")
</code></pre>
| -1 | 2016-08-06T03:54:45Z | 38,800,484 | <p>This:</p>
<pre><code>if game_status: "YES"
</code></pre>
<p>isn't how you make an <code>if</code> statement. You're treating it like the syntax is</p>
<pre><code>if some_variable: some_value
</code></pre>
<p>and if the variable has that value, the if statement triggers. In fact, the syntax is</p>
<pre><code>if some_expression:
</code></pre>
<p>and if the expression evaluates to something considered true, the if statement triggers. When you want the <code>if</code> statement to trigger on <code>game_status</code> equalling <code>"YES"</code>, the expression should be <code>game_status == "YES"</code>, so the <code>if</code> line should go</p>
<pre><code>if game_status == "YES":
</code></pre>
<p>Similarly, the <code>elif</code> line should go</p>
<pre><code>elif game_status == "NO":
</code></pre>
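<p>To address the looping part of the question directly, here is a sketch of a reusable validation loop. It is written for Python 3's <code>input()</code> (the question's <code>raw_input</code> is the Python 2 equivalent), and the <code>input_fn</code> parameter is a hypothetical addition that just makes the loop easy to test:</p>

```python
def ask_yes_no(prompt, input_fn=input):
    """Keep re-asking the same question until the user answers Yes or No."""
    answer = input_fn(prompt)
    while answer not in ("Yes", "No"):
        answer = input_fn("That answer didn't work. Try again? " + prompt)
    return answer

# Usage in the game would be:
# game_status = ask_yes_no("Do you have the game? (Yes or No) ")
```

<p>Because the loop returns only once a valid answer is entered, the <code>if</code>/<code>elif</code> chain after it no longer needs the <code>else</code> branch at all.</p>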
| 2 | 2016-08-06T04:16:08Z | [
"python",
"if-statement"
] |
How to read hadoop map file using python? | 38,800,430 | <p>I have a map file that is block compressed using DefaultCodec. The map file is created by a Java application like this:</p>
<pre><code>MapFile.Writer writer =
new MapFile.Writer(conf, path,
MapFile.Writer.keyClass(IntWritable.class),
MapFile.Writer.valueClass(BytesWritable.class),
MapFile.Writer.compression(SequenceFile.CompressionType.BLOCK, new DefaultCodec()));
</code></pre>
<p>This file is stored in hdfs and I need to read some key,values from it in another application using python. I can't find any library that can do that. Do you have any suggestion and example?</p>
<p>Thanks</p>
| 0 | 2016-08-06T04:04:49Z | 38,800,680 | <p>I would suggest using Spark which has a function called textFile() which can read files from HDFS and turn them into RDDs for further processing using other Spark libraries.</p>
<p>Here's the documentation : <a href="http://spark.apache.org/docs/latest/api/python/pyspark.html" rel="nofollow">Pyspark</a></p>
| 0 | 2016-08-06T04:54:14Z | [
"python",
"hadoop"
] |
How to read hadoop map file using python? | 38,800,430 | <p>I have a map file that is block compressed using DefaultCodec. The map file is created by a Java application like this:</p>
<pre><code>MapFile.Writer writer =
new MapFile.Writer(conf, path,
MapFile.Writer.keyClass(IntWritable.class),
MapFile.Writer.valueClass(BytesWritable.class),
MapFile.Writer.compression(SequenceFile.CompressionType.BLOCK, new DefaultCodec()));
</code></pre>
<p>This file is stored in hdfs and I need to read some key,values from it in another application using python. I can't find any library that can do that. Do you have any suggestion and example?</p>
<p>Thanks</p>
| 0 | 2016-08-06T04:04:49Z | 38,803,884 | <p>Create a reader as follows:</p>
<pre><code>from hadoop.io import MapFile, LongWritable  # assumes the python-hadoop package linked below

path = '/hdfs/path/to/file'
key = LongWritable()
value = LongWritable()
reader = MapFile.Reader(path)
while reader.next(key, value):
print key, value
</code></pre>
<p>Check out these <a href="http://nullege.com/codes/show/src%40h%40a%40Hadoop-HEAD%40python-hadoop%40examples%40MapFileTest.py/31/hadoop.io.MapFile.Reader/python" rel="nofollow">hadoop.io.MapFile Python examples</a></p>
<p>And <a href="https://github.com/matteobertozzi/Hadoop/blob/master/python-hadoop/hadoop/io/MapFile.py" rel="nofollow">available methods in MapFile.py</a> </p>
| 0 | 2016-08-06T11:40:33Z | [
"python",
"hadoop"
] |
Function not returning a value | 38,800,462 | <p>Please help. I am having issues with my <code>getroofCalcs()</code> function not returning variables. Specifically the <code>roofArea</code> variable at the moment. This is a basic program for my intro to programming class and I can not figure out why when I run this I keep getting the error that <code>roofArea</code> is not defined when I call for the <code>getshingleCalcs()</code> function. This code is in Python.</p>
<pre><code># Stick Built Garage Estimator
# Written by: John Ruehs
#Initialization Variables
#Declare doAgain
#Input Variables
#Declare length
#Declare width
#Declare studSpace
#Declare wallHeight
#Declare roofPitch
#Declare overHang
#Declare bigGarageDoor
#Declare smallGarageDoor
#Declare entryDoor
#Declare window
#Calculated Variables
#Declare topTiePlate
#Declare bottomPlate
#Declare studs
#Declare wallSheathing
#Declare roofSheathing
#Declare shingles
#Declare shingleStarter
#Declare ridgeCap
#Declare roofArea
#Declare rakeLength
#Declare studAdj
#Declare botPlateAdj
#Declare wallAreaAdj
#Declare gableArea
import math
def main():
doAgain = "yes"
if doAgain == "yes":
length, width, studSpace, wallHeight, roofPitch, overHang, bigGarageDoor, smallGarageDoor, entryDoor, windows = getinputs()
getframeCalcs(length, width, studSpace, bigGarageDoor, smallGarageDoor, entryDoor, windows)
getwallCalcs(length, width, wallHeight, bigGarageDoor, smallGarageDoor, entryDoor, roofPitch)#need to put variables needed here
getroofCalcs(length, width, roofPitch, overHang)#need to put variables needed here
getshingleCalcs(length, roofArea, rakeLength)#need to put variables needed here
display(topTiePlate, bottomPlate, studs, wallSheathing, roofSheathing, shingles, shingleStarter, ridgeCap, rakeLength)#need to put variables needed here
doAgain = input("Do you want to run this again('yes' or 'no')?")
else:
print("")
def getinputs():
length = float(input("Enter the length of the building: "))
width = float(input("Enter the width of the building: "))
studSpace = float(input("Enter the stud spacing: "))
wallHeight = float(input("Enter the wall height: "))
roofPitch = input("Enter the roof pitch: ")
overHang = float(input("Enter the over-hang in inches: "))
bigGarageDoor = int(input("Enter the number of 16' garage doors: "))
smallGarageDoor = int(input("Enter the number of 9' garage doors: "))
entryDoor = int(input("Enter the number of entry doors: "))
windows = int(input("Enter the number of windows that are smaller than 3' wide: "))
return length, width, studSpace, wallHeight, roofPitch, overHang, bigGarageDoor, smallGarageDoor, entryDoor, windows
def getframeCalcs(length, width, studSpace, bigGarageDoor, smallGarageDoor, entryDoor, windows):
studAdj = ((bigGarageDoor*-7)+(smallGarageDoor*-3)+(entryDoor*2)+(windows*5))
botPlateAdj = ((bigGarageDoor*-16)+(smallGarageDoor*-9)+(entryDoor*-3))
studs = math.ceil((((((((length*2)+(width*2))*12)/studSpace)+8)*1.1)+studAdj))
topTiePlate = math.ceil((((length*2)+(width*2))/16)*2)
bottomPlate = math.ceil(((((length*2)+(width*2))+botPlateAdj)/16))
return studs, topTiePlate, bottomPlate
def getwallCalcs(length, width, wallHeight, bigGarageDoor, smallGarageDoor, entryDoor, roofPitch):
wallAreaAdj = ((bigGarageDoor*-112)+(smallGarageDoor*-63)+(entryDoor*-21.77))
if roofPitch == "1/12":
gableArea = math.ceil(((((width/2)+0.5)*1)/12)*(((width/2)+0.5))*2)
elif roofPitch == "2/12":
gableArea = math.ceil(((((width/2)+0.5)*2)/12)*(((width/2)+0.5))*2)
elif roofPitch == "3/12":
gableArea = math.ceil(((((width/2)+0.5)*3)/12)*(((width/2)+0.5))*2)
elif roofPitch == "4/12":
gableArea = math.ceil(((((width/2)+0.5)*4)/12)*(((width/2)+0.5))*2)
elif roofPitch == "5/12":
gableArea = math.ceil(((((width/2)+0.5)*5)/12)*(((width/2)+0.5))*2)
elif roofPitch == "6/12":
gableArea = math.ceil(((((width/2)+0.5)*6)/12)*(((width/2)+0.5))*2)
elif roofPitch == "7/12":
gableArea = math.ceil(((((width/2)+0.5)*7)/12)*(((width/2)+0.5))*2)
elif roofPitch == "8/12":
gableArea = math.ceil(((((width/2)+0.5)*8)/12)*(((width/2)+0.5))*2)
elif roofPitch == "9/12":
gableArea = math.ceil(((((width/2)+0.5)*9)/12)*(((width/2)+0.5))*2)
elif roofPitch == "10/12":
gableArea = math.ceil(((((width/2)+0.5)*10)/12)*(((width/2)+0.5))*2)
elif roofPitch == "11/12":
gableArea = math.ceil(((((width/2)+0.5)*11)/12)*(((width/2)+0.5))*2)
else:
gabelArea = math.ceil(((((width/2)+0.5)*12)/12)*(((width/2)+0.5))*2)
wallSheathing = math.ceil(((((((length*2)+(width*2))*wallHeight)+gableArea)+wallAreaAdj)/32))
return wallSheathing
def getroofCalcs(length, width, roofPitch, overHang):
if roofPitch == "1/12":
roofArea = math.ceil((((((((((width/2)+(overHang/12))*1)/12)**2)+(((width/2)+(overHang/12))**2))**.5)*length)*2))
elif roofPitch == "2/12":
roofArea = math.ceil((((((((((width/2)+(overHang/12))*2)/12)**2)+(((width/2)+(overHang/12))**2))**.5)*length)*2))
elif roofPitch == "3/12":
roofArea = math.ceil((((((((((width/2)+(overHang/12))*3)/12)**2)+(((width/2)+(overHang/12))**2))**.5)*length)*2))
elif roofPitch == "4/12":
roofArea = math.ceil((((((((((width/2)+(overHang/12))*4)/12)**2)+(((width/2)+(overHang/12))**2))**.5)*length)*2))
elif roofPitch == "5/12":
roofArea = math.ceil((((((((((width/2)+(overHang/12))*5)/12)**2)+(((width/2)+(overHang/12))**2))**.5)*length)*2))
elif roofPitch == "6/12":
roofArea = math.ceil((((((((((width/2)+(overHang/12))*6)/12)**2)+(((width/2)+(overHang/12))**2))**.5)*length)*2))
elif roofPitch == "7/12":
roofArea = math.ceil((((((((((width/2)+(overHang/12))*7)/12)**2)+(((width/2)+(overHang/12))**2))**.5)*length)*2))
elif roofPitch == "8/12":
roofArea = math.ceil((((((((((width/2)+(overHang/12))*8)/12)**2)+(((width/2)+(overHang/12))**2))**.5)*length)*2))
elif roofPitch == "9/12":
roofArea = math.ceil((((((((((width/2)+(overHang/12))*9)/12)**2)+(((width/2)+(overHang/12))**2))**.5)*length)*2))
elif roofPitch == "10/12":
roofArea = math.ceil((((((((((width/2)+(overHang/12))*10)/12)**2)+(((width/2)+(overHang/12))**2))**.5)*length)*2))
elif roofPitch == "11/12":
roofArea = math.ceil((((((((((width/2)+(overHang/12))*11)/12)**2)+(((width/2)+(overHang/12))**2))**.5)*length)*2))
else:
roofArea = math.ceil((((((((((width/2)+(overHang/12))*12)/12)**2)+(((width/2)+(overHang/12))**2))**.5)*length)*2))
roofSheathing = math.ceil(roofArea/32)
if roofPitch == "1/12":
rakeLength = math.ceil((((((((width/2)+(overHang/12))*1)/12)**2)+(((width/2)+(overHang/12))**2))**.5))
elif roofPitch == "2/12":
rakeLength = math.ceil((((((((width/2)+(overHang/12))*2)/12)**2)+(((width/2)+(overHang/12))**2))**.5))
elif roofPitch == "3/12":
rakeLength = math.ceil((((((((width/2)+(overHang/12))*3)/12)**2)+(((width/2)+(overHang/12))**2))**.5))
elif roofPitch == "4/12":
rakeLength = math.ceil((((((((width/2)+(overHang/12))*4)/12)**2)+(((width/2)+(overHang/12))**2))**.5))
elif roofPitch == "5/12":
rakeLength = math.ceil((((((((width/2)+(overHang/12))*5)/12)**2)+(((width/2)+(overHang/12))**2))**.5))
elif roofPitch == "6/12":
rakeLength = math.ceil((((((((width/2)+(overHang/12))*6)/12)**2)+(((width/2)+(overHang/12))**2))**.5))
elif roofPitch == "7/12":
rakeLength = math.ceil((((((((width/2)+(overHang/12))*7)/12)**2)+(((width/2)+(overHang/12))**2))**.5))
elif roofPitch == "8/12":
rakeLength = math.ceil((((((((width/2)+(overHang/12))*8)/12)**2)+(((width/2)+(overHang/12))**2))**.5))
elif roofPitch == "9/12":
rakeLength = math.ceil((((((((width/2)+(overHang/12))*9)/12)**2)+(((width/2)+(overHang/12))**2))**.5))
elif roofPitch == "10/12":
rakeLength = math.ceil((((((((width/2)+(overHang/12))*10)/12)**2)+(((width/2)+(overHang/12))**2))**.5))
elif roofPitch == "11/12":
rakeLength = math.ceil((((((((width/2)+(overHang/12))*11)/12)**2)+(((width/2)+(overHang/12))**2))**.5))
else:
rakeLength = math.ceil((((((((width/2)+(overHang/12))*12)/12)**2)+(((width/2)+(overHang/12))**2))**.5))
return roofArea, roofSheathing, rakeLength
def getshingleCalcs(length, roofArea, rakeLength):
shingles = math.ceil(((roofArea/100)*3))
shingleStarter = math.ceil((((rakeLength*4)+(length*2))/120))
ridgeCap = math.ceil(length/20)
return shingles, shingleStarter, ridgeCap
def display(topTiePlate, bottomPlate, studs, wallSheathing, roofSheathing, shingles, shingleStarter, ridgeCap, rakeLength):
print("")
print("16' Top Plate/Tie Plate: ", topTiePlate)
print("16' Bottom Plate: ", bottomPlate)
print("Studs: ", studs)
print("4'x8' Wall Sheathing: ", wallSheathing)
print("4'x8' Roof Sheathing: ", roofSheathing)
print("Rake Length (Rounded Up): ", rakeLength)
print("Bundles of Shingles: ", shingles)
print("Bundles of Shingle Starter: ", shingleStarter)
print("Bundles of Ridge Cap: ", ridgeCap)
print("")
print("")
main()
</code></pre>
| 0 | 2016-08-06T04:11:53Z | 38,800,538 | <p><code>getroofCalcs()</code> <em>is</em> returning a value - it's returning a tuple consisting of the three calculated values. The problem is, however, that the return value is not bound to any variable and so is lost. You can change the code where the call to <code>getroofCalcs()</code> is made in <code>main()</code> to bind the return value of the function to a variable:</p>
<pre><code> result = getroofCalcs(length, width, roofPitch, overHang)#need to put variables needed here
</code></pre>
<p>This will bind to the variable <code>result</code> the tuple returned by <code>getroofCalcs()</code>. It's also possible to unpack the tuple directly into individual variables like this:</p>
<pre><code> roofArea, roofSheathing, rakeLength = getroofCalcs(length, width, roofPitch, overHang)
</code></pre>
<p>Now the call to <code>getshingleCalcs()</code> should work.</p>
<p><strong>N.B.</strong> there is a similar problem with the call to <code>getshingleCalcs()</code> where the return value is lost because it is not bound to any variable(s). You should also change that line to:</p>
<pre><code> shingles, shingleStarter, ridgeCap = getshingleCalcs(length, roofArea, rakeLength)
</code></pre>
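<p>A minimal illustration of the difference between binding the whole returned tuple and unpacking it (the function here is hypothetical, purely to show the mechanics):</p>

```python
def calc():
    return 4, 8, 15        # a function can return several values as one tuple

result = calc()            # result is bound to the whole tuple (4, 8, 15)
a, b, c = calc()           # or unpack it straight into three variables
print(result, a, b, c)
```

<p>Either form works; the point is that the return value must be bound to <em>something</em>, or it is simply discarded.</p>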
| 1 | 2016-08-06T04:25:08Z | [
"python",
"function",
"return"
] |
Flask on GAE - Connect to Google API - Can't use JSON file | 38,800,487 | <p>I'm fairly new to Flask, GAE and the use of API. I'm trying to build a basic Web App that can connect to one of Google's API.</p>
<p>My folder structure looks like this (I've kept it to the main files):<br>
app-webemotions:<br>
-app.yaml<br>
-main.py<br>
-lib<br>
--sentimentanalysis.py<br>
-static<br>
--credential.json</p>
<p>Everything is working except providing the JSON file for the credentials. My understanding is that there are a couple of ways to do it:<br>
1) Setting up the GOOGLE_APPLICATION_CREDENTIALS environment variable to the destination of my file in app.yaml<br>
2) Requesting the file through my script (sentimentanalysis.py) </p>
<p>Unfortunately, I haven't been able to make any of those work. </p>
<p><strong>Option 1):</strong><br>
In app.yaml I have the line:</p>
<pre><code>env_variables:
GOOGLE_APPLICATION_CREDENTIALS: static/key/credentials.json
</code></pre>
<p>I then run my code through dev_appserver.py . and get the following error: </p>
<pre><code>ApplicationDefaultCredentialsError: File static/key/credentials.json (pointed by GOOGLE_APPLICATION_CREDENTIALS environment variable) does not exist!
</code></pre>
<p><strong>Option 2):</strong>
I have a line of code in my script sentimentanalysis.py:</p>
<pre><code> scope = ['https://www.googleapis.com/auth/cloud-platform']
credentials = ServiceAccountCredentials.from_json_keyfile_name('/static/credentials.json', scope)
</code></pre>
<p>And when running the code I get the following error:</p>
<pre><code> raise IOError(errno.EACCES, 'file not accessible', filename)
IOError: [Errno 13] file not accessible: '/static/credentials.json'
INFO 2016-08-06 04:10:51,678 module.py:788] default: "POST /Sentiment-analysis HTTP/1.1" 500 -
</code></pre>
<p><strong>Question:</strong><br>
So it looks like regardless of the method I'm using, I'm not able to provide the right path to the JSON file</p>
<p>My question is to know first if any of the above options is the right option and if yes, what am I doing wrong? If they are not the right options, what would you recommend?</p>
<p>Apologies if this has already been asked, I've tried to find an answer for a few hours now and haven't been able to crack it...</p>
<p>Thank you!</p>
| 1 | 2016-08-06T04:16:36Z | 38,837,697 | <p>If you are running on Google App Engine, then your code automatically has the credentials it needs. Do not set GOOGLE_APPLICATION_CREDENTIALS and do not call .from_json_keyfile_name. Instead, call:</p>
<pre><code>credentials = GoogleCredentials.get_application_default()
</code></pre>
<p>As shown here:
<a href="https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/bigquery/api/getting_started.py" rel="nofollow">https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/bigquery/api/getting_started.py</a></p>
| 0 | 2016-08-08T20:10:05Z | [
"python",
"google-app-engine",
"flask",
"google-api"
] |
Python Inheritance: Calling Methods Inside Methods | 38,800,491 | <p>I'm new to Python and trying to understand the best "Pythonic" practices for OO and inheritance.</p>
<p>Here's a simplified version of something I'm trying to do: Let's say I have a base class <code>A</code> that has an initialization method, which, in turn, calls another method that sets some internal parameters -- I'd like to have the latter method accessible to clients as an independent function/service that can be reused after initialization:</p>
<pre><code>class A(object):
def __init__(self):
print "Entered A's __init__"
#Initialization specific to A:
print "Calling A's set_params"
self.set_params()
def set_params(self):
print "Entered A's set_params"
#Parameter setting specific to A:
#...
</code></pre>
<p>Then as expected, the output of <code>A()</code> prints the following:</p>
<pre><code>Entered A's __init__
Calling A's set_params
Entered A's set_params
</code></pre>
<p>So far there's no problem. Next, I add a sub class <code>B</code> that inherits from <code>A</code> but has more specific tasks going on in addition to the inherited ones:</p>
<pre><code>class B(A):
def __init__(self):
print "Entered B's __init__"
#Inheriting initialization from A:
print "Calling A's __init__"
super(B, self).__init__()
#More initialization specific to B:
#...
def set_params(self):
print "Entered B's set_params"
#Inheriting parameter setting from A:
print "Calling A's set_params"
super(B, self).set_params()
#More parameter setting specific to B:
#...
</code></pre>
<p>The problem is that I expect the initializer <code>A.__init__</code> called by <code>B.__init__</code> to operate <em>completely independently</em> of how I'm overriding the functions of <code>B</code>, so that overriding the functions of <code>B</code> does not change the behavior of <code>A</code> when instantiating <code>B</code>. In this case, I need <code>A.__init__</code> to call <code>A.set_params</code> as before rather than calling <code>B.set_params</code>, and disregard the fact that the latter is being overridden.</p>
<p>In other words, in this case I obtain the following output after running <code>B()</code>:</p>
<pre><code>Entered B's __init__
Calling A's __init__
Entered A's __init__
Calling A's set_params
Entered B's set_params
Calling A's set_params
Entered A's set_params
</code></pre>
<p>And the question is, what should I do to get this instead after running <code>B()</code>?</p>
<pre><code>Entered B's __init__
Calling A's __init__
Entered A's __init__
Calling A's set_params
Entered A's set_params
</code></pre>
<p>The problem would disappear if I simply got rid of <code>A.set_params</code> and put its code content inside <code>A.__init__</code>, but as I mentioned, I'd like it to be separate and accessible by client code independently.</p>
<p>I understand that it has something to do with the fact that functions are bound to <em>instances</em> in this case rather than to <em>classes</em>, and I have tried static methods, class methods, and abstract methods as well, but I could not figure out the correct combination to solve this problem.</p>
<p>Some insight would be greatly appreciated! :-)</p>
| 3 | 2016-08-06T04:17:34Z | 38,800,659 | <p>Yeah, designing for inheritance is tricky like that. People don't put as much thought as they really should into what overridden methods do or don't affect, and they don't do enough to document how overriding methods affects other methods.</p>
<p>If you want to bypass overrides for that <code>set_params</code> call, you can explicitly specify which class's method you want to call:</p>
<pre><code>class A(object):
def __init__(self):
...
A.set_params(self)
</code></pre>
<p>or you can add a <code>__set_params</code> method that won't get overridden and call that from <code>__init__</code> and <code>set_params</code>:</p>
<pre><code>class A(object):
def __init__(self):
...
self.__set_params()
def set_params(self):
self.__set_params()
def __set_params(self):
# actual implementation here
</code></pre>
<p>One way, you have to type out the class name explicitly every time, and the other way, you have to write up another method definition. Which one works better depends on how long your class name is and how many times you need to use the method.</p>
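<p>A self-contained sketch of the second approach. The <code>calls</code> list is a hypothetical addition, there only to make the call order observable:</p>

```python
class A(object):
    def __init__(self):
        self.calls = []
        self.__set_params()        # mangled to _A__set_params, so B cannot override it

    def set_params(self):          # public entry point, still overridable
        self.__set_params()

    def __set_params(self):        # actual implementation
        self.calls.append("A.set_params")


class B(A):
    def set_params(self):
        super(B, self).set_params()
        self.calls.append("B.set_params")


b = B()
print(b.calls)     # __init__ ran only A's implementation: ['A.set_params']
b.set_params()
print(b.calls)     # the public call goes through B's override as well
```

<p>Name mangling rewrites <code>self.__set_params</code> inside <code>A</code> to <code>self._A__set_params</code>, which is why <code>B</code>'s override cannot intercept the call made from <code>A.__init__</code>.</p>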
| 3 | 2016-08-06T04:49:51Z | [
"python",
"inheritance"
] |
python how to sum up values in list by condition | 38,800,497 | <p>I have in python the following list:</p>
<pre><code>[{'a':4,'b':40},{'a':6, 'b':60}, {'a':3, 'b':90}, {'a':7, 'b':95}]
</code></pre>
<p>the 'b' values are in ascending order.</p>
<p>Also I have a num variable, say <code>num=25</code>.</p>
<p>what I need is to build a list in which I sum up all the 'a's until the difference between the 'b's is at least num.</p>
<p>So for this example the result should be:</p>
<pre><code>[{'a':13, 'b':50}, {'a':9, 'b':30}]
</code></pre>
<ul>
<li>13 is 4+6+3 (sum of the first 3 'a's)</li>
<li>50 is 90-40 (the third 'b' minus the first 'b')</li>
<li>9 is 6+3 (sum of second and third 'a')</li>
<li>30 is 90-60 (the third 'b' minus the second 'b')</li>
</ul>
<p>There are only two elements since from the third element we can't have difference of 'b's bigger than num.</p>
<p>I wrote this code, which works, but I used a loop within a loop and it looks more like C code than Python code.</p>
<pre><code>def get_new_data(data, time_length):
new_data=[]
for i in range(0,len(data)):
sum_data = 0
for j in range(i,len(data)):
sum_data += data[j]['a']
diff = data[j]['b'] - data[i]['b']
if diff>=time_length:
new_data.append({'a':sum_data, 'b':diff})
break
return new_data
data = [{'a':4,'b':40},{'a':6, 'b':60}, {'a':3, 'b':90}, {'a':7, 'b':95}]
print (data)
new_data = get_new_data(data, 25)
print (new_data)
</code></pre>
<p>Is there any pythonic way to do it with as little code as possible?</p>
<p>Thanks!
David</p>
| -3 | 2016-08-06T04:19:05Z | 38,986,531 | <p>The only way to make your code more "pythonic" is probably to use <code>enumerate()</code> instead of <code>rang(len(object))</code>. Other than that it's pretty much as pythonic as possible.</p>
| 0 | 2016-08-17T00:46:39Z | [
"python",
"list"
] |
Plot color NaN values | 38,800,532 | <p>I'm trying to color-plot an array after converting some of its values to np.nan (for easier interpretation), expecting a different color (white?) where the NaNs are; instead it causes problems with the plot and the colorbar.</p>
<pre><code>#this is before converted to nan
array = np.random.rand(4,10)
plt.pcolor(array)
plt.colorbar(orientation='horizontal')
</code></pre>
<p><a href="http://i.stack.imgur.com/ubeSJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/ubeSJ.png" alt="normal result"></a></p>
<pre><code>#conditional value converted to nan
array = np.random.rand(4,10)
array[array<0.5]=np.nan
plt.pcolor(array)
plt.colorbar(orientation='horizontal')
</code></pre>
<p><a href="http://i.stack.imgur.com/hOcoq.png" rel="nofollow"><img src="http://i.stack.imgur.com/hOcoq.png" alt="conditional result"></a> </p>
<p>Any suggestion?</p>
| 4 | 2016-08-06T04:24:06Z | 38,800,580 | <p>One of the solution is to plot masked array, like here:</p>
<pre><code>import matplotlib.pylab as plt
import numpy as np
#conditional value converted to nan
array = np.random.rand(4,10)
array[array<0.5]=np.nan
m = np.ma.masked_where(np.isnan(array),array)
plt.pcolor(m)
plt.colorbar(orientation='horizontal')
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/qish6.png" rel="nofollow"><img src="http://i.stack.imgur.com/qish6.png" alt="enter image description here"></a></p>
| 4 | 2016-08-06T04:36:17Z | [
"python",
"numpy",
"matplotlib",
"colorbar"
] |