title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
How to assign member variables temporarily? | 38,531,851 | <p>I often find that I need to assign some member variables temporarily, <em>e.g.</em></p>
<pre><code>old_x = c.x
old_y = c.y
# keep c.z unchanged
c.x = new_x
c.y = new_y
do_something(c)
c.x = old_x
c.y = old_y
</code></pre>
<p>but I wish I could simply write</p>
<pre><code>with c.x = new_x; c.y = new_y:
    do_something(c)
</code></pre>
<p>or even</p>
<pre><code>do_something(c with x = new_x; y = new_y)
</code></pre>
<p>Can Python's decorators or other language features enable this kind of pattern? (I could modify <code>c</code>'s class as needed)</p>
| 12 | 2016-07-22T17:02:28Z | 38,532,027 | <p><code>mock</code> provides this functionality, specifically look at the context-manager usage of <a href="https://docs.python.org/3/library/unittest.mock.html#patch-object" rel="nofollow"><code>patch.object</code></a>. It's in core libraries in python3, and <a href="https://pypi.python.org/pypi/mock" rel="nofollow">available on pypi</a> for older python.</p>
<p>Setup:</p>
<pre><code>>>> class C:
...     def __init__(self, x, y, z):
...         self.x = x
...         self.y = y
...         self.z = z
...
>>> c = C(0,1,2)
</code></pre>
<p>Usage demo:</p>
<pre><code>>>> print(c.x, c.y, c.z)
0 1 2
>>> with patch.object(c, 'x', 'spam'), patch.object(c, 'y', 'eggs'):
...     print(c.x, c.y, c.z)
...
spam eggs 2
>>> print(c.x, c.y, c.z)
0 1 2
</code></pre>
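<p>A related convenience, sketched below (the class and values are illustrative, not from the original answer): <code>patch.multiple</code> can set several attributes on the same object in one context manager, restoring them on exit just like stacked <code>patch.object</code> calls.</p>

```python
# Hedged sketch: patch.multiple on an instance sets several attributes
# at once and restores the originals when the block exits.
from unittest.mock import patch

class C:
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

c = C(0, 1, 2)
with patch.multiple(c, x='spam', y='eggs'):
    assert (c.x, c.y, c.z) == ('spam', 'eggs', 2)  # temporary values
assert (c.x, c.y, c.z) == (0, 1, 2)  # originals restored
```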
| 3 | 2016-07-22T17:14:08Z | [
"python",
"python-2.7",
"python-decorators",
"python-descriptors"
] |
How to assign member variables temporarily? | 38,531,851 | <p>I often find that I need to assign some member variables temporarily, <em>e.g.</em></p>
<pre><code>old_x = c.x
old_y = c.y
# keep c.z unchanged
c.x = new_x
c.y = new_y
do_something(c)
c.x = old_x
c.y = old_y
</code></pre>
<p>but I wish I could simply write</p>
<pre><code>with c.x = new_x; c.y = new_y:
    do_something(c)
</code></pre>
<p>or even</p>
<pre><code>do_something(c with x = new_x; y = new_y)
</code></pre>
<p>Can Python's decorators or other language features enable this kind of pattern? (I could modify <code>c</code>'s class as needed)</p>
| 12 | 2016-07-22T17:02:28Z | 38,532,086 | <p><a href="https://docs.python.org/2/reference/datamodel.html#context-managers">Context managers</a> may be used for it easily.</p>
<p>Quoting official docs:</p>
<blockquote>
<p>Typical uses of context managers include saving and restoring various
kinds of global state, locking and unlocking resources, closing opened
files, etc.</p>
</blockquote>
<p>It seems like saving and restoring state is exactly what we want to do here.</p>
<p>Example:</p>
<pre><code>from contextlib import contextmanager

@contextmanager
def temporary_change_attributes(something, **kwargs):
    previous_values = {k: getattr(something, k) for k in kwargs}
    for k, v in kwargs.items():
        setattr(something, k, v)
    try:
        yield
    finally:
        for k, v in previous_values.items():
            setattr(something, k, v)

class Something(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def say_hello(self):
        print("hello", self.x, self.y)

s = Something(1, 2)
s.say_hello()  # hello 1 2
with temporary_change_attributes(s, x=4, y=5):
    s.say_hello()  # hello 4 5
s.say_hello()  # hello 1 2
</code></pre>
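<p>Because the manager above wraps the <code>yield</code> in <code>try</code>/<code>finally</code>, the original attributes come back even if the body raises. A quick self-contained check (re-declaring the manager so the snippet runs on its own; the <code>Thing</code> class is just a stand-in):</p>

```python
from contextlib import contextmanager

@contextmanager
def temporary_change_attributes(something, **kwargs):
    previous_values = {k: getattr(something, k) for k in kwargs}
    for k, v in kwargs.items():
        setattr(something, k, v)
    try:
        yield
    finally:  # runs on normal exit *and* on exceptions
        for k, v in previous_values.items():
            setattr(something, k, v)

class Thing(object):
    pass

t = Thing()
t.x = 1
try:
    with temporary_change_attributes(t, x=99):
        raise ValueError("boom")
except ValueError:
    pass
assert t.x == 1  # restored despite the exception
```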
| 20 | 2016-07-22T17:19:17Z | [
"python",
"python-2.7",
"python-decorators",
"python-descriptors"
] |
How to assign member variables temporarily? | 38,531,851 | <p>I often find that I need to assign some member variables temporarily, <em>e.g.</em></p>
<pre><code>old_x = c.x
old_y = c.y
# keep c.z unchanged
c.x = new_x
c.y = new_y
do_something(c)
c.x = old_x
c.y = old_y
</code></pre>
<p>but I wish I could simply write</p>
<pre><code>with c.x = new_x; c.y = new_y:
    do_something(c)
</code></pre>
<p>or even</p>
<pre><code>do_something(c with x = new_x; y = new_y)
</code></pre>
<p>Can Python's decorators or other language features enable this kind of pattern? (I could modify <code>c</code>'s class as needed)</p>
| 12 | 2016-07-22T17:02:28Z | 38,532,088 | <p>I think a <a href="https://docs.python.org/3/library/contextlib.html"><code>contextmanager</code></a> should do what you want:</p>
<pre><code>from contextlib import contextmanager

@contextmanager
def current_instance(c, temp_x, temp_y):
    old_x, old_y = c.x, c.y
    c.x, c.y = temp_x, temp_y
    yield c
    c.x, c.y = old_x, old_y

with current_instance(c, x, y) as c_temp:
    do_something(c_temp)
</code></pre>
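<p>One caveat with this version: if <code>do_something</code> raises, the line restoring <code>old_x</code> and <code>old_y</code> after the <code>yield</code> never runs. A hardened sketch (generic names, not tied to the original <code>c</code>) wraps the yield in <code>try</code>/<code>finally</code>:</p>

```python
from contextlib import contextmanager

@contextmanager
def temp_values(obj, temp_x, temp_y):
    old_x, old_y = obj.x, obj.y
    obj.x, obj.y = temp_x, temp_y
    try:
        yield obj
    finally:  # restore even when the with-body raises
        obj.x, obj.y = old_x, old_y

class Obj(object):
    pass

o = Obj()
o.x, o.y = 1, 2
try:
    with temp_values(o, 10, 20) as o_temp:
        assert (o_temp.x, o_temp.y) == (10, 20)
        raise RuntimeError("failure inside the block")
except RuntimeError:
    pass
assert (o.x, o.y) == (1, 2)  # originals survived the exception
```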
| 5 | 2016-07-22T17:19:23Z | [
"python",
"python-2.7",
"python-decorators",
"python-descriptors"
] |
How to assign member variables temporarily? | 38,531,851 | <p>I often find that I need to assign some member variables temporarily, <em>e.g.</em></p>
<pre><code>old_x = c.x
old_y = c.y
# keep c.z unchanged
c.x = new_x
c.y = new_y
do_something(c)
c.x = old_x
c.y = old_y
</code></pre>
<p>but I wish I could simply write</p>
<pre><code>with c.x = new_x; c.y = new_y:
    do_something(c)
</code></pre>
<p>or even</p>
<pre><code>do_something(c with x = new_x; y = new_y)
</code></pre>
<p>Can Python's decorators or other language features enable this kind of pattern? (I could modify <code>c</code>'s class as needed)</p>
| 12 | 2016-07-22T17:02:28Z | 38,532,167 | <p>You can also do this natively using <code>__enter__</code> and <code>__exit__</code>. Simplistic example:</p>
<pre><code>class SomeObject(object):
    def __init__(self):
        self.a = 1
        self.b = 2
        self.c = 3

class Temporary(object):
    def __init__(self, target, **kv):
        self.target = target
        self.to_set = kv
        self.to_restore = {}

    def __enter__(self):
        for k, v in self.to_set.items():
            if hasattr(self.target, k):
                self.to_restore[k] = getattr(self.target, k)
            setattr(self.target, k, v)

    def __exit__(self, *_):
        for k, v in self.to_restore.items():
            setattr(self.target, k, v)
        for k in self.to_set.keys():
            if k not in self.to_restore:
                delattr(self.target, k)

o = SomeObject()
print(o.__dict__)
with Temporary(o, a=42, d=1337):
    print(o.__dict__)
print(o.__dict__)
</code></pre>
| 3 | 2016-07-22T17:25:26Z | [
"python",
"python-2.7",
"python-decorators",
"python-descriptors"
] |
How to assign member variables temporarily? | 38,531,851 | <p>I often find that I need to assign some member variables temporarily, <em>e.g.</em></p>
<pre><code>old_x = c.x
old_y = c.y
# keep c.z unchanged
c.x = new_x
c.y = new_y
do_something(c)
c.x = old_x
c.y = old_y
</code></pre>
<p>but I wish I could simply write</p>
<pre><code>with c.x = new_x; c.y = new_y:
    do_something(c)
</code></pre>
<p>or even</p>
<pre><code>do_something(c with x = new_x; y = new_y)
</code></pre>
<p>Can Python's decorators or other language features enable this kind of pattern? (I could modify <code>c</code>'s class as needed)</p>
| 12 | 2016-07-22T17:02:28Z | 38,532,169 | <p>Goofy solution</p>
<pre><code>>>> class Foo(object):
...     def __init__(self):
...         self._x = []
...         self._y = []
...     @property
...     def x(self):
...         return self._x[-1] if self._x else None
...     @x.setter
...     def x(self, val):
...         self._x.append(val)
...     def reset_vals(self):
...         if len(self._x) > 1:
...             self._x.pop()
...
>>> bar = Foo()
>>> bar.x = 1
>>> bar.x
1
>>> bar.x = 2
>>> bar.x
2
>>> bar.reset_vals()
>>> bar.x
1
>>> bar.reset_vals()
>>> bar.x
1
</code></pre>
<p>Still goofy but less so solution</p>
<pre><code>>>> class Foo(object):
...     def __init__(self):
...         pass
...
>>> import copy
>>> bar = Foo()
>>> bar.x = 1
>>> bar.x
1
>>> bar2 = copy.copy(bar)
>>> bar2.x
1
>>> bar2.x = 5
>>> bar2.x
5
>>> bar
<__main__.Foo object at 0x0426A870>
>>> bar.x
1
</code></pre>
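<p>One thing to watch with the <code>copy.copy</code> approach: it is a <em>shallow</em> copy, so mutable attributes are shared between the original and the copy. A small illustration (toy class, not from the answer above):</p>

```python
import copy

class Foo(object):
    pass

bar = Foo()
bar.items = [1, 2]

shallow = copy.copy(bar)
shallow.items.append(3)
assert bar.items == [1, 2, 3]  # shallow copy shares the same list

deep = copy.deepcopy(bar)
deep.items.append(4)
assert bar.items == [1, 2, 3]  # deep copy is fully independent
```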
| 2 | 2016-07-22T17:25:31Z | [
"python",
"python-2.7",
"python-decorators",
"python-descriptors"
] |
Django: model OneToOneField with User can't add default value | 38,531,939 | <p>I have an model and I want to add an OneToOneField to hold the creator of an object:
models.py</p>
<pre><code>creator = models.OneToOneField(User, blank=True,
default=User.objects.filter(
username="antoni4040"))
</code></pre>
<p>I already have a database with items and just want the default value to be the admin user, which has the username "antoni4040". When I try to migrate without the default field it asks for a default value, so I can't get away with it. But here's what I get when running makemigrations:</p>
<pre><code>Migrations for 'jokes_app':
0008_joke_creator.py:
- Add field creator to joke
Traceback (most recent call last):
File "./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/antoni4040/Documents/Jokes_Website/django-jokes/venv/lib/python3.4/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
utility.execute()
File "/home/antoni4040/Documents/Jokes_Website/django-jokes/venv/lib/python3.4/site-packages/django/core/management/__init__.py", line 345, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/antoni4040/Documents/Jokes_Website/django-jokes/venv/lib/python3.4/site-packages/django/core/management/base.py", line 348, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/antoni4040/Documents/Jokes_Website/django-jokes/venv/lib/python3.4/site-packages/django/core/management/base.py", line 399, in execute
output = self.handle(*args, **options)
File "/home/antoni4040/Documents/Jokes_Website/django-jokes/venv/lib/python3.4/site-packages/django/core/management/commands/makemigrations.py", line 150, in handle
self.write_migration_files(changes)
File "/home/antoni4040/Documents/Jokes_Website/django-jokes/venv/lib/python3.4/site-packages/django/core/management/commands/makemigrations.py", line 178, in write_migration_files
migration_string = writer.as_string()
File "/home/antoni4040/Documents/Jokes_Website/django-jokes/venv/lib/python3.4/site-packages/django/db/migrations/writer.py", line 167, in as_string
operation_string, operation_imports = OperationWriter(operation).serialize()
File "/home/antoni4040/Documents/Jokes_Website/django-jokes/venv/lib/python3.4/site-packages/django/db/migrations/writer.py", line 124, in serialize
_write(arg_name, arg_value)
File "/home/antoni4040/Documents/Jokes_Website/django-jokes/venv/lib/python3.4/site-packages/django/db/migrations/writer.py", line 88, in _write
arg_string, arg_imports = MigrationWriter.serialize(_arg_value)
File "/home/antoni4040/Documents/Jokes_Website/django-jokes/venv/lib/python3.4/site-packages/django/db/migrations/writer.py", line 433, in serialize
return cls.serialize_deconstructed(path, args, kwargs)
File "/home/antoni4040/Documents/Jokes_Website/django-jokes/venv/lib/python3.4/site-packages/django/db/migrations/writer.py", line 318, in serialize_deconstructed
arg_string, arg_imports = cls.serialize(arg)
File "/home/antoni4040/Documents/Jokes_Website/django-jokes/venv/lib/python3.4/site-packages/django/db/migrations/writer.py", line 517, in serialize
item_string, item_imports = cls.serialize(item)
File "/home/antoni4040/Documents/Jokes_Website/django-jokes/venv/lib/python3.4/site-packages/django/db/migrations/writer.py", line 540, in serialize
"topics/migrations/#migration-serializing" % (value, get_docs_version())
ValueError: Cannot serialize: <User: antoni4040>
There are some values Django cannot serialize into migration files.
For more, see https://docs.djangoproject.com/en/1.9/topics/migrations/#migration-serializing
</code></pre>
<p>What am I doing wrong?</p>
| 0 | 2016-07-22T17:08:05Z | 38,532,035 | <p>You are using the queryset <code>User.objects.filter(username="antoni4040")</code>. To get a model instance, you would use <code>User.objects.get(username="antoni4040")</code>.</p>
<p>However, you shouldn't use a model instance as the default for the model field. There's no error handling if the user does not exist in the database. In fact, if <code>models.py</code> is loaded before you run the initial migrations, then the <code>User</code> table won't even exist, so the query will give an error.</p>
<p>The logic to set the default user should go in the view (or Django admin) instead of the models file.</p>
<p>In the admin, you could define a custom form that sets the initial value for the <code>creator</code> field.</p>
<pre><code>class MyModelForm(forms.ModelForm):
    class Meta:
        model = MyModel

    def __init__(self, *args, **kwargs):
        try:
            user = User.objects.get(username="antoni4040")
            kwargs.setdefault('initial', {})['creator'] = user
        except User.DoesNotExist:
            pass
        super(MyModelForm, self).__init__(*args, **kwargs)
</code></pre>
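<p>One detail worth noting when seeding <code>initial</code> this way: callers are not guaranteed to pass an <code>initial</code> dict at all, so indexing <code>kwargs['initial']</code> directly can raise <code>KeyError</code>. A framework-free sketch of the <code>setdefault</code> pattern (function and key names here are illustrative):</p>

```python
# Safely inject a default into kwargs['initial'] whether or not the
# caller supplied an 'initial' dict; an explicitly passed value wins.
def build_kwargs(**kwargs):
    kwargs.setdefault('initial', {})
    kwargs['initial'].setdefault('creator', 'antoni4040')
    return kwargs

assert build_kwargs()['initial'] == {'creator': 'antoni4040'}
assert build_kwargs(initial={'creator': 'bob'})['initial'] == {'creator': 'bob'}
```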
<p>Then use that model form in your admin.</p>
<pre><code>class MyModelAdmin(admin.ModelAdmin):
    form = MyModelForm
    ...
</code></pre>
| 1 | 2016-07-22T17:14:47Z | [
"python",
"django",
"user",
"database-migration",
"one-to-one"
] |
how to set the number of features to use in random selection sklearn | 38,531,941 | <p>I am using sklearn RandomForest Classifier/Bag classifier for learning and I am not getting the expected results when compared to Java/Weka Machine Learning library.
In Weka, I am learning the model with a random forest of 10 trees, each constructed while considering 6 random features (setNumFeatures needs to be set; the default is 10 trees).</p>
<p>In sklearn - I am not sure how to specify the number of features to randomly consider while constructing a random forest of 10 trees. This what I am doing:</p>
<pre><code>rf_classifier = RandomForestClassifier(n_estimators=num_trees, max_features=6)
rf_classifier = rf_classifier.fit(train_file, train_file_label)
for items in rf_classifier.estimators_:
    classifier_list.append(items)
</code></pre>
<p>I saw the docs and there is a parameter - max_features but I am not sure if that serves the purpose. I get this error when I am trying to calculate entropy:</p>
<pre><code># code to calculate voting entropy for all features (unlabeled data)
vote_count_for_features = list(classifier_list[0].predict(feature_data_arr))
for i in range(1, len(classifier_list)):
    res_temp = []
    res_temp = list(classifier_list[i].predict(feature_data_arr))
    vote_count_for_features = [x + y for x, y in zip(vote_count_for_features, res_temp)]
</code></pre>
<p>If I set that parameter to 6, than my code fails with the error message:</p>
<blockquote>
<p>Number of features of the model must match the input. Model n_features
is 6 and input n_features is 31</p>
</blockquote>
<p>Inputs: Sample set of 1 million records with 31 features. When I run weka, the number of rules extracted are around 1000 whereas when I run the same thing through sklearn - I get hardly 70 rules.</p>
<p>I am new to python and sklearn and I am trying to figure out where am I doing wrong. (Weka code has been tested well and gives 95% precision, 80% recall - so I am assuming that's good)</p>
<p>Note: I have used sklearn imputer to impute missing values using 'mean' strategy whereas Weka has ways to handle NaN. </p>
<p>This is what I am trying to achieve: Learn Random Forest on a sample set, extract rules, evaluate rules and then apply on the bigger set</p>
<p>Any suggestions or input will really help me debug through the issue and solve it quickly. </p>
| 0 | 2016-07-22T17:08:22Z | 38,534,795 | <p>I think the issue is that the individual trees get confused since they only use 6 features, but you give them 31. You can try to get the prediction to work by setting <code>check_input = False</code>:</p>
<pre><code> list(classifier_list[i].predict(feature_data_arr, check_input = False))
</code></pre>
| 0 | 2016-07-22T20:29:17Z | [
"python",
"scikit-learn",
"random-forest",
"decision-tree"
] |
Flink batch data processing | 38,531,965 | <p>I'm evaluating <a href="http://flink.apache.org/" rel="nofollow">Flink</a> for some processing batches of data. As a simple example say I have 2000 points which I would like to pass through an FIR filter using functionality provided by <a href="http://www.scipy.org/" rel="nofollow">scipy</a>. The scipy filter is a simple function which accepts a set of coefficients and the data to filter and returns the data. Is is possible to create a transformation to handle this in Flink? It seems Flink transformations are applied on a point by point basis but I may be missing something.</p>
| 0 | 2016-07-22T17:10:27Z | 38,560,225 | <p>This should certainly be possible. Flink already has a <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.0/apis/batch/python.html" rel="nofollow">Python API (beta)</a> you might want to use.</p>
<p>About your second question: Flink can apply a function point by point and can do other stuff, too. It depends on what kind of function you are defining. For example, <code>filter</code>, <code>project</code>, <code>map</code>, <code>flatMap</code> are applied per record; <code>max</code>, <code>min</code>, <code>reduce</code>, etc. are applied to a group of records (the groups are defined via <code>groupBy</code>). There is also the possibility to join data from different datasets using <code>join</code>, <code>cross</code>, or <code>cogroup</code>. Please have a look at the list of available transformations in the documentation: <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.0/apis/batch/dataset_transformations.html" rel="nofollow">https://ci.apache.org/projects/flink/flink-docs-release-1.0/apis/batch/dataset_transformations.html</a></p>
| 0 | 2016-07-25T05:22:58Z | [
"python",
"apache-flink"
] |
Django Static Files Not Being Found (debug off) | 38,531,966 | <p>I cannot get Django to correctly reference my static files when debug is off. I know there are many other posts on this site about this, but none of them have fixed my issue. </p>
<p>My directory tree is like this:</p>
<pre><code>project
├── app1
│   └── app1
├── manage.py
├── project
├── project.sock
├── projectenv
├── static
└── template
</code></pre>
<p>In my <code>settings.py</code> I have the following:</p>
<pre><code>STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, "static/")
</code></pre>
<p>In my nginx configuration file I have the following:</p>
<pre><code>location /static {
    autoindex on;
    alias /home/user/myproject/static/;
}
</code></pre>
<p>Running <code>./manage.py collectstatic</code> correctly places all my files into the static directory. </p>
<p>however, in running my server, all of these static files 404:</p>
<pre><code>[22/Jul/2016 17:05:31] "GET /static/lib/bootstrap/js/bootstrap.min.js HTTP/1.1" 404 6418
[22/Jul/2016 17:05:31] "GET /static/lib/bootstrap/css/bootstrap.min.css HTTP/1.1" 404 6418
[22/Jul/2016 17:05:31] "GET /static/lib/jquery/jquery.min.js HTTP/1.1" 404 6418
[22/Jul/2016 17:05:31] "GET /static/lib/bootstrap/js/bootstrap.min.js HTTP/1.1" 404 6418
</code></pre>
<p>The static files are placed in the same directory as my manage.py file. What am I doing wrong?</p>
<p>Adding this in my urls.py does work, but aren't I supposed to serve them directly from nginx instead? </p>
<pre><code>if settings.DEBUG is False:  # if DEBUG is True it will be served automatically
    urlpatterns += patterns('',
        url(r'^static/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.STATIC_ROOT}),
    )
</code></pre>
<blockquote>
<p>It's slower since static files rendering goes through Django instead
served by your web server directly</p>
</blockquote>
| 1 | 2016-07-22T17:10:29Z | 38,534,301 | <p>You should configure your nginx: edit nginx.conf the following way: add this:</p>
<pre><code>location ~* ^(/static/|/media/).+\.(jpg|jpeg|gif|png|zip|eot|woff|woff2|svg|ttf|tgz|gz|rar|bz2|doc|xls|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf)$ {
    root /path/to/parent/folder/of/static/folder;
}
</code></pre>
<p>to the <code>server {</code> clause corresponding to your website, then restart nginx. That should fix the problem; if not, check the nginx logs and make sure the folder has the appropriate permissions for the nginx user/group, etc.</p>
| 0 | 2016-07-22T19:51:11Z | [
"python",
"django",
"nginx"
] |
How to enable dependencies with python? | 38,531,982 | <p>I've written a python program to define and load a sheet in smartsheet from an Excel spreadsheet, and have provided a start date, end date, predecessors, duration and %complete columns in the definition (it works well). But, now I want enable dependencies for the sheet using API 2.0 and tell smartsheet to use my predefined columns when enabling dependencies, just like I am able to do using the smartsheet GUI interface for the sheet. I could not find a method in the API 2.0 documentation.</p>
| 0 | 2016-07-22T17:11:33Z | 38,532,245 | <p>It is not currently possible to create a dependency-enabled project sheet by using the Smartsheet API. The information in the <a href="http://smartsheet-platform.github.io/api-docs/#column-types" rel="nofollow">Column Types</a> section of the API documentation supports this assertion -- i.e., the <em>Duration</em>, <em>Predecessor</em>, <em>Start Date</em>, and <em>End Date</em> columns in a dependency-enabled project sheet are actually special column types -- but the API does not support creating columns of these types. </p>
<p><a href="http://i.stack.imgur.com/5RnA4.png" rel="nofollow"><img src="http://i.stack.imgur.com/5RnA4.png" alt="enter image description here"></a></p>
<p>So currently, the only way to enable dependencies for a sheet and designate columns for <em>Duration</em>, <em>Predecessor</em>, <em>Start Date</em>, and <em>End Date</em> is to use the Smartsheet (web) UI.</p>
| 0 | 2016-07-22T17:30:06Z | [
"python",
"dependencies",
"smartsheet-api"
] |
Memory efficient way to add columns to .csv files | 38,532,023 | <p>Ok, I couldn't really find an answer to this anywhere else, so I figured I'd ask.</p>
<p>I'm working with some .csv files that have about 74 million lines right now and I'm trying to add columns into one file from another file.</p>
<p>ex.</p>
<pre><code>Week,Sales Depot,Sales Channel,Route,Client,Product,Units Sold,Sales,Units Returned,Returns,Adjusted Demand
3,1110,7,3301,15766,1212,3,25.14,0,0,3
3,1110,7,3301,15766,1216,4,33.52,0,0,4
</code></pre>
<p>combined with</p>
<pre><code>Units_cat
0
1
</code></pre>
<p>so that</p>
<pre><code>Week,Sales Depot,Sales Channel,Route,Client,Product,Units Sold,Units_cat,Sales,Units Returned,Returns,Adjusted Demand
3,1110,7,3301,15766,1212,3,0,25.14,0,0,3
3,1110,7,3301,15766,1216,4,1,33.52,0,0,4
</code></pre>
<p>I've been using pandas to read in and output the .csv files, but the issue I'm running into is that the program keeps crashing because creating the DataFrame overloads my memory. I've tried using Python's csv library, but I'm not sure how to merge the files the way I want (not just append).</p>
<p>Anyone know a more memory efficient method of combining these files?</p>
| 2 | 2016-07-22T17:13:58Z | 38,532,276 | <p>Something like this might work for you:</p>
<h3>Using <code>csv.DictReader()</code></h3>
<pre><code>import csv
from itertools import izip

with open('file1.csv') as file1:
    with open('file2.csv') as file2:
        with open('result.csv', 'w') as result:
            file1 = csv.DictReader(file1)
            file2 = csv.DictReader(file2)
            # Get the field order correct here:
            fieldnames = file1.fieldnames
            index = fieldnames.index('Units Sold')+1
            fieldnames = fieldnames[:index] + file2.fieldnames + fieldnames[index:]
            result = csv.DictWriter(result, fieldnames)

            def dict_merge(a,b):
                a.update(b)
                return a

            result.writeheader()
            result.writerows(dict_merge(a,b) for a,b in izip(file1, file2))
</code></pre>
<h3>Using <code>csv.reader()</code></h3>
<pre><code>import csv
from itertools import izip

with open('file1.csv') as file1:
    with open('file2.csv') as file2:
        with open('result.csv', 'w') as result:
            file1 = csv.reader(file1)
            file2 = csv.reader(file2)
            result = csv.writer(result)
            result.writerows(a[:7] + b + a[7:] for a,b in izip(file1, file2))
</code></pre>
<p>Notes:</p>
<ul>
<li><p>This is for Python2. You can use the normal <code>zip()</code> function in Python3. If the two files are not of equivalent lengths, consider <code>itertools.izip_longest()</code>.</p></li>
<li><p>The memory efficiency comes from passing a generator expression to <code>.writerows()</code> instead of a list. This way, only the current line is under consideration at any moment in time, not the entire file. If a generator expression isn't appropriate, you'll get the same benefit from a <code>for</code> loop: <code>for a,b in izip(...): result.writerow(...)</code></p></li>
<li><p>The <code>dict_merge</code> function is not required starting from Python3.5. In sufficiently new Pythons, try <code>result.writerows({**a,**b} for a,b in zip(file1, file2))</code> (See <a href="http://treyhunner.com/2016/02/how-to-merge-dictionaries-in-python/" rel="nofollow">this explanation</a>).</p></li>
</ul>
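<p>The column-splicing logic from the <code>csv.reader</code> version can be exercised in memory with <code>io.StringIO</code> (Python 3 shown here; toy data, not the question's files):</p>

```python
import csv
import io

file1 = io.StringIO("a,b,c\n1,2,3\n4,5,6\n")   # main file
file2 = io.StringIO("extra\n9\n8\n")           # single-column file
out = io.StringIO()

r1, r2, w = csv.reader(file1), csv.reader(file2), csv.writer(out)
# Insert file2's columns after column index 2 of file1, one row at a time;
# the generator keeps only the current row pair in memory.
w.writerows(a[:2] + b + a[2:] for a, b in zip(r1, r2))

assert out.getvalue().splitlines() == ['a,b,extra,c', '1,2,9,3', '4,5,8,6']
```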
| 4 | 2016-07-22T17:31:25Z | [
"python",
"csv"
] |
Numba not speeding up function | 38,532,055 | <p>I have some code I'm trying to speed up with numba. I've done some reading on the topic, but I haven't been able to figure it out 100%.</p>
<p>Here is the code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as st
import seaborn as sns
from numba import jit, vectorize, float64, autojit

sns.set(context='talk', style='ticks', font_scale=1.2, rc={'figure.figsize': (6.5, 5.5), 'xtick.direction': 'in', 'ytick.direction': 'in'})

#%% constraints
x_min = 0  # death below this
x_max = 20  # maximum weight
t_max = 100  # maximum time
foraging_efficiencies = np.linspace(0, 1, 10)  # potential foraging efficiencies
R = 10.0  # Resource level

#%% make the body size and time categories
body_sizes = np.arange(x_min, x_max+1)
time_steps = np.arange(t_max)

#%% parameter functions
@jit
def metabolic_fmr(x, u, temp):  # metabolic cost function
    fmr = 0.125*(2**(0.2*temp))*(1 + 0.5*u) + x*0.1
    return fmr

def intake_dist(u):  # intake stochastic function (returns a vector)
    g = st.binom.pmf(np.arange(R+1), R, u)
    return g

@jit
def mass_gain(x, u, temp):  # mass gain function (returns a vector)
    x_prime = x - metabolic_fmr(x, u, temp) + np.arange(R+1)
    x_prime = np.minimum(x_prime, x_max)
    x_prime = np.maximum(x_prime, 0)
    return x_prime

@jit
def prob_attack(P):  # probability of an attack
    p_a = 0.02*P
    return p_a

@jit
def prob_see(u):  # probability of not seeing an attack
    p_s = 1-(1-u)**0.3
    return p_s

@jit
def prob_lethal(x):  # probability of lethality given a successful attack
    p_l = 0.5*np.exp(-0.05*x)
    return p_l

@jit
def prob_mort(P, u, x):
    p_m = prob_attack(P)*prob_see(u)*prob_lethal(x)
    return np.minimum(p_m, 1)

#%% terminal fitness function
@jit
def terminal_fitness(x):
    t_f = 15.0*x/(x+5.0)
    return t_f

#%% linear interpolation function
@jit
def linear_interpolation(x, F, t):
    floor = x.astype(int)
    delta_c = x-floor
    ceiling = floor + 1
    ceiling[ceiling>x_max] = x_max
    floor[floor<x_min] = x_min
    interpolated_F = (1-delta_c)*F[floor,t] + (delta_c)*F[ceiling,t]
    return interpolated_F

#%% solver
@jit
def solver_jit(P, temp):
    F = np.zeros((len(body_sizes), len(time_steps)))  # Expected fitness
    F[:,-1] = terminal_fitness(body_sizes)  # expected terminal fitness for every body size
    V = np.zeros((len(foraging_efficiencies), len(body_sizes), len(time_steps)))  # Fitness for each foraging effort
    D = np.zeros((len(body_sizes), len(time_steps)))  # Decision
    for t in range(t_max-1)[::-1]:
        for x in range(x_min+1, x_max+1):  # iterate over every body size except dead
            for i in range(len(foraging_efficiencies)):  # iterate over every possible foraging efficiency
                u = foraging_efficiencies[i]
                g_u = intake_dist(u)  # calculate the distribution of intakes
                xp = mass_gain(x, u, temp)  # calculate the mass gain
                p_m = prob_mort(P, u, x)  # probability of mortality
                V[i,x,t] = (1 - p_m)*(linear_interpolation(xp, F, t+1)*g_u).sum()  # Fitness calculation
            vmax = V[:,x,t].max()
            idx = np.argwhere(V[:,x,t]==vmax).min()
            D[x,t] = foraging_efficiencies[idx]
            F[x,t] = vmax
    return D, F

def solver_norm(P, temp):
    F = np.zeros((len(body_sizes), len(time_steps)))  # Expected fitness
    F[:,-1] = terminal_fitness(body_sizes)  # expected terminal fitness for every body size
    V = np.zeros((len(foraging_efficiencies), len(body_sizes), len(time_steps)))  # Fitness for each foraging effort
    D = np.zeros((len(body_sizes), len(time_steps)))  # Decision
    for t in range(t_max-1)[::-1]:
        for x in range(x_min+1, x_max+1):  # iterate over every body size except dead
            for i in range(len(foraging_efficiencies)):  # iterate over every possible foraging efficiency
                u = foraging_efficiencies[i]
                g_u = intake_dist(u)  # calculate the distribution of intakes
                xp = mass_gain(x, u, temp)  # calculate the mass gain
                p_m = prob_mort(P, u, x)  # probability of mortality
                V[i,x,t] = (1 - p_m)*(linear_interpolation(xp, F, t+1)*g_u).sum()  # Fitness calculation
            vmax = V[:,x,t].max()
            idx = np.argwhere(V[:,x,t]==vmax).min()
            D[x,t] = foraging_efficiencies[idx]
            F[x,t] = vmax
    return D, F
</code></pre>
<p>The individual jit functions tend to be much faster than the un-jitted ones. For example, prob_mort is about 600% faster once it has been run through jit. However, the solver itself isn't much faster:</p>
<pre><code>In [3]: %timeit -n 10 solver_jit(200, 25)
10 loops, best of 3: 3.94 s per loop
In [4]: %timeit -n 10 solver_norm(200, 25)
10 loops, best of 3: 4.09 s per loop
</code></pre>
<p>I know some functions can't be jitted, so I replaced the st.binom.pmf function with a custom jit function and that actually slowed down the time to about 17s per loop, over 5x slower. Presumably because the scipy functions are, at this point, heavily optimized.</p>
<p>So I suspect the slowness is either in the linear_interpolate function or somewhere in the solver code outside of the jitted functions (because at one point I un-jitted all the functions and ran solver_norm and got the same time). Any thoughts on where the slow part would be and how to speed it up?</p>
<p><strong>UPDATE</strong></p>
<p>Here's the binomial code I used in an attempt to speed up jit</p>
<pre><code>@jit
def factorial(n):
    if n==0:
        return 1
    else:
        return n*factorial(n-1)

@vectorize([float64(float64,float64,float64)])
def binom(k, n, p):
    binom_coef = factorial(n)/(factorial(k)*factorial(n-k))
    pmf = binom_coef*p**k*(1-p)**(n-k)
    return pmf

@jit
def intake_dist(u):  # intake stochastic function (returns a vector)
    g = binom(np.arange(R+1), R, u)
    return g
</code></pre>
<p><strong>UPDATE 2</strong>
I tried running my binomial code in nopython mode and found out I was doing it wrong because it was recursive. Upon fixing that by changing code to:</p>
<pre><code>@jit(int64(int64), nopython=True)
def factorial(nn):
    res = 1
    for ii in range(2, nn + 1):
        res *= ii
    return res

@vectorize([float64(float64,float64,float64)], nopython=True)
def binom(k, n, p):
    binom_coef = factorial(n)/(factorial(k)*factorial(n-k))
    pmf = binom_coef*p**k*(1-p)**(n-k)
    return pmf
</code></pre>
<p>the solver now runs at</p>
<pre><code>In [34]: %timeit solver_jit(200, 25)
1 loop, best of 3: 921 ms per loop
</code></pre>
<p>which is about 3.5x faster. However, solver_jit() and solver_norm() still run at the same pace, which means there is some code outside the jit functions slowing it down.</p>
| 0 | 2016-07-22T17:16:18Z | 38,535,181 | <p>As said, there is likely some code that is falling back to object mode. I just wanted to add that you can use njit instead of jit to disable object mode. That will help diagnose what code is the culprit.</p>
| 0 | 2016-07-22T21:02:56Z | [
"python",
"performance",
"numpy",
"numba"
] |
Numba not speeding up function | 38,532,055 | <p>I have some code I'm trying to speed up with numba. I've done some reading on the topic, but I haven't been able to figure it out 100%.</p>
<p>Here is the code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as st
import seaborn as sns
from numba import jit, vectorize, float64, autojit
sns.set(context='talk', style='ticks', font_scale=1.2, rc={'figure.figsize': (6.5, 5.5), 'xtick.direction': 'in', 'ytick.direction': 'in'})
#%% constraints
x_min = 0 # death below this
x_max = 20 # maximum weight
t_max = 100 # maximum time
foraging_efficiencies = np.linspace(0, 1, 10) # potential foraging efficiencies
R = 10.0 # Resource level
#%% make the body size and time categories
body_sizes = np.arange(x_min, x_max+1)
time_steps = np.arange(t_max)
#%% parameter functions
@jit
def metabolic_fmr(x, u,temp): # metabolic cost function
fmr = 0.125*(2**(0.2*temp))*(1 + 0.5*u) + x*0.1
return fmr
def intake_dist(u): # intake stochastic function (returns a vector)
g = st.binom.pmf(np.arange(R+1), R, u)
return g
@jit
def mass_gain(x, u, temp): # mass gain function (returns a vector)
x_prime = x - metabolic_fmr(x, u,temp) + np.arange(R+1)
x_prime = np.minimum(x_prime, x_max)
x_prime = np.maximum(x_prime, 0)
return x_prime
@jit
def prob_attack(P): # probability of an attack
p_a = 0.02*P
return p_a
@jit
def prob_see(u): # probability of not seeing an attack
p_s = 1-(1-u)**0.3
return p_s
@jit
def prob_lethal(x): # probability of lethality given a successful attack
p_l = 0.5*np.exp(-0.05*x)
return p_l
@jit
def prob_mort(P, u, x):
p_m = prob_attack(P)*prob_see(u)*prob_lethal(x)
return np.minimum(p_m, 1)
#%% terminal fitness function
@jit
def terminal_fitness(x):
t_f = 15.0*x/(x+5.0)
return t_f
#%% linear interpolation function
@jit
def linear_interpolation(x, F, t):
floor = x.astype(int)
delta_c = x-floor
ceiling = floor + 1
ceiling[ceiling>x_max] = x_max
floor[floor<x_min] = x_min
interpolated_F = (1-delta_c)*F[floor,t] + (delta_c)*F[ceiling,t]
return interpolated_F
#%% solver
@jit
def solver_jit(P, temp):
F = np.zeros((len(body_sizes), len(time_steps))) # Expected fitness
F[:,-1] = terminal_fitness(body_sizes) # expected terminal fitness for every body size
V = np.zeros((len(foraging_efficiencies), len(body_sizes), len(time_steps))) # Fitness for each foraging effort
D = np.zeros((len(body_sizes), len(time_steps))) # Decision
for t in range(t_max-1)[::-1]:
for x in range(x_min+1, x_max+1): # iterate over every body size except dead
for i in range(len(foraging_efficiencies)): # iterate over every possible foraging efficiency
u = foraging_efficiencies[i]
g_u = intake_dist(u) # calculate the distribution of intakes
xp = mass_gain(x, u, temp) # calculate the mass gain
p_m = prob_mort(P, u, x) # probability of mortality
V[i,x,t] = (1 - p_m)*(linear_interpolation(xp, F, t+1)*g_u).sum() # Fitness calculation
vmax = V[:,x,t].max()
idx = np.argwhere(V[:,x,t]==vmax).min()
D[x,t] = foraging_efficiencies[idx]
F[x,t] = vmax
return D, F
def solver_norm(P, temp):
F = np.zeros((len(body_sizes), len(time_steps))) # Expected fitness
F[:,-1] = terminal_fitness(body_sizes) # expected terminal fitness for every body size
V = np.zeros((len(foraging_efficiencies), len(body_sizes), len(time_steps))) # Fitness for each foraging effort
D = np.zeros((len(body_sizes), len(time_steps))) # Decision
for t in range(t_max-1)[::-1]:
for x in range(x_min+1, x_max+1): # iterate over every body size except dead
for i in range(len(foraging_efficiencies)): # iterate over every possible foraging efficiency
u = foraging_efficiencies[i]
g_u = intake_dist(u) # calculate the distribution of intakes
xp = mass_gain(x, u, temp) # calculate the mass gain
p_m = prob_mort(P, u, x) # probability of mortality
V[i,x,t] = (1 - p_m)*(linear_interpolation(xp, F, t+1)*g_u).sum() # Fitness calculation
vmax = V[:,x,t].max()
idx = np.argwhere(V[:,x,t]==vmax).min()
D[x,t] = foraging_efficiencies[idx]
F[x,t] = vmax
return D, F
</code></pre>
<p>The individual jit functions tend to be much faster than the un-jitted ones. For example, prob_mort is about 600% faster once it has been run through jit. However, the solver itself isn't much faster:</p>
<pre><code>In [3]: %timeit -n 10 solver_jit(200, 25)
10 loops, best of 3: 3.94 s per loop
In [4]: %timeit -n 10 solver_norm(200, 25)
10 loops, best of 3: 4.09 s per loop
</code></pre>
<p>I know some functions can't be jitted, so I replaced the st.binom.pmf function with a custom jit function and that actually slowed down the time to about 17s per loop, over 5x slower. Presumably because the scipy functions are, at this point, heavily optimized.</p>
<p>So I suspect the slowness is either in the linear_interpolate function or somewhere in the solver code outside of the jitted functions (because at one point I un-jitted all the functions and ran solver_norm and got the same time). Any thoughts on where the slow part would be and how to speed it up?</p>
<p><strong>UPDATE</strong></p>
<p>Here's the binomial code I used in an attempt to speed up jit</p>
<pre><code>@jit
def factorial(n):
if n==0:
return 1
else:
return n*factorial(n-1)
@vectorize([float64(float64,float64,float64)])
def binom(k, n, p):
binom_coef = factorial(n)/(factorial(k)*factorial(n-k))
pmf = binom_coef*p**k*(1-p)**(n-k)
return pmf
@jit
def intake_dist(u): # intake stochastic function (returns a vector)
g = binom(np.arange(R+1), R, u)
return g
</code></pre>
<p><strong>UPDATE 2</strong>
I tried running my binomial code in nopython mode and found out I was doing it wrong because it was recursive. Upon fixing that by changing code to:</p>
<pre><code>@jit(int64(int64), nopython=True)
def factorial(nn):
res = 1
for ii in range(2, nn + 1):
res *= ii
return res
@vectorize([float64(float64,float64,float64)], nopython=True)
def binom(k, n, p):
binom_coef = factorial(n)/(factorial(k)*factorial(n-k))
pmf = binom_coef*p**k*(1-p)**(n-k)
return pmf
</code></pre>
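<p>Stripping the <code>@jit</code>/<code>@vectorize</code> decorators, the fixed kernels above are plain Python, so they can be sanity-checked against the closed-form binomial coefficient (a spot check of mine, not part of the original post):</p>

```python
from math import comb  # Python 3.8+

def factorial(nn):
    # same iterative loop as the jitted version above
    res = 1
    for ii in range(2, nn + 1):
        res *= ii
    return res

def binom_pmf(k, n, p):
    binom_coef = factorial(n) / (factorial(k) * factorial(n - k))
    return binom_coef * p**k * (1 - p)**(n - k)

# the coefficient must agree with math.comb for a few values
assert factorial(5) == 120
assert abs(binom_pmf(3, 10, 0.5) - comb(10, 3) * 0.5**10) < 1e-12
```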
<p>the solver now runs at</p>
<pre><code>In [34]: %timeit solver_jit(200, 25)
1 loop, best of 3: 921 ms per loop
</code></pre>
<p>which is about 3.5x faster. However, solver_jit() and solver_norm() still run at the same pace, which means there is some code outside the jit functions slowing it down.</p>
| 0 | 2016-07-22T17:16:18Z | 38,541,770 | <p>I was able to make a few changes to your code to make it so the jit version could compile completely in <code>nopython</code> mode. On my laptop, this results in:</p>
<pre><code>%timeit solver_jit(200, 25)
1 loop, best of 3: 50.9 ms per loop
%timeit solver_norm(200, 25)
1 loop, best of 3: 192 ms per loop
</code></pre>
<p>For reference, I'm using Numba 0.27.0. I'll admit that Numba's compilation errors still make it difficult to identify what is going on, but since I've been playing with it for a while, I've built up an intuition for what needs to be fixed. The complete code is below, but here is the list of changes I made:</p>
<ul>
<li>In <code>linear_interpolation</code> change <code>x.astype(int)</code> to <code>x.astype(np.int64)</code> so it could compile in <code>nopython</code> mode. </li>
<li>In the solver, use <code>np.sum</code> as a function and not a method of an array.</li>
<li><code>np.argwhere</code> isn't supported. Write a custom loop.</li>
</ul>
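<p>A side note on the <code>np.argwhere</code> replacement in that list: the custom loop computes "index of the first maximum", which plain NumPy expresses directly as <code>np.argmax</code> (my observation, not part of the original answer):</p>

```python
import numpy as np

v = np.array([0.2, 0.9, 0.9, 0.1])
vmax = v.max()
first_idx = int(np.argwhere(v == vmax).min())  # the original idiom
assert first_idx == int(np.argmax(v))          # equivalent, and simpler
```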
<p>There are probably some further optimizations that could be made, but this gives an initial speed-up.</p>
<p>The full code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as st
import seaborn as sns
from numba import jit, vectorize, float64, autojit, njit
sns.set(context='talk', style='ticks', font_scale=1.2, rc={'figure.figsize': (6.5, 5.5), 'xtick.direction': 'in', 'ytick.direction': 'in'})
#%% constraints
x_min = 0 # death below this
x_max = 20 # maximum weight
t_max = 100 # maximum time
foraging_efficiencies = np.linspace(0, 1, 10) # potential foraging efficiencies
R = 10.0 # Resource level
#%% make the body size and time categories
body_sizes = np.arange(x_min, x_max+1)
time_steps = np.arange(t_max)
#%% parameter functions
@njit
def metabolic_fmr(x, u,temp): # metabolic cost function
fmr = 0.125*(2**(0.2*temp))*(1 + 0.5*u) + x*0.1
return fmr
@njit()
def factorial(nn):
res = 1
for ii in range(2, nn + 1):
res *= ii
return res
@vectorize([float64(float64,float64,float64)], nopython=True)
def binom(k, n, p):
binom_coef = factorial(n)/(factorial(k)*factorial(n-k))
pmf = binom_coef*p**k*(1-p)**(n-k)
return pmf
@njit
def intake_dist(u): # intake stochastic function (returns a vector)
g = binom(np.arange(R+1), R, u)
return g
@njit
def mass_gain(x, u, temp): # mass gain function (returns a vector)
x_prime = x - metabolic_fmr(x, u,temp) + np.arange(R+1)
x_prime = np.minimum(x_prime, x_max)
x_prime = np.maximum(x_prime, 0)
return x_prime
@njit
def prob_attack(P): # probability of an attack
p_a = 0.02*P
return p_a
@njit
def prob_see(u): # probability of not seeing an attack
p_s = 1-(1-u)**0.3
return p_s
@njit
def prob_lethal(x): # probability of lethality given a successful attack
p_l = 0.5*np.exp(-0.05*x)
return p_l
@njit
def prob_mort(P, u, x):
p_m = prob_attack(P)*prob_see(u)*prob_lethal(x)
return np.minimum(p_m, 1)
#%% terminal fitness function
@njit
def terminal_fitness(x):
t_f = 15.0*x/(x+5.0)
return t_f
#%% linear interpolation function
@njit
def linear_interpolation(x, F, t):
floor = x.astype(np.int64)
delta_c = x-floor
ceiling = floor + 1
ceiling[ceiling>x_max] = x_max
floor[floor<x_min] = x_min
interpolated_F = (1-delta_c)*F[floor,t] + (delta_c)*F[ceiling,t]
return interpolated_F
#%% solver
@njit
def solver_jit(P, temp):
F = np.zeros((len(body_sizes), len(time_steps))) # Expected fitness
F[:,-1] = terminal_fitness(body_sizes) # expected terminal fitness for every body size
V = np.zeros((len(foraging_efficiencies), len(body_sizes), len(time_steps))) # Fitness for each foraging effort
D = np.zeros((len(body_sizes), len(time_steps))) # Decision
for t in range(t_max-2,-1,-1):
for x in range(x_min+1, x_max+1): # iterate over every body size except dead
for i in range(len(foraging_efficiencies)): # iterate over every possible foraging efficiency
u = foraging_efficiencies[i]
g_u = intake_dist(u) # calculate the distribution of intakes
xp = mass_gain(x, u, temp) # calculate the mass gain
p_m = prob_mort(P, u, x) # probability of mortality
V[i,x,t] = (1 - p_m)*np.sum((linear_interpolation(xp, F, t+1)*g_u)) # Fitness calculation
vmax = V[:,x,t].max()
for k in xrange(V.shape[0]):
if V[k,x,t] == vmax:
idx = k
break
#idx = np.argwhere(V[:,x,t]==vmax).min()
D[x,t] = foraging_efficiencies[idx]
F[x,t] = vmax
return D, F
def solver_norm(P, temp):
F = np.zeros((len(body_sizes), len(time_steps))) # Expected fitness
F[:,-1] = terminal_fitness(body_sizes) # expected terminal fitness for every body size
V = np.zeros((len(foraging_efficiencies), len(body_sizes), len(time_steps))) # Fitness for each foraging effort
D = np.zeros((len(body_sizes), len(time_steps))) # Decision
for t in range(t_max-1)[::-1]:
for x in range(x_min+1, x_max+1): # iterate over every body size except dead
for i in range(len(foraging_efficiencies)): # iterate over every possible foraging efficiency
u = foraging_efficiencies[i]
g_u = intake_dist(u) # calculate the distribution of intakes
xp = mass_gain(x, u, temp) # calculate the mass gain
p_m = prob_mort(P, u, x) # probability of mortality
V[i,x,t] = (1 - p_m)*(linear_interpolation(xp, F, t+1)*g_u).sum() # Fitness calculation
vmax = V[:,x,t].max()
idx = np.argwhere(V[:,x,t]==vmax).min()
D[x,t] = foraging_efficiencies[idx]
F[x,t] = vmax
return D, F
</code></pre>
| 1 | 2016-07-23T12:26:28Z | [
"python",
"performance",
"numpy",
"numba"
] |
Python: Wrong output and ValueError: Prime Factors Creator | 38,532,164 | <p>I've created a program that successfully detects whether a number is prime, or not, it also will return a list of the factors of the number if it isn't, but that part is not successful. </p>
<p>Here is my code:</p>
<pre><code>def prime_num():
num = int(input("Give me a number...: "))
prime = True
if num == 1:
prime = False
elif num == 2:
prime = True
for x in range(2, num):
if num % x == 0:
prime = False
break
if prime == False:
print("That's not a prime number!")
factors(num)
elif prime == True:
print("That's a prime number!")
def factors(num):
factors = []
for x in range(1, num+1):
if num % x == 0:
factors.append(x)
print("The factors for " + str(num) + " are: ", factors)
for x in factors:
for y in range(1, x):
if x % y == 0:
factors.remove(x)
print("The prime factors for " + str(num) + " are: ", factors)
</code></pre>
<p>When I use this function with a "num" value of 25 I get this output...</p>
<pre><code>prime_num()
Give me a number...: 25
That's not a prime number!
The factors for 25 are: [1, 5, 25]
The prime factors for 25 are: [1, 25]
</code></pre>
<p>Which isn't the correct output for prime factors; I just want it to return: [5]
(I'm not concerned about the multiplicity of the factors at this time)</p>
<p>However, when I try the number 50, as my "num". I get this output with a valueError:</p>
<pre><code>prime_num()
Give me a number...: 50
That's not a prime number!
The factors for 50 are: [1, 2, 5, 10, 25, 50]
Traceback (most recent call last):
File "<ipython-input-19-12c785465e2a>", line 1, in <module>
prime_num()
File "C:/Users/x/Desktop/Python/Python Practice/primes.py", line 25, in prime_num
factors(num)
File "C:/Users/x/Desktop/Python/Python Practice/primes.py", line 40, in factors
factors.remove(x)
ValueError: list.remove(x): x not in list
</code></pre>
<p>I realize this means somehow my x isn't in factors, but I'm not sure how considering I'm specifically iterating through factors. </p>
| 1 | 2016-07-22T17:25:02Z | 38,532,373 | <p>This should make it clear what your problem is:</p>
<pre><code>factors = [1,5,25]
for x in factors:
for y in range(1,x):
print x,y
5 1
5 2
5 3
5 4
25 1
25 2
25 3
25 4
25 5
25 6
25 7
25 8
25 9
25 10
25 11
25 12
25 13
25 14
25 15
25 16
25 17
25 18
25 19
25 20
25 21
25 22
25 23
25 24
</code></pre>
<p>You're iterating over your factors in such a way that you ignore 1 and ignore the x % x combination. range(1,1) is the empty list, and then you simply stop short because you've increased the start point by 1 (from zero) but not the end point, leaving what you iterate over too short.</p>
<p>The reason you get a ValueError is because any non-square number (ie, not 4, 9, 16, 25, etc.) will be removed twice. For 6, for example, it will remove for the 2,3 combo and when it gets to the 3,2 combo it's already been removed, thus the error. One way to fix this is to make the code only go halfway minus one to your total, so that inverted numbers aren't removed twice. For example, stop at 2 for 6, 4 for 10, etc.</p>
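<p>A minimal alternative sketch (my own, not the asker's code): compute the prime factors directly by trial division, which avoids removing items from a list while iterating over it altogether:</p>

```python
def prime_factors(num):
    """Return the distinct prime factors of num via trial division."""
    factors = []
    d = 2
    while d * d <= num:
        if num % d == 0:
            factors.append(d)
            while num % d == 0:   # strip every copy of this factor
                num //= d
        d += 1
    if num > 1:                   # whatever is left is itself prime
        factors.append(num)
    return factors

print(prime_factors(25))  # [5]
print(prime_factors(50))  # [2, 5]
```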
| 1 | 2016-07-22T17:38:15Z | [
"python",
"loops",
"error-handling",
"prime-factoring"
] |
Partitioning a (string or integer) into n-elements of min (length or value) of 2 | 38,532,199 | <p>I have a list of ~120'000 strings of various lengths (from 4 to 27) and I want to check if these strings are made of sub-strings that exist in a dictionary; these sub-strings can be of various lengths, min 2 chars long.</p>
<p>For example, a string 9 chars long would be divided into a minimum of 2 sub-strings. And of course I need all possible combinations.</p>
<pre><code>astring = '123456789'
# possible divisions
2 sub-strings = [['12','3456789'],['1234567','89'],['123','456789'],...]
3 sub-strings = [['12345', '67','89'],['1234','567','89']...]
4 sub-strings = [['12','34','56','789'],['12','34','567','89']...]
</code></pre>
<p>I found the <a href="http://code.activestate.com/recipes/577665-partitioning-a-sequence/" rel="nofollow">code below at this address</a>, and after rejecting results according to my requirements I got what I need, but I'm not sure whether it is too slow. For an 18-character string it takes 2 seconds to process one string (hours for the whole list).
For an 18-character string I get 1596 good slices out of 131072 possible, so about 99% of the work is wasted.
Is there a faster way to do it?</p>
<pre><code>from itertools import chain, combinations
def partition(iterable, chain=chain, map=map):
s = iterable if hasattr(iterable, '__getslice__') else tuple(iterable)
n = len(s)
first, middle, last = [0], range(1, n), [n]
getslice = s.__getslice__
return [map(getslice, chain(first, div), chain(div, last))
for i in range(n) for div in combinations(middle, i)]
some_string = '12345678'
for xyz in xrange(100):
for x in partition(some_string):
if (any(len(astring) == 1 for astring in x)):
continue
if len(x) == 1:
continue
# otherwise do something here
</code></pre>
<p>to specify in answer to <strong>eyquem</strong> comment:</p>
<p>I have a dictionary of words in Japanese (Japanese doesn't use spaces) and lots of words of length of 4 chars or longer are compound words made of shorter words. I want to filter out those words that can be split into shorter words. Later I could go through the list and make sure that slicing of words makes semantic sense.</p>
<p>This approach is kind of brute force, which I thought would be simpler to use than a more logical but more complicated for loop with limited recursion.
Starting from the left and finding the longest possible word...</p>
<p>Regards
Bart</p>
| 1 | 2016-07-22T17:27:20Z | 38,532,266 | <p>I'm not sure this helps, but you could try implementing a modified <a href="https://en.wikipedia.org/wiki/Radix_tree" rel="nofollow">radix tree</a>.</p>
| 1 | 2016-07-22T17:31:05Z | [
"python",
"string",
"slice"
] |
Speeding up cross-reference filtering in Pandas DB | 38,532,244 | <p>I am working with a very large donation database of data with relevant columns for donation ID, conduit ID, amount, for example:</p>
<pre><code> TRANSACTION_ID BACK_REFERENCE_TRAN_ID_NUMBER CONTRIBUTION_AMOUNT
0 VR0P4H2SEZ1 0 100
1 VR0P4H3X770 0 2700
2 VR0P4GY6QV1 0 500
3 VR0P4H3X720 0 1700
4 VR0P4GYHHA0 VR0P4GYHHA0E 200
</code></pre>
<p>What I need to do is to identify all of the rows where the TRANSACTION_ID corresponds to any BACK_REFERENCE_TRAN_ID_NUMBER. My current code, albeit a little clumsy, is:</p>
<pre><code>is_from_conduit = df[df.BACK_REFERENCE_TRAN_ID_NUMBER != "0"].BACK_REFERENCE_TRAN_ID_NUMBER.tolist()
df['CONDUIT_FOR_OTHER_DONATION'] = 0
for row in df.index:
if df['TRANSACTION_ID'][row] in is_from_conduit:
df['CONDUIT_FOR_OTHER_DONATION'][row] = 1
else:
df['CONDUIT_FOR_OTHER_DONATION'][row] = 0
</code></pre>
<p>However, on very large data sets with a large number of conduit donations, this takes forever. I know there must be a simpler way, but I clearly can't come up with how to phrase this to find out what that may be.</p>
| 4 | 2016-07-22T17:30:05Z | 38,532,344 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>Series.isin</code></a>. It is a vectorized operation that checks if each element of the Series is in a supplied iterable.</p>
<pre><code>df['CONDUIT_FOR_OTHER_DONATION'] = df['TRANSACTION_ID'].isin(df['BACK_REFERENCE_TRAN_ID_NUMBER'].unique())
</code></pre>
<p>As @root mentioned if you prefer <code>0</code>/<code>1</code> (as in your example) instead of <code>True</code>/<code>False</code>, you can cast to <code>int</code>:</p>
<pre><code>df['CONDUIT_FOR_OTHER_DONATION'] = df['TRANSACTION_ID'].isin(df['BACK_REFERENCE_TRAN_ID_NUMBER'].unique()).astype(int)
</code></pre>
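<p>Applied to a cut-down version of the question's frame (reconstructed here, so the values are abbreviated), the mask looks like this:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'TRANSACTION_ID': ['VR0P4H2SEZ1', 'VR0P4GYHHA0E', 'VR0P4GYHHA0'],
    'BACK_REFERENCE_TRAN_ID_NUMBER': ['0', '0', 'VR0P4GYHHA0E'],
})
mask = df['TRANSACTION_ID'].isin(df['BACK_REFERENCE_TRAN_ID_NUMBER'].unique())
df['CONDUIT_FOR_OTHER_DONATION'] = mask.astype(int)
print(df['CONDUIT_FOR_OTHER_DONATION'].tolist())  # [0, 1, 0]
```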
| 5 | 2016-07-22T17:35:49Z | [
"python",
"performance",
"pandas"
] |
Speeding up cross-reference filtering in Pandas DB | 38,532,244 | <p>I am working with a very large donation database of data with relevant columns for donation ID, conduit ID, amount, for example:</p>
<pre><code> TRANSACTION_ID BACK_REFERENCE_TRAN_ID_NUMBER CONTRIBUTION_AMOUNT
0 VR0P4H2SEZ1 0 100
1 VR0P4H3X770 0 2700
2 VR0P4GY6QV1 0 500
3 VR0P4H3X720 0 1700
4 VR0P4GYHHA0 VR0P4GYHHA0E 200
</code></pre>
<p>What I need to do is to identify all of the rows where the TRANSACTION_ID corresponds to any BACK_REFERENCE_TRAN_ID_NUMBER. My current code, albeit a little clumsy, is:</p>
<pre><code>is_from_conduit = df[df.BACK_REFERENCE_TRAN_ID_NUMBER != "0"].BACK_REFERENCE_TRAN_ID_NUMBER.tolist()
df['CONDUIT_FOR_OTHER_DONATION'] = 0
for row in df.index:
if df['TRANSACTION_ID'][row] in is_from_conduit:
df['CONDUIT_FOR_OTHER_DONATION'][row] = 1
else:
df['CONDUIT_FOR_OTHER_DONATION'][row] = 0
</code></pre>
<p>However, on very large data sets with a large number of conduit donations, this takes forever. I know there must be a simpler way, but I clearly can't come up with how to phrase this to find out what that may be.</p>
| 4 | 2016-07-22T17:30:05Z | 38,532,485 | <p>Here's a NumPy based approach using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.in1d.html" rel="nofollow"><code>np.in1d</code></a> -</p>
<pre><code>vals = np.in1d(df.TRANSACTION_ID,df.BACK_REFERENCE_TRAN_ID_NUMBER).astype(int)
df['CONDUIT_FOR_OTHER_DONATION'] = vals
</code></pre>
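<p>On the same abbreviated sample data (my reconstruction) this gives an identical result to the pandas approach; note that <code>np.isin</code> is the modern spelling of <code>np.in1d</code>:</p>

```python
import numpy as np

trans = np.array(['VR0P4H2SEZ1', 'VR0P4GYHHA0E', 'VR0P4GYHHA0'])
backref = np.array(['0', '0', 'VR0P4GYHHA0E'])
vals = np.isin(trans, backref).astype(int)   # np.isin supersedes np.in1d
print(vals.tolist())  # [0, 1, 0]
```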
| 2 | 2016-07-22T17:44:46Z | [
"python",
"performance",
"pandas"
] |
SPI_SETDESKWALLPAPER not working with tempfile.NamedTemporaryFile() | 38,532,270 | <p><strong>Code:</strong></p>
<pre><code>import urllib.request
import tempfile
import shutil
import ctypes
SPI_SETDESKWALLPAPER = 20
with urllib.request.urlopen('https://www.google.com/images/branding/googlelogo/1x/googlelogo_color_272x92dp.png') as response, tempfile.NamedTemporaryFile() as f:
shutil.copyfileobj(response, f)
ctypes.windll.user32.SystemParametersInfoW(SPI_SETDESKWALLPAPER, 0, f.name, 0)
</code></pre>
<p>However, if you use <code>tempfile.NamedTemporaryFile(delete=False)</code> it works.</p>
<p>The docs state:</p>
<blockquote>
<p>If delete is true (the default), the file is deleted as soon as it is
closed.</p>
</blockquote>
<p>In my original code the file isn't deleted until after having exited the body of the <code>with</code> statement when it is automatically closed. So why isn't <code>SPI_SETDESKWALLPAPER</code> working?</p>
| 0 | 2016-07-22T17:31:13Z | 38,533,681 | <p>You need to read the next couple sentences in the documentation, which read something like this:</p>
<blockquote>
<p>Under Unix, the directory entry for the file is either not created at all or
is removed immediately after the file is created. Other platforms do not
support this; your code should not rely on a temporary file created using
this function having or not having a visible name in the file system.</p>
</blockquote>
| 0 | 2016-07-22T19:05:32Z | [
"python",
"python-3.5"
] |
SPI_SETDESKWALLPAPER not working with tempfile.NamedTemporaryFile() | 38,532,270 | <p><strong>Code:</strong></p>
<pre><code>import urllib.request
import tempfile
import shutil
import ctypes
SPI_SETDESKWALLPAPER = 20
with urllib.request.urlopen('https://www.google.com/images/branding/googlelogo/1x/googlelogo_color_272x92dp.png') as response, tempfile.NamedTemporaryFile() as f:
shutil.copyfileobj(response, f)
ctypes.windll.user32.SystemParametersInfoW(SPI_SETDESKWALLPAPER, 0, f.name, 0)
</code></pre>
<p>However, if you use <code>tempfile.NamedTemporaryFile(delete=False)</code> it works.</p>
<p>The docs state:</p>
<blockquote>
<p>If delete is true (the default), the file is deleted as soon as it is
closed.</p>
</blockquote>
<p>In my original code the file isn't deleted until after having exited the body of the <code>with</code> statement when it is automatically closed. So why isn't <code>SPI_SETDESKWALLPAPER</code> working?</p>
| 0 | 2016-07-22T17:31:13Z | 40,027,054 | <p>I figured out the problem:</p>
<p>To begin with, the value of the <code>fWinIni</code> parameter needs to be changed:</p>
<pre><code>SPIF_UPDATEINIFILE = 0x01
SPIF_SENDCHANGE = 0x02
ctypes.windll.user32.SystemParametersInfoW(SPI_SETDESKWALLPAPER, 0, f.name, SPIF_UPDATEINIFILE | SPIF_SENDCHANGE)
</code></pre>
<p>This preserves the wallpaper after logging off.</p>
<p>Second, the temp file needs to be closed in order for <code>SystemParametersInfoW</code> to work. Therefore, <code>delete=False</code> is necessary.</p>
<p>Finally, delete the temp file manually using <code>os.remove(f.name)</code>.</p>
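<p>Putting those pieces together (a platform-neutral sketch; the Windows <code>ctypes</code> call is elided so only the <code>delete=False</code> lifecycle is shown):</p>

```python
import os
import tempfile

f = tempfile.NamedTemporaryFile(delete=False)
f.write(b'wallpaper bytes')
f.close()                      # closed, but NOT deleted thanks to delete=False
assert os.path.exists(f.name)  # an external API can now open f.name
# ... ctypes.windll.user32.SystemParametersInfoW(...) would go here ...
os.remove(f.name)              # manual cleanup once the consumer is done
assert not os.path.exists(f.name)
```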
| 0 | 2016-10-13T17:08:37Z | [
"python",
"python-3.5"
] |
possible combinations of strings in an array in python? | 38,532,277 | <p><code>arr=['one','two','three']</code>
<br><br>Result must be like this:
<code>onetwo,twothree,onethree</code></p>
<blockquote>
<p>itertools.permutations will not work in this situation.</p>
</blockquote>
<p>We can do this by simply nesting for loops and appending the results; that works for small arrays but takes a long time for big arrays.
<br>
I was wondering: is there any way <code>(like itertools.permutations)</code> this can be achieved?</p>
| 2 | 2016-07-22T17:31:27Z | 38,532,356 | <p>Perhaps what you wanted was the <code>itertools.combinations</code>?</p>
<pre><code>>>> [''.join(comb) for comb in (itertools.combinations(arr, 2))]
['onetwo', 'onethree', 'twothree']
</code></pre>
| 4 | 2016-07-22T17:36:31Z | [
"python",
"arrays",
"python-3.x",
"python-3.4",
"itertools"
] |
possible combinations of strings in an array in python? | 38,532,277 | <p><code>arr=['one','two','three']</code>
<br><br>Result must be like this:
<code>onetwo,twothree,onethree</code></p>
<blockquote>
<p>itertools.permutations will not work in this situation.</p>
</blockquote>
<p>We can do this by simply nesting for loops and appending the results; that works for small arrays but takes a long time for big arrays.
<br>
I was wondering: is there any way <code>(like itertools.permutations)</code> this can be achieved?</p>
| 2 | 2016-07-22T17:31:27Z | 38,532,961 | <blockquote>
<p>for two lists</p>
<ul>
<li>create permutations of one list, each the same length as the other list</li>
<li>zip each permutation with the other list</li>
<li>concatenate all the resulting sublists</li>
<li>join each pair into a string</li>
</ul>
</blockquote>
<pre><code>from itertools import permutations
arr1=['name1','name2']
arr2=['name3','name4']
set( map(lambda x: ''.join(x),reduce( lambda x,y:x+y, [ zip(i,arr1) for i in permutations(arr2,len(arr1)) ] ) ) )
output:
set(['name3name1', 'name3name2', 'name4name1', 'name4name2'])
</code></pre>
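<p>The same cross-pairing can be written more directly with <code>itertools.product</code> (my rephrasing; it yields the same set as the reduce/zip pipeline above):</p>

```python
from itertools import product

arr1 = ['name1', 'name2']
arr2 = ['name3', 'name4']
pairs = {a + b for a, b in product(arr2, arr1)}
print(sorted(pairs))
# ['name3name1', 'name3name2', 'name4name1', 'name4name2']
```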
| 1 | 2016-07-22T18:16:31Z | [
"python",
"arrays",
"python-3.x",
"python-3.4",
"itertools"
] |
Import test failed when using Conda Build | 38,532,279 | <p>I am trying to create my own Conda package, but I'm having a problem when I go to "Build" the package, specifically within the "Testing" phase. I have been following the tutorial linked below, and it's been very helpful in explaining what each part is doing. </p>
<p><a href="http://kylepurdon.com/blog/packaging-python-basics-with-continuum-analytics-conda.html" rel="nofollow">http://kylepurdon.com/blog/packaging-python-basics-with-continuum-analytics-conda.html</a></p>
<p>Everything seems to build fine until it gets to the testing phase when it fails. </p>
<pre><code>===== testing package: py_tools-0.0.1-py27_0 =====
import: u'twitter_functions'
Traceback (most recent call last):
File "/home/curtis/miniconda2/conda-bld/test-tmp_dir/run_test.py", line 26, in <module>
import twitter_functions
ImportError: No module named twitter_functions
TESTS FAILED: py_tools-0.0.1-py27_0
</code></pre>
<p>Here is a link to my Github that contains the directory with my Conda package that I'm trying to build. </p>
<p><a href="https://github.com/CurtLH/py_tools/tree/develop" rel="nofollow">https://github.com/CurtLH/py_tools/tree/develop</a></p>
<p>Do you know what I'm doing wrong when in either my meta.yaml file or somewhere else?</p>
| 0 | 2016-07-22T17:31:34Z | 38,534,741 | <p>The correct import test would be <code>src.twitter_tools</code>, since you've named your package directory <code>src</code>. You can also see the Python packaging documentation to help in naming your package, etc.: <a href="https://python-packaging.readthedocs.io/en/latest/index.html" rel="nofollow">https://python-packaging.readthedocs.io/en/latest/index.html</a> I'd recommend you start by making sure that everything works when you run <code>python setup.py develop</code> before you make a conda package.</p>
| 1 | 2016-07-22T20:24:58Z | [
"python",
"conda",
"miniconda"
] |
Create a unit test using Django's test client post method passing parameters and requesting JSON from rest_framework | 38,532,288 | <p>I want to instantiate a <code>django.test.client.Client()</code> or <code>rest_framework.test.APIClient()</code>, POST a simple set of parameters, and request a JSON format response from a djangorestframework class-based view.</p>
<p>The documentation <a href="http://www.django-rest-framework.org/api-guide/testing/#making-requests" rel="nofollow">suggests</a> I just instantiate APIClient() and post with the parameter <code>format='json'</code>:</p>
<pre><code>rest_framework.test import APIClient
apiclient = APIClient()
response = apiclient.post('/api/v1/model/1/run',
data=request_params, format='json')
</code></pre>
<p>However then my view (a DRF viewset custom method) does not receive the request parameters. Tracing this to the view, the POST parameters do make it to <code>request.data</code> as a dict, but <code>request.POST.items()</code> returns an empty list. When I use the code below to make a POST request over AJAX from a browser, <code>request.POST.items()</code> returns all the parameters correctly. It is only when using the unit test <code>APIClient()</code> <code>post()</code> method that the parameter values aren't in <code>request.POST.items()</code>.</p>
<p>If I use the <code>.get()</code> method of <code>APIClient()</code>, the request parameters are not in <code>request.data</code> when it reaches the view, but they are in <code>request.GET.items()</code>, passed down in <code>QUERY_STRING</code>. The values are moved from query string to the WSGIRequest GET QueryDict by <code>ClientHandler.__call__</code> in django.test.client line 115 <code>request = WSGIRequest(environ)</code> (Django 1.9.7). This doesn't seem to be happening for <code>APIClient()</code> <code>post()</code>.</p>
<p>I tried the following:</p>
<ul>
<li><p>Passing <code>json.dumps(request_params)</code> to the data parameter, but same response - my view doesn't see any parameters in the request (<a href="http://stackoverflow.com/questions/11802299/django-testing-post-based-views-with-json-objects">ref</a>).</p></li>
<li><p>Using the Django Client, passing <code>content_type='application/json'</code>, with and without json.dumps, but same response.</p></li>
<li><p>Using Django Client, setting post **extra parameter to <code>HTTP_ACCEPT='application/json'</code> (with and without json.dumps) - same response.</p></li>
<li><p>Initializing the Django Client with <code>HTTP_ACCEPT='application/json'</code> (with and without json.dumps) - same response.</p></li>
<li><p>Leaving the <code>Accept</code> HTTP header, post's content_type parameter, and APIClient's format parameter undefined, and adding <code>{'format':'json'}</code> to the request_params - <em>which works for <code>Client.get</code> requests</em>, my code sees request parameters, but rest_framework returns HTML. The JSON rendered in this HTML shows the code is working correctly (returns status 202 and a polling URL, as it should).</p></li>
<li><p>Appending <code>.json</code> to the URL in the unit test and leaving content type, etc, at their defaults, but I get <code>Not Found: /api/v1/model/1/run/.json</code> from get_response.</p></li>
</ul>
<p>My code works fine accepting AJAX POST requests through the browser, and my unit tests were working fine when I was using client.get(). It is only the combination of using client.post() and needing JSON back that I cannot get working.</p>
<p>I extract the request values with:</p>
<pre><code>if request.method == 'POST':
form_values = ((key, value) for key, value in request.POST.items())
else:
form_values = ((key, value) for key, value in request.GET.items())
</code></pre>
<p>The Javascript that sends the AJAX request, <em>that succeeds and returns JSON</em>, is as follows:</p>
<pre><code>// Setup at the bottom of the HTML body
$(document).ready(function(){
$.ajaxSetup({
data: {csrfmiddlewaretoken: "{{ csrf_token }}", format: "json" }
});
$.ajaxSetup({
beforeSend: function (xhr, settings) {
xhr.setRequestHeader("Accept", "application/json");
if (!csrfSafeMethod(settings.type) && !this.crossDomain) {
xhr.setRequestHeader("X-CSRFToken", getCookie('csrftoken'));
}
}
});
});
// Code that makes the request url=/api/v1/model/1/run, method=post
// Only POST is permitted on the view method by decorator @detail_route(methods=['post']))
function run_model(event)
{
var form = $(event.target);
$.ajax({
type: form.attr('method'),
url: form.attr('action'),
data: $("#" + form.attr('id')).serialize() + "&format=json&csrfmiddlewaretoken={{ csrf_token }}"
})
.done(function (data, status, jqXHR) {
poll_instance(data.instance_id, data.model_id);
})
.fail(function (jqXHR, status, err) {
var status_div = $("." + construct_div_class("model", "div", jqXHR.responseJSON.model_id)).children("div.status");
if (catch_ajax_error(status_div, failed_tries, jqXHR, status, err)) {
setTimeout(run_model, 3000, event);
};
});
event.preventDefault();
};
</code></pre>
<p>The Accept header was what got this working, format=json didn't work.</p>
<p>This is the receiving view:</p>
<pre><code>class ModelViewSet(viewsets.ModelViewSet):
@detail_route(methods=['post'])
def run(self, request, *args, **kwargs):
"""
Runs a model and redirects to the URL that will return the output results when ready.
"""
try:
instance_id = run_model(request, self.get_object().id)
except ParameterValidationError as e:
# ...
return Response(data={'instance_id': instance_id, 'model_id': self.get_object().id},
status=status.HTTP_202_ACCEPTED)
</code></pre>
<p>The form, whose submit is tied to run_model() above:</p>
<pre><code><form method="POST" action="/api/v1/model/3/run/" id="model-form-3">
<table class="table table-striped table-bordered table-hover">
<tbody><tr>
<th>
Model
</th>
<th>
Parameter
</th>
<th>
Value
</th>
</tr>
<tr>
<td>
Source source model of Composite (model #2)
</td>
<td>
GUI dim value in for POC model #89
</td>
<td>
<select name="5_77" id="5_77">
<option value="13">
Dimension description #17
</option>
<option value="14">
Dimension description #18
</option>
</select>
</td>
</tr>
<tr>
<td>
Source model of Composite (model #1)
</td>
<td>
Decimal GUI value in for POC model #64
</td>
<td>
<input name="4_52" id="4_52" value="123456789" type="text">
</td>
</tr>
<tr>
<td>
Second source model of Composite (model #3)
</td>
<td>
GUI dim value in for POC model #112
</td>
<td>
<select name="6_100" id="6_100">
<option value="16">
Dimension description #20
</option>
<option value="17">
Dimension description #21
</option>
</select>
</td>
</tr>
<tr>
<td>
Dependent of Composite (model #0)
</td>
<td>
GUI dim value in for POC model #45
</td>
<td>
<select name="3_33" id="3_33">
<option value="7">
Dimension description #11
</option>
<option value="8">
Dimension description #12
</option>
</select>
</td>
</tr>
<tr>
<td>
Dependent of Composite (model #0)
</td>
<td>
Decimal GUI value in for POC model #43
</td>
<td>
<input name="3_31" id="3_31" value="123456789" type="text">
</td>
</tr>
</tbody></table>
<input value="Run model" type="submit"><br><br>
</form>
</code></pre>
<p>I'm on Python 3.5, Django 1.9.7, djangorestframework 3.4.0 (also happened in 3.2.1), djangorestframework-xml 1.3.0, debugging in PyCharm 2016.1</p>
| 0 | 2016-07-22T17:32:04Z | 38,591,996 | <p>Turns out the AJAX data is <em>supposed</em> to appear in <code>request.data</code>, and I was using the wrong approach to submit the data via AJAX from the browser. Django rest_framework (DRF) assumes that data from the request will be passed in the same format as data returned to the client - in this case JSON both ways. As it assumes that for an <code>Accept=application/json</code> request, incoming data will be in JSON format, it automatically parses it and populates <code>request.data</code> for you, and <code>request.GET</code> and <code>request.POST</code> are empty by the time the request reaches your DRF view.</p>
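<p>To make the two wire formats concrete, here is a stdlib-only sketch (the payload values are hypothetical, loosely modelled on the form fields below): a serialized form body is querystring-encoded, while DRF expects a JSON document in the body when the client asks for JSON back.</p>

```python
import json
from urllib.parse import parse_qs

# Hypothetical payloads for illustration; the field names mirror the form in the question.
form_body = "5_77=13&4_52=123456789"               # what .serialize()/.formSerialize() sends
json_body = '{"5_77": "13", "4_52": "123456789"}'  # what DRF parses into request.data

print(parse_qs(form_body))   # {'5_77': ['13'], '4_52': ['123456789']}
print(json.loads(json_body)) # {'5_77': '13', '4_52': '123456789'}
```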
<p>To pass a form of data in the AJAX request I use the <a href="http://malsup.com/jquery/form/" rel="nofollow">jquery form plugin's</a> <code>.formSerialize()</code> method.</p>
<p>I did just have a <code>.map()</code> compile a dictionary from a form, but this won't work for radios and other instances where you might have several values for a single key/form id.</p>
<p>Credit for this answer should really go to @dhke, who pointed out my fundamental error. Although perhaps this question should be deleted.</p>
| 0 | 2016-07-26T14:00:49Z | [
"python",
"json",
"django",
"unit-testing",
"django-rest-framework"
] |
How can you plot data from a .txt file using matplotlib? | 38,532,298 | <p>Hi I'm trying to plot of .txt file using matplotlib but I keep getting this error, I'm not that familiar with python, as I started learning a couple of weeks ago, Apologises! The text file is formatted like (it is 2048 rows long) </p>
<pre><code>6876.593750 1
6876.302246 1
6876.003418 0
</code></pre>
<p>I would like just to plot the data in the .txt file.<br>
The error is [IndexError: list index out of range[
This is the program I'm using below. </p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
with open("Alpha_Particle.txt") as f:
data = f.read()
data = data.split('\n')
x = [row.split(' ')[0] for row in data]
y = [row.split(' ')[1] for row in data]
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.set_title("Plot title")
ax1.set_xlabel('x label')
ax1.set_ylabel('y label')
ax1.plot(x,y, c='r', label='the data')
leg = ax1.legend()
plt.show()
</code></pre>
<p>Thank you in advance! </p>
| 0 | 2016-07-22T17:33:32Z | 38,532,503 | <p>A quick solution would be to remove the trailing empty element in <code>data</code> (splitting on <code>'\n'</code> leaves an empty string after the final newline), like this:</p>
<pre><code>data.pop()
</code></pre>
<p>Place it after </p>
<pre><code>data = data.split('\n')
</code></pre>
| 0 | 2016-07-22T17:46:16Z | [
"python",
"matplotlib"
] |
How can you plot data from a .txt file using matplotlib? | 38,532,298 | <p>Hi I'm trying to plot of .txt file using matplotlib but I keep getting this error, I'm not that familiar with python, as I started learning a couple of weeks ago, Apologises! The text file is formatted like (it is 2048 rows long) </p>
<pre><code>6876.593750 1
6876.302246 1
6876.003418 0
</code></pre>
<p>I would like just to plot the data in the .txt file.<br>
The error is [IndexError: list index out of range[
This is the program I'm using below. </p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
with open("Alpha_Particle.txt") as f:
data = f.read()
data = data.split('\n')
x = [row.split(' ')[0] for row in data]
y = [row.split(' ')[1] for row in data]
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.set_title("Plot title")
ax1.set_xlabel('x label')
ax1.set_ylabel('y label')
ax1.plot(x,y, c='r', label='the data')
leg = ax1.legend()
plt.show()
</code></pre>
<p>Thank you in advance! </p>
| 0 | 2016-07-22T17:33:32Z | 38,532,516 | <p>You're just reading in the data wrong. Here's a cleaner way:</p>
<pre><code>with open('Alpha_Particle.txt') as f:
lines = f.readlines()
x = [line.split()[0] for line in lines]
y = [line.split()[1] for line in lines]
x
['6876.593750', '6876.302246', '6876.003418']
y
['1', '1', '0']
</code></pre>
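<p>One caveat: the parsed values come back as strings. A variant that also converts them to floats before plotting (it writes a tiny sample file first so the sketch is self-contained):</p>

```python
# Self-contained sketch: write a small sample file, then parse it.
# The float() calls are an addition; without them matplotlib receives strings.
sample = "6876.593750 1\n6876.302246 1\n6876.003418 0\n"
with open('Alpha_Particle.txt', 'w') as f:
    f.write(sample)

with open('Alpha_Particle.txt') as f:
    pairs = [line.split() for line in f if line.strip()]  # skips any blank lines
x = [float(p[0]) for p in pairs]
y = [float(p[1]) for p in pairs]
print(x)  # [6876.59375, 6876.302246, 6876.003418]
print(y)  # [1.0, 1.0, 0.0]
```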
| 1 | 2016-07-22T17:46:45Z | [
"python",
"matplotlib"
] |
mpi4py passing dict object | 38,532,336 | <pre><code>#mpiexec -n 3 python pass_dict.py
from mpi4py import MPI
import psycopg2
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
tax_dict={}
if rank == 0:
tax_files=['2008','2009','2011','2012','2013','2014','2015']
file_dir='/zhome/nah316/taxonomy/'
for tax_file in tax_files:
filename=file_dir + tax_file+'.csv'
with open(filename,'r') as f:
temp={}
for line in f:
temp_list=[]
splitted_line = line.split()
tag=splitted_line[1]
temp_list.append(tag)
temp[splitted_line[1]] = temp_list
tax_dict[tax_file]=temp
else:
tax_dict=None
comm.bcast(tax_dict, root = 0)
print '-' * 20, rank , '-'* 30
print tax_dict['2015']['InvestmentSoldNotYetPurchasedRestrictedCost']
</code></pre>
<p>Here's the code in which I try to construct a dictionary of dictionaries and bcast it over the communicator to the 2 other cores. When I ran it I got this error:</p>
<pre><code>-------------------- 2 ------------------------------
Traceback (most recent call last):
File "pass_dict.py", line 33, in <module>
print tax_dict['2015']['InvestmentSoldNotYetPurchasedRestrictedCost']
TypeError: 'NoneType' object has no attribute '__getitem__'
-------------------- 1 ------------------------------
Traceback (most recent call last):
File "pass_dict.py", line 33, in <module>
print tax_dict['2015']['InvestmentSoldNotYetPurchasedRestrictedCost']
TypeError: 'NoneType' object has no attribute '__getitem__'
-------------------- 0 ------------------------------
['InvestmentSoldNotYetPurchasedRestrictedCost']
</code></pre>
<p>It seems to me that the dictionary passed to the cores other than the root has lost its functionality as a dictionary. Why is this? How should I work around it so the dictionary arrives as it is on the root node?</p>
<p>Thanks in advance!</p>
| 0 | 2016-07-22T17:35:28Z | 38,573,438 | <p>I don't know much about python or mpi4py in detail, but I found some code at <a href="https://pythonhosted.org/mpi4py/usrman/tutorial.html" rel="nofollow">https://pythonhosted.org/mpi4py/usrman/tutorial.html</a> that would imply that you need to assign the result of comm.bcast to the dictionaries on the other ranks.</p>
<p>The code should be</p>
<pre><code>tax_dict = comm.bcast(tax_dict, root = 0)
</code></pre>
<p>Maybe this solves the problem?</p>
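<p>A pure-Python stand-in (no MPI involved, and a hypothetical toy value) for why that assignment matters: <code>bcast</code> returns the broadcast object, so without rebinding the name, the non-root ranks keep the <code>None</code> they were initialized with.</p>

```python
def bcast(obj):
    # Stand-in for comm.bcast -- no MPI here, just the key fact that it *returns* the data.
    return obj

tax_dict = None              # what every non-root rank starts with
bcast({'2015': {}})          # return value discarded -> tax_dict stays None
print(tax_dict)              # None

tax_dict = bcast({'2015': {}})   # rebinding the name is what makes the data visible
print(tax_dict)
```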
| 0 | 2016-07-25T16:51:54Z | [
"python",
"parallel-processing",
"mpi4py"
] |
How to extract a value from a Python dictionary using Javascript? | 38,532,380 | <p>I have a Python dictionary that looks like this...</p>
<pre><code>{u'reason': u'invalidQuery', u'message': u'Encountered " <ID> "asd "" at line 1, column 1.\nWas expecting:\n <EOF> \n ', u'location': u'query'}
</code></pre>
<p>How do I access the <code>u'message'</code> value using Javascript? It is being passed to the front end via Django REST Framework, shouldn't that convert it to a JSON object automatically? It isn't...</p>
| 1 | 2016-07-22T17:38:34Z | 38,532,410 | <p>This is not a valid JavaScript object.</p>
| 2 | 2016-07-22T17:40:12Z | [
"javascript",
"python",
"dictionary",
"unicode",
"key"
] |
How to extract a value from a Python dictionary using Javascript? | 38,532,380 | <p>I have a Python dictionary that looks like this...</p>
<pre><code>{u'reason': u'invalidQuery', u'message': u'Encountered " <ID> "asd "" at line 1, column 1.\nWas expecting:\n <EOF> \n ', u'location': u'query'}
</code></pre>
<p>How do I access the <code>u'message'</code> value using Javascript? It is being passed to the front end via Django REST Framework, shouldn't that convert it to a JSON object automatically? It isn't...</p>
| 1 | 2016-07-22T17:38:34Z | 38,532,571 | <p>In views.py</p>
<pre><code>import json
json_object = json.dumps(your_object)
</code></pre>
<p>In .html / .js</p>
<pre><code>var json_object = {{json_object|safe}};
</code></pre>
<p>The .py side converts it to a valid JSON object. The tag in html escapes invalid json characters like "&"</p>
<p>In case you are sending your object as a string to your template then you have to convert it to JSON again via JSON, for instance:</p>
<pre><code>JSON.parse("{{json_object}}");
</code></pre>
<p>Or if the object is a string or a Python dict on the Django views level:</p>
<pre><code>json.loads(your_string)
</code></pre>
<p>To convert it into a json object</p>
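<p>For instance, using the dict from the question, the Python-side round trip looks like this (a minimal sketch; <code>json.loads</code> here mirrors what <code>JSON.parse</code> does on the JavaScript side):</p>

```python
import json

# The dict from the question, round-tripped through JSON.
data = {u'reason': u'invalidQuery',
        u'message': u'Encountered " <ID> "asd "" at line 1, column 1.',
        u'location': u'query'}
payload = json.dumps(data)      # what the view sends to the template/browser
restored = json.loads(payload)  # what JSON.parse() does client-side
print(restored['message'])
```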
| 2 | 2016-07-22T17:50:41Z | [
"javascript",
"python",
"dictionary",
"unicode",
"key"
] |
Running both Python 2.7 and 3.5 on PC | 38,532,440 | <p>I have both versions of Python installed on my PC running Windows 10 and I can switch between them manually as needed, but I was wondering if there is a way to edit their path environment variables so that I can launch both of them from the CMD easily.</p>
<p>For example, instead of typing "python" to launch whatever is the default one right now, I want to just type python2 for one, and python3 for the other, is that possible?</p>
<p><strong>Update</strong>: it turned out that you don't need any trick for this, you just use either <code>py -2</code> or <code>py -3</code> accordingly. Alternatively, you can configure your own aliases in <code>cmd</code> as mentioned below.</p>
| 3 | 2016-07-22T17:42:14Z | 38,532,808 | <p>This has more to do with Windows and less to do with Python IMO. You might want to take a look at <a href="http://stackoverflow.com/questions/20530996/aliases-in-windows-command-prompt">Aliases in windows command prompt</a>.
You should be able to use</p>
<pre><code>DOSKEY python3=C:\path\to\python3.exe $*
DOSKEY python2=C:\path\to\python2.exe $*
</code></pre>
<p>to define the alias. You can then put those in a <code>.cmd</code> file e.g. <code>env.cmd</code> and use</p>
<pre><code>cmd.exe /K env.cmd
</code></pre>
<p>to automatically load the aliases into the shell when you run it.
That's the way I would go about doing this. I hope it helps.</p>
| 2 | 2016-07-22T18:06:32Z | [
"python",
"windows",
"python-2.7",
"python-3.x"
] |
Running both Python 2.7 and 3.5 on PC | 38,532,440 | <p>I have both versions of Python installed on my PC running Windows 10 and I can switch between them manually as needed, but I was wondering if there is a way to edit their path environment variables so that I can launch both of them from the CMD easily.</p>
<p>For example, instead of typing "python" to launch whatever is the default one right now, I want to just type python2 for one, and python3 for the other, is that possible?</p>
<p><strong>Update</strong>: it turned out that you don't need any trick for this, you just use either <code>py -2</code> or <code>py -3</code> accordingly. Alternatively, you can configure your own aliases in <code>cmd</code> as mentioned below.</p>
| 3 | 2016-07-22T17:42:14Z | 38,533,464 | <p>You can try <a href="https://virtualenv.pypa.io/en/stable/userguide/" rel="nofollow"><code>virtualenv</code></a> or <a href="https://www.cygwin.com/" rel="nofollow"><code>cygwin</code></a>. Using the later you can have both versions python installed and invoked as you from the same terminal.</p>
<p>Another possible alternative might be <a href="https://blogs.windows.com/buildingapps/2016/03/30/run-bash-on-ubuntu-on-windows/" rel="nofollow">Ubuntu on Windows</a> but personally I have not tried this.</p>
<p>If your are looking for a native solution to use in <code>Windows Command Prompt</code> or <code>Power Shell</code>, as mentioned by <a href="http://stackoverflow.com/users/6510412/paradoxinabox">Paradoxinabox</a> you have to go with aliases. </p>
| 0 | 2016-07-22T18:50:03Z | [
"python",
"windows",
"python-2.7",
"python-3.x"
] |
Running both Python 2.7 and 3.5 on PC | 38,532,440 | <p>I have both versions of Python installed on my PC running Windows 10 and I can switch between them manually as needed, but I was wondering if there is a way to edit their path environment variables so that I can launch both of them from the CMD easily.</p>
<p>For example, instead of typing "python" to launch whatever is the default one right now, I want to just type python2 for one, and python3 for the other, is that possible?</p>
<p><strong>Update</strong>: it turned out that you don't need any trick for this, you just use either <code>py -2</code> or <code>py -3</code> accordingly. Alternatively, you can configure your own aliases in <code>cmd</code> as mentioned below.</p>
| 3 | 2016-07-22T17:42:14Z | 38,534,073 | <p>I have copied two batch files from WinPython distribution, </p>
<p><em>cmd.bat</em></p>
<pre><code>@echo off
call %~dp0env.bat
cmd.exe /k
</code></pre>
<p>and <em>env.bat</em> (edited)</p>
<pre><code>@echo off
set WINPYDIR=C:\devel\Python34
set PATH=%WINPYDIR%\;%WINPYDIR%\DLLs;%WINPYDIR%\Scripts;%PATH%;
</code></pre>
<p>where <code>WINPYDIR</code> corresponds to the install path. I have placed these to <em>Scripts</em> subdirectory (for example <em>C:\devel\Python34\Scripts</em>), and then a suitable shortcut on desktop that launches command prompt with <code>PATH</code> variable set.</p>
| 0 | 2016-07-22T19:34:15Z | [
"python",
"windows",
"python-2.7",
"python-3.x"
] |
Difference between __new__ and __init__ order in Python2/3 | 38,532,445 | <p>In Python 3, if any value that is not an instance of <code>cls</code> is returned, the <code>__init__</code> method is never called. So I can, for example, do this:</p>
<pre><code>class Foo:
@staticmethod
def bar(n):
return n * 5
def __new__(cls, n):
return Foo.bar(n)
print(Foo(3)) # => 15
</code></pre>
<p>I was under the impression that the order was <code>__call__</code> (if it's an instance) -> <code>__new__</code> -> <code>__init__</code>.</p>
<p>However, in Python 2, this seems to raise a <code>TypeError: this constructor takes no arguments</code> due to the lack of an <code>__init__</code>. I can fix that by inheriting from <code>object</code>. So, running this:</p>
<pre><code>class Foo:
def __new__(cls, *args, **kwargs):
print("new called")
def __init__(self, *args, **kwargs):
print("init called")
Foo()
"""
Python2: "init called"
Python3: "new called"
"""
</code></pre>
<p>In Python 2, I even messed around with metaclasses.</p>
<pre><code>Meta = type("Meta", (type,), dict(__call__=lambda self, x: x * 5))
class Foo(object):
__metaclass__ = Meta
print(Foo(4)) # => 20
</code></pre>
<p>But this does not work in Python3 because the init/new methods seem to be reversed.</p>
<p>Is there any Python2/3 compatible way of doing this?</p>
<h1>Solution:</h1>
<p>This is the way I did it. I don't like it, but it works:</p>
<pre><code>class Foo(object):
@staticmethod
def __call__(i):
return i * 5
def __new__(cls, i):
return Foo.__call__(i)
</code></pre>
<p>Surely there is a more pythonic way of doing this.</p>
| 4 | 2016-07-22T17:42:25Z | 38,532,508 | <p>In Python 2, you need to use new-style classes to make classes work properly. That means you need to define your class as <code>class Foo(object)</code>. Then your first example will work in both Python 2 and Python 3.</p>
| 6 | 2016-07-22T17:46:27Z | [
"python",
"python-2.7",
"python-3.x",
"metaclass"
] |
How can I use a Python defaultdict to divide one value by another? | 38,532,479 | <p>I have a defaultdict built with keys and values, but I need to be able to divide the first value by the second value if there are two values in the pair.</p>
<pre><code>defaultdict(<type 'list'>, {3: [7567, 6525], 4: [0], 65542: [609, 5245], 13:
[73585, 84764], 14: [159, 19385], 65552: [1834], 22: [47333], 25: [0, 5320],
65562: [0], 98332: [0], 30: [0, 704249], 32: [5612], 33: [76050]}
</code></pre>
<p>So, for 3:, I would need to get 7567/6525, and put that into a new dictionary with the same key. But, for 32, I can't do any division, so I'd need to remove it from the set. </p>
<p>How would I go about doing this, knowing I don't always have 2 values to divide with? </p>
| 0 | 2016-07-22T17:44:10Z | 38,532,576 | <p>You could do something like this:</p>
<pre><code>for k in mydict.keys():
if len(mydict[k]) != 2:
mydict.pop(k)
else:
v1, v2 = mydict[k]
mydict[k] = v1 / v2
</code></pre>
| 0 | 2016-07-22T17:50:58Z | [
"python"
] |
How can I use a Python defaultdict to divide one value by another? | 38,532,479 | <p>I have a defaultdict built with keys and values, but I need to be able to divide the first value by the second value if there are two values in the pair.</p>
<pre><code>defaultdict(<type 'list'>, {3: [7567, 6525], 4: [0], 65542: [609, 5245], 13:
[73585, 84764], 14: [159, 19385], 65552: [1834], 22: [47333], 25: [0, 5320],
65562: [0], 98332: [0], 30: [0, 704249], 32: [5612], 33: [76050]}
</code></pre>
<p>So, for 3:, I would need to get 7567/6525, and put that into a new dictionary with the same key. But, for 32, I can't do any division, so I'd need to remove it from the set. </p>
<p>How would I go about doing this, knowing I don't always have 2 values to divide with? </p>
| 0 | 2016-07-22T17:44:10Z | 38,532,645 | <p>Use a dictionary comprehension and the ternary operator. </p>
<p>Python 3</p>
<pre><code>from operator import truediv
{key: (truediv(*val) if len(val) == 2 else val[0]) for key, val in dic.items()}
</code></pre>
<p>Python 2</p>
<pre><code>from operator import truediv
{key: (truediv(*val) if len(val) == 2 else val[0]) for key, val in dic.iteritems()}
</code></pre>
<p>I'll continue with Python 3. If you want to remove those entries with less than 2 items</p>
<pre><code>{key: div(*val) for key, val in dic.items() if len(val) == 2}
</code></pre>
<p><strong>Update</strong> per your comment under the question</p>
<p>Wrapping the results back into lists and getting a list of the "outliers".</p>
<pre><code>new_dic = {key: [div(*val)] for key, val in dic.items() if len(val) == 2}
outliers = [key for key, val in dic.items() if len(val) != 2]
</code></pre>
| 2 | 2016-07-22T17:55:36Z | [
"python"
] |
How can I use a Python defaultdict to divide one value by another? | 38,532,479 | <p>I have a defaultdict built with keys and values, but I need to be able to divide the first value by the second value if there are two values in the pair.</p>
<pre><code>defaultdict(<type 'list'>, {3: [7567, 6525], 4: [0], 65542: [609, 5245], 13:
[73585, 84764], 14: [159, 19385], 65552: [1834], 22: [47333], 25: [0, 5320],
65562: [0], 98332: [0], 30: [0, 704249], 32: [5612], 33: [76050]}
</code></pre>
<p>So, for 3:, I would need to get 7567/6525, and put that into a new dictionary with the same key. But, for 32, I can't do any division, so I'd need to remove it from the set. </p>
<p>How would I go about doing this, knowing I don't always have 2 values to divide with? </p>
| 0 | 2016-07-22T17:44:10Z | 38,532,667 | <p>I guess since everyone else is interpreting the question this way I'll post my version as an answer:</p>
<pre><code>{k: reduce(operator.div, map(float, v)) for k, v in original.items()}
</code></pre>
<p>Alternatively:</p>
<pre><code>{k: reduce(operator.truediv, v) for k, v in original.items()}
</code></pre>
<p>Where <code>original</code> is your dict and you <code>import operator</code>. If you're on Python 3 you'll need to <code>from functools import reduce</code> as well (global in Python 2).</p>
<p><s>Also, if you're using Python 3, you can drop <code>map(float, v)</code> in favor of just <code>v</code>. That's so you don't get burned by integer division (Floor division) on Python 2.</s> You can use <code>truediv</code> instead of <code>map</code> to avoid this -- I was thinking, incorrectly, this was not portable between versions of Python. I stand corrected.</p>
<p>Note: This will not work if your dict values are not always iterable. You're using a defaultdict with a list default so you'll be fine.</p>
| 1 | 2016-07-22T17:56:55Z | [
"python"
] |
Google's Python Course: iterating through list of regular expressions and printing each output on a new line | 38,532,502 | <p>I am trying to pass some regular expressions examples that are provided by Google's Python Course to a list and print each output on a new line.</p>
<pre><code>import re
str='an example word:cat!!'
match=re.search(r'word:\w\w\w',str)
if match:
print('I found a cat :)',match.group()) ## 'found word:cat'
else:
print('did not find')
matches=[re.search(r'pi+', 'piiig'),re.search(r'i+', 'piigiiii'),re.search(r'\d\s*\d\s*\d', 'xx1 2 3xx'),re.search(r'\d\s*\d\s*\d', 'xx12 3xx'),re.search(r'\d\s*\d\s*\d', 'xx123xx'),re.search(r'^b\w+', 'foobar'),re.search(r'b\w+', 'foobar')]
for the_match in matches:
matches.append(the_match)
print("\n".join(matches))
#del matches
</code></pre>
<p>When I run <code>python regex.py</code>, I get the following:</p>
<pre><code>python regex.py
I found a cat :) word:cat
</code></pre>
<p>It just stalls and produces no further output. I will have to press <code>ctrl+c</code> 2 times to exit. </p>
<p>Please let me how to get an output such as:</p>
<pre><code>re.search(r'pi+', 'piiig') returned (<_sre.SRE_Match object; span=(0, 4), match='piii'>, <_sre.SRE_Match object; span=(1, 3), match='ii'>)
re.search(r'i+', 'piigiiii') returned <_sre.SRE_Match object; span=(1, 3), match='ii'>
etc...
</code></pre>
<p>I am running Python 3.5.2 on Windows 10 version 10.0.10586 64 bit.</p>
<p>Thank you!</p>
<p>After your answers (@Buzz), my script is as follows:</p>
<pre><code>import re
str='an example word:cat!!'
match=re.search(r'word:\w\w\w',str)
matches=[re.search(r'pi+', 'piiig'),re.search(r'i+', 'piigiiii'),re.search(r'\d\s*\d\s*\d', 'xx1 2 3xx'),re.search(r'\d\s*\d\s*\d', 'xx12 3xx'),re.search(r'\d\s*\d\s*\d', 'xx123xx'),re.search(r'^b\w+', 'foobar'),re.search(r'b\w+', 'foobar')]
if match:
print('I found a cat :)',match.group()) ## 'found word:cat'
else:
print('No match found.')
for the_match in matches:
print(the_match)
</code></pre>
<p>The output is as follows:</p>
<pre><code>I found a cat :) word:cat
<_sre.SRE_Match object; span=(0, 4), match='piii'>
<_sre.SRE_Match object; span=(1, 3), match='ii'>
<_sre.SRE_Match object; span=(2, 9), match='1 2 3'>
<_sre.SRE_Match object; span=(2, 7), match='12 3'>
<_sre.SRE_Match object; span=(2, 5), match='123'>
None
<_sre.SRE_Match object; span=(3, 6), match='bar'>
</code></pre>
<p>This works perfectly. Thank you so much.</p>
| 0 | 2016-07-22T17:46:13Z | 38,532,647 | <p>You have an infinite loop:</p>
<pre><code>for the_match in matches:
matches.append(the_match)
</code></pre>
<p>This will append new items to the list forever. I don't know what you were trying to achieve with this, but it looks like you can simply remove these two lines.</p>
| 0 | 2016-07-22T17:55:42Z | [
"python",
"regex",
"tuples"
] |
Google's Python Course: iterating through list of regular expressions and printing each output on a new line | 38,532,502 | <p>I am trying to pass some regular expressions examples that are provided by Google's Python Course to a list and print each output on a new line.</p>
<pre><code>import re
str='an example word:cat!!'
match=re.search(r'word:\w\w\w',str)
if match:
print('I found a cat :)',match.group()) ## 'found word:cat'
else:
print('did not find')
matches=[re.search(r'pi+', 'piiig'),re.search(r'i+', 'piigiiii'),re.search(r'\d\s*\d\s*\d', 'xx1 2 3xx'),re.search(r'\d\s*\d\s*\d', 'xx12 3xx'),re.search(r'\d\s*\d\s*\d', 'xx123xx'),re.search(r'^b\w+', 'foobar'),re.search(r'b\w+', 'foobar')]
for the_match in matches:
matches.append(the_match)
print("\n".join(matches))
#del matches
</code></pre>
<p>When I run <code>python regex.py</code>, I get the following:</p>
<pre><code>python regex.py
I found a cat :) word:cat
</code></pre>
<p>It just stalls and produces no further output. I will have to press <code>ctrl+c</code> 2 times to exit. </p>
<p>Please let me how to get an output such as:</p>
<pre><code>re.search(r'pi+', 'piiig') returned (<_sre.SRE_Match object; span=(0, 4), match='piii'>, <_sre.SRE_Match object; span=(1, 3), match='ii'>)
re.search(r'i+', 'piigiiii') returned <_sre.SRE_Match object; span=(1, 3), match='ii'>
etc...
</code></pre>
<p>I am running Python 3.5.2 on Windows 10 version 10.0.10586 64 bit.</p>
<p>Thank you!</p>
<p>After your answers (@Buzz), my script is as follows:</p>
<pre><code>import re
str='an example word:cat!!'
match=re.search(r'word:\w\w\w',str)
matches=[re.search(r'pi+', 'piiig'),re.search(r'i+', 'piigiiii'),re.search(r'\d\s*\d\s*\d', 'xx1 2 3xx'),re.search(r'\d\s*\d\s*\d', 'xx12 3xx'),re.search(r'\d\s*\d\s*\d', 'xx123xx'),re.search(r'^b\w+', 'foobar'),re.search(r'b\w+', 'foobar')]
if match:
print('I found a cat :)',match.group()) ## 'found word:cat'
else:
print('No match found.')
for the_match in matches:
print(the_match)
</code></pre>
<p>The output is as follows:</p>
<pre><code>I found a cat :) word:cat
<_sre.SRE_Match object; span=(0, 4), match='piii'>
<_sre.SRE_Match object; span=(1, 3), match='ii'>
<_sre.SRE_Match object; span=(2, 9), match='1 2 3'>
<_sre.SRE_Match object; span=(2, 7), match='12 3'>
<_sre.SRE_Match object; span=(2, 5), match='123'>
None
<_sre.SRE_Match object; span=(3, 6), match='bar'>
</code></pre>
<p>This works perfectly. Thank you so much.</p>
| 0 | 2016-07-22T17:46:13Z | 38,532,660 | <p>The reason it keeps running forever is becasue you are appending to <code>matches</code> inside the for loop. You are always adding to it making the list longer which in turn makes the for loop run until it can reach the end, but it will never reach it. </p>
<pre><code>for the_match in matches:
print (the_match)
</code></pre>
| 2 | 2016-07-22T17:56:42Z | [
"python",
"regex",
"tuples"
] |
How do I create a matplotlib slideshow? | 38,532,522 | <p>The Python/pyplot code below generates four figures and four windows. I need code that opens one window showing fig1. Then when the user presses right arrow button or right arrow key the same window clears fig1 and shows fig2. So basically only one of the four figures will be selected by the user for viewing in a slideshow. I have searched for an answer in the docs and online without success. I have edited the question to show the definition of six axes that appear in the four figures. It appears that one must associate the axes with a single figure and then draw, clear, and redraw axes to simulate a slideshow in the default GUI?</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
fig1 = plt.figure()
ax1 = fig1.add_subplot(3, 1, 1)
ax2 = fig1.add_subplot(3, 1, 2, sharex=ax1)
ax3 = fig1.add_subplot(3, 1, 3, sharex=ax1)
fig2 = plt.figure()
ax4 = fig2.add_subplot(1, 1, 1)
fig3 = plt.figure()
ax5 = fig2.add_subplot(1, 1, 1)
fig4 = plt.figure()
ax6 = fig2.add_subplot(1, 1, 1)
plt.show()
</code></pre>
<p>Ideally I would like to set the backend to ensure the same code functions on MacOS, Linux, and Windows. However I would be satisfied to get a very basic slideshow working on Windows 7 and develop for other OS later if necessary.</p>
| 1 | 2016-07-22T17:46:59Z | 38,551,548 | <p>Maybe something like this:
(click on the graph to switch)</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
i = 0
def fig1(fig):
ax = fig.add_subplot(111)
ax.plot(x, np.sin(x))
def fig2(fig):
ax = fig.add_subplot(111)
ax.plot(x, np.cos(x))
def fig3(fig):
ax = fig.add_subplot(111)
ax.plot(x, np.tan(x))
def fig4(fig):
ax1 = fig.add_subplot(311)
ax1.plot(x, np.sin(x))
ax2 = fig.add_subplot(312)
ax2.plot(x, np.cos(x))
ax3 = fig.add_subplot(313)
ax3.plot(x, np.tan(x))
switch_figs = {
0: fig1,
1: fig2,
2: fig3,
3: fig4
}
def onclick1(fig):
global i
print(i)
fig.clear()
i += 1
i %= 4
switch_figs[i](fig)
plt.draw()
x = np.linspace(0, 2*np.pi, 1000)
fig = plt.figure()
switch_figs[0](fig)
fig.canvas.mpl_connect('button_press_event', lambda event: onclick1(fig))
plt.show()
</code></pre>
| 1 | 2016-07-24T11:29:45Z | [
"python",
"matplotlib",
"cross-platform",
"slideshow"
] |
Quickbase module add_record() functionâfile upload parameters? | 38,532,528 | <p>The code below is part of the Python Quickbase module which has not been updated in quite a while. The help text for one of the function shown below is not clear on how to pass the parameters to upload a file (the value of which is actually base64 encoded).</p>
<pre><code>def add_record(self, fields, named=False, database=None, ignore_error=True, uploads=None):
"""Add new record. "fields" is a dict of name:value pairs
(if named is True) or fid:value pairs (if named is False). Return the new records RID
"""
request = {}
if ignore_error:
request['ignoreError'] = '1'
attr = 'name' if named else 'fid'
request['field'] = []
for field, value in fields.iteritems():
request_field = ({attr: to_xml_name(field) if named else field}, value)
request['field'].append(request_field)
if uploads:
for upload in uploads:
request_field = (
{attr: (to_xml_name(upload['field']) if named else upload['field']),
'filename': upload['filename']}, upload['value'])
request['field'].append(request_field)
response = self.request('AddRecord', database or self.database, request, required=['rid'])
return int(response['rid'])
</code></pre>
<p>Can someone help me in how I should pass the parameters to add a record.</p>
| 1 | 2016-07-22T17:47:11Z | 38,544,832 | <p>Based on the definition you provided, it appears that you you need to pass an array of dictionaries that each provide the field name/id, filename, and the base64 encoding of the file for the <code>uploads</code> parameter. So, if I had a table where I record the name of a color to the field named "color" with the field id of 19 and a sample image to the field named "sample image" with the field id of 21, I believe my method call would be something like:</p>
<pre><code>my_color_file = #base64 encoding of your file
my_fields = {'19': 'Seafoam Green'}
my_uploads = [{'field': 21, 'filename':'seafoam_green.png', 'value': my_color_file}]
client.add_record(fields=my_fields, uploads=my_uploads)
</code></pre>
<p>Or, if you're using field names:</p>
<pre><code>my_color_file = #base64 encoding of your file
my_fields = {'color': 'Seafoam Green'}
my_uploads = [{'field': 'sample_image', 'filename':'seafoam_green.png', 'value': my_color_file}]
client.add_record(fields=my_fields, named=True, uploads=my_uploads)
</code></pre>
<p><code>client</code> is just the object you instantiated earlier using whatever constructor this module has.</p>
| 1 | 2016-07-23T17:50:43Z | [
"python",
"api",
"quickbase"
] |
How to load a CSV string into MySQL using Python | 38,532,655 | <p>In my use case, I have a csv stored as a string and I want to load it into a MySQL table. Is there a better way than saving the string as a file, use LOAD DATA INFILE, and then deleting the file? I find <a href="http://stackoverflow.com/questions/3627537/is-a-load-data-without-a-file-i-e-in-memory-possible-for-mysql-and-java">this answer</a> but it's for JDBC and I haven't find a Python equivalent to it.</p>
| 0 | 2016-07-22T17:56:11Z | 38,532,713 | <p>Yes what you describe is very possible! Say, for example, that your csv file has three columns:</p>
<pre><code>import MySQLdb
conn = MySQLdb.connect('your_connection_string')
cur = conn.cursor()
with open('yourfile.csv','rb') as fin:
    for row in fin:
        # split each CSV line into one parameter per column
        cur.execute('insert into yourtable (col1,col2,col3) values (%s,%s,%s)', row.strip().split(','))
cur.close(); conn.close()
</code></pre>
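<p>Since the question mentions the CSV is already held in a string rather than a file, a minimal sketch of the parsing step for that case (the table and column names above are placeholders) could wrap the string in an in-memory buffer; the database call itself is left as a comment:</p>

```python
import csv
import io

csv_string = "1,alice,x\n2,bob,y\n"

# parse the in-memory string exactly as if it were a file
rows = list(csv.reader(io.StringIO(csv_string)))

# each parsed row is the parameter tuple for one insert, e.g.:
# cur.executemany('insert into yourtable (col1,col2,col3) values (%s,%s,%s)', rows)
print(rows)  # [['1', 'alice', 'x'], ['2', 'bob', 'y']]
```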
| 0 | 2016-07-22T18:00:12Z | [
"python",
"mysql"
] |
Numpy Conditional Max of Range | 38,532,678 | <p>I'm trying to make a version of my program faster using as much Pandas and Numpy as possible. I am new to Numpy but have been grasping most of it, but I am having trouble with conditional formatting a column with the max of a range. This is the code I am trying to use to achieve this:</p>
<pre><code>x=3
df1['Max']=numpy.where(df1.index>=x,max(df1.High[-x:],0))
</code></pre>
<p>Basically, I am trying to conditionally put the maximum value over the last 3 entries into a cell and repeat down the column. Any and all help is appreciated.</p>
| 4 | 2016-07-22T17:57:36Z | 38,532,806 | <p>Use <a href="http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.ndimage.filters.maximum_filter1d.html" rel="nofollow"><code>Scipy's maximum_filter</code></a> -</p>
<pre><code>from scipy.ndimage.filters import maximum_filter1d
df['max'] = maximum_filter1d(df.High,size=3,origin=1,mode='nearest')
</code></pre>
<p>Basically, maximum_filter operates in a sliding window looking for maximum in that window. Now, by default each such <code>max</code> computation would be performed with window being centered at the index itself. Since, we are looking to go three elements before and ending at the current one, we need to change that <em>centeredness</em> with the parameter <code>origin</code>. Therefore, we have it set at <code>1</code>.</p>
<p>Sample run -</p>
<pre><code>In [21]: df
Out[21]:
High max
0 13 13
1 77 77
2 16 77
3 30 77
4 25 30
5 98 98
6 79 98
7 58 98
8 51 79
9 23 58
</code></pre>
<p><strong>Runtime test</strong></p>
<p>Got me interested to see how this Scipy's sliding max operation performs against Pandas's rolling max method on performance. Here's some results on big datasizes -</p>
<pre><code>In [55]: df = pd.DataFrame(np.random.randint(0,99,(10000)),columns=['High'])
In [56]: %%timeit # @Merlin's rolling based solution :
...: df['max'] = df.High.rolling(window=3, min_periods=1).max()
...:
1000 loops, best of 3: 1.35 ms per loop
In [57]: %%timeit # Using Scipy's max filter :
...: df['max1'] = maximum_filter1d(df.High,size=3,\
...: origin=1,mode='nearest')
...:
1000 loops, best of 3: 487 µs per loop
</code></pre>
| 5 | 2016-07-22T18:06:30Z | [
"python",
"numpy",
"pandas"
] |
Numpy Conditional Max of Range | 38,532,678 | <p>I'm trying to make a version of my program faster using as much Pandas and Numpy as possible. I am new to Numpy but have been grasping most of it, but I am having trouble with conditional formatting a column with the max of a range. This is the code I am trying to use to achieve this:</p>
<pre><code>x=3
df1['Max']=numpy.where(df1.index>=x,max(df1.High[-x:],0))
</code></pre>
<p>Basically, I am trying to conditionally put the maximum value over the last 3 entries into a cell and repeat down the column. Any and all help is appreciated.</p>
| 4 | 2016-07-22T17:57:36Z | 38,532,905 | <p>Here is the logic on <code>np.where</code> </p>
<pre><code>numpy.where(condition, value_if_true, value_if_false)
</code></pre>
<p>I think you need something like the below. </p>
<pre><code>dd= {'to': [100, 200, 300, 400, -500, 600, 700,800, 900, 1000]}
df = pd.DataFrame(dd)
df
to
0 100
1 200
2 300
3 400
4 -500
5 600
6 700
7 800
8 900
9 1000
df['Max'] = df.rolling(window=3, min_periods=1).max()
to Max
0 100 100.0
1 200 200.0
2 300 300.0
3 400 400.0
4 -500 400.0
5 600 600.0
6 700 700.0
7 800 800.0
8 900 900.0
9 1000 1000.0
</code></pre>
| 3 | 2016-07-22T18:13:20Z | [
"python",
"numpy",
"pandas"
] |
How to use rpartition to split the string with backward slashes | 38,532,681 | <p>I need to print the last string which is "foo" here, it's an error with no escape character and wrong result with escape character.</p>
<pre><code>>>> str1='\\a\b\c\foo'
>>> print str1.rpartition('\')[1]
File "<stdin>", line 1
print str1.rpartition('\')[1]
^
SyntaxError: EOL while scanning string literal
>>> print str1.rpartition('\\')[1]
\
>>>
</code></pre>
| 0 | 2016-07-22T17:58:02Z | 38,532,776 | <p>Assuming you have:</p>
<pre><code>s = r'\\a\b\c\foo'
</code></pre>
<p>You need:</p>
<pre><code>s.rsplit('\\', 1)[-1]
</code></pre>
<p>If your backslashes are messed up, see the other answers (no need to repeat those caveats here).</p>
| 0 | 2016-07-22T18:03:48Z | [
"python"
] |
How to use rpartition to split the string with backward slashes | 38,532,681 | <p>I need to print the last string which is "foo" here, it's an error with no escape character and wrong result with escape character.</p>
<pre><code>>>> str1='\\a\b\c\foo'
>>> print str1.rpartition('\')[1]
File "<stdin>", line 1
print str1.rpartition('\')[1]
^
SyntaxError: EOL while scanning string literal
>>> print str1.rpartition('\\')[1]
\
>>>
</code></pre>
| 0 | 2016-07-22T17:58:02Z | 38,532,781 | <p>Two mistakes:</p>
<p>First of all, there's no <code>foo</code> in <code>\\a\b\c\foo</code>. Why? Because <code>\f</code> is a form feed character. You can either escape the backslash with another backslash <code>\\\\a\\b\\c\\foo</code> or use a raw string <code>str1=r'\\a\b\c\foo'</code>.</p>
<p>Secondly, <code>rpartition</code> returns the left part of the string, the separator itself, and the right part of the string. So <code>str1.rpartition('\\')[1]</code> gives you the separator. Use <code>str1.rpartition('\\')[2]</code> to get the result you want.</p>
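<p>Putting both fixes together (raw string plus index <code>[2]</code>), a quick sketch:</p>

```python
s = r'\\a\b\c\foo'              # raw string: backslashes stay literal
left, sep, right = s.rpartition('\\')
print(right)  # foo
```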
| 1 | 2016-07-22T18:04:06Z | [
"python"
] |
How to use rpartition to split the string with backward slashes | 38,532,681 | <p>I need to print the last string which is "foo" here, it's an error with no escape character and wrong result with escape character.</p>
<pre><code>>>> str1='\\a\b\c\foo'
>>> print str1.rpartition('\')[1]
File "<stdin>", line 1
print str1.rpartition('\')[1]
^
SyntaxError: EOL while scanning string literal
>>> print str1.rpartition('\\')[1]
\
>>>
</code></pre>
| 0 | 2016-07-22T17:58:02Z | 38,532,784 | <p>You need the third element of the partition tuple:</p>
<pre><code>>>> str1=r'a\b\c\foo'
>>> str1.rpartition('\\')
('a\\b\\c', '\\', 'foo')
>>> str1.rpartition('\\')[2]
'foo'
</code></pre>
| 0 | 2016-07-22T18:04:29Z | [
"python"
] |
How to use rpartition to split the string with backward slashes | 38,532,681 | <p>I need to print the last string which is "foo" here, it's an error with no escape character and wrong result with escape character.</p>
<pre><code>>>> str1='\\a\b\c\foo'
>>> print str1.rpartition('\')[1]
File "<stdin>", line 1
print str1.rpartition('\')[1]
^
SyntaxError: EOL while scanning string literal
>>> print str1.rpartition('\\')[1]
\
>>>
</code></pre>
| 0 | 2016-07-22T17:58:02Z | 38,532,794 | <p>In your print line the '\' character is escaping the second <code>'</code>. The line can't be parsed because to the compiler it looks like the string literal never ends.
To the compiler the rest of the line, <code>')[1]</code>, still looks like part of the string: it thinks the quote mark belongs to the string. You need to add a second '\' to escape the first one. Try this:</p>
<pre><code>print str1.rpartition('\\')[1]
</code></pre>
| 0 | 2016-07-22T18:05:18Z | [
"python"
] |
Python/Matlab - Taking rank of matrix in quad precision or more | 38,532,778 | <p>I have a 14x14 matrix of which I'm trying to take the rank. The problem is that it has a high condition number so using double precision my matrix is not full rank. I know that it should be, so I'm trying to take the rank in higher precision. </p>
<p>So far I have installed the bigfloat package in python, but have been unsuccessful in trying to get the rank in higher precision. I have also scaled my matrix, I tried python's jacobi preconditioner and some other scaling methods but it was not sufficient.</p>
<p>I'm not trying to solve a system of linear equations, I just need to verify that all my columns are linearly independent. In other words, I want to verify that a (simplified) matrix such as the one shown is of rank 2, not 1. </p>
<pre><code>[1, 0;
0, 1e-20]
</code></pre>
<p>Any Suggestions?</p>
| 2 | 2016-07-22T18:03:58Z | 38,532,908 | <p>Has matlab's <code>rank</code> function not worked for you?</p>
<pre><code>>> A = [1,0; 0, 1e-20];
>> rank(A, 1e-19)
ans = 1
>> rank(A, 1e-21)
ans = 2
</code></pre>
| 0 | 2016-07-22T18:13:24Z | [
"python",
"matrix",
"precision",
"rank",
"quad"
] |
Speeding up reading xml files | 38,532,805 | <p>I have a document of patents that is a concatenated string of xml files in one text document. I'm looking to split it up into separate documents each a single xml file. My code works but I need to speed it up. My code is like this: </p>
<pre><code>import time
count = 0
filestr = ''
line = 'x'
start_time = time.time()
with open('C:/Users/RNCZF01/Documents/Cameron-Fen/Economics-Projects/Patent-project/similarity/Patents/ipg121225.xml') as txtfile:
while line:
line = txtfile.readline()
if '<?xml version="1.0" encoding="UTF-8"?>' in line:
filestr = str(count) + '.xml'
count += 1
with open('C:/Users/RNCZF01/Documents/Cameron-Fen/Economics-Projects/Patent-project/similarity/Patents/2012-12-25/' + filestr, 'ab') as textfile:
textfile.write(line)
textfile.write('\n')
print("--- %s seconds ---" % (time.time() - start_time))
</code></pre>
<p>The one optimization I can think of to speed it up is the if statement. It checks if the line contains the xml header: <code><?xml version="1.0" encoding="UTF-8"?></code>. It probably would be significantly faster if I could check that the line was <code><?xml version="1.0" encoding="UTF-8"?></code> instead of just containing it. But when I write <code>if line == '<?xml version="1.0" encoding="UTF-8"?>':</code> it doesn't pick up the line. Do I need to include a <code>\n</code> at the end or something? Are there any other optimizations you can think of to speed this process up? thanks,</p>
<p>Cameron</p>
| 1 | 2016-07-22T18:06:30Z | 38,533,009 | <p>Instead of checking each line, you might want to load the whole file content and run a Python regex over it. This way you reduce the number of checks and can get all the matches with a single call to <code>findall()</code>.</p>
<p>Here is the doc link - <a href="https://docs.python.org/3/howto/regex.html" rel="nofollow">https://docs.python.org/3/howto/regex.html</a></p>
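<p>As a sketch of that idea (the sample string below stands in for the real file read with <code>txtfile.read()</code>), splitting the concatenated text on the XML declaration keeps each document whole without a per-line test:</p>

```python
import re

# stand-in for the concatenated file contents
data = ('<?xml version="1.0" encoding="UTF-8"?>\n<a>1</a>\n'
        '<?xml version="1.0" encoding="UTF-8"?>\n<b>2</b>\n')

header = '<?xml version="1.0" encoding="UTF-8"?>'
# split on the declaration, then glue it back onto each non-empty piece
docs = [header + part for part in re.split(re.escape(header), data) if part.strip()]
print(len(docs))  # 2
```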
| 1 | 2016-07-22T18:19:00Z | [
"python",
"xml",
"performance"
] |
How to transform a polygon string on a real polygon | 38,532,816 | <p>I need some support to read a polygon.
Today I have a string and I need to change it into a format that is possible to recognize as a polygon.</p>
<p>I am acquiring directly from SQL the value of polygon:</p>
<p>Example:</p>
<p>I read on this way:</p>
<pre><code>string = "POLYGON ((-47.158846224312285 -21.349760242365733;-47.158943117468695 -21.349706412900805;-47.159778541623055 -21.349008036758804))"
</code></pre>
<p>I need to change in this format</p>
<pre><code>list = [(-47.158846224312285, -21.349760242365733), (-47.158943117468695, -21.349706412900805), (-47.159778541623055, -21.349008036758804)]
</code></pre>
<p>Any idea how to modify it?</p>
| 0 | 2016-07-22T18:06:54Z | 38,533,545 | <p>You can try parsing the string with a regular expression via the <a href="https://docs.python.org/2/library/re.html#module-re" rel="nofollow"><code>re</code> module</a> something like this:</p>
<pre><code>import re
pat = re.compile(r'''(-*\d+\.\d+ -*\d+\.\d+);*''')
s = "POLYGON ((-47.158846224312285 -21.349760242365733;-47.158943117468695 -21.349706412900805;-47.159778541623055 -21.349008036758804))"
matches = pat.findall(s)
if matches:
lst = [tuple(map(float, m.split())) for m in matches]
print(lst)
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>[(-47.158846224312285, -21.349760242365733), (-47.158943117468695, -21.349706412900805), (-47.159778541623055, -21.349008036758804)]
</code></pre>
| 1 | 2016-07-22T18:56:38Z | [
"python",
"polygon"
] |
How to transform a polygon string on a real polygon | 38,532,816 | <p>I need some support to read a polygon.
Today I have a string and I need to change it into a format that is possible to recognize as a polygon.</p>
<p>I am acquiring directly from SQL the value of polygon:</p>
<p>Example:</p>
<p>I read on this way:</p>
<pre><code>string = "POLYGON ((-47.158846224312285 -21.349760242365733;-47.158943117468695 -21.349706412900805;-47.159778541623055 -21.349008036758804))"
</code></pre>
<p>I need to change in this format</p>
<pre><code>list = [(-47.158846224312285, -21.349760242365733), (-47.158943117468695, -21.349706412900805), (-47.159778541623055, -21.349008036758804)]
</code></pre>
<p>Any idea how to modify it?</p>
| 0 | 2016-07-22T18:06:54Z | 38,533,657 | <p>Depending on how strict your inputs are, this may be an easy case to simply use the <a href="https://docs.python.org/2/library/re.html" rel="nofollow">regular expression</a> library and some string manipulation.</p>
<pre class="lang-py prettyprint-override"><code>import re
# create a regular expression to extract polygon coordinates
polygon_re = re.compile(r"^POLYGON \(\((.*)\)\)")
input = "POLYGON ((-47.1 -21.3;-47.1 -21.3;-47.1 -21.3))"
polygon_match = polygon_re.match(input)
if polygon_match is not None:
coords_str = polygon_match.groups()[0]
# parse string of coordinates into a list of float pairs
    point_strs = coords_str.split(";")
    polygon = [[float(s) for s in p.split()] for p in point_strs]
</code></pre>
| 0 | 2016-07-22T19:03:31Z | [
"python",
"polygon"
] |
Replacing cells in a column, but not header, in a csv file with python | 38,532,831 | <p>I've been looking for a few hours now and not found what I'm looking for...</p>
<p>I'm looking to make a program that takes an already compiled .csv file with some information missing and asking the user what they would like to add and then placing this in the csv but not effecting the header line. Then saving the file with the additions.</p>
<p>It would look like (input):</p>
<pre><code>data 1, data 2, data 3
2,,4
4,,6
6,,3
3,,2
</code></pre>
<p>program asks "what would you like in data 2 column?"</p>
<p>answer: 5</p>
<p>(output):</p>
<pre><code>data 1, data 2, data 3
2,5,4
4,5,6
6,5,3
3,5,2
</code></pre>
<p>All help highly appreciated.</p>
| 0 | 2016-07-22T18:07:53Z | 38,532,975 | <p>I believe the main part of your question (how to skip the header) has been answered here:</p>
<p><a href="http://stackoverflow.com/questions/14257373/skip-the-headers-when-editing-a-csv-file-using-python">Skip the headers when editing a csv file using Python</a> </p>
<p>If you call <code>next()</code> on the reader once, the first row (the header) will be skipped. Then you can do <code>for row in reader</code> like usual.</p>
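<p>A minimal sketch of the whole flow with the <code>csv</code> module (in-memory buffers stand in for the input and output files, and <code>fill</code> stands in for the user's answer):</p>

```python
import csv
import io

infile = io.StringIO("data 1,data 2,data 3\n2,,4\n4,,6\n6,,3\n3,,2\n")
outfile = io.StringIO()

reader = csv.reader(infile)
writer = csv.writer(outfile, lineterminator="\n")

writer.writerow(next(reader))   # copy the header row untouched
fill = "5"                      # value the user typed in
for row in reader:
    row[1] = fill               # overwrite the "data 2" column
    writer.writerow(row)

print(outfile.getvalue())
```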
| 0 | 2016-07-22T18:17:23Z | [
"python",
"csv"
] |
Replacing cells in a column, but not header, in a csv file with python | 38,532,831 | <p>I've been looking for a few hours now and not found what I'm looking for...</p>
<p>I'm looking to make a program that takes an already compiled .csv file with some information missing and asking the user what they would like to add and then placing this in the csv but not effecting the header line. Then saving the file with the additions.</p>
<p>It would look like (input):</p>
<pre><code>data 1, data 2, data 3
2,,4
4,,6
6,,3
3,,2
</code></pre>
<p>program asks "what would you like in data 2 column?"</p>
<p>answer: 5</p>
<p>(output):</p>
<pre><code>data 1, data 2, data 3
2,5,4
4,5,6
6,5,3
3,5,2
</code></pre>
<p>All help highly appreciated.</p>
| 0 | 2016-07-22T18:07:53Z | 38,533,063 | <ol>
<li>We open the input file and the output file with a python context manager.</li>
<li>get the user input using <code>input()</code> (python 3) or <code>raw_input()</code> (python 2) functions</li>
<li>grab the 1st row in the file and write it out without changing anything and write that out</li>
<li><p>Loop through the rest of the file splitting the columns out and replacing column 2 with the user's input</p>
<pre><code>with open('in.csv', 'r') as infile, open('out.csv', 'w') as outfile:
middle_col = input('What would you like in data 2 column>: ')
outfile.write(infile.readline()) # write out the 1st line
for line in infile:
cols = line.strip().split(',')
cols[1] = middle_col
outfile.write(','.join(cols) + '\n')</code></pre></li>
</ol>
| 1 | 2016-07-22T18:22:43Z | [
"python",
"csv"
] |
Returning two dictionaries, using each one in separate functions | 38,532,850 | <pre><code>def make(node): # takes some input
    for reg_names in reg.names: # dont worry about reg_names and reg.names
if reg.size > 0: #reg.size is an inbuilt function
found_dict = {} # first dictionary
found_dict['reg.name'] = 'reg.size' # i want to save the name of the register : size of the register in the format name : size
else:
not_found_dict = {}
not_found_dict['reg.name'] = 'reg.size' #again, i want to save the name of the register : size of the register in the format name : size
return found_dict, not_found_dict
</code></pre>
<p>Ok, so can you tell me whether from the for loop above, if the constructs for creating the dictionaries (found_dict and not_found_dict) are correct assuming reg.name and reg.size are valid constructs?</p>
<p>I then want to use found_dict in function_one and not_found_dict in function_two like below:</p>
<pre><code>def function_one(input): # should this input be the function 'make' as I only want found_dict?
for name, size in found_dict.items(): #just for the names in found_dict
name_pulled = found_dict['reg.name'] # save the names temporarily to name_pulled using the key reg.name of found_dict
final_names[] = final_names.append(name_pulled) #save names from name_pulled into the list final_names and append them through the for loop. will this work?
def function_two(input): # i need not_found_dict so what should this input be?
for name, size in not_found_dict.items(): #using the names in not_found_dict
discard_name_pulled = not_found_dict['reg.name'] # save the names temporarily to discard_name_pulled using on the 'reg.name' from not_found_dict which is essentially the key to the dict
not_used_names[] = not_used_names.append(discard_name_pulled) # in the same way in function_one, save the names to the list not_used_names and append them through the for loop. Will this construct work?
</code></pre>
<p>Main question is, since def make is returning two dictionaries (found_dict and not_found_dict) how do I correctly input found_dict in function_one and not_found_dict in function_two?</p>
| -1 | 2016-07-22T18:09:01Z | 38,533,028 | <p>First of all, in your first section, every time through the for loop that you do <code>found_dict = {}</code> or <code>not_found_dict = {}</code> you are clearing the contents of the dictionary. I'm not sure if this is what you want. </p>
<p>Second, if you want to return more than one thing from a function you could always return them as an array or a tuple, something like this:</p>
<pre><code>return [found_dict, not_found_dict]
</code></pre>
<p>Look at <a href="http://stackoverflow.com/questions/354883/how-do-you-return-multiple-values-in-python">this question</a> for more information. </p>
<p>After you return your array or tuple you can then store it in another variable like this:</p>
<pre><code>result=make(inputVariable)
</code></pre>
<p>This will let you use each element as you want.</p>
<pre><code>result[0]
result[1]
</code></pre>
<p>You can input them into the functions you want like this:</p>
<pre><code>def function_one(inputParameter, found_dict):
#code ....
def function_two(inputParameter, not_found_dict):
#code ....
function_one(inputVariable, result[0])
function_two(inputVariable, result[1])
</code></pre>
| 0 | 2016-07-22T18:20:12Z | [
"python"
] |
python regex - how to create and reference arbitrary, unknown number of groups | 38,532,873 | <p>I have a text file consisting of space-separate text values:</p>
<pre><code>a: b c d e f g
h: i j k
l:
m: n
</code></pre>
<p>I do not know how many of these values - right of <code>:</code> - I'll have a priori.</p>
<p>I want to use <a href="https://docs.python.org/3/library/re.html" rel="nofollow">Python groups</a> within a regular expression to be able to refer to each capture.</p>
<p><code>GnuATgtRE = re.compile(br'^\r\n(?P<target>.+): (?P<deps>.*)\r\n# Implicit rule search has', re.MULTILINE)</code></p>
<p>Currently, <code><target></code> references the item to the left of semi-colon and <code><deps></code> references everything, in one string, to the right.</p>
<p>I do not know a priori how many <code>deps</code> each <code>target</code> will have.</p>
<p>The syntax <a href="https://docs.python.org/3/library/re.html" rel="nofollow"><code>(?P<text>)</code> is used to create a group which can be used to reference a specific captured sub-regex</a>. </p>
<p>For example, for line 1</p>
<p><code>match_obj.group('target')</code> = <code>a</code>
<code>match_obj.group('deps')</code> = <code>b c d e f g</code></p>
<p>Line 2:</p>
<p><code>match_obj.group('target')</code> = <code>h</code>
<code>match_obj.group('deps')</code> = <code>i j k</code></p>
<p><strong>Question</strong></p>
<p>After I execute <code>match = GnuATgtRE.search(string)</code>, I want to be able to be able to reference each space-separate <code>dep</code> via <code>match.group('some_text')</code>.</p>
<p>The problem is that I don't know if there is a way to create an <strong>arbitrary number of unnamed groups.</strong></p>
<p>For line 1, I'd like to be able to say <code>match.group('<5>')</code> and have that return <code>d</code>. </p>
<p>For line 2, <code>match.group('<5>')</code> should return an empty string, since there are fewer than five deps.</p>
| 2 | 2016-07-22T18:11:00Z | 38,533,069 | <p>See <a href="http://stackoverflow.com/questions/6673686/python-regex-repetition-with-capture-question">this answer</a>.</p>
<blockquote>
<p>Most or all regular expression engines in common use, including in particular those based on the PCRE syntax (like Python's), label their capturing groups according to the numerical index of the opening parenthesis, as the regex is written. So no, you cannot use capturing groups alone to extract an arbitrary, variable number of subsequences from a string.</p>
</blockquote>
<p>A better solution is to just call line.split() on everything after the <code>x:</code> on a line.</p>
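<p>A sketch of that approach on the sample lines (no regex groups needed, and it handles any number of deps):</p>

```python
lines = ["a: b c d e f g", "h: i j k", "l:", "m: n"]

for line in lines:
    target, _, rest = line.partition(":")
    deps = rest.split()                    # one list entry per dep
    fifth = deps[4] if len(deps) > 4 else ""   # safe positional lookup
    print(target, deps, repr(fifth))
```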
| 2 | 2016-07-22T18:23:24Z | [
"python",
"regex"
] |
time complexity of this approach | 38,532,919 | <p>So I am reading two files and storing each line in two different lists respectively. Now I have to check if a string in the first list is present in the 2nd list.</p>
<p>By normal comparison this will take O(n^2)</p>
<p>But using a graph based data structure like -</p>
<p>File1_visited[string] = True </p>
<p>File2_visited[string] = True. </p>
<p>I can check if both are true, then the string is present in both the files. This makes it O(n). </p>
<p>Is there any other approach I can reduce the time complexity and Is my understanding correct?</p>
<p>Example Scenario - </p>
<p>File1-</p>
<p>Text1
Text2
Text3
Text4</p>
<p>File2 -</p>
<p>Text5
Text7
Text1
Text2</p>
<p>Comparing these two files.</p>
| 0 | 2016-07-22T18:13:56Z | 38,533,334 | <p>Yes, you went from <code>O(n^2)</code> to <code>O(n)</code>. You might want to look into the space complexity as well: the hash-based approach has to store all the lines in memory, while the naive comparison uses less space. A HashMap looks ideal for this situation if you do not care about memory, or any other map-like structure if it's easier to implement.</p>
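<p>The two-dictionary idea maps directly onto Python's built-in <code>set</code>, which is also hash-based. A sketch using the sample data from the question (in practice each set would be built from the file lines, e.g. <code>set(line.strip() for line in f)</code>):</p>

```python
lines1 = {"Text1", "Text2", "Text3", "Text4"}   # contents of File1
lines2 = {"Text5", "Text7", "Text1", "Text2"}   # contents of File2

common = lines1 & lines2   # hash-based intersection, O(n) on average
print(sorted(common))      # ['Text1', 'Text2']
```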
| 0 | 2016-07-22T18:41:59Z | [
"java",
"python",
"time",
"time-complexity"
] |
Pandas - join item from different dataframe within an array | 38,532,939 | <p>I have a first data frame looking like this</p>
<pre><code>item_id | options
------------------------------------------
item_1_id | [option_1_id, option_2_id]
</code></pre>
<p>And a second like this:</p>
<pre><code>option_id | option_name
---------------------------
option_1_id | option_1_name
</code></pre>
<p>And I'd like to transform my first data set to:</p>
<pre><code>item_id | options
----------------------------------------------
item_1_id | [option_1_name, option_2_name]
</code></pre>
<p>What is an elegant way to do so using Pandas' data frames?</p>
| -1 | 2016-07-22T18:15:13Z | 38,533,095 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html" rel="nofollow"><code>apply</code></a>.</p>
<p>For the record, storing lists in <code>DataFrames</code> is typically unnecessary and not very "pandonic". Also, if you only have one column, you can do this with a <code>Series</code> (though this solution also works for <code>DataFrames</code>).</p>
<h3>Setup</h3>
<p>Build the <code>Series</code> with the lists of options.</p>
<pre><code>index = list('abcde')
s = pd.Series([['opt1'], ['opt1', 'opt2'], ['opt0'], ['opt1', 'opt4'], ['opt3']], index=index)
</code></pre>
<p>Build the <code>Series</code> with the names.</p>
<pre><code>index_opts = ['opt%s' % i for i in range(5)]
vals_opts = ['name%s' % i for i in range(5)]
s_opts = pd.Series(vals_opts, index=index_opts)
</code></pre>
<h3>Solution</h3>
<p>Map options to names using <code>apply</code>. The lambda function looks up each option in the <code>Series</code> mapping options to names. It is applied to each element of the <code>Series</code>.</p>
<pre><code>s.apply(lambda l: [s_opts[opt] for opt in l])
</code></pre>
<p>outputs</p>
<pre><code>a [name1]
b [name1, name2]
c [name0]
d [name1, name4]
e [name3]
</code></pre>
| 0 | 2016-07-22T18:25:15Z | [
"python",
"pandas"
] |
python Iterative loop through columns of dataframe | 38,532,978 | <p>Working on a problem, I have the following dataframe in python</p>
<pre><code> week hour week_hr store_code baskets
0 201616 106 201616106 505 0
1 201616 107 201616107 505 0
2 201616 108 201616108 505 0
3 201616 109 201616109 505 18
4 201616 110 201616110 505 0
5 201616 106 201616108 910 0
6 201616 107 201616106 910 0
7 201616 108 201616107 910 2
8 201616 109 201616108 910 3
9 201616 110 201616109 910 10
</code></pre>
<p>Here "hour" variable is a concat of "weekday" and "hour of shop", example weekday is monday=1 and hour of shop is 6am then hour variable = 106, similarly cal_hr is a concat of week and hour. I want to get those rows where i see a trend of no baskets , i.e 0 baskets for <strong>rolling 3 weeks</strong>. in the above case i will only get the <strong>first 3 rows</strong>. i.e. for store 505 there is a continuous cycle of 1 baskets from <strong>106 to 108</strong>. But i <strong>do not</strong> want the <strong>rows (4,5,6)</strong> because even though there are 0 baskets for 3 continuous hours but the hours are actually NOT continuous. <strong>110 -> 106 -> 107</strong> . For the hours to be continuous they should lie in the <strong>range</strong> of <strong>106 - 110.</strong>. Essentially i want all stores and the respective rows if it has 0 baskets for continuous 3 hours on any given day. Dummy output</p>
<pre><code> week hour week_hr store_code baskets
0 201616 106 201616106 505 0
1 201616 107 201616107 505 0
2 201616 108 201616108 505 0
</code></pre>
<p>Can I do this in python using pandas and loops? The dataset requires sorting by store and hour. Completely new to python.</p>
| 4 | 2016-07-22T18:17:30Z | 38,533,819 | <p>You can solve:</p>
<ol>
<li>Sort by store_code, week_hr</li>
<li>Filter by 0</li>
<li>Group by store_code</li>
<li>Find continuous</li>
</ol>
<p>Code:</p>
<pre><code>t1 = df.sort_values(['store_code', 'week_hr'])
t2 = t1[t1['baskets'] == 0]
grouped = t2.groupby('store_code')['week_hr'].apply(lambda x: x.tolist())
for store_code, week_hrs in grouped.iteritems():
print(store_code, week_hrs)
# do something
</code></pre>
| 0 | 2016-07-22T19:14:33Z | [
"python",
"loops",
"python-3.x",
"pandas",
"dataframe"
] |
python Iterative loop through columns of dataframe | 38,532,978 | <p>Working on a problem, I have the following dataframe in python</p>
<pre><code> week hour week_hr store_code baskets
0 201616 106 201616106 505 0
1 201616 107 201616107 505 0
2 201616 108 201616108 505 0
3 201616 109 201616109 505 18
4 201616 110 201616110 505 0
5 201616 106 201616108 910 0
6 201616 107 201616106 910 0
7 201616 108 201616107 910 2
8 201616 109 201616108 910 3
9 201616 110 201616109 910 10
</code></pre>
<p>Here "hour" variable is a concat of "weekday" and "hour of shop", example weekday is monday=1 and hour of shop is 6am then hour variable = 106, similarly cal_hr is a concat of week and hour. I want to get those rows where i see a trend of no baskets , i.e 0 baskets for <strong>rolling 3 weeks</strong>. in the above case i will only get the <strong>first 3 rows</strong>. i.e. for store 505 there is a continuous cycle of 1 baskets from <strong>106 to 108</strong>. But i <strong>do not</strong> want the <strong>rows (4,5,6)</strong> because even though there are 0 baskets for 3 continuous hours but the hours are actually NOT continuous. <strong>110 -> 106 -> 107</strong> . For the hours to be continuous they should lie in the <strong>range</strong> of <strong>106 - 110.</strong>. Essentially i want all stores and the respective rows if it has 0 baskets for continuous 3 hours on any given day. Dummy output</p>
<pre><code> week hour week_hr store_code baskets
0 201616 106 201616106 505 0
1 201616 107 201616107 505 0
2 201616 108 201616108 505 0
</code></pre>
<p>Can I do this in python using pandas and loops? The dataset requires sorting by store and hour. Completely new to python.</p>
| 4 | 2016-07-22T18:17:30Z | 38,534,604 | <p>Do the following:</p>
<ol>
<li>Sort by store_code, week_hr</li>
<li>Filter by 0</li>
<li>Store the subtraction df['week_hr'][1:].values-df['week_hr'][:-1].values so you will know whether they are continuous.</li>
<li><p>Now you can give groups to continuous and filter as you want.</p>
<pre><code>import numpy as np
# 1
t1 = df.sort_values(['store_code', 'week_hr'])
# 2
t2 = t1[t1['baskets'] == 0]
# 3
continuous = t2['week_hr'][1:].values-t2['week_hr'][:-1].values == 1
groups = np.cumsum(np.hstack([False, continuous==False]))
t2['groups'] = groups
# 4
t3 = t2.groupby(['store_code', 'groups'], as_index=False)['week_hr'].count()
t4 = t3[t3.week_hr > 2]
print pd.merge(t2, t4[['store_code', 'groups']])
</code></pre></li>
</ol>
<p>There's no need for looping!</p>
| 1 | 2016-07-22T20:14:26Z | [
"python",
"loops",
"python-3.x",
"pandas",
"dataframe"
] |
how to specify a page title in a django views.py file using a context variable? | 38,532,985 | <p>I am trying to specify a page title that shows up in the browser tab within a views.py file for a class based view. I am working with a file that uses a base template html page for many different pages where I am trying specify the title using something such as:</p>
<pre><code>{% block title %}{{ view.build_page_title }}{% endblock %}
</code></pre>
<p>in the views.py file I am trying something like this:</p>
<pre><code>class ExampleReportView(BaseReportView):
def build_page_title(self):
return 'Example Page Title'
</code></pre>
<p>This does not seem to be working. I am an absolute beginner in Django Python. Thanks for any help!</p>
| 0 | 2016-07-22T18:18:06Z | 38,533,122 | <p>You don't pass values to the template by defining arbitrary methods on your view class; the template has no access to the view at all. </p>
<p>Instead, the view class will call its own <code>get_context_data</code> to determine the values to pass to the template; you can override that and add your own value.</p>
<pre><code>class ExampleReportView(BaseReportView):
def get_context_data(self, *args, **kwargs):
data = super(ExampleReportView, self).get_context_data(*args, **kwargs)
data['build_page_title'] = 'Example Page Title'
return data
</code></pre>
<p>Of course, you can add as many values as you like inside that method.</p>
| 1 | 2016-07-22T18:27:16Z | [
"python",
"django",
"django-views"
] |
xml libxml2 parsing | 38,533,050 | <p>In the code below, my problem is that it's writing output to all folders based on only one input file. Can some one give me a hint and check if my code is looping properly?</p>
<pre><code>import libxml2
import os.path
from numpy import *
from cfs_utils import *
np=[1,2,3,4,5,6,7,8]
n=[20,30,40,60,80,100,130]
solver=["CG_iluk", "CG_saamg", "CG_ssor", "BiCGSTABL_iluk", "BiCGSTABL_saamg", "BiCGSTABL_ssor", "cholmod", "ilu" ]
file_list=["eval_CG_iluk_default","eval_CG_saamg_default", "eval_CG_ssor_default", "eval_BiCGSTABL_iluk", "eval_BiCGSTABL_saamg", "eval_BiCGSTABL_ssor","simp_cholmod_solver_3D_evaluate ", "simp_ilu_solver_3D_evaluate" ]
for sol in solver:
i=0
for cnt_np in np:
#open write_file= "Graphs/" + "Np"+ cnt_np + "/CG_iluk.dat"
#"Graphs/Np1/CG_iluk.dat"
write_file = open("Graphs/"+ "Np"+ str(cnt_np) + "/" + sol + ".dat", "w")
#loop through different unknowns
for cnt_n in n:
#open file "cfs_calculations_" + cnt_n +"np"+ cnt_np+ "/" + file_list(i) + "_default.info.xml"
read_file = "cfs_calculations_" +str(cnt_n) +"np"+ str(cnt_np) + "/" + file_list[i] + ".info.xml"
#read wall and cpu time and write
if os.path.exists(read_file):
doc = libxml2.parseFile(read_file)
xml = doc.xpathNewContext()
walltime = xpath(xml, "//cfsInfo/sequenceStep/OLAS/mechanic/solver/summary/setup/timer/@wall")
cputime = xpath(xml, "//cfsInfo/sequenceStep/OLAS/mechanic/solver/summary/setup/timer/@cpu")
unknowns = 3*cnt_n*cnt_n*cnt_n
write_file.write(str(unknowns) + "\t" + walltime + "\t" + cputime + "\n")
doc.freeDoc()
write_file.close()
i=i+1
</code></pre>
| 0 | 2016-07-22T18:21:51Z | 38,541,613 | <p>Problem solved, I = o, was outside the loop</p>
| 0 | 2016-07-23T12:07:06Z | [
"python",
"xml",
"parsing",
"xml-parsing"
] |
How exactly does the caller see a change in the object? | 38,533,094 | <p>From <a href="https://docs.python.org/3/tutorial/classes.html" rel="nofollow">Chapter "Classes"</a> of the official Python tutorial:</p>
<blockquote>
<p>[...] if a function modifies an object passed as an argument, the caller will see the change â this eliminates the need for two different argument passing mechanisms as in Pascal.</p>
</blockquote>
<p>What would be an example of how exactly the caller will see a change? Or how could it be (not in Python but in general) that the caller doesn't see the change?</p>
| 2 | 2016-07-22T18:25:11Z | 38,533,145 | <blockquote>
<p>What would be an example of how exactly the caller will see a change?</p>
</blockquote>
<pre><code>>>> def modify(x):
... x.append(1)
...
>>> seq = []
>>> print(seq)
[]
>>> modify(seq)
>>> print(seq)
[1]
</code></pre>
<blockquote>
<p>Or how could it be (not in Python but in general) that the caller doesn't see the change?</p>
</blockquote>
<p>Hypothetically, a language could exist where a deep copy of <code>seq</code> is created and assigned to <code>x</code>, and any change made to <code>x</code> has no effect on <code>seq</code>, in which case <code>print(seq)</code> would display <code>[]</code> both times. But this isn't what happens in Python.</p>
<hr>
<p>Edit: note that assigning a new value to an old variable name typically doesn't count as "modification".</p>
<pre><code>>>> def f(x):
... x = x + 1
...
>>> y = 23
>>> f(y)
>>> print(y)
23
</code></pre>
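<p>A supplementary check of <em>why</em> the caller observes the change in the first example: the parameter inside the function is the very same object as the caller's list, which <code>id()</code> makes visible:</p>

```python
def modify(x):
    x.append(1)
    return id(x)      # identity of the object as seen inside the function

seq = []
same_object = modify(seq) == id(seq)
print(same_object)    # True: the function worked on the caller's own list
print(seq)            # [1]
```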
| 3 | 2016-07-22T18:28:56Z | [
"python",
"function",
"object"
] |
How exactly does the caller see a change in the object? | 38,533,094 | <p>From <a href="https://docs.python.org/3/tutorial/classes.html" rel="nofollow">Chapter "Classes"</a> of the official Python tutorial:</p>
<blockquote>
<p>[...] if a function modifies an object passed as an argument, the caller will see the change â this eliminates the need for two different argument passing mechanisms as in Pascal.</p>
</blockquote>
<p>What would be an example of how exactly the caller will see a change? Or how could it be (not in Python but in general) that the caller doesn't see the change?</p>
| 2 | 2016-07-22T18:25:11Z | 38,533,147 | <p>It basically means that if a mutable object is changed, it will change everywhere.</p>
<p>For an example of passing by reference (which is what Python does):</p>
<pre><code>x = []
def foo_adder(y):
y.append('foo')
foo_adder(x)
print(x) # ['foo']
</code></pre>
<p>vs something like Pascal, where you can pass copies of an object as a parameter, instead of the object itself:</p>
<pre><code># Pretend this is Pascal code.
x = []
def foo_adder(y):
y.append('foo')
foo_adder(x)
print(x) # []
</code></pre>
<p>You can get the behavior of the second example in Python if you pass a copy of the object. For lists, you use <code>[:]</code>.</p>
<pre><code># Pretend this is Pascal code.
x = []
def foo_adder(y):
y.append('foo')
foo_adder(x[:])
print(x) # []
</code></pre>
<p>For your second question about how the caller might not see the change, let's take that same <code>foo_adder</code> function and change it a little so that it doesn't modify the object, but instead replaces it.</p>
<pre><code>x = []
def foo_adder(y):
y = y + ['foo']
foo_adder(x)
print(x) # []
</code></pre>
| 3 | 2016-07-22T18:29:10Z | [
"python",
"function",
"object"
] |
How to specify pem file path when using gateway in Fabric | 38,533,102 | <p>I have followed many question related to this topic.</p>
<p>My scenario:</p>
<blockquote>
<p>Local host -> Gateway -> Remote host</p>
</blockquote>
<p>I am using env.gateway variable to specify gateway host.</p>
<p>sample code</p>
<pre><code>env.user = "ec2-user"
env.key_filename = ["/home/ec2-user/.ssh/internal.pem","/home/roshan.r/test.pem","/home/ec2-user/.ssh/test2.pem"]
env.hosts = ['x.x.x.244', 'x.x.x.132']
env.gateway = 'x.x.x.189'
def getdate():
content = run('date')
</code></pre>
<p>My problem is with pem key path.</p>
<p>/home/roshan.r/test.pem is located in current directory. which is used for login into gateway server.</p>
<p>Other two mentioned pem files are located in gateway server.</p>
<p>When i run this program i'm getting file not found error.</p>
<p>Thanks for any help !!</p>
| 0 | 2016-07-22T18:25:42Z | 38,562,686 | <p>I havn't had to do this yet, but what about having a function that fecth those pem file ? something like :</p>
<pre><code>@'x.x.x.189'
def get_pem():
env.key_filename.append(get("/home/ec2-user/.ssh/internal.pem")
env.key_filename.append(get("/home/ec2-user/.ssh/test2.pem")
</code></pre>
<p>Also, I could you try something ? i guess you got a fiel not found because fabric is looking for the <code>/home/ec2-user/.ssh/internal.pem</code> on your computer. It has no way knowing it's on a remote host. What if you try with :
<code>x.x.x.189:/home/ec2-user/.ssh/internal.pem</code></p>
| 1 | 2016-07-25T08:15:07Z | [
"python",
"amazon-web-services",
"fabric"
] |
How to return only one level of keys using Boto3? | 38,533,252 | <p>I have an s3 bucket with a structure like so:</p>
<pre><code>bucket
---key_1
---sub_key_1
---file_a
---sub_key_2
---file_b
---sub_key_3
---file_c
</code></pre>
<p>Where the keys are all separated by /. I want to run a boto 3 command to return just the sub keys. I've tried a few things. Using both the client and session methods of boto 3 mainly focused around this:</p>
<pre><code>for key in s3_bucket.list(Prefix="key_1/", Delimiter="/"):
print(key.key)
objects = client.list_objects(Bucket=bucket, Prefix="pickles/", Delimiter='/')
</code></pre>
<p>I can either include the delimiter and all it returns is the 'key_1' object or I can exclude the delimiter and I get all sub_key objects but all files as well. What can I do to just get the sub keys? </p>
| 0 | 2016-07-22T18:36:34Z | 38,533,378 | <p>I've actually found the answer here: <a href="https://github.com/boto/boto3/issues/134" rel="nofollow">https://github.com/boto/boto3/issues/134</a>. The simplest way is to use the client.list_objects call as posted above and retrieve the CommonPrefixes attribute from it.</p>
| 0 | 2016-07-22T18:44:14Z | [
"python",
"amazon-web-services",
"amazon-s3",
"boto3"
] |
how can argparse set default value of optional parameter to null or empty? | 38,533,258 | <p>I am using python argparse to read an optional parameter that defines an "exemplar file" that users sometimes provide via the command line. I want the default value of the variable to be empty, that is no file found.</p>
<pre><code>parser.add_argument("--exemplar_file", help = "file to inspire book", default = '')
</code></pre>
<p>Will this do it?</p>
| 0 | 2016-07-22T18:37:04Z | 38,533,331 | <p>Just don't set a default:</p>
<pre><code>import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--exemplar_file', help='file to inspire book')
args = parser.parse_args()
print(args.exemplar_file)
# Output:
# None
</code></pre>
| 2 | 2016-07-22T18:41:51Z | [
"python",
"argparse"
] |
how can argparse set default value of optional parameter to null or empty? | 38,533,258 | <p>I am using python argparse to read an optional parameter that defines an "exemplar file" that users sometimes provide via the command line. I want the default value of the variable to be empty, that is no file found.</p>
<pre><code>parser.add_argument("--exemplar_file", help = "file to inspire book", default = '')
</code></pre>
<p>Will this do it?</p>
| 0 | 2016-07-22T18:37:04Z | 38,534,929 | <p>The default <code>default</code> is <code>None</code> (for the default <code>store</code> action). A nice thing about is that your user can't provide that value (there's no string that converts to <code>None</code>). It is easy to test</p>
<pre><code> if args.examplar_file is None:
# do your thing
# args.exampler_file = 'the real default'
else:
# use args.examplar_file
</code></pre>
<p>But <code>default=''</code> is fine. Try it.</p>
| 1 | 2016-07-22T20:41:13Z | [
"python",
"argparse"
] |
Python pretty print dictionary of lists, abbreviate long lists | 38,533,282 | <p>I have a dictionary of lists and the lists are quite long. How can I print it in a way that only a few elements of the list show up? Obviously, I can write a custom function for that but is there any built-in way or library that can achieve this? For example when printing large data frames, <code>pandas</code> prints it nicely in a short way. </p>
<p>This example better illustrates what I mean:</p>
<pre><code>obj = {'key_1': ['EG8XYD9FVN',
'S2WARDCVAO',
'J00YCU55DP',
'R07BUIF2F7',
'VGPS1JD0UM',
'WL3TWSDP8E',
'LD8QY7DMJ3',
'J36U3Z9KOQ',
'KU2FUGYB2U',
'JF3RQ315BY'],
'key_2': ['162LO154PM',
'3ROAV881V2',
'I4T79LP18J',
'WBD36EM6QL',
'DEIODVQU46',
'KWSJA5WDKQ',
'WX9SVRFO0G',
'6UN63WU64G',
'3Z89U7XM60',
'167CYON6YN']}
</code></pre>
<p>Desired output: something like this:</p>
<pre><code>{'key_1':
['EG8XYD9FVN', 'S2WARDCVAO', '...'],
'key_2':
['162LO154PM', '3ROAV881V2', '...']
}
</code></pre>
| 8 | 2016-07-22T18:38:32Z | 38,533,415 | <p>This recursive function I wrote does something you're asking for.. You can choose the indentation you want too</p>
<pre><code>def pretty(d, indent=0):
for key in sorted(d.keys()):
print '\t' * indent + str(key)
if isinstance(d[key], dict):
pretty(d[key], indent+1)
else:
print '\t' * (indent+1) + str(d[key])
</code></pre>
<p>The output of your dictionary is:</p>
<pre><code>key_1
['EG8XYD9FVN', 'S2WARDCVAO', 'J00YCU55DP', 'R07BUIF2F7', 'VGPS1JD0UM', 'WL3TWSDP8E', 'LD8QY7DMJ3', 'J36U3Z9KOQ', 'KU2FUGYB2U', 'JF3RQ315BY']
key_2
['162LO154PM', '3ROAV881V2', 'I4T79LP18J', 'WBD36EM6QL', 'DEIODVQU46', 'KWSJA5WDKQ', 'WX9SVRFO0G', '6UN63WU64G', '3Z89U7XM60', '167CYON6YN']
</code></pre>
| 2 | 2016-07-22T18:46:56Z | [
"python",
"list",
"dictionary",
"pretty-print"
] |
Python pretty print dictionary of lists, abbreviate long lists | 38,533,282 | <p>I have a dictionary of lists and the lists are quite long. How can I print it in a way that only a few elements of the list show up? Obviously, I can write a custom function for that but is there any built-in way or library that can achieve this? For example when printing large data frames, <code>pandas</code> prints it nicely in a short way. </p>
<p>This example better illustrates what I mean:</p>
<pre><code>obj = {'key_1': ['EG8XYD9FVN',
'S2WARDCVAO',
'J00YCU55DP',
'R07BUIF2F7',
'VGPS1JD0UM',
'WL3TWSDP8E',
'LD8QY7DMJ3',
'J36U3Z9KOQ',
'KU2FUGYB2U',
'JF3RQ315BY'],
'key_2': ['162LO154PM',
'3ROAV881V2',
'I4T79LP18J',
'WBD36EM6QL',
'DEIODVQU46',
'KWSJA5WDKQ',
'WX9SVRFO0G',
'6UN63WU64G',
'3Z89U7XM60',
'167CYON6YN']}
</code></pre>
<p>Desired output: something like this:</p>
<pre><code>{'key_1':
['EG8XYD9FVN', 'S2WARDCVAO', '...'],
'key_2':
['162LO154PM', '3ROAV881V2', '...']
}
</code></pre>
| 8 | 2016-07-22T18:38:32Z | 38,533,419 | <p>You could use the <a href="https://docs.python.org/3/library/pprint.html"><code>pprint</code></a> module:</p>
<pre><code>pprint.pprint(obj)
</code></pre>
<p>Would output:</p>
<pre><code>{'key_1': ['EG8XYD9FVN',
'S2WARDCVAO',
'J00YCU55DP',
'R07BUIF2F7',
'VGPS1JD0UM',
'WL3TWSDP8E',
'LD8QY7DMJ3',
'J36U3Z9KOQ',
'KU2FUGYB2U',
'JF3RQ315BY'],
'key_2': ['162LO154PM',
'3ROAV881V2',
'I4T79LP18J',
'WBD36EM6QL',
'DEIODVQU46',
'KWSJA5WDKQ',
'WX9SVRFO0G',
'6UN63WU64G',
'3Z89U7XM60',
'167CYON6YN']}
</code></pre>
<p>And,</p>
<pre><code>pprint.pprint(obj,depth=1)
</code></pre>
<p>Would output:</p>
<pre><code>{'key_1': [...], 'key_2': [...]}
</code></pre>
<p>And, </p>
<pre><code>pprint.pprint(obj,compact=True)
</code></pre>
<p>would output:</p>
<pre><code>{'key_1': ['EG8XYD9FVN', 'S2WARDCVAO', 'J00YCU55DP', 'R07BUIF2F7',
'VGPS1JD0UM', 'WL3TWSDP8E', 'LD8QY7DMJ3', 'J36U3Z9KOQ',
'KU2FUGYB2U', 'JF3RQ315BY'],
'key_2': ['162LO154PM', '3ROAV881V2', 'I4T79LP18J', 'WBD36EM6QL',
'DEIODVQU46', 'KWSJA5WDKQ', 'WX9SVRFO0G', '6UN63WU64G',
'3Z89U7XM60', '167CYON6YN']}
</code></pre>
| 6 | 2016-07-22T18:47:12Z | [
"python",
"list",
"dictionary",
"pretty-print"
] |
Python pretty print dictionary of lists, abbreviate long lists | 38,533,282 | <p>I have a dictionary of lists and the lists are quite long. How can I print it in a way that only a few elements of the list show up? Obviously, I can write a custom function for that but is there any built-in way or library that can achieve this? For example when printing large data frames, <code>pandas</code> prints it nicely in a short way. </p>
<p>This example better illustrates what I mean:</p>
<pre><code>obj = {'key_1': ['EG8XYD9FVN',
'S2WARDCVAO',
'J00YCU55DP',
'R07BUIF2F7',
'VGPS1JD0UM',
'WL3TWSDP8E',
'LD8QY7DMJ3',
'J36U3Z9KOQ',
'KU2FUGYB2U',
'JF3RQ315BY'],
'key_2': ['162LO154PM',
'3ROAV881V2',
'I4T79LP18J',
'WBD36EM6QL',
'DEIODVQU46',
'KWSJA5WDKQ',
'WX9SVRFO0G',
'6UN63WU64G',
'3Z89U7XM60',
'167CYON6YN']}
</code></pre>
<p>Desired output: something like this:</p>
<pre><code>{'key_1':
['EG8XYD9FVN', 'S2WARDCVAO', '...'],
'key_2':
['162LO154PM', '3ROAV881V2', '...']
}
</code></pre>
| 8 | 2016-07-22T18:38:32Z | 38,533,524 | <p>Use <a href="https://docs.python.org/3/library/reprlib.html" rel="nofollow">reprlib</a>. The formatting is not that pretty, but it actually abbreviates.</p>
<pre><code>> import repr
> repr.repr(map(lambda _: range(100000), range(10)))
'[[0, 1, 2, 3, 4, 5, ...], [0, 1, 2, 3, 4, 5, ...], [0, 1, 2, 3, 4, 5, ...], [0, 1, 2, 3, 4, 5, ...], [0, 1, 2, 3, 4, 5, ...], [0, 1, 2, 3, 4, 5, ...], ...]'
> repr.repr(dict(map(lambda i: (i, range(100000)), range(10))))
'{0: [0, 1, 2, 3, 4, 5, ...], 1: [0, 1, 2, 3, 4, 5, ...], 2: [0, 1, 2, 3, 4, 5, ...], 3: [0, 1, 2, 3, 4, 5, ...], ...}'
</code></pre>
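<p>If the default elision points don't suit, the module's <code>Repr</code> class exposes per-type limits such as <code>maxlist</code> and <code>maxdict</code> — a small sketch on data shaped like the question's, using the Python 3 module name:</p>

```python
import reprlib

r = reprlib.Repr()
r.maxlist = 2   # keep at most two list elements before eliding with '...'

obj = {'key_1': ['EG8XYD9FVN', 'S2WARDCVAO', 'J00YCU55DP', 'R07BUIF2F7'],
       'key_2': ['162LO154PM', '3ROAV881V2', 'I4T79LP18J']}
print(r.repr(obj))
```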
| 2 | 2016-07-22T18:54:54Z | [
"python",
"list",
"dictionary",
"pretty-print"
] |
Python pretty print dictionary of lists, abbreviate long lists | 38,533,282 | <p>I have a dictionary of lists and the lists are quite long. How can I print it in a way that only a few elements of the list show up? Obviously, I can write a custom function for that but is there any built-in way or library that can achieve this? For example when printing large data frames, <code>pandas</code> prints it nicely in a short way. </p>
<p>This example better illustrates what I mean:</p>
<pre><code>obj = {'key_1': ['EG8XYD9FVN',
'S2WARDCVAO',
'J00YCU55DP',
'R07BUIF2F7',
'VGPS1JD0UM',
'WL3TWSDP8E',
'LD8QY7DMJ3',
'J36U3Z9KOQ',
'KU2FUGYB2U',
'JF3RQ315BY'],
'key_2': ['162LO154PM',
'3ROAV881V2',
'I4T79LP18J',
'WBD36EM6QL',
'DEIODVQU46',
'KWSJA5WDKQ',
'WX9SVRFO0G',
'6UN63WU64G',
'3Z89U7XM60',
'167CYON6YN']}
</code></pre>
<p>Desired output: something like this:</p>
<pre><code>{'key_1':
['EG8XYD9FVN', 'S2WARDCVAO', '...'],
'key_2':
['162LO154PM', '3ROAV881V2', '...']
}
</code></pre>
| 8 | 2016-07-22T18:38:32Z | 38,534,524 | <p>If it weren't for the pretty printing, the <a href="https://docs.python.org/3/library/reprlib.html" rel="nofollow"><code>reprlib</code></a> module would be the way to go: Safe, elegant and customizable handling of deeply nested and recursive / self-referencing data structures is what it has been made for.</p>
<p>However, it turns out combining the <a href="https://docs.python.org/3/library/reprlib.html" rel="nofollow"><code>reprlib</code></a> and <a href="https://docs.python.org/2/library/pprint.html" rel="nofollow"><code>pprint</code></a> modules isn't trivial, at least I couldn't come up with a clean way without breaking (some) of the pretty printing aspects.</p>
<p>So instead, here's a solution that just subclasses <a href="https://hg.python.org/cpython/file/2.7/Lib/pprint.py#l84" rel="nofollow"><code>PrettyPrinter</code></a> to crop / abbreviate lists as necessary:</p>
<pre><code>from pprint import PrettyPrinter
obj = {
'key_1': [
'EG8XYD9FVN', 'S2WARDCVAO', 'J00YCU55DP', 'R07BUIF2F7', 'VGPS1JD0UM',
'WL3TWSDP8E', 'LD8QY7DMJ3', 'J36U3Z9KOQ', 'KU2FUGYB2U', 'JF3RQ315BY',
],
'key_2': [
'162LO154PM', '3ROAV881V2', 'I4T79LP18J', 'WBD36EM6QL', 'DEIODVQU46',
'KWSJA5WDKQ', 'WX9SVRFO0G', '6UN63WU64G', '3Z89U7XM60', '167CYON6YN',
],
# Test case to make sure we didn't break handling of recursive structures
'key_3': [
'162LO154PM', '3ROAV881V2', [1, 2, ['a', 'b', 'c'], 3, 4, 5, 6, 7],
'KWSJA5WDKQ', 'WX9SVRFO0G', '6UN63WU64G', '3Z89U7XM60', '167CYON6YN',
]
}
class CroppingPrettyPrinter(PrettyPrinter):
def __init__(self, *args, **kwargs):
self.maxlist = kwargs.pop('maxlist', 6)
return PrettyPrinter.__init__(self, *args, **kwargs)
def _format(self, obj, stream, indent, allowance, context, level):
if isinstance(obj, list):
# If object is a list, crop a copy of it according to self.maxlist
# and append an ellipsis
if len(obj) > self.maxlist:
cropped_obj = obj[:self.maxlist] + ['...']
return PrettyPrinter._format(
self, cropped_obj, stream, indent,
allowance, context, level)
# Let the original implementation handle anything else
# Note: No use of super() because PrettyPrinter is an old-style class
return PrettyPrinter._format(
self, obj, stream, indent, allowance, context, level)
p = CroppingPrettyPrinter(maxlist=3)
p.pprint(obj)
</code></pre>
<hr>
<p>Output with <code>maxlist=3</code>:</p>
<pre><code>{'key_1': ['EG8XYD9FVN', 'S2WARDCVAO', 'J00YCU55DP', '...'],
'key_2': ['162LO154PM',
'3ROAV881V2',
[1, 2, ['a', 'b', 'c'], '...'],
'...']}
</code></pre>
<p>Output with <code>maxlist=5</code> (triggers splitting the lists on separate lines):</p>
<pre><code>{'key_1': ['EG8XYD9FVN',
'S2WARDCVAO',
'J00YCU55DP',
'R07BUIF2F7',
'VGPS1JD0UM',
'...'],
'key_2': ['162LO154PM',
'3ROAV881V2',
'I4T79LP18J',
'WBD36EM6QL',
'DEIODVQU46',
'...'],
'key_3': ['162LO154PM',
'3ROAV881V2',
[1, 2, ['a', 'b', 'c'], 3, 4, '...'],
'KWSJA5WDKQ',
'WX9SVRFO0G',
'...']}
</code></pre>
<hr>
<p>Notes:</p>
<ul>
<li>This will create <strong>copies</strong> of lists. Depending on the size of the data structures, this can be very expensive in terms of memory use. </li>
<li>This only deals with the special case of <strong>lists</strong>. Equivalent behavior would have to be implemented for dicts, tuples, sets, frozensets, ... for this class to be of general use. </li>
</ul>
| 3 | 2016-07-22T20:08:34Z | [
"python",
"list",
"dictionary",
"pretty-print"
] |
Python pretty print dictionary of lists, abbreviate long lists | 38,533,282 | <p>I have a dictionary of lists and the lists are quite long. How can I print it in a way that only a few elements of the list show up? Obviously, I can write a custom function for that but is there any built-in way or library that can achieve this? For example when printing large data frames, <code>pandas</code> prints it nicely in a short way. </p>
<p>This example better illustrates what I mean:</p>
<pre><code>obj = {'key_1': ['EG8XYD9FVN',
'S2WARDCVAO',
'J00YCU55DP',
'R07BUIF2F7',
'VGPS1JD0UM',
'WL3TWSDP8E',
'LD8QY7DMJ3',
'J36U3Z9KOQ',
'KU2FUGYB2U',
'JF3RQ315BY'],
'key_2': ['162LO154PM',
'3ROAV881V2',
'I4T79LP18J',
'WBD36EM6QL',
'DEIODVQU46',
'KWSJA5WDKQ',
'WX9SVRFO0G',
'6UN63WU64G',
'3Z89U7XM60',
'167CYON6YN']}
</code></pre>
<p>Desired output: something like this:</p>
<pre><code>{'key_1':
['EG8XYD9FVN', 'S2WARDCVAO', '...'],
'key_2':
['162LO154PM', '3ROAV881V2', '...']
}
</code></pre>
| 8 | 2016-07-22T18:38:32Z | 38,534,603 | <p>You could use <a href="https://ipython.org/ipython-doc/3/api/generated/IPython.lib.pretty.html" rel="nofollow">IPython.lib.pretty</a>.</p>
<pre><code>from IPython.lib.pretty import pprint
> pprint(obj, max_seq_length=5)
{'key_1': ['EG8XYD9FVN',
'S2WARDCVAO',
'J00YCU55DP',
'R07BUIF2F7',
'VGPS1JD0UM',
...],
'key_2': ['162LO154PM',
'3ROAV881V2',
'I4T79LP18J',
'WBD36EM6QL',
'DEIODVQU46',
...]}
> pprint(dict(map(lambda i: (i, range(i + 5)), range(100))), max_seq_length=10)
{0: [0, 1, 2, 3, 4],
1: [0, 1, 2, 3, 4, 5],
2: [0, 1, 2, 3, 4, 5, 6],
3: [0, 1, 2, 3, 4, 5, 6, 7],
4: [0, 1, 2, 3, 4, 5, 6, 7, 8],
5: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
6: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...],
7: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...],
8: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...],
9: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...],
...}
</code></pre>
<hr>
<p>For older versions of IPython, you might exploit <a href="https://github.com/ipython/ipython/blob/master/IPython/lib/pretty.py" rel="nofollow">RepresentationPrinter</a>:</p>
<pre><code>from IPython.lib.pretty import RepresentationPrinter
import sys
def compact_pprint(obj, max_seq_length=10):
printer = RepresentationPrinter(sys.stdout)
printer.max_seq_length = max_seq_length
printer.pretty(obj)
printer.flush()
</code></pre>
| 1 | 2016-07-22T20:14:25Z | [
"python",
"list",
"dictionary",
"pretty-print"
] |
Pivot Table not writing to pre-existing sheet | 38,533,320 | <p>I have two sheets in my excel workbook. The first sheet contains the data used to make the pivot table, and the second sheet as you can see I create in this section of the code. I would like my pivot table to be drawn into the new sheet that I created (worksheet2).</p>
<p>When I run my code, my excel file contains my data sheet, and then an empty sheet called 'Pivot Table'. How can I get my pivot table into the 'Pivot Table' sheet? This might be an extremely simple question, but i've just started working with pandas today. My pivot table does get created properly. I have printed it to test it and make sure of it.</p>
<p>Thanks.</p>
<pre><code>excel = pd.ExcelFile(filename)
df = pd.read_excel(filename, usecols=['Product Description', 'Supervisor'])
table1 = df[['Product Description', 'Supervisor']].pivot_table(index='Supervisor', columns='Product Description', aggfunc=len, fill_value=0, margins=True, margins_name='Grand Total')
worksheet2 = workbook.create_sheet()
worksheet2.title = 'Pivot Table'
worksheet2 = workbook.active
writer = pd.ExcelWriter(filename, engine='xlsxwriter')
table1.to_excel(writer, worksheet2.title )
writer.save()
workbook.save(filename)
</code></pre>
| 1 | 2016-07-22T18:40:57Z | 38,533,583 | <p>you can do it <a href="http://stackoverflow.com/questions/20219254/how-to-write-to-an-existing-excel-file-without-overwriting-data/20221655#20221655">this way</a>:</p>
<pre><code>from openpyxl import load_workbook

df = pd.read_excel(filename, usecols=['Product Description', 'Supervisor'])
table1 = df.pivot_table(index='Supervisor', columns='Product Description', aggfunc=len, fill_value=0, margins=True, margins_name='Grand Total')
book = load_workbook(filename)  # open the existing workbook so its sheets are preserved
writer = pd.ExcelWriter(filename, engine='openpyxl')
writer.book = book
writer.sheets = dict((ws.title, ws) for ws in book.worksheets)
table1.to_excel(writer, 'Pivot Table')
writer.save()
</code></pre>
| 1 | 2016-07-22T18:59:12Z | [
"python",
"pandas"
] |
django {% url %} with parameters (list of dicts) | 38,533,430 | <p>I'm following the suggestion at: <a href="http://stackoverflow.com/questions/3583534/refresh-div-element-generated-by-a-django-template">Refresh <div> element generated by a django template</a></p>
<p>I'm passing along a few variables, a la:</p>
<pre><code>url: '{% url 'search_results' 'sched_dep_local' flights|escapejs %}',
</code></pre>
<p>The problem is that 'flights' is a list of dicts that the search_results template needs access to, and it's pretty large and contains things like apostrophes</p>
<pre><code>[{'foo': 'bar'}, {'foo': 'baz'}] and so on
</code></pre>
<p>So the only way I can use it with {% url %} appears to be with escapejs to get rid of the apostrophes, but then in views.py, I need it to be a list of dicts again, so I can do things like:</p>
<pre><code>def search_results(request, sort_key, flights):
flights = search_utils.sort(flights, sort_key)
return render_to_response('search_results.html', { 'flights' : flights} )
</code></pre>
<p>Is there a simple way to do this? Alternatively, am I going about this whole thing all wrong? </p>
<p>ETA: See also (explains what I'm trying to do and why):</p>
<pre><code><script>
$(".sort").on("click", function() {
$.ajax({
url: '{% url 'search_results' 'sched_dep_local' flights|escapejs %}',
success: function(data) {
$('#search-results').html(data);
}
});
});
</script>
</code></pre>
<p>I have a template (in search_results.html) that prints some data for each flight in flights. I want to sort that data and rerender the template, but I can't figure out how.</p>
| 0 | 2016-07-22T18:48:00Z | 38,533,637 | <p>This isn't the right way to deal with complex data. Rather than sending it via the URL, you should be using a POST and sending it in the body of the request: since you're using jQuery, you can just do <code>method: "POST"</code> in that call. In the backend, you can deserialize it from JSON.</p>
<p>However, it does seem a bit strange to do this at all; the data is evidently coming from the Django backend already, so it's not clear why you want to post it back there.</p>
| 0 | 2016-07-22T19:02:34Z | [
"python",
"django"
] |
Indicating missing data in Pandas | 38,533,460 | <h2>Update</h2>
<p>So I have been playing around with this, and it seems that this actually happens when I read a different csv file into my program using <code>read_csv()</code>. And what then happens is exactly what the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow">docs</a> say will happen:</p>
<blockquote>
<p>By default the following values are interpreted as NaN: ââ, â#N/Aâ, â#N/A N/Aâ, â#NAâ, â-1.#INDâ, â-1.#QNANâ, â-NaNâ, â-nanâ, â1.#INDâ, â1.#QNANâ, âN/Aâ, âNAâ, âNULLâ, âNaNâ, ânanâ.</p>
</blockquote>
<p>So my bad for not considering this step in my code; thanks to everyone who helped out. </p>
<hr>
<h2>Original question</h2>
<p>I'm creating spreadsheets in <a href="http://pandas.pydata.org/" rel="nofollow">pandas</a> by filling columns with the string "NA" (<code>spreadsheet['name']="NA"</code>) and then incrementally replacing those "NA"s with actual datapoints.
Here is how I do that: <code>spreadsheet.loc[match row number here] = inputstring.split("\t")</code></p>
<p>When outputting the data with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html" rel="nofollow"><code>.to_csv()</code></a>, I was surprised to find out that pandas apparently interprets these "NA" strings to indicate missing data, e.g. it replaces them with whatever I feed into <code>na_rep=</code>. I was mainly using the "NA"s as placeholders and did not expect pandas (which outputs missing data as "NaN") to mess with them.</p>
<p>I could not find anything on the matter in the <a href="http://pandas.pydata.org/pandas-docs/stable/missing_data.html" rel="nofollow">documentation on missing data</a>, where they create NaNs with <code>np.nan</code></p>
<p>Thus,</p>
<ul>
<li><p>Is it correct that Pandas will just interpret a string "NA" anywhere in my spreadsheet as "missing data"? Do they do some kind of string matching?</p></li>
<li><p>If so, what other strings can be used that way? Or what would be the most legit way of representing missing data points?</p></li>
<li><p>If true, this behavior seems kinda dangerous to me / can lead to unexpected behavior. Or is this not true?</p></li>
</ul>
<p>Any help / pointers to the relevant resources are much appreciated!</p>
| 0 | 2016-07-22T18:49:57Z | 38,537,108 | <p>You can try it this way:</p>
<pre><code>spreadsheet = pd.DataFrame({'name': ['NA', 'NA', 'NA', 'NA', 'NA']})
spreadsheet
name
0 NA
1 NA
2 NA
3 NA
4 NA
</code></pre>
<p>Then if you want to replace a couple of the <code>NA</code>s you can just select where you want to replace them.</p>
<pre><code>spreadsheet.loc[1:2] = 'foo'
spreadsheet
name
0 NA
1 foo
2 foo
3 NA
4 NA
</code></pre>
| 0 | 2016-07-23T00:55:15Z | [
"python",
"csv",
"pandas",
"spreadsheet",
null
] |
Inheriting list: Creating division by other lists, integers and floats | 38,533,476 | <p>I wanted to be able to divide entire lists by integers, floats, and other lists of equal length in Python, so I wrote the following little script.</p>
<pre><code>class divlist(list):
def __init__(self, *args, **kwrgs):
super(divlist, self).__init__(*args, **kwrgs)
self.__cont_ = args[0]
self.__len_ = len(args[0])
def __floordiv__(self, other):
""" Adds the ability to floor divide list's indices """
if (isinstance(other, int) or isinstance(other, float)):
return [self.__cont_[i] // other \
for i in xrange(self.__len_)]
elif (isinstance(other, list)):
return [self.__cont_[i] // other[i] \
for i in xrange(self.__len_)]
else:
raise ValueError('Must divide by list, int or float')
</code></pre>
<p>My question: How can I write this in a simpler way? Do I really need the lines <code>self.__cont_</code> and <code>self.__len_</code>? I was looking through the list's 'magic' methods and I couldn't find one that readily held this information.</p>
<p>An example of calling this simple class:</p>
<pre><code>>>> X = divlist([1,2,3,4])
[1, 2, 3, 4]
>>> X // 2
[0, 1, 1, 2]
>>> X // [1,2,3,4]
[1, 1, 1, 1]
>>> X // X
[1, 1, 1, 1]
</code></pre>
| 1 | 2016-07-22T18:50:43Z | 38,533,565 | <blockquote>
<p>How can I write this in a simpler way?</p>
</blockquote>
<p>By using <code>self[i]</code> instead of <code>self.__cont_[i]</code>.</p>
<blockquote>
<p>Do I really need the lines self.__cont_ and self.__len_?</p>
</blockquote>
<p>No. Just use the regular methods of referring to a list, for example: <code>[]</code> and <code>len()</code>.</p>
<p>As an aside, you might choose to have <code>.__floordiv__()</code> return a <code>divlist</code> instead of a <code>list</code>, so that you can continue to operate on the result.</p>
<pre><code>class divlist(list):
def __floordiv__(self, other):
""" Adds the ability to floor divide list's indices """
if (isinstance(other, int) or isinstance(other, float)):
return [i // other for i in self]
elif (isinstance(other, list)):
# DANGER: data loss if len(other) != len(self) !!
return [i // j for i,j in zip(self, other)]
else:
raise ValueError('Must divide by list, int or float')
X = divlist([1,2,3,4])
assert X == [1, 2, 3, 4]
assert X // 2 == [0, 1, 1, 2]
assert X // [1,2,3,4] == [1, 1, 1, 1]
assert X // X == [1, 1, 1, 1]
</code></pre>
| 5 | 2016-07-22T18:57:54Z | [
"python",
"list"
] |
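As the answer notes, returning a `divlist` instead of a plain `list` lets you keep operating on the result; a rough sketch of that chainable variant (renamed `DivList` here for illustration):

```python
class DivList(list):
    """Hypothetical variant of the answer's class: // returns a DivList,
    so the result can itself be floor-divided again."""
    def __floordiv__(self, other):
        if isinstance(other, (int, float)):
            return DivList(i // other for i in self)
        elif isinstance(other, list):
            return DivList(i // j for i, j in zip(self, other))
        raise ValueError('Must divide by list, int or float')

x = DivList([8, 16, 24])
y = x // 2 // [2, 4, 6]   # chaining works because // returns a DivList
print(y)  # [2, 2, 2]
```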
Inheriting list: Creating division by other lists, integers and floats | 38,533,476 | <p>I wanted to be able to divide entire lists by integers, floats, and other lists of equal length in Python, so I wrote the following little script.</p>
<pre><code>class divlist(list):
def __init__(self, *args, **kwrgs):
super(divlist, self).__init__(*args, **kwrgs)
self.__cont_ = args[0]
self.__len_ = len(args[0])
def __floordiv__(self, other):
""" Adds the ability to floor divide list's indices """
if (isinstance(other, int) or isinstance(other, float)):
return [self.__cont_[i] // other \
for i in xrange(self.__len_)]
elif (isinstance(other, list)):
return [self.__cont_[i] // other[i] \
for i in xrange(self.__len_)]
else:
raise ValueError('Must divide by list, int or float')
</code></pre>
<p>My question: How can I write this in a simpler way? Do I really need the lines <code>self.__cont_</code> and <code>self.__len_</code>? I was looking through the list's 'magic' methods and I couldn't find one that readily held this information.</p>
<p>An example of calling this simple class:</p>
<pre><code>>>> X = divlist([1,2,3,4])
[1, 2, 3, 4]
>>> X // 2
[0, 1, 1, 2]
>>> X // [1,2,3,4]
[1, 1, 1, 1]
>>> X // X
[1, 1, 1, 1]
</code></pre>
| 1 | 2016-07-22T18:50:43Z | 38,534,820 | <p>Instead of examining the explicit types of each argument, assume that either the second argument is iterable, or it is a suitable value as the denominator for <code>//</code>.</p>
<pre><code>def __floordiv__(self, other):
try:
pairs = zip(self, other)
except TypeError:
pairs = ((x, other) for x in self)
return [x // y for (x, y) in pairs]
</code></pre>
<p>You may want to check that <code>self</code> and <code>other</code> have the same length if the <code>zip</code> succeeds.</p>
| 3 | 2016-07-22T20:32:02Z | [
"python",
"list"
] |
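A sketch of the same duck-typed approach with the length check this answer recommends (written as a standalone function for illustration; it can be dropped into `__floordiv__` as needed):

```python
def floordiv(seq, other):
    """Duck-typed floor division with an explicit length check,
    so mismatched lists raise instead of silently truncating."""
    try:
        divisors = list(other)
    except TypeError:
        # not iterable: treat it as a scalar divisor
        return [x // other for x in seq]
    if len(divisors) != len(seq):
        raise ValueError('length mismatch: %d vs %d'
                         % (len(seq), len(divisors)))
    return [x // y for x, y in zip(seq, divisors)]

print(floordiv([7, 9], 2))       # [3, 4]
print(floordiv([7, 9], [2, 3]))  # [3, 3]
```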
makemigrations failing with django_enumfield in Django 1.9 | 38,533,597 | <p>I just updated my Bitnami Django VM from 1.8.9 to 1.9.7. Everything was working smoothly before the upgrade, but now when I run makemigrations I get the following error:</p>
<pre><code>TypeError: Couldn't reconstruct field role on rapid.GeoViewRole: __init__() takes at least 2 arguments (1 given)
</code></pre>
<p>Here are the relevant classes/imports:</p>
<pre><code>from django_enumfield import enum
class Role(enum.Enum):
VIEWER = 0
EDITOR = 1
OWNER = 2
labels = {
VIEWER: 'Viewer',
EDITOR: 'Editor',
OWNER: 'Owner'
}
class GeoViewRole(models.Model):
token = models.ForeignKey(ApiToken)
role = enum.EnumField(Role)
geo_view = models.ForeignKey(GeoView)
objects = models.GeoManager()
</code></pre>
<p>I can't figure out why I would be getting this error after the upgrade.</p>
| 1 | 2016-07-22T19:00:09Z | 39,468,611 | <p>It was the version of django-enumfield for me. I had</p>
<pre><code>django-enumfield==1.2.1
</code></pre>
<p>Migrations worked after I removed it, and installed</p>
<pre><code>django_enumfield==1.3b2
</code></pre>
| 1 | 2016-09-13T11:12:33Z | [
"python",
"django",
"upgrade",
"typeerror"
] |
Scrape Table HTML with BeautifulSoup | 38,533,642 | <p>I'm trying to scrape a website which has been built with tables. Here is a link to an example page: <a href="http://www.rc2.vd.ch/registres/hrcintapp-pub/companyReport.action?rcentId=5947621600000055031025&lang=FR&showHeader=false" rel="nofollow">http://www.rc2.vd.ch/registres/hrcintapp-pub/companyReport.action?rcentId=5947621600000055031025&lang=FR&showHeader=false</a></p>
<p>My goal is to get the first and last name: Lass Christian (screenshot below).</p>
<p><a href="http://i.stack.imgur.com/q3nMb.png" rel="nofollow"><img src="http://i.stack.imgur.com/q3nMb.png" alt="enter image description here"></a>
I've already scraped many websites, but with this one I have absolutely no idea how to proceed. There are only 'tables' without any ID/class attributes, and I can't figure out where I'm supposed to start.</p>
<p>Here's an example of the HTML code:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><table border="1" cellpadding="1" cellspacing="0" width="100%">
<tbody><tr bgcolor="#f0eef2">
<th colspan="3">Associés, gérants et personnes ayant qualité pour signer</th>
</tr>
<tr bgcolor="#f0eef2">
<th>
<a class="hoverable" onclick="document.forms[0].rcentId.value='5947621600000055031025';document.forms[0].lang.value='FR';document.forms[0].searchLang.value='FR';document.forms[0].order.value='N';document.forms[0].rad.value='N';document.forms[0].goToAdm.value='true';document.forms[0].showHeader.value=false;document.forms[0].submit();event.returnValue=false; return false;">
Nom et Prénoms, Origine, Domicile, Part sociale
</a>
</th>
<th>
<a class="hoverable" onclick="document.forms[0].rcentId.value='5947621600000055031025';document.forms[0].lang.value='FR';document.forms[0].searchLang.value='FR';document.forms[0].order.value='F';document.forms[0].rad.value='N';document.forms[0].goToAdm.value='true';document.forms[0].showHeader.value=false;document.forms[0].submit();event.returnValue=false; return false;">
Fonctions
</a>
<img src="/registres/hrcintapp-pub/img/down_r.png" align="bottom" border="0" alt="">
</th>
<th>Mode Signature</th>
</tr>
<tr bgcolor="#ffffff">
<td>
<span style="text-decoration: none;">
Lass Christian, du Danemark, à Yverdon-les-Bains, avec 200 parts de CHF 100
</span>
</td>
<td><span style="text-decoration: none;">associé gérant </span>&nbsp;</td>
<td><span style="text-decoration: none;">signature individuelle</span>&nbsp;</td>
</tr>
</tbody></table></code></pre>
</div>
</div>
</p>
| 1 | 2016-07-22T19:02:44Z | 38,534,279 | <p>Something like this?</p>
<pre><code>results = soup.find_all("tr", {"bgcolor" : "#ffffff"})
for result in results:
the_name = result.td.span.get_text().split(',')[0]
</code></pre>
| 0 | 2016-07-22T19:49:51Z | [
"python",
"html",
"web-scraping",
"beautifulsoup"
] |
Scrape Table HTML with BeautifulSoup | 38,533,642 | <p>I'm trying to scrape a website which has been built with tables. Here is a link to an example page: <a href="http://www.rc2.vd.ch/registres/hrcintapp-pub/companyReport.action?rcentId=5947621600000055031025&lang=FR&showHeader=false" rel="nofollow">http://www.rc2.vd.ch/registres/hrcintapp-pub/companyReport.action?rcentId=5947621600000055031025&lang=FR&showHeader=false</a></p>
<p>My goal is to get the first and last name: Lass Christian (screenshot below).</p>
<p><a href="http://i.stack.imgur.com/q3nMb.png" rel="nofollow"><img src="http://i.stack.imgur.com/q3nMb.png" alt="enter image description here"></a>
I've already scraped many websites, but with this one I have absolutely no idea how to proceed. There are only 'tables' without any ID/class attributes, and I can't figure out where I'm supposed to start.</p>
<p>Here's an example of the HTML code:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><table border="1" cellpadding="1" cellspacing="0" width="100%">
<tbody><tr bgcolor="#f0eef2">
<th colspan="3">Associés, gérants et personnes ayant qualité pour signer</th>
</tr>
<tr bgcolor="#f0eef2">
<th>
<a class="hoverable" onclick="document.forms[0].rcentId.value='5947621600000055031025';document.forms[0].lang.value='FR';document.forms[0].searchLang.value='FR';document.forms[0].order.value='N';document.forms[0].rad.value='N';document.forms[0].goToAdm.value='true';document.forms[0].showHeader.value=false;document.forms[0].submit();event.returnValue=false; return false;">
Nom et Prénoms, Origine, Domicile, Part sociale
</a>
</th>
<th>
<a class="hoverable" onclick="document.forms[0].rcentId.value='5947621600000055031025';document.forms[0].lang.value='FR';document.forms[0].searchLang.value='FR';document.forms[0].order.value='F';document.forms[0].rad.value='N';document.forms[0].goToAdm.value='true';document.forms[0].showHeader.value=false;document.forms[0].submit();event.returnValue=false; return false;">
Fonctions
</a>
<img src="/registres/hrcintapp-pub/img/down_r.png" align="bottom" border="0" alt="">
</th>
<th>Mode Signature</th>
</tr>
<tr bgcolor="#ffffff">
<td>
<span style="text-decoration: none;">
Lass Christian, du Danemark, à Yverdon-les-Bains, avec 200 parts de CHF 100
</span>
</td>
<td><span style="text-decoration: none;">associé gérant </span>&nbsp;</td>
<td><span style="text-decoration: none;">signature individuelle</span>&nbsp;</td>
</tr>
</tbody></table></code></pre>
</div>
</div>
</p>
| 1 | 2016-07-22T19:02:44Z | 38,534,706 | <p>This will get the name from the page. The table is right after the anchor with the <em>id</em> <em>adm</em>; once you have that, there are numerous ways to get what you need:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
r = requests.get('http://www.rc2.vd.ch/registres/hrcintapp-pub/companyReport.action?rcentId=5947621600000055031025&lang=FR&showHeader=false')
soup = BeautifulSoup(r.content,"lxml")
table = soup.select_one("#adm").find_next("table")
name = table.select_one("td span[style^=text-decoration:]").text.split(",", 1)[0].strip()
print(name)
</code></pre>
<p>Output:</p>
<pre><code>Lass Christian
</code></pre>
<p>Or:</p>
<pre><code>table = soup.select_one("#adm").find_next("table")
name = table.find("tr",bgcolor="#ffffff").td.span.text.split(",", 1)[0].strip()
</code></pre>
| 2 | 2016-07-22T20:21:32Z | [
"python",
"html",
"web-scraping",
"beautifulsoup"
] |
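If BeautifulSoup is not available, the same field can be pulled out with the standard library's `html.parser`; a rough sketch against a trimmed copy of the question's markup (the trimming and class design are my own assumptions, not part of either answer):

```python
from html.parser import HTMLParser

class NameExtractor(HTMLParser):
    """Stdlib fallback: grab the first <span> text inside the
    <tr bgcolor="#ffffff"> data row of the question's markup."""
    def __init__(self):
        super().__init__()
        self.in_row = False
        self.in_span = False
        self.name = None

    def handle_starttag(self, tag, attrs):
        if tag == 'tr' and ('bgcolor', '#ffffff') in attrs:
            self.in_row = True
        elif tag == 'span' and self.in_row and self.name is None:
            self.in_span = True

    def handle_data(self, data):
        if self.in_span and data.strip():
            # same comma-split as the BeautifulSoup answers
            self.name = data.split(',', 1)[0].strip()
            self.in_span = False

html = ('<tr bgcolor="#ffffff"><td><span style="text-decoration: none;">\n'
        'Lass Christian, du Danemark, à Yverdon-les-Bains</span></td></tr>')
parser = NameExtractor()
parser.feed(html)
print(parser.name)  # Lass Christian
```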
Include requirements.txt file in Python wheel | 38,533,669 | <p>To avoid specifying dependencies in two places, I have a Python project whose setup.py parses a requirements.txt file to generate the list of install_requires packages. This works great until I try to upload a wheel to a devpi server and then install it - I get the error that requirements.txt is not found.</p>
<p>Is it possible to build a distribution with the requirements.txt files next to setup.py? I've tried package_data and data_files, but the resulting distribution still didn't contain those files.</p>
| 1 | 2016-07-22T19:04:29Z | 38,533,721 | <p>Just add a <code>MANIFEST.in</code> in the project folder with the content:</p>
<pre><code>include requirements.txt
</code></pre>
<p>This will include the file in the distribution. You can also use wildcards such as <code>*</code>.</p>
| 1 | 2016-07-22T19:07:32Z | [
"python",
"pip",
"setuptools",
"python-wheel",
"devpi"
] |
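The question mentions a setup.py that parses requirements.txt into install_requires; a minimal sketch of such a helper (the parsing rules here are a common convention, not taken from the question's actual setup.py), demonstrated against a throwaway file:

```python
import os
import tempfile

def parse_requirements(path):
    """Hypothetical helper of the kind the question describes: read
    requirements.txt into a list usable as install_requires,
    skipping blank lines and comments."""
    with open(path) as f:
        return [line.strip() for line in f
                if line.strip() and not line.strip().startswith('#')]

# demo against a temporary requirements.txt
tmpdir = tempfile.mkdtemp()
req_path = os.path.join(tmpdir, 'requirements.txt')
with open(req_path, 'w') as f:
    f.write('# pinned deps\nrequests>=2.0\n\nsix\n')

reqs = parse_requirements(req_path)
print(reqs)  # ['requests>=2.0', 'six']
```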
ImportError: No module named helpers (Python 2.7.12- Windows 10) | 38,533,715 | <p>I am having a problem running the code below in Python:</p>
<pre><code>from helpers import process_titanic_line
print(process_titanic_line(lines[0]))
</code></pre>
<p>The error which I am getting is; </p>
<pre><code>ImportError Traceback (most recent call last)
<ipython-input-29-56917437b562> in <module>()
1 #NOT WORKING
2
----> 3 from helpers import process_titanic_line
4 print(process_titanic_line(lines[0]))
ImportError: No module named helpers
</code></pre>
<p>Any help will be greatly appreciated.</p>
<p>Thank you</p>
| 0 | 2016-07-22T19:07:15Z | 38,535,424 | <p>I had to make the helpers module available to Python. I did this by adding the helpers.py file to the project. I would like to thank all who contributed to the discussion.</p>
| 0 | 2016-07-22T21:21:42Z | [
"python",
"scikit-learn"
] |
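The fix above works because Python resolves `from helpers import ...` by scanning `sys.path`, which normally starts with the running script's directory; a runnable sketch simulating that with a temporary directory (the helper body is a made-up stand-in, not the real module):

```python
import os
import sys
import tempfile

# Placing helpers.py on a directory listed in sys.path is what makes the
# import succeed; simulated here with a throwaway directory and module.
moddir = tempfile.mkdtemp()
with open(os.path.join(moddir, 'helpers.py'), 'w') as f:
    f.write('def process_titanic_line(line):\n    return line.split(",")\n')

sys.path.insert(0, moddir)
from helpers import process_titanic_line

print(process_titanic_line('1,0,3,Braund'))  # ['1', '0', '3', 'Braund']
```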
Why can a data frame not be plotted in a 3D graph in matplotlib? | 38,533,726 | <p>Thank you for reading this. I am a beginner with Python and English.
I wanted to draw a 3D graph with the X, Y and Z datasets from a loaded CSV file,
so I set x as the second column of the CSV file:</p>
<pre><code>mpl.rcParams['legend.fontsize'] = 10
fig = plt.figure()
f = fig.gca(projection='3d')
x = df[[1]]
y = df[[2]]
z = df[[3]]
f.plot(x, y, z, label='vector')
plt.show()
</code></pre>
<p>but this code gave me:
KeyError: 0
How can I make this into a graph? (Each dataframe has 292307 rows.)</p>
<p>Thank you so much.</p>
| 0 | 2016-07-22T19:07:57Z | 38,534,303 | <p>Your error is with pandas, not matplotlib. Use the following to get your column names:</p>
<pre><code>df.keys()
</code></pre>
<p>and then you need to extract out the columns:
say for example my column names are ["hi", "bye", "world"], then my commands are:</p>
<pre><code>x = df["hi"]
y = df["bye"]
z = df["world"]
</code></pre>
<p>read through the pandas <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html" rel="nofollow">indexing and selecting</a> docs.</p>
| 1 | 2016-07-22T19:51:20Z | [
"python",
"csv",
"matplotlib",
"graph",
"keyerror"
] |
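A runnable sketch of the selection issue behind the KeyError, using the answer's example column names (which are illustrative, not from the asker's CSV):

```python
import pandas as pd

df = pd.DataFrame({'hi': [1, 2], 'bye': [3, 4], 'world': [5, 6]})

# df[[1]] asks for a *column labelled 1*; with string column names there
# is no such label, hence the KeyError the question ran into
try:
    df[[1]]
    raised = False
except KeyError:
    raised = True
print('KeyError raised:', raised)

# list the real names, then select by them
print(list(df.keys()))   # ['hi', 'bye', 'world']
x = df['hi']             # a Series, ready to hand to matplotlib
```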
Why does unified_diff method from the difflib library in Python leave out some characters? | 38,533,751 | <p>I am trying to check for differences between lines. This is my code:</p>
<pre><code>from difflib import unified_diff
s1 = ['a', 'b', 'c', 'd', 'e', 'f']
s2 = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'i', 'k', 'l', 'm', 'n']
for line in unified_diff(s1, s2):
print line
</code></pre>
<p>It prints:</p>
<pre><code>---
+++
@@ -4,3 +4,9 @@
d
e
f
+g
+i
+k
+l
+m
+n
</code></pre>
<p>What happened to 'a', 'b', and 'c'? Thanks!</p>
| 1 | 2016-07-22T19:09:44Z | 38,534,581 | <p>If you take a look at the <code>unified_diff</code> code you will find a description of a parameter called <code>n</code>:</p>
<blockquote>
<p>Unified diffs are a compact way of showing line changes and a few
lines of context. The number of context lines is set by 'n' which
defaults to three.</p>
</blockquote>
<p>In your case, <code>n</code> is the number of unchanged context lines shown around each change, not a number of characters. If you set <code>n</code> to at least the length of the input, all lines appear in the output. This code:</p>
<pre><code>from difflib import unified_diff
s1 = ['a', 'b', 'c', 'd', 'e', 'f']
s2 = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'i', 'k', 'l', 'm', 'n']
for line in unified_diff(s1, s2, n=6):
print line
</code></pre>
<p>Will generate:</p>
<pre><code>---
+++
@@ -1,6 +1,12 @@
a
b
c
d
e
f
+g
+i
+k
+l
+m
+n
</code></pre>
| 0 | 2016-07-22T20:12:56Z | [
"python",
"difflib",
"unified-diff"
] |
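That `n` counts context lines rather than characters can be verified directly: with long multi-character lines, the default still shows exactly three unchanged lines around a change. A small check (input strings are my own example):

```python
from difflib import unified_diff

s1 = ['alpha', 'bravo', 'charlie', 'delta', 'echo', 'foxtrot']
s2 = s1 + ['golf']

out = list(unified_diff(s1, s2, lineterm=''))
# context lines are prefixed with a single space
context = [line for line in out if line.startswith(' ')]

# default n=3: three lines of context, regardless of their length
print(context)  # [' delta', ' echo', ' foxtrot']
```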
python time.time() differences in python 2 vs 3 | 38,533,756 | <p>I am wondering why <code>python 2.7</code> uses <code>gettimeofday()</code> when running <code>time.time()</code>, yet in <code>python 3.4</code> it does not?</p>
<p>It appears when running strace that it may be querying /etc/localtime</p>
| 3 | 2016-07-22T19:09:57Z | 38,534,282 | <p>Python 3 will use <code>gettimeofday()</code> when your system has been detected to support this <em>at compile time</em>. However, on POSIX systems it'll only use that if <a href="http://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_gettime.html" rel="nofollow"><code>clock_gettime(CLOCK_REALTIME)</code></a> is not available instead; according to the <a href="http://pubs.opengroup.org/onlinepubs/9699919799/functions/gettimeofday.html" rel="nofollow">POSIX 2008 standard</a> the latter is preferred as <code>gettimeofday()</code> is considered obsolete.</p>
<p>At runtime, you can query what Python thought your system could support at compile time by using the <a href="https://docs.python.org/3/library/time.html#time.get_clock_info" rel="nofollow"><code>time.get_clock_info()</code> function</a>, which returns a <code>namedtuple</code> instance with a <code>implementation</code> field:</p>
<blockquote>
<p><em>implementation</em>: The name of the underlying C function used to get the clock value</p>
</blockquote>
<p>On my OSX 10.11 system, for the <code>'time'</code> clock, that produces <code>gettimeofday()</code>:</p>
<pre><code>>>> time.get_clock_info('time').implementation
'gettimeofday()'
</code></pre>
<p>You can read through the <a href="https://hg.python.org/cpython/file/v3.5.2/Python/pytime.c#l451" rel="nofollow"><code>pygettimeofday()</code> C implementation</a> to see what implementations may be used; on Windows <code>GetSystemTimeAsFileTime()</code> is used for example.</p>
| 3 | 2016-07-22T19:50:04Z | [
"python",
"python-2.7",
"python-3.x",
"python-internals",
"gettimeofday"
] |
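The `time.get_clock_info()` call mentioned in the answer can be used to compare several clocks at once; a quick sketch (the exact implementation strings printed are platform-dependent):

```python
import time

# report which C-level function backs each clock on this build
for name in ('time', 'monotonic', 'perf_counter'):
    info = time.get_clock_info(name)
    print(name, '->', info.implementation, '(resolution:', info.resolution, ')')

impl = time.get_clock_info('time').implementation
```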
Complex Python object to JSON Conversion | 38,533,832 | <p>I need to convert a complex Python object to JSON. By complex I mean an object that contains int variables, string variables, and two lists of custom objects.</p>
<p>My Python object's constructor is:</p>
<pre><code> def __init__(self, skills="",vid=""):
self.Skills = list([])
for skillID in skills.split("-"):
if not skillID == "":
tmpSkill = Skill()
tmpSkillObj = DBCommands.getSkill(skillID)
tmpSkill.ID = tmpSkillObj[0][0] #tmpSkillObj[0][0]
tmpSkill.Name = tmpSkillObj[0][1]
tmpSkill.isMain = True
tmpSkill.CurrentlyTesting = False
tmpSkill.isSub = False
tmpSkill.Level = 0
tmpSkill.Tested = False
tmpSkill.Score = 0
tmpSkill.Confidence = 0
tmpSkill.BestScore = 0
tmpSkill.ParentID = 0
self.Skills.append(tmpSkill)
self.AskedQuestions.append(tmpSkill)
self.Skills = list(self.Skills)
if not skills == "":
self.Skills[0].CurrentlyTesting = True #Start testing the first skill
if not vid == "":
self.VacancyID = int(vid)
self.PlayerID = 0
self.Score = float(0)
self.AskedQuestions = list([])
self.MaxLevel = 0
self.AssessmentIsFinished = False
</code></pre>
<p>I need a mechanism to encode the object and decode it.</p>
| 0 | 2016-07-22T19:15:24Z | 38,535,271 | <p>Encode:</p>
<pre><code>import base64
import pickle
token = base64.b64encode(pickle.dumps(token,-1))
</code></pre>
<p>Decode:</p>
<pre><code>import pickle
import base64
Obj = pickle.loads(base64.b64decode(token))
</code></pre>
| 0 | 2016-07-22T21:09:30Z | [
"python",
"json",
"class",
"oop"
] |
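Note that the base64-pickle approach above produces an opaque token rather than JSON text. If genuine JSON is required, as the title asks, nested plain-attribute objects can be flattened through their `__dict__`; a minimal sketch (the class shapes here are simplified stand-ins for the question's classes):

```python
import json

class Skill:
    def __init__(self, name):
        self.Name = name
        self.Score = 0

class Student:
    def __init__(self):
        self.PlayerID = 0
        self.Score = 0.0
        self.Skills = [Skill('python'), Skill('sql')]

def to_json(obj):
    # json.dumps calls `default` for anything it cannot serialize,
    # so nested custom objects are recursively flattened via __dict__
    return json.dumps(obj, default=lambda o: o.__dict__)

student = Student()
text = to_json(student)
data = json.loads(text)
print(data['Skills'][0]['Name'])  # python
```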
Django custom Widget failing on choices above 1 (enter whole number) | 38,533,860 | <p>I wanted a multi-state click box, so I spent some free time on a nice Django solution that implements it:</p>
<pre><code>class MultiStateChoiceInput(forms.widgets.ChoiceInput):
input_type = 'radio'
def __init__(self, name, value, attrs, choice, index, label_id):
# Override to use the label_id which is upped with 1
if 'id' in attrs:
self.label_id = attrs['id']+ "_%d" % label_id
super(MultiStateChoiceInput, self).__init__(name, value, attrs, choice, index)
self.value = force_text(self.value)
@property
def id_for_label(self):
return self.label_id
def render(self, name=None, value=None, attrs=None, choices=()):
if self.id_for_label:
label_for = format_html(' for="{}"', self.id_for_label)
else:
label_for = ''
attrs = dict(self.attrs, **attrs) if attrs else self.attrs
return format_html(
'{} <label{}>{}</label>', self.tag(attrs), label_for, self.choice_label
)
class MultiStateRenderer(forms.widgets.ChoiceFieldRenderer):
choice_input_class = MultiStateChoiceInput
outer_html = '<span class="cyclestate">{content}</span>'
inner_html = '{choice_value}{sub_widgets}'
def render(self):
"""
Outputs a <ul> for this set of choice fields.
If an id was given to the field, it is applied to the <ul> (each
item in the list will get an id of `$id_$i`).
# upgraded with the label_id
"""
id_ = self.attrs.get('id')
output = []
for i, choice in enumerate(self.choices):
choice_value, choice_label = choice
if isinstance(choice_label, (tuple, list)):
attrs_plus = self.attrs.copy()
if id_:
attrs_plus['id'] += '_{}'.format(i)
sub_ul_renderer = self.__class__(
name=self.name,
value=self.value,
attrs=attrs_plus,
choices=choice_label,
label_id = (i+1) % (len(self.choices)) # label_id is next one
)
sub_ul_renderer.choice_input_class = self.choice_input_class
output.append(format_html(self.inner_html, choice_value=choice_value,
sub_widgets=sub_ul_renderer.render()))
else:
w = self.choice_input_class(self.name, self.value,
self.attrs.copy(), choice, i, label_id = (i+1) % (len(self.choices))) # label_id is next one
output.append(format_html(self.inner_html,
choice_value=force_text(w), sub_widgets=''))
return format_html(self.outer_html,
id_attr=format_html(' id="{}"', id_) if id_ else '',
content=mark_safe('\n'.join(output)))
class MultiStateSelectWidget(forms.widgets.RendererMixin, forms.widgets.Select):
''' This widget enables multistate clickable toggles
Requires some css as well (see .cyclestate)
'''
renderer = MultiStateRenderer
</code></pre>
<p>This creates a form like the one explained here <a href="http://stackoverflow.com/a/33455783/3849359">http://stackoverflow.com/a/33455783/3849359</a>, where a click toggles to the next state until it reaches the end and then continues from the beginning.</p>
<p>The form is called in my view like:</p>
<pre><code>SomeFormSet= modelformset_factory(myModel, form=myModelForm, extra=0)
SomeFormSet.form = staticmethod(curry(myModelForm, somevariable=somevariable))
formset = SomeFormSet(request.POST or None, queryset=somequeryset)
</code></pre>
<p>And forms.py is:</p>
<pre><code>class myModelForm(forms.ModelForm):
CHOICES = (
(0, _('a')),
(1, _('b')),
(2, _('c')),
(3, _('d')),
)
field = forms.IntegerField(widget=MultiStateSelectWidget(choices=CHOICES))
class Meta:
model = MyModal
fields = ('field',)
widgets = {'id': forms.HiddenInput(),
}
def __init__(self, *args, **kwargs):
self.variable= kwargs.pop('variable')
super(myModelForm, self).__init__(*args, **kwargs)
for field in myModelForm.fields:
if self.instance.pk:
if not getattr(self.instance, field):
self.initial[field]= 0
else:
self.initial[field]= 1
if anothercondition:
self.initial[field] = 3
else:
self.initial[field] = 2
</code></pre>
<p>I thought it worked very well, and the clicking and saving does work well (I have a custom save method), except when the form field has a value of 2 or 3: then it suddenly fails with the error message: 'field' should be a whole number.</p>
<p>If anyone could help that would be great, as I'm out of ideas!</p>
<p>EDIT: Just in case... I have checked the POST and it is great. The only problem is that Django somewhere in parsing the POST loses the value completely (it becomes None) if the value is a 2 and I have no idea why.</p>
<p>EDIT2: It seems that the Django ModelForm also performs model validation, and the model field is a BooleanField, which is why it fails. If anyone knows a good way to override it, that would be nice!</p>
| 0 | 2016-07-22T19:17:08Z | 38,543,119 | <p>@edgarzamora Your comment is not the answer, but it is close!</p>
<p>I removed the 'field' from the Form class Meta, so it looked like:</p>
<pre><code>class Meta:
model = MyModal
fields = ('',)
widgets = {'id': forms.HiddenInput(),
}
</code></pre>
<p>And now everything works, because I have my custom save method... So stupid, it cost me hours! Thanks!</p>
| 0 | 2016-07-23T14:57:59Z | [
"python",
"html",
"css",
"django",
"django-forms"
] |
Pandas groupby on two conditions | 38,533,882 | <p>I have the following table:</p>
<p><a href="http://i.stack.imgur.com/EWOD0.png" rel="nofollow"><img src="http://i.stack.imgur.com/EWOD0.png" alt="enter image description here"></a></p>
<p>I'm attempting to do two things with the table:</p>
<p>1) If a call only appears once, make it so that any of these single-call entries that also have a zipcode entry get a 1 under order. </p>
<pre><code>#work with unique data
import pandas as pd
def order_chk(x):
if pd.isnull(x['ORDER_TIMESTAMP']) or pd.isnull(x['ZIP']):
return 0
return 1
calls_t = calls.groupby('ANI').filter(lambda x: len(x) < 2).apply(lambda row: order_chk(row), axis=1)
</code></pre>
<p>2) It gets trickier when there are two calls but only one order; in these cases I want the call that was closer to the order to get the 1 under the order column (the delta column contains timedelta objects).</p>
<p>So final table looks like this (yellow shading to show the 1)</p>
<p><a href="http://i.stack.imgur.com/iHtSU.png" rel="nofollow"><img src="http://i.stack.imgur.com/iHtSU.png" alt="enter image description here"></a></p>
<p>Let me know if I can clarify anything, I have a feeling I'm missing something really silly with .apply on groups. </p>
<pre><code> DATE TIMESTAMP ANI DNIS VENDOR ORDER_TIMESTAMP ZIP delta ORDER CALLS
0 7/13/2016 2016-07-13 00:19:09 7249534228 8009894581 CORNERSTONE NaT NaN NaT 0 1
1 7/13/2016 2016-07-13 00:19:10 9207482180 8009894581 CORNERSTONE NaT NaN NaT 0 1
2 7/13/2016 2016-07-13 00:19:22 2405870965 8009894581 CORNERSTONE NaT NaN NaT 0 1
3 7/13/2016 2016-07-13 00:19:29 6192537800 8009894581 CORNERSTONE NaT NaN NaT 0 1
4 7/13/2016 2016-07-13 00:21:00 2405870965 8009894581 CORNERSTONE NaT NaN NaT 0 1
5 7/13/2016 2016-07-13 11:31:19 9857140062 8009136242 ACE NaT NaN NaT 0 1
6 7/13/2016 2016-07-13 12:50:12 5802260487 8009137764 ACE NaT NaN NaT 0 1
7 7/13/2016 2016-07-13 14:13:08 Unavailable 8009135189 CORNERSTONE NaT NaN NaT 0 1
8 7/13/2016 2016-07-13 16:29:13 7172665487 8009140816 CORNERSTONE NaT NaN NaT 0 1
9 7/13/2016 2016-07-13 17:02:25 8079819744 8009131719 CORNERSTONE NaT NaN NaT 0 1
10 7/13/2016 2016-07-13 19:21:54 8435466441 8009135302 CORNERSTONE NaT NaN NaT 0 1
11 7/13/2016 2016-07-13 20:41:28 9063462078 8009894581 CORNERSTONE NaT NaN NaT 0 1
12 7/13/2016 2016-07-13 20:50:19 6143772125 8009084876 CORNERSTONE NaT NaN NaT 0 1
13 7/13/2016 2016-07-13 20:50:20 8148563460 8009084876 CORNERSTONE NaT NaN NaT 0 1
14 7/13/2016 2016-07-13 20:50:22 5616837515 8009084876 CORNERSTONE NaT NaN NaT 0 1
15 7/13/2016 2016-07-13 20:53:07 9032270226 8009084876 CORNERSTONE NaT NaN NaT 0 1
16 7/13/2016 2016-07-13 23:58:38 9283779292 8009131653 CORNERSTONE 2016-07-13 23:59:26 223032109 00:00:48 0 1
17 7/13/2016 2016-07-13 21:14:08 9283779292 8009131653 CORNERSTONE 2016-07-13 23:59:26 223032109 02:45:18 0 1
</code></pre>
| 1 | 2016-07-22T19:18:27Z | 38,535,938 | <p>If I understand correctly, the first part works for you, and for the second part you want to mark the lines with the lowest delta value (per call).
The following code finds the rows of these calls and then assigns ORDER=1 on those rows.</p>
<pre><code>cond = calls.groupby(['ANI'])['delta'].transform(min) == calls['delta']
calls.loc[cond, 'ORDER'] = 1
</code></pre>
<p>Hope this helps.</p>
| 1 | 2016-07-22T22:09:59Z | [
"python",
"pandas",
"group-by"
] |
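A self-contained sketch of the transform-min pattern from the answer, on a tiny frame shaped like the question's data (the ANI values and deltas are made up for illustration):

```python
import pandas as pd

calls = pd.DataFrame({
    'ANI':   ['9283779292', '9283779292', '5802260487'],
    'delta': pd.to_timedelta(['02:45:18', '00:00:48', '01:00:00']),
})
calls['ORDER'] = 0

# per caller (ANI), flag the row whose delta is the smallest;
# transform('min') broadcasts each group's minimum back to its rows
cond = calls.groupby('ANI')['delta'].transform('min') == calls['delta']
calls.loc[cond, 'ORDER'] = 1

print(calls['ORDER'].tolist())  # [0, 1, 1]
```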
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint | 38,533,897 | <p>I'm running Ubuntu 12.04, Python 2.7.3 and trying to run <a href="https://github.com/awslabs/chalice" rel="nofollow">chalice</a></p>
<p>However, when I run </p>
<blockquote>
<p>chalice deploy</p>
</blockquote>
<p>I get back:</p>
<blockquote>
<p>botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "<a href="https://lambda.us-east-1.amazonaws.com/2015-03-31/functions/helloworld" rel="nofollow">https://lambda.us-east-1.amazonaws.com/2015-03-31/functions/helloworld</a>"</p>
</blockquote>
<p>I can't find any information on what's causing this. My AWS configuration was setup and tested to be working with boto.</p>
<p>My config:</p>
<p><code>[default]
output = table
region = us-west-1</code></p>
| 0 | 2016-07-22T19:19:14Z | 38,534,529 | <p>As per your config you have setup the region to <code>region = us-west-1</code> and you are trying to access a Lambda function in region us-east-1.</p>
<p>Change that region to us-east-1 in the config and then give it a try it will work.</p>
| 1 | 2016-07-22T20:08:58Z | [
"python",
"amazon-web-services",
"aws-lambda",
"boto3"
] |
Non-Blocking Server Apache Thrift Python | 38,533,973 | <p>In one Python Module A I am doing some stuff. In the middle of doing that stuff I am creating a Thrift connection. The problem is that after the connection starts, the program gets stuck in the network logic (i.e. it blocks).</p>
<p>In module A I have: </p>
<pre><code>stuff = "do some stuff"
network.ConnectionManager(host, port, ...)
stuff = "do more stuff" # not getting to this point
</code></pre>
<p>In network...</p>
<pre><code>ConnectionManager.start_service_handler()
def start_service_handler(self):
handler = ServiceHandler(self)
processor = Service.Processor(handler)
transport = TSocket.TServerSocket(port=self.port)
tfactory = TTransport.TBufferedTransportFactory()
pfactory = TBinaryProtocol.TBinaryProtocolFactory()
# server = TServer.TThreadedServer(processor, transport, tfactory, pfactory)
server = TNonblockingServer(processor, transport, tfactory, pfactory)
logger().info('starting server...')
server.serve()
</code></pre>
<p>I tried this, but the code in module A still does not continue once the connection code starts.</p>
<p>I thought TNonblockingServer would do the trick, but unfortunately it did not.</p>
| 1 | 2016-07-22T19:25:50Z | 38,544,162 | <p>The code blocks at <code>server.serve()</code> which is by design, across all target languages supported by Thrift. The usual use case is to run a server like this (pseudo code):</p>
<pre><code>init server
setup thrift protocol/tramsport stack
server.serve()
shutdown code
</code></pre>
<p>The "nonblocking" does not refer to the <code>server.serve()</code> call, rather to the code taking the actual client call. With a <code>TSimpleServer</code>, the server can only handle one call at a time. In contrast, the <code>TNonblockingServer</code> is <a href="https://wiki.apache.org/thrift/ThriftUsageC%2B%2B" rel="nofollow">designed to accept a number of connections in parallel</a>.</p>
<p>Conclusion: If you want to run a Thrift server and also have some other work to do in parallel, or need to start and stop the server on the fly during program run, you will need another thread to achieve that.</p>
| 2 | 2016-07-23T16:46:18Z | [
"python",
"multithreading",
"apache",
"thrift",
"nonblocking"
] |
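Per the conclusion above, the usual pattern is to push the blocking `server.serve()` call onto a background thread so module A's flow can continue; a generic sketch with a stand-in for `serve()` (Thrift itself is not needed to show the pattern):

```python
import threading
import time

def serve():
    # stand-in for the blocking server.serve() loop
    time.sleep(0.5)

start = time.monotonic()
t = threading.Thread(target=serve, daemon=True)  # daemon: dies with the process
t.start()
elapsed = time.monotonic() - start

# module A's "do more stuff" line is reached immediately,
# long before the 0.5 s "server" finishes
print('main thread continued after %.3fs' % elapsed)
t.join()
```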
use existing field as _id using elasticsearch dsl python DocType | 38,533,990 | <p>I have a class where I try to set <code>student_id</code> as the <code>_id</code> field in Elasticsearch. I am referring to the <a href="https://elasticsearch-dsl.readthedocs.io/en/latest/#persistence-example" rel="nofollow">persistence example</a> from the elasticsearch-dsl docs.</p>
<pre><code>from elasticsearch_dsl import DocType, String
ELASTICSEARCH_INDEX = 'student_index'
class StudentDoc(DocType):
'''
Define mapping for Student type
'''
student_id = String(required=True)
name = String(null_value='')
class Meta:
# id = student_id
index = ELASTICSEARCH_INDEX
</code></pre>
<p>I tried setting <code>id</code> in <code>Meta</code>, but it does not work.</p>
<p>I found a solution by overriding the <a href="https://github.com/elastic/elasticsearch-dsl-py/issues/360" rel="nofollow"><code>save</code> method</a>, which achieves this:</p>
<pre><code>def save(self, **kwargs):
'''
Override to set metadata id
'''
self.meta.id = self.student_id
return super(StudentDoc, self).save(**kwargs)
</code></pre>
<p>I am creating this object as </p>
<pre><code>>>> a = StudentDoc(student_id=1, tags=['test'])
>>> a.save()
</code></pre>
<p>Is there any direct way to set it from <code>Meta</code> without overriding the <code>save</code> method?</p>
| 0 | 2016-07-22T19:27:10Z | 38,651,072 | <p>There are a few ways to assign an id:</p>
<p>You can do it like this</p>
<pre><code>a = StudentDoc(meta={'id':1}, student_id=1, tags=['test'])
a.save()
</code></pre>
<p>Like this:</p>
<pre><code>a = StudentDoc(student_id=1, tags=['test'])
a.meta.id = 1
a.save()
</code></pre>
<p>Also note that before ES 1.5, one was able to <a href="https://www.elastic.co/guide/en/elasticsearch/reference/1.7/mapping-id-field.html#_path" rel="nofollow">specify a field</a> to use as the document <code>_id</code> (in your case, it could have been <code>student_id</code>), but this has been deprecated in 1.5 and from then onwards you must explicitly provide an ID or let ES pick one for you.</p>
| 1 | 2016-07-29T05:26:51Z | [
"python",
"elasticsearch",
"elasticsearch-dsl"
] |
django: customized decorator running before django-rest-framework authentication class | 38,534,019 | <p>My REST APIs:</p>
<pre><code>class FileView(APIView):
parser_classes = (MultiPartParser,)
authentication_classes = (BasicAuthentication,)
permission_classes = (IsAuthenticated,)
@method_decorator(csrf_exempt)
@method_decorator(operation_logger)
def dispatch(self, request, *args, **kwargs):
return super(FileView, self).dispatch(request, *args, **kwargs)
def post(self, request):
print "xxxxpost"
</code></pre>
<p>The customized decorator:</p>
<pre><code>def operation_logger(view_func):
@wraps(view_func)
def wrapper(request, *args, **kwargs):
print "xxxx"
comments = []
psfile = None
op = None
remote_addr = request.META.get('REMOTE_ADDR')
if request.user:
user = request.user
print request.user.username
else:
print "request.user is None"
return view_func(request, *args, **kwargs)
return wrapper
</code></pre>
<p>It seems that my decorator runs before authentication has finished. How can I fix it?
Thanks</p>
<p><strong>UPDATE</strong>
I added my middleware list below, but I am not sure it is related, because I created a brand new app for the APIs, which uses djangorestframework.</p>
<pre><code>MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'lib.middleware.SessionTimeout',
'lib.middleware.ForceTemporaryPasswordChange'
)
</code></pre>
| 1 | 2016-07-22T19:29:47Z | 38,537,258 | <p>You can change the order your decorators are applied: the one nearest to <code>def</code> line will run first.</p>
<pre><code>@method_decorator(operation_logger)
@method_decorator(csrf_exempt)
def dispatch(self, request, *args, **kwargs):
return super(FileView, self).dispatch(request, *args, **kwargs)
</code></pre>
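<p>A self-contained sketch (plain functions, no Django) of how stacked decorators compose — the decorator nearest the <code>def</code> wraps first and ends up innermost, so the outermost wrapper's code executes first at call time:</p>

```python
calls = []                  # records execution order at call time

def outer(fn):
    def wrapper(*args, **kwargs):
        calls.append("outer")
        return fn(*args, **kwargs)
    return wrapper

def inner(fn):
    def wrapper(*args, **kwargs):
        calls.append("inner")
        return fn(*args, **kwargs)
    return wrapper

@outer      # listed first  -> applied last, outermost
@inner      # nearest `def` -> applied first, innermost
def dispatch():
    calls.append("dispatch")

dispatch()
print(calls)  # ['outer', 'inner', 'dispatch']
```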
| 0 | 2016-07-23T01:23:11Z | [
"python",
"django",
"django-rest-framework"
] |
How to find and query a specific build in Jenkins using the Python Jenkins API | 38,534,171 | <p>We have a Jenkins job that runs builds using specific parameters.
Two of these parameters are important for me: the machine that the build is being deployed on, and the version number of the package that is deployed.</p>
<blockquote>
<p><a href="https://jenkinsurl/job/folder_level1/job/folder_level2/job/folder_level3/job_id/" rel="nofollow">https://jenkinsurl/job/folder_level1/job/folder_level2/job/folder_level3/job_id/</a></p>
</blockquote>
<p>Here is a sample of json output of the url:</p>
<blockquote>
<p><a href="https://jenkinsurl/job/folder_level1/job/folder_level2/job/folder_level3/job_id/api/json" rel="nofollow">https://jenkinsurl/job/folder_level1/job/folder_level2/job/folder_level3/job_id/api/json</a></p>
</blockquote>
<pre><code>{"actions":[{"parameters":[{"name":"lab_name","value":"labA"},{"name":"version_no","value":"1.1"}]}
</code></pre>
<p>Using the Jenkins REST API or the Python Jenkins wrapper, how would I search for the job if I know the folder_level1 and would like to match the lab name to a job in folder_level3 to finally get the version from that URL?</p>
| 2 | 2016-07-22T19:41:50Z | 40,026,577 | <p>Use the /api/xml format:</p>
<pre><code>https://jenkinsurl/job/folder_level1/api/xml
</code></pre>
<p>which returns the <code>action</code> XML node which can be queried via XPath:</p>
<ul>
<li><a href="https://jenkinsurl/job/folder_level1/api/xml?xpath=//action&wrapper=root" rel="nofollow">https://jenkinsurl/job/folder_level1/api/xml?xpath=//action&wrapper=root</a></li>
</ul>
<p>Take the matching name from there to search for the data in question:</p>
<ul>
<li>builtOn - the machine that the build is being deployed on</li>
<li>number - the version number of the package that is deployed</li>
</ul>
<p>Using an XPath for each, along with a wrapper node for grouping, such as the following for builtOn:</p>
<pre><code>https://jenkinsurl/job/folder_level1/api/xml?depth=3&xpath=//fullDisplayName[contains(text(),'foo')]/following-sibling::builtOn&wrapper=builtOn_results
</code></pre>
<p>and another for version:</p>
<pre><code>https://jenkinsurl/job/folder_level1/api/xml?depth=3&xpath=//fullDisplayName[contains(text(),'foo')]/following-sibling::number&wrapper=version_results
</code></pre>
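<p>A minimal sketch of consuming such a response in Python with the standard library. The XML below is a hand-made stand-in (the element names follow the fields above, but a real reply comes from the <code>/api/xml</code> URL and may differ). Note that <code>xml.etree</code> implements only a limited XPath subset and does not support <code>contains(text(), ...)</code>, so the filtering is done in Python instead:</p>

```python
import xml.etree.ElementTree as ET

# Hand-made sample shaped like the wrapped XPath results above;
# a real response would be fetched from the Jenkins /api/xml URL.
sample = """
<root>
  <build>
    <fullDisplayName>folder_level3 #12 foo</fullDisplayName>
    <number>12</number>
    <builtOn>labA</builtOn>
  </build>
  <build>
    <fullDisplayName>folder_level3 #13 bar</fullDisplayName>
    <number>13</number>
    <builtOn>labB</builtOn>
  </build>
</root>
"""

root = ET.fromstring(sample)
# Filter by display name in Python, then pull number and builtOn.
matches = [
    (b.findtext("number"), b.findtext("builtOn"))
    for b in root.iter("build")
    if "foo" in b.findtext("fullDisplayName")
]
print(matches)  # [('12', 'labA')]
```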
<p><strong>References</strong></p>
<ul>
<li><p><a href="https://wiki.jenkins-ci.org/display/JENKINS/Remote+access+API" rel="nofollow">Jenkins Wiki: Remote Access API</a></p></li>
<li><p><a href="https://www.cloudbees.com/blog/taming-jenkins-json-api-depth-and-tree" rel="nofollow">Taming the Jenkins JSON API with Depth and "Tree" | CloudBees</a></p></li>
<li><p><a href="https://media.readthedocs.org/pdf/python-jenkins/latest/python-jenkins.pdf" rel="nofollow">python-jenkins API(pdf)</a></p></li>
<li><a href="https://docs.python.org/2/library/xml.etree.elementtree.html#supported-xpath-syntax" rel="nofollow">xml.tree.elementtree: Supported XPath Syntax</a></li>
</ul>
| 1 | 2016-10-13T16:39:20Z | [
"python",
"json",
"jenkins",
"querying",
"jenkins-api"
] |
transform irregular timeseries into zscores relative to closest neighbors | 38,534,194 | <p>I have a time series with an irregularly spaced index. I want to transform the data by subtracting a mean and dividing by a standard deviation for every point. However, I only want to calculate the means and standard deviations using those data values that are within a predefined time distance. In my example below I used regularly spaced distances, but I want this to accommodate irregular ones as well.</p>
<p>For example:</p>
<pre><code>n = 20
ts = pd.Series(np.random.rand(n),
pd.date_range('2014-05-01', periods=n, freq='T', name='Time'))
</code></pre>
<p>Let's say I want the z-score for each point relative to all points within one minute of that point.</p>
<p>The final result should look like the following series.</p>
<pre><code>Time
2014-05-01 00:00:00 0.707107
2014-05-01 00:01:00 -0.752435
2014-05-01 00:02:00 0.866662
2014-05-01 00:03:00 -0.576136
2014-05-01 00:04:00 -0.580471
2014-05-01 00:05:00 -0.253403
2014-05-01 00:06:00 -0.076657
2014-05-01 00:07:00 1.054413
2014-05-01 00:08:00 0.095783
2014-05-01 00:09:00 -1.030982
2014-05-01 00:10:00 1.041127
2014-05-01 00:11:00 -1.028084
2014-05-01 00:12:00 0.198363
2014-05-01 00:13:00 0.851951
2014-05-01 00:14:00 -1.152701
2014-05-01 00:15:00 1.070238
2014-05-01 00:16:00 -0.395849
2014-05-01 00:17:00 -0.968585
2014-05-01 00:18:00 0.077004
2014-05-01 00:19:00 0.707107
Freq: T, dtype: float64
</code></pre>
| 3 | 2016-07-22T19:43:39Z | 38,534,472 | <p>This is something I've been working on. Keep in mind this is related to, but different from (as I suspect you know, otherwise you probably wouldn't be asking the question), the pandas <code>rolling</code> feature. For the regularly spaced data you gave, it would tie out pretty well, and we can use that to compare.</p>
<p>What I'll do is use <code>np.subtract.outer</code> to compute the distances of all items in a series with itself.</p>
<p>Assume we have your time series <code>ts</code></p>
<pre><code>import pandas as pd
import numpy as np
n = 20
np.random.seed([3,1415])
data = np.random.rand(n)
tidx = pd.date_range('2014-05-01', periods=n, freq='T', name='Time')
# ^
# |
# Minute Frequency
ts = pd.Series(data, tidx, name='Bliggles')
</code></pre>
<p>Now I can use the time index to calculate distances like so:</p>
<pre><code>distances = pd.DataFrame(np.subtract.outer(tidx, tidx), tidx, tidx).abs()
</code></pre>
<p>From here, I test what is less than a desired distance. Say that distance is called <code>delta</code></p>
<pre><code>lt_delta = (distances <= delta).stack()
lt_delta = lt_delta[lt_delta]
</code></pre>
<p>Finally, I take the values from the index of <code>lt_delta</code> and find what the corresponding values were in <code>ts</code></p>
<pre><code>pd.Series(ts.ix[lt_delta.index.to_series().str.get(1)].values, lt_delta.index)
</code></pre>
<p>I return a <code>groupby</code> object so it looks and feels like calling <code>rolling</code>. When I wrap it in a function, it looks like</p>
<h2>Super Function</h2>
<pre><code>def groupbydelta(ts, delta):
tidx = ts.index
distances = pd.DataFrame(np.subtract.outer(tidx, tidx), tidx, tidx).abs()
lt_delta = (distances <= delta).stack()
lt_delta = lt_delta[lt_delta]
closest = pd.Series(ts.ix[lt_delta.index.to_series().str.get(1)].values, lt_delta.index)
return closest.groupby(level=0)
</code></pre>
<h3>Inspired by root's answer, I wrote an improved pandas/numpy solution.</h3>
<pre><code>def groupbydelta(ts, delta):
tidx = ts.index
iv = pd.DataFrame({'lo': tidx - delta, 'hi': tidx + delta}, tidx)
return pd.concat([ts.loc[r.lo:r.hi] for i, r in iv.iterrows()],
keys=iv.index).groupby(level=0)
</code></pre>
<p>Let's test it out. I'll use a <code>delta=pd.Timedelta(1, 'm')</code> (that's one minute). For the time series I created, for every date time index, I should see that index, the minute prior, and the minute after. This should be equivalent to <code>ts.rolling(3, center=True)</code> with the exceptions at the edges. I'll do both and compare.</p>
<pre><code>gbdelta = groupbydelta(ts, pd.Timedelta(1, 'm')).mean()
rolling = ts.rolling(3, center=True).mean()
pd.concat([gbdelta, rolling], axis=1, keys=['Delta', 'Rolling']).head()
</code></pre>
<p><a href="http://i.stack.imgur.com/eEo1e.png" rel="nofollow"><img src="http://i.stack.imgur.com/eEo1e.png" alt="enter image description here"></a></p>
<p>That looks great! Difference between the two being that <code>rolling</code> has <code>NaN</code> at the edges while <code>gbdelta</code> doesn't require a specific number of elements, but that was by design.</p>
<p>What about irregular indices?</p>
<pre><code>np.random.seed([3,1415])
n = 7200
data = np.random.rand(n)
tidx = (pd.to_datetime(['2013-02-06']) + np.random.rand(n) * pd.Timedelta(1, 'd'))
irregular_series = pd.Series(data, tidx, name='Sketch').sort_index()
</code></pre>
<p>And plot the <code>irregular_series</code> and some filtered versions based on closest neighbors.</p>
<p><a href="http://i.stack.imgur.com/DVKQM.png" rel="nofollow"><img src="http://i.stack.imgur.com/DVKQM.png" alt="enter image description here"></a></p>
<p>But you asked for zscores:</p>
<pre><code>zd = (irregular_series - gbirr.mean()) / gbirr.std()
</code></pre>
<p>This z-scoring is a bit tricky. I had to find the grouped means and standard deviations and then use them with the original series. I'm still thinking about a smoother way, but this is smooth enough.</p>
<p>What does it look like?</p>
<pre><code>fig, axes = plt.subplots(1, 2, sharey=True, figsize=[10, 5])
irregular_series.plot(style='.', ax=axes[0], title='Original')
zd.plot(style='.', ax=axes[1], title='Z-Scored')
</code></pre>
<p><a href="http://i.stack.imgur.com/glMr3.png" rel="nofollow"><img src="http://i.stack.imgur.com/glMr3.png" alt="enter image description here"></a></p>
<hr>
<h3>Answer</h3>
<p>Finally, you asked about the z-score for your data example. To ensure I got the right answer...</p>
<pre><code>gbd = groupbydelta(ts, pd.Timedelta(1, 'm'))
ts.sub(gbd.mean()).div(gbd.std())
Time
2014-05-01 00:00:00 0.707107
2014-05-01 00:01:00 -0.752435
2014-05-01 00:02:00 0.866662
2014-05-01 00:03:00 -0.576136
2014-05-01 00:04:00 -0.580471
2014-05-01 00:05:00 -0.253403
2014-05-01 00:06:00 -0.076657
2014-05-01 00:07:00 1.054413
2014-05-01 00:08:00 0.095783
2014-05-01 00:09:00 -1.030982
2014-05-01 00:10:00 1.041127
2014-05-01 00:11:00 -1.028084
2014-05-01 00:12:00 0.198363
2014-05-01 00:13:00 0.851951
2014-05-01 00:14:00 -1.152701
2014-05-01 00:15:00 1.070238
2014-05-01 00:16:00 -0.395849
2014-05-01 00:17:00 -0.968585
2014-05-01 00:18:00 0.077004
2014-05-01 00:19:00 0.707107
Freq: T, dtype: float64
</code></pre>
<hr>
<h3>Timing</h3>
<p>Inspired by root's answer I rewrote my function to be interval based. It made sense that it would be more efficient than finding the outer difference for certain length time series.</p>
<p><em>code</em></p>
<pre><code>def pirsquared(ts, delta):
gbd = groupbydelta(ts, delta)
return ts.sub(gbd.mean()).div(gbd.std())
cols = ['pirsquared', 'root']
ts_len = [500, 1000, 2000, 3000, 4000]
dt_len = [1, 5, 10, 20]
summary = pd.DataFrame([], pd.MultiIndex.from_product([ts_len, dt_len], names=['Points', 'Delta']), cols)
for n in ts_len:
for d in dt_len:
np.random.seed([3,1415])
data = np.random.rand(n)
tidx = (pd.to_datetime(['2013-02-06']) + np.random.rand(n) * pd.Timedelta(1, 'd'))
ts = pd.Series(data, tidx, name='Sketch').sort_index()
delta = pd.Timedelta(d, 'm')
pt = timeit(lambda: pirsquared(ts, delta), number=2) / 2
rt = timeit(lambda: root(ts, delta), number=2) / 2
summary.loc[(n, d), cols] = pt, rt
summary.unstack().swaplevel(0, 1, 1).sort_index(1)
</code></pre>
<p><a href="http://i.stack.imgur.com/gwYrH.png" rel="nofollow"><img src="http://i.stack.imgur.com/gwYrH.png" alt="enter image description here"></a></p>
| 4 | 2016-07-22T20:04:42Z | [
"python",
"numpy",
"pandas"
] |
transform irregular timeseries into zscores relative to closest neighbors | 38,534,194 | <p>I have a time series with an irregularly spaced index. I want to transform the data by subtracting a mean and dividing by a standard deviation for every point. However, I only want to calculate the means and standard deviations using those data values that are within a predefined time distance. In my example below I used regularly spaced distances, but I want this to accommodate irregular ones as well.</p>
<p>For example:</p>
<pre><code>n = 20
ts = pd.Series(np.random.rand(n),
pd.date_range('2014-05-01', periods=n, freq='T', name='Time'))
</code></pre>
<p>Let's say I want the z-score for each point relative to all points within one minute of that point.</p>
<p>The final result should look like the following series.</p>
<pre><code>Time
2014-05-01 00:00:00 0.707107
2014-05-01 00:01:00 -0.752435
2014-05-01 00:02:00 0.866662
2014-05-01 00:03:00 -0.576136
2014-05-01 00:04:00 -0.580471
2014-05-01 00:05:00 -0.253403
2014-05-01 00:06:00 -0.076657
2014-05-01 00:07:00 1.054413
2014-05-01 00:08:00 0.095783
2014-05-01 00:09:00 -1.030982
2014-05-01 00:10:00 1.041127
2014-05-01 00:11:00 -1.028084
2014-05-01 00:12:00 0.198363
2014-05-01 00:13:00 0.851951
2014-05-01 00:14:00 -1.152701
2014-05-01 00:15:00 1.070238
2014-05-01 00:16:00 -0.395849
2014-05-01 00:17:00 -0.968585
2014-05-01 00:18:00 0.077004
2014-05-01 00:19:00 0.707107
Freq: T, dtype: float64
</code></pre>
| 3 | 2016-07-22T19:43:39Z | 38,535,525 | <p>This isn't a <code>pandas</code>/<code>numpy</code> solution but should give decent performance. Essentially, to find closest points you can build an <a href="https://en.wikipedia.org/wiki/Interval_tree" rel="nofollow">Interval Tree</a> using the <a href="https://pypi.python.org/pypi/intervaltree" rel="nofollow"><code>intervaltree</code></a> package on PyPI. </p>
<p>The <code>intervaltree</code> package is fairly simple to use, and is syntactically quite similar to a dictionary. One thing to keep in mind with this package is that upper bounds are not included in intervals, so you'll need to pad the upper bounds when building the tree. Note in my code below that I add an extra nanosecond to the upper bound.</p>
<pre><code>import intervaltree
def get_ts_zscore(ts, delta):
# Get the upper and lower bounds, padding the upper bound.
lower = ts.index - delta
upper = ts.index + delta + pd.Timedelta(1, 'ns')
# Build the interval tree.
t = intervaltree.IntervalTree().from_tuples(zip(lower, upper, ts))
    # Extract the overlapping data points for each index value.
    ts_grps = [[iv.data for iv in t[idx]] for idx in ts.index]
# Compute the z-scores.
ts_data = [(x - np.mean(grp))/np.std(grp, ddof=1) for x, grp in zip(ts, ts_grps)]
return pd.Series(ts_data, ts.index)
</code></pre>
<p>I'm not able to replicate your exact expected output, maybe due to how I'm randomly generating the data? My output exactly matches what I get running @piRSquared's code though, so I'm pretty sure it's right. </p>
<p><strong>Timings</strong></p>
<p>Timings on the sample data (<code>n=20</code>):</p>
<pre><code>%timeit get_ts_zscore(ts, pd.Timedelta(1, 'm'))
100 loops, best of 3: 2.89 ms per loop
%%timeit
gbd = groupbydelta(ts, pd.Timedelta(1, 'm'))
ts.sub(gbd.mean()).div(gbd.std())
100 loops, best of 3: 7.13 ms per loop
</code></pre>
<p>Timings on larger data (<code>n=10**4</code>):</p>
<pre><code>%timeit get_ts_zscore(ts, pd.Timedelta(1, 'm'))
1 loops, best of 3: 1.44 s per loop
%%timeit
gbd = groupbydelta(ts, pd.Timedelta(1, 'm'))
ts.sub(gbd.mean()).div(gbd.std())
1 loops, best of 3: 5.92 s per loop
</code></pre>
| 3 | 2016-07-22T21:30:16Z | [
"python",
"numpy",
"pandas"
] |
Why limit DB Connection Pool Size in SQLAlchemy? | 38,534,203 | <p>This is a follow-up to a <a href="http://stackoverflow.com/questions/38515488/load-testing-sql-alchemy-timeouterror-queuepool-limit-of-size-3-overflow-0-re">question</a> I posted earlier about DB Connection Pooling errors in SQLAlchemy.</p>
<p>According to the SQLAlchemy <a href="http://docs.sqlalchemy.org/en/latest/core/pooling.html#api-documentation-available-pool-implementations" rel="nofollow">docs</a> the <code>sqlalchemy.pool.QueuePool.__init__()</code> method takes the following argument:</p>
<blockquote>
<p><strong>pool_size</strong> â The size of the pool to be maintained, defaults to 5. This
is the largest number of connections that will be kept persistently in
the pool. Note that the pool begins with no connections; once this
number of connections is requested, that number of connections will
remain. pool_size can be set to 0 to indicate no size limit; to
disable pooling, use a NullPool instead.</p>
</blockquote>
<p>What are the drawbacks to setting pool_size=0? What is the benefit of limiting the connection pool size? Is it just to save memory? The database shouldn't really care if a large number of unused connections are open, right?</p>
| 1 | 2016-07-22T19:44:07Z | 38,534,463 | <p>The main drawback to not limiting the pool size would be the potential for a runaway program to create too many connections. </p>
<p>There are real limits, both at the database level, and O/S level to how many connections the database will support. Each connection will also use additional memory. Limiting the pool size helps protect your database against both bugs in your program, or a malicious attack on your server. Either of those could bring your database server to a standstill by using too many connections or too much memory.</p>
<p>Under <em>normal</em> circumstances, the additional memory each connection uses shouldn't be too much of an issue, but it's best to limit it to the maximum number you think you'll use concurrently (maybe plus a few for good measure).</p>
| 2 | 2016-07-22T20:03:59Z | [
"python",
"database",
"sqlalchemy"
] |
Ordering a nested dictionary in python | 38,534,228 | <p>I have a nested dictionary (category and subcategories), <code>dict</code>, that I am having trouble sorting. </p>
<p>The output of <code>dict</code> is:</p>
<pre><code>{u'sports': {u'basketball': {'name': u'Basketball', 'slug': u'basketball'}, u'baseball': {'name': u'Baseball', 'slug': u'baseball'}}, u'dance': {u'salsa': {'name': u'Salsa', 'slug': u'salsa'}}, u'arts': {u'other-5': {'name': u'Other', 'slug': u'other-5'}, u'painting': {'name': u'Painting', 'slug': u'painting'}}, u'music': {u'cello': {'name': u'Cello', 'slug': u'cello'}, u'accordion': {'name': u'Accordion', 'slug': u'accordion'}}}
</code></pre>
<p>How can I sort this dictionary so that the 'other' subcategory always shows up at the end of the nested dictionary. For example the order for the "arts" category should be:</p>
<pre><code>..., u'arts': {u'painting': {'name': u'Painting', 'slug': u'painting'}, u'other-5': {'name': u'Other', 'slug': u'other-5'}}...
</code></pre>
| 0 | 2016-07-22T19:46:06Z | 38,534,313 | <p>There is a major conceptual misunderstanding about dictionaries here. A dictionary in Python is like a <a href="https://en.wikipedia.org/wiki/Hash_table" rel="nofollow">hash table</a>, and a hash table has no order. The output order of a dict is environment dependent, so you cannot rely on it: you might see the output one way while other people see it another way. You should consider using <a href="https://docs.python.org/2/library/collections.html#collections.OrderedDict" rel="nofollow"><code>OrderedDict</code></a> instead.</p>
| 3 | 2016-07-22T19:52:14Z | [
"python",
"django"
] |
Ordering a nested dictionary in python | 38,534,228 | <p>I have a nested dictionary (category and subcategories), <code>dict</code>, that I am having trouble sorting. </p>
<p>The output of <code>dict</code> is:</p>
<pre><code>{u'sports': {u'basketball': {'name': u'Basketball', 'slug': u'basketball'}, u'baseball': {'name': u'Baseball', 'slug': u'baseball'}}, u'dance': {u'salsa': {'name': u'Salsa', 'slug': u'salsa'}}, u'arts': {u'other-5': {'name': u'Other', 'slug': u'other-5'}, u'painting': {'name': u'Painting', 'slug': u'painting'}}, u'music': {u'cello': {'name': u'Cello', 'slug': u'cello'}, u'accordion': {'name': u'Accordion', 'slug': u'accordion'}}}
</code></pre>
<p>How can I sort this dictionary so that the 'other' subcategory always shows up at the end of the nested dictionary. For example the order for the "arts" category should be:</p>
<pre><code>..., u'arts': {u'painting': {'name': u'Painting', 'slug': u'painting'}, u'other-5': {'name': u'Other', 'slug': u'other-5'}}...
</code></pre>
| 0 | 2016-07-22T19:46:06Z | 38,534,406 | <p>Python dictionaries (regular <code>dict</code> instances) are not sorted. If you want to sort your dict, you could:</p>
<pre><code>from collections import OrderedDict
mynewdict = OrderedDict(sorted(yourdict.items()))
</code></pre>
<p>An OrderedDict does not provide a sorting mechanism; it only respects the order in which keys are inserted into it (we sort those keys by calling <code>sorted</code> beforehand).</p>
<p>Since you need a specific criterion (let's say your keys are ordered alphabetically except for the "other" key, which goes to the end), you need to declare it:</p>
<pre><code>def mycustomsort(key):
return (0 if key != 'other' else 1, key)
mynewdict = OrderedDict(sorted(yourdict.items(), key=mycustomsort))
</code></pre>
<p>This way you are creating a tuple for the nested criteria: the first criterion is other vs. not-other, hence the 0 or 1 (since 1 is greater, "other" comes later), while the second criterion is the key itself. You can drop the second criterion and return only the 0 or 1 instead of a tuple if you want; the code will then work without the alphabetical sort.</p>
<p><strong>This solution will not work</strong> if you plan to edit the dictionary later, and there is no standard class supporting that. </p>
| 0 | 2016-07-22T19:59:11Z | [
"python",
"django"
] |
How nested filters work? | 38,534,246 | <p>I guess I don't fully understand how nested filters work. </p>
<p>I've created highly nested (and slightly silly) filter object:</p>
<pre><code>L = iter(range(100000))
for i in range(10000):
L = filter(lambda x, i=i: x != i, L)
</code></pre>
<p>Each additional filter level just trims the iterator a bit more (by one item, actually).</p>
<p>Now when I call this filter object, I expected all the nested conditions to be tested on each <code>next</code> call. How else can we know that the next value successfully passes all these conditions? Indeed, the first call takes a very long time to execute, but each additional iteration is considerably shorter:</p>
<pre><code>import time
j = 0
lasttime = time.time()
for x in L:
curtime = time.time()
print(x, curtime - lasttime)
lasttime = curtime
j += 1
if j > 10:
break
</code></pre>
<p>The result is:</p>
<pre><code>10000 9.558015823364258
10001 0.0020017623901367188
10002 0.002501964569091797
10003 0.0020017623901367188
10004 0.0025022029876708984
10005 0.0025017261505126953
10006 0.0020020008087158203
10007 0.002001047134399414
10008 0.002501249313354492
10009 0.002002716064453125
10010 0.0
</code></pre>
<p>What's going on under the hood? I'd appreciate some explanation of the inner workings that produce this behavior.</p>
| 0 | 2016-07-22T19:47:36Z | 38,534,404 | <p>The first iteration has to apply about 50 million predicate tests to reject the first 10 thousand elements, so it takes ages. Every iteration after that only needs to apply 10 thousand tests to accept the next element, so they're about 5000 times faster. The variation you see between later iterations is just noise; it's not significant.</p>
| 2 | 2016-07-22T19:59:00Z | [
"python",
"python-3.x"
] |
python-plotly-boxplot Why not showing the max and minimum value on graph | 38,534,310 | <p>This is my data: [0, 45, 47, 46, 47, 47, 43, 100].
And the picture looks like this:
<a href="https://plot.ly/~zfrancica/8/" rel="nofollow">https://plot.ly/~zfrancica/8/</a></p>
<p>I want to have a picture like this: <a href="https://plot.ly/~zfrancica/9/" rel="nofollow">https://plot.ly/~zfrancica/9/</a>
(the output picture of [0, 45, 47, 46, 100])</p>
<p>I want a fixed minimum of 0 and maximum of 100. (Maybe this is not a proper box plot, but I want to fix the minimum and maximum.) How should I do this?</p>
<p>(If plotly can't do this, matplotlib code would also be fine.)
My code:</p>
<pre><code>import plotly.plotly as py
import plotly.graph_objs as go
def box_plot(**kwargs):
print kwargs
num = len(kwargs['circum'])
#WornPercentage = go.Box(x=kwargs['circum'])
#data = [WornPercentage]
#py.iplot(data)
data = [
go.Box(
x=kwargs['circum'],
boxpoints='all',
jitter=0.3,
pointpos=-1.8
)
]
py.iplot(data)
process_dict={'circum':[0, 45, 47, 46, 47,47,43,100]}
box_plot(**process_dict)
</code></pre>
| 1 | 2016-07-22T19:52:02Z | 39,922,298 | <h3>Credentials</h3>
<p>For those who don't know how to save credentials permanently;</p>
<pre><code>import plotly.tools as tls
tls.set_credentials_file(username='username', api_key='api-key')
</code></pre>
<p>Assuming you have already saved your credentials;</p>
<h3>A - Simple Solution</h3>
<p>For a single one dimensional data;</p>
<pre><code>import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Box(
y=[0, 45, 47, 46, 47, 47, 43, 100], # Data provided
name = 'Provided Data Chart',
marker = dict(
color = 'rgb(0, 0, 255)' # Blue : #0000FF
)
)
data = [trace0]
py.plot(data) # Using plot instead of iplot
</code></pre>
<h3>A - Output</h3>
<p><a href="http://i.stack.imgur.com/VaUWd.png" rel="nofollow"><img src="http://i.stack.imgur.com/VaUWd.png" alt="enter image description here"></a></p>
<h3>B - Using Layout</h3>
<p>You can also check this out for a nicer layout: there is a <code>range</code> option you can use when <code>autorange</code> is set to <code>False</code>:</p>
<pre><code>import plotly.plotly as py
import plotly.graph_objs as go
from plotly.graph_objs import Layout
from plotly.graph_objs import Font
from plotly.graph_objs import YAxis
from plotly.graph_objs import Figure
trace0 = go.Box(
y=[0, 45, 47, 46, 47, 47, 43, 100], # Data provided
name = 'Provided Data Chart',
marker = dict(
        color = 'rgb(0, 0, 255)' # Blue : #0000FF
)
)
data = [trace0]
layout = Layout(
autosize=True,
font=Font(
family='"Droid Sans", sans-serif'
),
height=638,
title='Plotly Box Demo',
width=1002,
yaxis=YAxis(
autorange=False, # Autorange set to False
range=[0, 100], # Custom Range
title='Y units', # To be changed, units
type='linear' # Use 'log' for appropriate data
)
)
fig = Figure(data=data, layout=layout)
plot_url = py.plot(fig) # Using plot instead of iplot
</code></pre>
<h3>B - Output</h3>
<p><a href="http://i.stack.imgur.com/ioAWj.png" rel="nofollow"><img src="http://i.stack.imgur.com/ioAWj.png" alt="enter image description here"></a></p>
<p>Hope that it helps.</p>
| 0 | 2016-10-07T16:48:20Z | [
"python",
"matplotlib",
"plot",
"boxplot",
"plotly"
] |
Tensorflow Mac GPU pywrap_tensorflow ignored in restricted program | 38,534,364 | <p>When running <code>python -c "import tensorflow"</code> after following tensorflow's Mac GPU installation instructions and building the package from source, I'm getting </p>
<pre><code>dyld: warning, LC_RPATH $ORIGIN/../../_solib_darwin/_U_S_Sthird_Uparty_Sgpus_Scuda_Ccudart___Uthird_Uparty_Sgpus_Scuda_Slib in /Library/Python/2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so being ignored in restricted program because it is a relative path
dyld: warning, LC_RPATH third_party/gpus/cuda/lib in /Library/Python/2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so being ignored in restricted program because it is a relative path
dyld: warning, LC_RPATH third_party/gpus/cuda/extras/CUPTI/lib in /Library/Python/2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so being ignored in restricted program because it is a relative path
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Library/Python/2.7/site-packages/tensorflow/__init__.py", line 23, in <module>
from tensorflow.python import *
File "/Library/Python/2.7/site-packages/tensorflow/python/__init__.py", line 48, in <module>
from tensorflow.python import pywrap_tensorflow
File "/Library/Python/2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 21, in <module>
_pywrap_tensorflow = swig_import_helper()
File "/Library/Python/2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 20, in swig_import_helper
return importlib.import_module('_pywrap_tensorflow')
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named _pywrap_tensorflow
</code></pre>
<p>I've tried rebuilding the package a couple times and have been sure to run the python command outside of the tensorflow source directory but am stuck. </p>
<p>Thanks in advance for any ideas on how to solve this.</p>
| 2 | 2016-07-22T19:56:06Z | 38,535,656 | <p>Have you tried v0.9?
<code>sudo pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.9.0rc0-py2-none-any.whl</code></p>
| 1 | 2016-07-22T21:42:29Z | [
"python",
"osx",
"tensorflow",
"nvidia",
"dyld"
] |