title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Passing a C++ pointer between C and Python | 38,825,958 | <p>I have a python extension module written in C++, which contains multiple functions. One of these generates an instance of a custom structure, which I then want to use with other functions of my module in Python as follows</p>
<pre><code>import MyModule
var = MyModule.genFunc()
MyModule.readFunc(var)
</code></pre>
<p>To do this, I've tried using PyCapsule objects to pass a pointer to these objects between Python and C, but this produces errors when attempting to read them in the second C function ("PyCapsule_GetPointer called with invalid PyCapsule object"). Python, however, if asked to print the PyCapsule object (var), correctly identifies it as a <code>'capsule object "testcapsule"'</code>. My C code appears as follows:</p>
<pre><code>struct MyStruct {
int value;
};
static PyObject* genFunc(PyObject* self, PyObject *args) {
MyStruct var;
PyObject *capsuleTest;
var.value = 1;
capsuleTest = PyCapsule_New(&var, "testcapsule", NULL);
return capsuleTest;
}
static PyObject* readFunc(PyObject* self, PyObject *args) {
PyCapsule_GetPointer(args, "testcapsule");
return 0;
}
</code></pre>
<p>Thank you for your help.</p>
| 0 | 2016-08-08T09:45:55Z | 38,885,475 | <p>As stated in a comment to your question, you'll run into an issue when reading data through the capsule: <code>MyStruct var</code> is a local stack variable, so the pointer dangles as soon as <code>genFunc</code> returns. Allocate the struct on the heap instead, and free it in a destructor passed as the third argument to <code>PyCapsule_New</code>.</p>
<p>But that's not the cause of the error you're seeing just now. You're calling <code>PyCapsule_GetPointer(args, "testcapsule")</code> on the <code>args</code> parameter, and if the function is registered with <code>METH_VARARGS</code>, <code>args</code> is a tuple rather than a capsule, even though <code>var</code> itself is one. Either unpack the tuple (e.g. with <code>PyArg_ParseTuple</code>) or register the function with <code>METH_O</code> so the capsule is passed in directly.</p>
| 1 | 2016-08-11T00:20:24Z | [
"python",
"c++",
"c",
"python-c-api"
] |
Tastypie decimal and datetime filters not working | 38,826,031 | <p>The following <strong>lte</strong> and <strong>gte</strong> filter queries return 0 objects:</p>
<pre><code>curl http://localhost/river/river/?runoff__lte=100.0&runoff__gte=150.0
curl http://localhost/river/river/?runoff__lte=100&runoff__gte=150
http://localhost/river/river/?dt_timestamp__lte=2015-01-01T03:00&dt_timestamp__gte=2015-01-07T18:00&format=json
</code></pre>
<p>Here's <strong>models.py</strong></p>
<pre><code>class River(models.Model):
dt_timestamp = models.DateTimeField()
stage = models.DecimalField(max_digits=10, decimal_places=3, blank=True, null=True)
runoff = models.DecimalField(max_digits=10, decimal_places=3)
</code></pre>
<p><strong>api.py</strong></p>
<pre><code>class RiverResults(ModelResource):
class Meta:
queryset = River.objects.all()
resource_name = 'river'
authorization = Authorization()
filtering = {
'user': ALL_WITH_RELATIONS,
        'dt_timestamp': ALL,
'stage': ALL,
'runoff': ALL,
}
</code></pre>
<p>In settings.py <strong>USE_TZ = False</strong></p>
<p>Am running Postgresql <strong>9.3</strong>, Django <strong>1.6</strong> and Tastypie <strong>0.12.2</strong>.
Not sure what am doing wrong.</p>
<p>Regards,
Allan</p>
| 0 | 2016-08-08T09:48:51Z | 38,837,149 | <p>I guess you need to select rivers where <code>runoff</code> is between 100 and 150, or <code>dt_timestamp</code> is between 2015-01-01T03:00 and 2015-01-07T18:00. Your queries have the bounds reversed (everything <em>below</em> 100 and <em>above</em> 150 at once matches nothing). In that case try:</p>
<pre><code>http://localhost/river/river/?runoff__gte=100.0&runoff__lte=150.0
http://localhost/river/river/?runoff__gte=100&runoff__lte=150
http://localhost/river/river/?dt_timestamp__gte=2015-01-01T03:00&dt_timestamp__lte=2015-01-07T18:00
</code></pre>
<p>If you need to select rivers where runoff is lower than 100 or greater than 150, then you need to override the <code>build_filters</code> method:</p>
<pre><code># requires: from django.db.models import Q
def build_filters(self, filters=None):
    if filters is None:
        filters = {}
    qs_filters = super(RiverResults, self).build_filters(filters)
    if filters.get('runoff_not_between') is not None:
        low, high = filters.get('runoff_not_between').split(',')
        # dict.update() returns None, so don't assign its result back;
        # stash the Q object under a key and apply it in apply_filters()
        qs_filters['custom'] = Q(runoff__lte=low) | Q(runoff__gte=high)
    return qs_filters
</code></pre>
<p>and use:</p>
<pre><code>http://localhost/river/river/?runoff_not_between=100.0,150.0
http://localhost/river/river/?runoff_not_between=100,150
</code></pre>
| 0 | 2016-08-08T19:32:48Z | [
"python",
"django",
"postgresql",
"tastypie"
] |
Python27 - Convert tuple time to datetime object | 38,826,071 | <p>I'm trying to get a timestamp from an email like this:</p>
<pre><code>Received: by 10.64.149.4 with SMTP id tw4csp1211013ieb;
Thu, 4 Aug 2016 07:02:01 -0700 (PDT)
</code></pre>
<p>First of all, I parse the timestamp with:</p>
<pre><code>d = email.utils.parsedate('Thu, 4 Aug 2016 07:02:01 -0700 (PDT)')
Result: (2016, 8, 4, 7, 2, 1, 0, 1, -1)
</code></pre>
<p>Here comes the problem. I try to convert the result to a datetime, but in vain. </p>
<pre><code>d = email.utils.parsedate('Thu, 4 Aug 2016 07:02:01 -0700 (PDT)')
date_object = datetime(d)
Result: Traceback (most recent call last):
File "data.py", line 12, in <module>
date_object = datetime(d)
TypeError: an integer is required
</code></pre>
<p>What's the problem?</p>
| -1 | 2016-08-08T09:51:30Z | 38,826,256 | <p>Check out <a href="https://docs.python.org/2/library/calendar.html#calendar.timegm" rel="nofollow"><code>calendar.timegm</code></a> or <a href="https://docs.python.org/3/library/time.html#time.mktime" rel="nofollow"><code>time.mktime</code></a> for converting a <code>struct_time</code> tuple to a float. You can then use <a href="https://docs.python.org/3/library/datetime.html#datetime.datetime.fromtimestamp" rel="nofollow"><code>datetime.fromtimestamp</code></a> with that float to create a <code>datetime</code> object.</p>
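<p>A minimal sketch of that route (note that <code>parsedate</code> drops the <code>-0700</code> offset, so <code>calendar.timegm</code> here treats the tuple as if it were UTC):</p>

```python
import calendar
import email.utils
from datetime import datetime

# parsedate returns a struct_time-style 9-tuple; timegm accepts it directly
t = email.utils.parsedate('Thu, 4 Aug 2016 07:02:01 -0700 (PDT)')
ts = calendar.timegm(t)            # seconds since the epoch, offset ignored
dt = datetime.utcfromtimestamp(ts)
# dt == datetime(2016, 8, 4, 7, 2, 1)
```

<p>Use <code>time.mktime</code> instead of <code>calendar.timegm</code> if you want the tuple interpreted in the machine's local time zone.</p>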
| 1 | 2016-08-08T10:00:26Z | [
"python",
"datetime"
] |
Python27 - Convert tuple time to datetime object | 38,826,071 | <p>I'm trying to get a timestamp from an email like this:</p>
<pre><code>Received: by 10.64.149.4 with SMTP id tw4csp1211013ieb;
Thu, 4 Aug 2016 07:02:01 -0700 (PDT)
</code></pre>
<p>First of all, I parse the timestamp with:</p>
<pre><code>d = email.utils.parsedate('Thu, 4 Aug 2016 07:02:01 -0700 (PDT)')
Result: (2016, 8, 4, 7, 2, 1, 0, 1, -1)
</code></pre>
<p>Here comes the problem. I try to convert the result to a datetime, but in vain. </p>
<pre><code>d = email.utils.parsedate('Thu, 4 Aug 2016 07:02:01 -0700 (PDT)')
date_object = datetime(d)
Result: Traceback (most recent call last):
File "data.py", line 12, in <module>
date_object = datetime(d)
TypeError: an integer is required
</code></pre>
<p>What's the problem?</p>
| -1 | 2016-08-08T09:51:30Z | 38,826,356 | <p>The last two items of the tuple aren't timezone data: <code>parsedate</code> returns a <code>struct_time</code>-style 9-tuple but leaves the weekday/year-day/DST slots as placeholders. If you don't need a timezone-aware <code>datetime</code> object, you can do something like <code>datetime(*d[:-2])</code> (the remaining placeholder <code>0</code> lands harmlessly in the microsecond argument).</p>
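<p>A quick check of that expression with the tuple from the question:</p>

```python
import email.utils
from datetime import datetime

d = email.utils.parsedate('Thu, 4 Aug 2016 07:02:01 -0700 (PDT)')
# d == (2016, 8, 4, 7, 2, 1, 0, 1, -1); dropping the last two items leaves
# year..second plus a placeholder 0, which becomes the microsecond argument
dt = datetime(*d[:-2])
# dt == datetime(2016, 8, 4, 7, 2, 1)
```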
| 1 | 2016-08-08T10:04:07Z | [
"python",
"datetime"
] |
Python27 - Convert tuple time to datetime object | 38,826,071 | <p>I'm trying to get a timestamp from an email like this:</p>
<pre><code>Received: by 10.64.149.4 with SMTP id tw4csp1211013ieb;
Thu, 4 Aug 2016 07:02:01 -0700 (PDT)
</code></pre>
<p>First of all, I parse the timestamp with:</p>
<pre><code>d = email.utils.parsedate('Thu, 4 Aug 2016 07:02:01 -0700 (PDT)')
Result: (2016, 8, 4, 7, 2, 1, 0, 1, -1)
</code></pre>
<p>Here comes the problem. I try to convert the result to a datetime, but in vain. </p>
<pre><code>d = email.utils.parsedate('Thu, 4 Aug 2016 07:02:01 -0700 (PDT)')
date_object = datetime(d)
Result: Traceback (most recent call last):
File "data.py", line 12, in <module>
date_object = datetime(d)
TypeError: an integer is required
</code></pre>
<p>What's the problem?</p>
| -1 | 2016-08-08T09:51:30Z | 38,826,655 | <p><code>email.utils.parsedate</code> <a href="https://docs.python.org/2/library/email.util.html#email.utils.parsedate" rel="nofollow">returns a 9 tuple similar to the structure <code>struct_time</code> but with the index 6,7 and 8 unusable</a></p>
<p><a href="https://docs.python.org/2/library/time.html#time.struct_time" rel="nofollow"><code>struct_time</code>:</a></p>
<pre><code>Index Attribute Values
0 tm_year (for example, 1993)
1 tm_mon range [1, 12]
2 tm_mday range [1, 31]
3 tm_hour range [0, 23]
4 tm_min range [0, 59]
5 tm_sec range [0, 61]; see (2) in strftime() description
6 tm_wday range [0, 6], Monday is 0
7 tm_yday range [1, 366]
8 tm_isdst 0, 1 or -1
</code></pre>
<p>And <code>datetime</code> objects require different values for its constructor</p>
<p><code>datetime.datetime(year, month, day[, hour[, minute[, second[, microsecond[, tzinfo]]]]])</code></p>
<p>You could directly create a <code>datetime</code> using the useful parts of your tuple as</p>
<p><code>date_object = datetime(*d[0:6])</code></p>
<hr>
<p>Edit: Careful with this, because this will create the object in local time, disregarding the time zone information.</p>
<hr>
<p>Edit 2: You can solve this by using <code>strptime</code> with the <code>%z</code> directive; you just need to cut the <code>(PDT)</code> from the end of your string, since <code>PDT</code> is not a valid name for <code>tzinfo</code>, but <code>-0700</code> is enough. Be aware, though, that <code>datetime.strptime</code> only supports <code>%z</code> on Python 3; on Python 2.7 it raises <code>ValueError</code>.</p>
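<p>A hedged alternative sketch that sidesteps <code>strptime</code> altogether: <code>email.utils.parsedate_tz</code> keeps the offset (and ignores the <code>(PDT)</code> comment on its own), and <code>mktime_tz</code> applies it to give a proper UTC timestamp:</p>

```python
import email.utils
from datetime import datetime

tt = email.utils.parsedate_tz('Thu, 4 Aug 2016 07:02:01 -0700 (PDT)')
ts = email.utils.mktime_tz(tt)       # UTC epoch seconds, -0700 applied
dt = datetime.utcfromtimestamp(ts)
# dt == datetime(2016, 8, 4, 14, 2, 1) -- 07:02 at -0700 is 14:02 UTC
```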
| 1 | 2016-08-08T10:18:06Z | [
"python",
"datetime"
] |
Django : 'WSGIRequest' object has no attribute 'user'? - AuthenticationMiddleware & SessionAuthenticationMiddleware are in sequence | 38,826,118 | <p>Getting the following error while try to access the django admin panel.</p>
<pre><code>Environment:
Request Method: GET
Request URL: http://localhost:8000/admin/
Django Version: 1.9.8
Python Version: 2.7.10
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'bookapp']
Installed Middleware:
['django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware']
Traceback:
File "/Library/Python/2.7/site-packages/django/core/handlers/base.py" in get_response
149. response = self.process_exception_by_middleware(e, request)
File "/Library/Python/2.7/site-packages/django/core/handlers/base.py" in get_response
147. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Library/Python/2.7/site-packages/django/contrib/admin/sites.py" in wrapper
265. return self.admin_view(view, cacheable)(*args, **kwargs)
File "/Library/Python/2.7/site-packages/django/utils/decorators.py" in _wrapped_view
149. response = view_func(request, *args, **kwargs)
File "/Library/Python/2.7/site-packages/django/views/decorators/cache.py" in _wrapped_view_func
57. response = view_func(request, *args, **kwargs)
File "/Library/Python/2.7/site-packages/django/contrib/admin/sites.py" in inner
233. if not self.has_permission(request):
File "/Library/Python/2.7/site-packages/django/contrib/admin/sites.py" in has_permission
173. return request.user.is_active and request.user.is_staff
Exception Type: AttributeError at /admin/
Exception Value: 'WSGIRequest' object has no attribute 'user'
</code></pre>
<p>Here is my middleware settings in settings.py</p>
<pre><code>MIDDLEWARE_CLASSES = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
</code></pre>
<p>I have also tried with clearing cache like cleared migrations and deleted database. But it won't work for me.</p>
<p>Can anybody knows what's the issue here?</p>
| 3 | 2016-08-08T09:53:35Z | 38,994,304 | <p>Django 1.9 only reads the <code>MIDDLEWARE_CLASSES</code> setting (the plain <code>MIDDLEWARE</code> name was only introduced in Django 1.10), so rename <code>MIDDLEWARE</code> to <code>MIDDLEWARE_CLASSES</code> in your settings.</p>
| 4 | 2016-08-17T10:38:23Z | [
"python",
"django",
"wsgi"
] |
How do I have the index of an item in a python list which exactly matches a given regular expression pattern? | 38,826,135 | <p>Suppose I have a list of string items like the following:</p>
<pre><code>lst = ['apple', 'mango', 'MIME']
p = r"MIME" # regex pattern
</code></pre>
<p>Now, I want to have the index of the item which exactly matches the pattern <code>p</code>. Clearly, the answer is <code>2</code>. How do I do it?</p>
| -2 | 2016-08-08T09:54:23Z | 38,826,305 | <p>Here is your code; the <code>re</code> module provides regular expressions. Compile the pattern once, outside the loop:</p>
<pre><code>import re
lst = ['apple', 'mango', 'MIME']
p = r"MIME"  # regex pattern
pattern = re.compile(p)
for i, item in enumerate(lst):
    if pattern.match(item):
        print(i)
</code></pre>
| 0 | 2016-08-08T10:02:09Z | [
"python",
"regex",
"python-3.x"
] |
How do I have the index of an item in a python list which exactly matches a given regular expression pattern? | 38,826,135 | <p>Suppose I have a list of string items like the following:</p>
<pre><code>lst = ['apple', 'mango', 'MIME']
p = r"MIME" # regex pattern
</code></pre>
<p>Now, I want to have the index of the item which exactly matches the pattern <code>p</code>. Clearly, the answer is <code>2</code>. How do I do it?</p>
| -2 | 2016-08-08T09:54:23Z | 38,826,724 | <pre><code>import re
r = re.compile(p)
try:
print(next(i for i in range(0, len(lst)) if r.match(lst[i])))
except StopIteration:
print('Not found')
</code></pre>
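<p>On Python 3.4+ (the question is tagged python-3.x) you can also combine <code>enumerate</code> with <code>re.fullmatch</code>, which enforces the "exactly matches" requirement that a bare <code>match</code> (anchored only at the start of the string) does not:</p>

```python
import re

lst = ['apple', 'mango', 'MIME']
p = r"MIME"  # regex pattern

# fullmatch succeeds only if the whole string matches the pattern
matches = [i for i, s in enumerate(lst) if re.fullmatch(p, s)]
# matches == [2]
```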
| 0 | 2016-08-08T10:21:50Z | [
"python",
"regex",
"python-3.x"
] |
Python-Download video from youtube and convert it to mp3 | 38,826,243 | <p>This code downloads the video and converts it to an mp3 file. However, the resulting mp3 audio is twice as long as the original video. How can I solve this problem?</p>
<pre class="lang-python prettyprint-override"><code>import pafy
import os
import moviepy.editor as mp
print "[+] Welcome to Youtube downloader."
download_url = raw_input("URL :")
video = pafy.new(download_url)
best = video.streams
file_name = video.streams[0]
print file_name
directory = "downloaded-music"
if not os.path.exists(directory):
os.makedirs(directory)
x = file_name.download(filepath = directory)
clip = mp.VideoFileClip(x)
print clip.size
clip.audio.write_audiofile(x + ".mp3")
os.remove(x)
</code></pre>
| 0 | 2016-08-08T09:59:39Z | 38,826,731 | <p>Is it the value of <code>clip.size</code> that is twice the real one, or is the actual length of the file doubled?</p>
| 0 | 2016-08-08T10:22:12Z | [
"python"
] |
Including new app causing AttributeError: 'str' object has no attribute '_meta' in Python/Django app | 38,826,246 | <p>Using Python 2.7 and Django 1.9.9 I'm getting the following error when I try in include an app I am developing within my INSTALLED_APS</p>
<pre><code>Traceback (most recent call last):
File "manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/var/www/cltc/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
utility.execute()
File "/var/www/cltc/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 345, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/var/www/cltc/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 348, in run_from_argv
self.execute(*args, **cmd_options)
File "/var/www/cltc/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 398, in execute
self.check()
File "/var/www/cltc/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 426, in check
include_deployment_checks=include_deployment_checks,
File "/var/www/cltc/env/local/lib/python2.7/site-packages/django/core/checks/registry.py", line 75, in run_checks
new_errors = check(app_configs=app_configs)
File "/var/www/cltc/env/local/lib/python2.7/site-packages/django/core/checks/model_checks.py", line 28, in check_all_models
errors.extend(model.check(**kwargs))
File "/var/www/cltc/env/local/lib/python2.7/site-packages/django/db/models/base.py", line 1180, in check
errors.extend(cls._check_long_column_names())
File "/var/www/cltc/env/local/lib/python2.7/site-packages/django/db/models/base.py", line 1631, in _check_long_column_names
for m2m in f.remote_field.through._meta.local_fields:
AttributeError: 'str' object has no attribute '_meta'
</code></pre>
<p>This is, I believe being caused by something wrong in models.py which looks like this:</p>
<pre><code>from django.db import models
from django.contrib.auth.models import User
from django.conf import settings
import datetime
from django.core.exceptions import ValidationError
from django.utils.translation import ugettext_lazy as _
class Category(models.Model):
name = models.CharField('Category', max_length=30)
age = models.IntegerField('Member age at start of Subscription', default=18)
class Subscription(models.Model):
name = models.CharField('Subscription', max_length=30)
cost = models.DecimalField('Price', max_digits=6, decimal_places=2, default=0.00)
start = models.DateField('Start Date')
end = models.DateField('End Date')
category = models.ManyToManyField(
Category,
through = 'SubscritptionCategory',
related_name = 'category',
verbose_name = 'Membership Category',
help_text = 'Membership Categories included in the Subscription'
)
def __unicode__(self):
return u'%s' % (self.name)
def clean (self):
if self.start > self.end:
raise ValidationError(
_("Start Date must be earlier than End Date"),
)
def is_live(self):
if self.end >= datetime.datetime.now().date():
return True
else:
return False
class SubscriptionCategory (models.Model):
subscription = models.ForeignKey(
Subscription,
verbose_name = 'Subscription',
help_text = 'A class of membership (which could include several members, eg Family).',
)
category = models.ForeignKey(
Category,
verbose_name = 'Category',
help_text = 'A class of member (eg adult)',
)
</code></pre>
<p>Any help most welcome</p>
| 0 | 2016-08-08T09:59:51Z | 38,826,333 | <p>You have a typo in your declaration of the <code>through</code> attribute of Subscription.category: "SubscritptionCategory" rather than "SubscriptionCategory". Because of that, Django can't find the model you're referencing.</p>
<p>Note however that since you don't define any extra fields on that through model, there's not much point having it; your code would be simpler, and many of Django's functions would work better, if you didn't define it.</p>
| 1 | 2016-08-08T10:03:04Z | [
"python",
"django",
"python-2.7"
] |
How to use python-ldap to modify configuration DIT of openldap? | 38,826,302 | <p>For example, I can use the following command to change the RootDN password:</p>
<pre><code>sudo ldapmodify -H ldapi:// -Y EXTERNAL -f newpasswd.ldif
</code></pre>
<p>The contend of newpasswd.ldif is:</p>
<pre><code>dn: olcDatabase={1}mdb,cn=config
changetype: modify
replace: olcRootPW
olcRootPW: {SSHA}/Z6e+b4L6ucglrlI4KsNaX142WDCH6de
</code></pre>
<p>My question is, how can I implement it using python-ldap? I searched for a while, but could not find an answer.</p>
| 0 | 2016-08-08T10:02:03Z | 38,910,813 | <p>I found the solution; here is my code.</p>
<pre><code>import ldap
import ldap.modlist as modlist

def ldap_modify_root():
conn = ldap.initialize("ldapi://")
conn.sasl_external_bind_s()
old = {'olcRootPW': 'xxx'}
new = {'olcRootPW': '{SSHA}/Z6e+b4L6ucglrlI4KsNaX142WDCH6de'}
ldif = modlist.modifyModlist(old, new)
dn = "olcDatabase={1}mdb,cn=config"
conn.modify_s(dn, ldif)
conn.unbind()
</code></pre>
| 0 | 2016-08-12T05:50:18Z | [
"python",
"openldap",
"python-ldap"
] |
Python could not open port | 38,826,334 | <p>I want to communicate with my serial port in python. I installed pyserial for linux:</p>
<pre><code>import thread
import serial
PORT = '/dev/rfcomm0'
BAUDRATE = 921600
TIMEOUT = 1
port = serial.Serial(port=PORT, baudrate=BAUDRATE, timeout=TIMEOUT)
port.open()
...
port.close()
</code></pre>
<p>It gives the following error:</p>
<pre><code>Traceback (most recent call last):
File "/home/dnaphone/PycharmProjects/test/BluetoothClient.py", line 12, in <module>
port = serial.Serial(port=PORT, baudrate=BAUDRATE, timeout=TIMEOUT)
File "/usr/local/lib/python2.7/dist-packages/serial/serialutil.py", line 182, in __init__
self.open()
File "/usr/local/lib/python2.7/dist-packages/serial/serialposix.py", line 247, in open
raise SerialException(msg.errno, "could not open port {}: {}".format(self._port, msg))
serial.serialutil.SerialException: [Errno 2] could not open port /dev/rfcomm0: [Errno 2] No such file or directory: '/dev/rfcomm0'
</code></pre>
| 0 | 2016-08-08T10:03:05Z | 38,842,193 | <p><code>/dev/rfcomm0</code> looks like a BlueZ-registered virtual serial device.
Can you list this device on your system, and did your Bluetooth stack start correctly?</p>
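<p>A minimal stdlib-only pre-flight check (a sketch; the device path is taken from the question) to confirm the node exists before handing it to pyserial:</p>

```python
import os

PORT = '/dev/rfcomm0'

def port_exists(path):
    # /dev/rfcomm* nodes only appear after binding, e.g.
    #   sudo rfcomm bind 0 <bluetooth-address> <channel>
    return os.path.exists(path)

if not port_exists(PORT):
    print("%s not found - bind the Bluetooth device first" % PORT)
```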
| 0 | 2016-08-09T04:38:27Z | [
"python",
"linux",
"python-2.7",
"bluetooth"
] |
How to access request parameters from SET value of on_delete in django model | 38,826,392 | <p>I have about 10 models each of which use User as a Foreign Key. When the user is deleted, I want to set the values in each of these tables to the one who deletes the user. The users are being deleted through django admin area by authorized staff.</p>
<p>So, I guess I have to call a function via the <code>SET</code> option of <code>on_delete</code> to set the value.
This is the code as per the documentation.</p>
<pre><code>from django.db import models
from django.contrib.auth.models import User
def get_sentinel_user():
return User.objects.get_or_create(username='deleted')[0]
class MyModel(models.Model):
user = models.ForeignKey(User, on_delete=models.SET(get_sentinel_user))
</code></pre>
<p>In my case, I want to get the user who deletes the user. That can be obtained only from Request parameters. How can I access request parameters in get_sentinel_user function?</p>
| 2 | 2016-08-08T10:06:02Z | 38,826,751 | <p>First of all, <code>on_delete.SET</code> designed that so it will call callable without any parameters, check it's <a href="https://docs.djangoproject.com/en/dev/_modules/django/db/models/deletion/#SET" rel="nofollow">source code</a>.</p>
<p>The actual solution to your problem can be achieved in many different ways, but good one I think is when you add column <code>deleted_by</code> to your user and make it as <code>ForeignKey</code> to user which deleted this entry. Then instead of actual deletion you add column named <code>is_active</code> and set it to False when someone deletes that user. </p>
<p>You can change delete behavior to update by reading <a href="https://blog.esharesinc.com/supercharging-django-productivity-at-eshares-8dbf9042825e#.711zt81xm" rel="nofollow">this article</a> for example. It contains nice examples and github gists links.</p>
<p>And then what you really need is to pass <code>self</code> to <code>SET</code> method which can be done by using <code>signals</code>. Couple question on StackOverflow: <a href="http://stackoverflow.com/q/34918828/3606603">one</a>, <a href="http://stackoverflow.com/q/36571834/3606603">two</a>.</p>
| 2 | 2016-08-08T10:23:30Z | [
"python",
"django",
"django-models",
"django-rest-framework"
] |
How to access request parameters from SET value of on_delete in django model | 38,826,392 | <p>I have about 10 models each of which use User as a Foreign Key. When the user is deleted, I want to set the values in each of these tables to the one who deletes the user. The users are being deleted through django admin area by authorized staff.</p>
<p>So, I guess I have to call a function via the <code>SET</code> option of <code>on_delete</code> to set the value.
This is the code as per the documentation.</p>
<pre><code>from django.db import models
from django.contrib.auth.models import User
def get_sentinel_user():
return User.objects.get_or_create(username='deleted')[0]
class MyModel(models.Model):
user = models.ForeignKey(User, on_delete=models.SET(get_sentinel_user))
</code></pre>
<p>In my case, I want to get the user who deletes the user. That can be obtained only from Request parameters. How can I access request parameters in get_sentinel_user function?</p>
| 2 | 2016-08-08T10:06:02Z | 38,826,998 | <p>Since a staff member is deleting other users and effectively taking ownership of the objects that belongs to the deleted user the only effective solution is to override the <a href="https://docs.djangoproject.com/en/1.9/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_model" rel="nofollow">delete_model</a> method in <a href="https://docs.djangoproject.com/en/1.9/ref/contrib/admin/" rel="nofollow">Admin</a>.</p>
<pre><code>def delete_model(self, request, obj) :
'''
deletes a single item. changes ownership
'''
MyModel.objects.filter(user_id=obj.pk).update(user_id=request.user.id)
obj.delete()
</code></pre>
<p>Notice that the get_sentinel_user is no longer required because delete_model already has access to request.user</p>
<p>However you are not out of the woods yet. The admin has a bulk delete action. You actually have to <a href="https://docs.djangoproject.com/en/1.9/ref/contrib/admin/actions/#django.contrib.admin.ModelAdmin.get_actions" rel="nofollow">disable the bulk delete action</a> for the user admin and replace it with one of your own for you to be in full control.</p>
<blockquote>
<p>The âdelete selected objectsâ action uses QuerySet.delete() for
efficiency reasons, which has an important caveat: your modelâs
delete() method will not be called.</p>
</blockquote>
<p>Please note that bulk delete does not fire any signals for each object that's deleted, so the only other Django-based solution that works here is to override the delete method on the User manager. However, the user manager will not have access to the <code>request.user</code> instance either.</p>
| 2 | 2016-08-08T10:35:08Z | [
"python",
"django",
"django-models",
"django-rest-framework"
] |
Create set and lists with the positions in the set efficiently | 38,826,426 | <p>I need to create a set of the IDs of some messages, and the positions in the original list. The code is used to sort the messages and later handle them according to ID.</p>
<p>The following works, is readable, but slow. </p>
<pre><code>import numpy as np
IDs=np.array([354,45,45,34,354])#example, the actual array is huge
Dict={}
for counter in xrange(len(IDs)):
try:
Dict[IDs[counter]].append(counter)
except:
Dict[IDs[counter]]=[counter]
print(Dict)
#{354: [0, 4], 34: [3], 45: [1, 2]}
</code></pre>
<p>Any ideas how to speed it up? There is no need for the lists to be sorted. Later in the code is used as follows, and after that the dict is discarded</p>
<pre><code>for item in Dict.values():
Position_of_ID=Position[np.array(item)]
...
</code></pre>
| 0 | 2016-08-08T10:08:04Z | 38,826,574 | <p>Try to use <code>defaultdict</code> and <code>enumerate</code>:</p>
<pre><code>from collections import defaultdict
Dict = defaultdict(list)
for i,id in enumerate(IDs):
Dict[id].append(i)
</code></pre>
<p>(using <code>try</code> and <code>except</code> is a bad idea <a href="http://stackoverflow.com/a/8108440/2069380">if the exceptions aren't rare</a>)</p>
| 1 | 2016-08-08T10:14:30Z | [
"python",
"dictionary"
] |
Create set and lists with the positions in the set efficiently | 38,826,426 | <p>I need to create a set of the IDs of some messages, and the positions in the original list. The code is used to sort the messages and later handle them according to ID.</p>
<p>The following works, is readable, but slow. </p>
<pre><code>import numpy as np
IDs=np.array([354,45,45,34,354])#example, the actual array is huge
Dict={}
for counter in xrange(len(IDs)):
try:
Dict[IDs[counter]].append(counter)
except:
Dict[IDs[counter]]=[counter]
print(Dict)
#{354: [0, 4], 34: [3], 45: [1, 2]}
</code></pre>
<p>Any ideas how to speed it up? There is no need for the lists to be sorted. Later in the code is used as follows, and after that the dict is discarded</p>
<pre><code>for item in Dict.values():
Position_of_ID=Position[np.array(item)]
...
</code></pre>
| 0 | 2016-08-08T10:08:04Z | 38,843,179 | <p>The fastest code I came up with, is this. It does much more math, is not as readable, and I am not proud, but it is a lot quicker (even with large arrays):</p>
<pre><code> Sorted_positions_of_IDs=np.argsort(IDs,kind='mergesort')
SortedIDs=IDs[Sorted_positions_of_IDs]
Position=0
Position_last=-1
Dict={}
while(Position<len(Sorted_positions_of_IDs)):
ID=SortedIDs[Position]
Position_last=np.searchsorted(SortedIDs,ID,side='right')
Dict[ID]=Sorted_positions_of_IDs[Position:Position_last]
Position=Position_last
</code></pre>
<p>Anyway, good Ideas will be appreciated.</p>
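<p>A vectorised sketch of the same grouping (assuming numpy, which the question already imports) that replaces the explicit <code>while</code> loop with <code>np.unique</code> and <code>np.split</code>:</p>

```python
import numpy as np

IDs = np.array([354, 45, 45, 34, 354])
order = np.argsort(IDs, kind='mergesort')   # stable sort of the positions
unique_ids, starts = np.unique(IDs[order], return_index=True)
# split the sorted positions wherever a new ID begins:
# one array of original positions per unique ID
groups = dict(zip(unique_ids, np.split(order, starts[1:])))
# groups[354].tolist() == [0, 4]
```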
| 0 | 2016-08-09T06:04:23Z | [
"python",
"dictionary"
] |
How to convert certain categorical values from a DataFrame to numerical(int) in python? | 38,826,468 | <p>I have a dataframe with multiple columns and categorical data in it which I want to assign a numerical (int) value in order to proceed with the data clean-up I need to do.</p>
<p>e.g. I want the cells in the column OldValue & NewValue containing "1st Call" to have a value of 2, "2nd Call" to have a value of 3, and so on...</p>
<p>I post a <a href="http://i.stack.imgur.com/M9T4w.png" rel="nofollow">Screenshot</a> of my dataframe so you understand what I mean.</p>
<p>I am new to programming languages hence if you could please put a practical example to your answer it would be of huge help.</p>
| -1 | 2016-08-08T10:09:49Z | 38,826,810 | <p>You may use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html" rel="nofollow">replace</a> and passing a dictinary which maps each category on a numerical value and then add the new column to your DataFrame:</p>
<pre><code>df['OldValueInt'] = df['OldValue'].replace( {'1st Call attempted': 2, '2nd Call attempted': 3})
</code></pre>
<p>Example:</p>
<pre><code>df = pd.DataFrame([['a','x'],['b','x'],['a','y']], columns=['ab','xy'])
df['abInt'] = df['ab'].replace({'a': 1, 'b': 2})
print df
</code></pre>
<p>which yields</p>
<pre><code> ab xy abInt
0 a x 1
1 b x 2
2 a y 1
</code></pre>
<p>or if you want to replace multiple columns:</p>
<pre><code>df[['ab','xy']] = df.replace( {'ab': {'a': 1, 'b': 2},
'xy': {'x': 2, 'y': 3}} )
</code></pre>
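<p>An equivalent sketch using <code>Series.map</code> (the column values here are illustrative, taken from the question's description of its screenshot):</p>

```python
import pandas as pd

df = pd.DataFrame({'OldValue': ['1st Call attempted', '2nd Call attempted',
                                '1st Call attempted']})
mapping = {'1st Call attempted': 2, '2nd Call attempted': 3}
df['OldValueInt'] = df['OldValue'].map(mapping)
# df['OldValueInt'].tolist() == [2, 3, 2]
```

<p>Unlike <code>replace</code>, <code>map</code> turns any value missing from the dictionary into <code>NaN</code>, which makes unmapped categories easy to spot.</p>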
| 0 | 2016-08-08T10:26:33Z | [
"python",
"dataframe",
"converter"
] |
Use of Dictionary in Python | 38,826,477 | <p>I am doing a Coursera Python exercise and having trouble writing my code.</p>
<p>The question is as following:</p>
<p>Write a program to read through the mbox-short.txt and figure out who has the sent the greatest number of mail messages. The program looks for 'From ' lines and takes the second word of those lines as the person who sent the mail.</p>
<p>The program creates a Python dictionary that maps the sender's mail address to a count of the number of times they appear in the file. After the dictionary is produced, the program reads through the dictionary using a maximum loop to find the most prolific committer.
The sample text file is in this line: <a href="http://www.pythonlearn.com/code/mbox-short.txt" rel="nofollow">http://www.pythonlearn.com/code/mbox-short.txt</a></p>
<p>And the expected output should be:</p>
<p>cwen@iupui.edu 5</p>
<p>This is my code:</p>
<pre><code> name = raw_input("Enter file:")
if len(name) < 1 : name = "mbox-short.txt"
name="mbox-short.txt"
handle=open(name)
text=handle.read()
for line in handle:
line=line.rstrip()
words=line.split()
if words==[]: continue
if words[0]!='From':continue
words2=words[1]
words3=words2.split()
counts=dict()
for word in words3:
counts[word]=counts.get(word,0)+1
bigcount=None
bigword=None
for key,val in counts.items():
if val>bigcount:
bigword=key
bigcount=val
print bigword,bigcount
</code></pre>
<p>My Output is:
cwen@iupui.edu 1</p>
<p>Please suggest where is the error in my code. </p>
| -3 | 2016-08-08T10:10:13Z | 38,826,798 | <p>Here is the code you need. You weren't storing the <code>words2</code> values in a list, and, as mentioned in the comments, <code>text=handle.read()</code> consumed the whole file, so the <code>for line in handle</code> loop that followed had nothing left to iterate over.</p>
<p>Hope this will help you.</p>
<pre><code>name = raw_input("Enter file:")
if len(name) < 1 : name = "mbox-short.txt"
name="mbox-short.txt"
handle=open(name)
words3 = []
for line in handle:
line=line.rstrip()
words=line.split()
if words==[]: continue
if words[0]!='From':continue
words2=words[1]
words3.append(words2.split()[0])
# print words
counts=dict()
for word in words3:
counts[word]=counts.get(word,0)+1
bigcount=None
bigword=None
for key,val in counts.items():
if val>bigcount:
bigword=key
bigcount=val
print bigword,bigcount
</code></pre>
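<p><em>Editor's sketch (not part of the original answer):</em> once the sender addresses are collected, the counting and the maximum search can also be done with the standard library's <code>collections.Counter</code>. The sample lines below are a made-up stand-in for the contents of <code>mbox-short.txt</code>:</p>

```python
from collections import Counter

# Hypothetical sample standing in for lines read from mbox-short.txt
lines = [
    "From cwen@iupui.edu Thu Jan  3 16:23:48 2008",
    "From zqian@umich.edu Thu Jan  3 16:10:39 2008",
    "From cwen@iupui.edu Thu Jan  3 16:29:07 2008",
]

# Take the second word of every 'From ' line, as the loop above does
senders = [words[1] for words in (line.split() for line in lines)
           if words and words[0] == "From"]

counts = Counter(senders)
bigword, bigcount = counts.most_common(1)[0]
print(bigword, bigcount)  # cwen@iupui.edu 2
```

<p><code>Counter.most_common(1)</code> replaces the manual <code>bigcount</code>/<code>bigword</code> loop.</p>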
| 0 | 2016-08-08T10:25:49Z | [
"python",
"python-2.7",
"python-3.x",
"dictionary"
] |
Python Xarray add DataArray to Dataset | 38,826,505 | <p>Very simple question but I can't find the answer online. I have a <code>Dataset</code> and I just want to add a named <code>DataArray</code> to it. Something like <code>dataset.add({"new_array": new_data_array})</code>. I know about <code>merge</code> and <code>update</code> and <code>concatenate</code>, but my understanding is that <code>merge</code> is for merging two or more <code>Dataset</code>s and <code>concatenate</code> is for concatenating two or more <code>DataArray</code>s to form another <code>DataArray</code>, and I haven't quite fully understood <code>update</code> yet. I've tried <code>dataset.update({"new_array": new_data_array})</code> but I get the following error.</p>
<pre><code>InvalidIndexError: Reindexing only valid with uniquely valued Index objects
</code></pre>
<p>I've also tried <code>dataset["new_array"] = new_data_array</code> and I get the same error.</p>
<h1>Update</h1>
<p>I've now found out that the problem is that some of my coordinates have duplicate values, which I didn't know about. Coordinates are used as index, so Xarray gets confused (understandably) when trying to combine the shared coordinates. Below is an example that works.</p>
<pre><code>names = ["joaquin", "manolo", "xavier"]
n = xarray.DataArray([23, 98, 23], coords={"name": names})
print(n)
print("======")
m = numpy.random.randint(0, 256, (3, 4, 4)).astype(numpy.uint8)
mm = xarray.DataArray(m, dims=["name", "row", "column"], coords=[names, range(4), range(4)])
print(mm)
print("======")
n_dataset = n.rename("number").to_dataset()
n_dataset["mm"] = mm
print(n_dataset)
</code></pre>
<p>Output:</p>
<pre><code><xarray.DataArray (name: 3)>
array([23, 98, 23])
Coordinates:
* name (name) <U7 'joaquin' 'manolo' 'xavier'
======
<xarray.DataArray (name: 3, row: 4, column: 4)>
array([[[ 55, 63, 250, 211],
[204, 151, 164, 237],
[182, 24, 211, 12],
[183, 220, 35, 78]],
[[208, 7, 91, 114],
[195, 30, 108, 130],
[ 61, 224, 105, 125],
[ 65, 1, 132, 137]],
[[ 52, 137, 62, 206],
[188, 160, 156, 126],
[145, 223, 103, 240],
[141, 38, 43, 68]]], dtype=uint8)
Coordinates:
* name (name) <U7 'joaquin' 'manolo' 'xavier'
* row (row) int64 0 1 2 3
* column (column) int64 0 1 2 3
======
<xarray.Dataset>
Dimensions: (column: 4, name: 3, row: 4)
Coordinates:
* name (name) object 'joaquin' 'manolo' 'xavier'
* row (row) int64 0 1 2 3
* column (column) int64 0 1 2 3
Data variables:
number (name) int64 23 98 23
mm (name, row, column) uint8 55 63 250 211 204 151 164 237 182 24 ...
</code></pre>
<p>The above code uses <code>names</code> as the index. If I change the code a little bit, so that <code>names</code> has a duplicate, say <code>names = ["joaquin", "manolo", "joaquin"]</code>, then I get an <code>InvalidIndexError</code>.</p>
<p>Code:</p>
<pre><code>names = ["joaquin", "manolo", "joaquin"]
n = xarray.DataArray([23, 98, 23], coords={"name": names})
print(n)
print("======")
m = numpy.random.randint(0, 256, (3, 4, 4)).astype(numpy.uint8)
mm = xarray.DataArray(m, dims=["name", "row", "column"], coords=[names, range(4), range(4)])
print(mm)
print("======")
n_dataset = n.rename("number").to_dataset()
n_dataset["mm"] = mm
print(n_dataset)
</code></pre>
<p>Output:</p>
<pre><code><xarray.DataArray (name: 3)>
array([23, 98, 23])
Coordinates:
* name (name) <U7 'joaquin' 'manolo' 'joaquin'
======
<xarray.DataArray (name: 3, row: 4, column: 4)>
array([[[247, 3, 20, 141],
[ 54, 111, 224, 56],
[144, 117, 131, 192],
[230, 44, 174, 14]],
[[225, 184, 170, 248],
[ 57, 105, 165, 70],
[220, 228, 238, 17],
[ 90, 118, 87, 30]],
[[158, 211, 31, 212],
[ 63, 172, 190, 254],
[165, 163, 184, 22],
[ 49, 224, 196, 244]]], dtype=uint8)
Coordinates:
* name (name) <U7 'joaquin' 'manolo' 'joaquin'
* row (row) int64 0 1 2 3
* column (column) int64 0 1 2 3
======
---------------------------------------------------------------------------
InvalidIndexError Traceback (most recent call last)
<ipython-input-12-50863379cefe> in <module>()
8 print("======")
9 n_dataset = n.rename("number").to_dataset()
---> 10 n_dataset["mm"] = mm
11 print(n_dataset)
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/dataset.py in __setitem__(self, key, value)
536 raise NotImplementedError('cannot yet use a dictionary as a key '
537 'to set Dataset values')
--> 538 self.update({key: value})
539
540 def __delitem__(self, key):
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/dataset.py in update(self, other, inplace)
1434 dataset.
1435 """
-> 1436 variables, coord_names, dims = dataset_update_method(self, other)
1437
1438 return self._replace_vars_and_dims(variables, coord_names, dims,
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/merge.py in dataset_update_method(dataset, other)
492 priority_arg = 1
493 indexes = dataset.indexes
--> 494 return merge_core(objs, priority_arg=priority_arg, indexes=indexes)
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/merge.py in merge_core(objs, compat, join, priority_arg, explicit_coords, indexes)
373 coerced = coerce_pandas_values(objs)
374 aligned = deep_align(coerced, join=join, copy=False, indexes=indexes,
--> 375 skip_single_target=True)
376 expanded = expand_variable_dicts(aligned)
377
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/alignment.py in deep_align(list_of_variable_maps, join, copy, indexes, skip_single_target)
162
163 aligned = partial_align(*targets, join=join, copy=copy, indexes=indexes,
--> 164 skip_single_target=skip_single_target)
165
166 for key, aligned_obj in zip(keys, aligned):
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/alignment.py in partial_align(*objects, **kwargs)
122 valid_indexers = dict((k, v) for k, v in joined_indexes.items()
123 if k in obj.dims)
--> 124 result.append(obj.reindex(copy=copy, **valid_indexers))
125
126 return tuple(result)
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/dataset.py in reindex(self, indexers, method, tolerance, copy, **kw_indexers)
1216
1217 variables = alignment.reindex_variables(
-> 1218 self.variables, self.indexes, indexers, method, tolerance, copy=copy)
1219 return self._replace_vars_and_dims(variables)
1220
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/alignment.py in reindex_variables(variables, indexes, indexers, method, tolerance, copy)
234 target = utils.safe_cast_to_index(indexers[name])
235 indexer = index.get_indexer(target, method=method,
--> 236 **get_indexer_kwargs)
237
238 to_shape[name] = len(target)
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pandas/indexes/base.py in get_indexer(self, target, method, limit, tolerance)
2080
2081 if not self.is_unique:
-> 2082 raise InvalidIndexError('Reindexing only valid with uniquely'
2083 ' valued Index objects')
2084
InvalidIndexError: Reindexing only valid with uniquely valued Index objects
</code></pre>
<p>So it's not a bug in Xarray as such. Nevertheless, I wasted many hours trying to find this bug, and I wish the error message was more informative. I hope the Xarray collaborators will fix this soon. (Put in a uniqueness check on the coordinates before attempting to merge.)</p>
<p>In any case, the method provided by my answer below still works.</p>
| 0 | 2016-08-08T10:11:17Z | 38,826,718 | <p>OK I found one way to do it but I don't know if this is the canonical way or the best way, so please criticise and advise. It doesn't feel like a good way of doing it.</p>
<pre><code>dataset = xarray.merge([dataset, new_data_array.rename("new_array")])
</code></pre>
| 0 | 2016-08-08T10:21:40Z | [
"python",
"python-xarray"
] |
Python Xarray add DataArray to Dataset | 38,826,505 | <p>Very simple question but I can't find the answer online. I have a <code>Dataset</code> and I just want to add a named <code>DataArray</code> to it. Something like <code>dataset.add({"new_array": new_data_array})</code>. I know about <code>merge</code> and <code>update</code> and <code>concatenate</code>, but my understanding is that <code>merge</code> is for merging two or more <code>Dataset</code>s and <code>concatenate</code> is for concatenating two or more <code>DataArray</code>s to form another <code>DataArray</code>, and I haven't quite fully understood <code>update</code> yet. I've tried <code>dataset.update({"new_array": new_data_array})</code> but I get the following error.</p>
<pre><code>InvalidIndexError: Reindexing only valid with uniquely valued Index objects
</code></pre>
<p>I've also tried <code>dataset["new_array"] = new_data_array</code> and I get the same error.</p>
<h1>Update</h1>
<p>I've now found out that the problem is that some of my coordinates have duplicate values, which I didn't know about. Coordinates are used as index, so Xarray gets confused (understandably) when trying to combine the shared coordinates. Below is an example that works.</p>
<pre><code>names = ["joaquin", "manolo", "xavier"]
n = xarray.DataArray([23, 98, 23], coords={"name": names})
print(n)
print("======")
m = numpy.random.randint(0, 256, (3, 4, 4)).astype(numpy.uint8)
mm = xarray.DataArray(m, dims=["name", "row", "column"], coords=[names, range(4), range(4)])
print(mm)
print("======")
n_dataset = n.rename("number").to_dataset()
n_dataset["mm"] = mm
print(n_dataset)
</code></pre>
<p>Output:</p>
<pre><code><xarray.DataArray (name: 3)>
array([23, 98, 23])
Coordinates:
* name (name) <U7 'joaquin' 'manolo' 'xavier'
======
<xarray.DataArray (name: 3, row: 4, column: 4)>
array([[[ 55, 63, 250, 211],
[204, 151, 164, 237],
[182, 24, 211, 12],
[183, 220, 35, 78]],
[[208, 7, 91, 114],
[195, 30, 108, 130],
[ 61, 224, 105, 125],
[ 65, 1, 132, 137]],
[[ 52, 137, 62, 206],
[188, 160, 156, 126],
[145, 223, 103, 240],
[141, 38, 43, 68]]], dtype=uint8)
Coordinates:
* name (name) <U7 'joaquin' 'manolo' 'xavier'
* row (row) int64 0 1 2 3
* column (column) int64 0 1 2 3
======
<xarray.Dataset>
Dimensions: (column: 4, name: 3, row: 4)
Coordinates:
* name (name) object 'joaquin' 'manolo' 'xavier'
* row (row) int64 0 1 2 3
* column (column) int64 0 1 2 3
Data variables:
number (name) int64 23 98 23
mm (name, row, column) uint8 55 63 250 211 204 151 164 237 182 24 ...
</code></pre>
<p>The above code uses <code>names</code> as the index. If I change the code a little bit, so that <code>names</code> has a duplicate, say <code>names = ["joaquin", "manolo", "joaquin"]</code>, then I get an <code>InvalidIndexError</code>.</p>
<p>Code:</p>
<pre><code>names = ["joaquin", "manolo", "joaquin"]
n = xarray.DataArray([23, 98, 23], coords={"name": names})
print(n)
print("======")
m = numpy.random.randint(0, 256, (3, 4, 4)).astype(numpy.uint8)
mm = xarray.DataArray(m, dims=["name", "row", "column"], coords=[names, range(4), range(4)])
print(mm)
print("======")
n_dataset = n.rename("number").to_dataset()
n_dataset["mm"] = mm
print(n_dataset)
</code></pre>
<p>Output:</p>
<pre><code><xarray.DataArray (name: 3)>
array([23, 98, 23])
Coordinates:
* name (name) <U7 'joaquin' 'manolo' 'joaquin'
======
<xarray.DataArray (name: 3, row: 4, column: 4)>
array([[[247, 3, 20, 141],
[ 54, 111, 224, 56],
[144, 117, 131, 192],
[230, 44, 174, 14]],
[[225, 184, 170, 248],
[ 57, 105, 165, 70],
[220, 228, 238, 17],
[ 90, 118, 87, 30]],
[[158, 211, 31, 212],
[ 63, 172, 190, 254],
[165, 163, 184, 22],
[ 49, 224, 196, 244]]], dtype=uint8)
Coordinates:
* name (name) <U7 'joaquin' 'manolo' 'joaquin'
* row (row) int64 0 1 2 3
* column (column) int64 0 1 2 3
======
---------------------------------------------------------------------------
InvalidIndexError Traceback (most recent call last)
<ipython-input-12-50863379cefe> in <module>()
8 print("======")
9 n_dataset = n.rename("number").to_dataset()
---> 10 n_dataset["mm"] = mm
11 print(n_dataset)
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/dataset.py in __setitem__(self, key, value)
536 raise NotImplementedError('cannot yet use a dictionary as a key '
537 'to set Dataset values')
--> 538 self.update({key: value})
539
540 def __delitem__(self, key):
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/dataset.py in update(self, other, inplace)
1434 dataset.
1435 """
-> 1436 variables, coord_names, dims = dataset_update_method(self, other)
1437
1438 return self._replace_vars_and_dims(variables, coord_names, dims,
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/merge.py in dataset_update_method(dataset, other)
492 priority_arg = 1
493 indexes = dataset.indexes
--> 494 return merge_core(objs, priority_arg=priority_arg, indexes=indexes)
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/merge.py in merge_core(objs, compat, join, priority_arg, explicit_coords, indexes)
373 coerced = coerce_pandas_values(objs)
374 aligned = deep_align(coerced, join=join, copy=False, indexes=indexes,
--> 375 skip_single_target=True)
376 expanded = expand_variable_dicts(aligned)
377
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/alignment.py in deep_align(list_of_variable_maps, join, copy, indexes, skip_single_target)
162
163 aligned = partial_align(*targets, join=join, copy=copy, indexes=indexes,
--> 164 skip_single_target=skip_single_target)
165
166 for key, aligned_obj in zip(keys, aligned):
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/alignment.py in partial_align(*objects, **kwargs)
122 valid_indexers = dict((k, v) for k, v in joined_indexes.items()
123 if k in obj.dims)
--> 124 result.append(obj.reindex(copy=copy, **valid_indexers))
125
126 return tuple(result)
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/dataset.py in reindex(self, indexers, method, tolerance, copy, **kw_indexers)
1216
1217 variables = alignment.reindex_variables(
-> 1218 self.variables, self.indexes, indexers, method, tolerance, copy=copy)
1219 return self._replace_vars_and_dims(variables)
1220
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/alignment.py in reindex_variables(variables, indexes, indexers, method, tolerance, copy)
234 target = utils.safe_cast_to_index(indexers[name])
235 indexer = index.get_indexer(target, method=method,
--> 236 **get_indexer_kwargs)
237
238 to_shape[name] = len(target)
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pandas/indexes/base.py in get_indexer(self, target, method, limit, tolerance)
2080
2081 if not self.is_unique:
-> 2082 raise InvalidIndexError('Reindexing only valid with uniquely'
2083 ' valued Index objects')
2084
InvalidIndexError: Reindexing only valid with uniquely valued Index objects
</code></pre>
<p>So it's not a bug in Xarray as such. Nevertheless, I wasted many hours trying to find this bug, and I wish the error message was more informative. I hope the Xarray collaborators will fix this soon. (Put in a uniqueness check on the coordinates before attempting to merge.)</p>
<p>In any case, the method provided by my answer below still works.</p>
| 0 | 2016-08-08T10:11:17Z | 38,833,440 | <p>You need to make sure that the dimensions of your new DataArray are the same as in your dataset. Then the following should work:</p>
<pre><code>dataset['new_array_name'] = new_array
</code></pre>
<p>Here is a complete example to try it out:</p>
<pre><code># Create some dimensions
x = np.linspace(-10,10,10)
y = np.linspace(-20,20,20)
(yy, xx) = np.meshgrid(y,x)
# Make two different DataArrays with equal dimensions
var1 = xray.DataArray(np.random.randn(len(x),len(y)),coords=[x, y],dims=['x','y'])
var2 = xray.DataArray(-xx**2+yy**2,coords=[x, y],dims=['x','y'])
# Save one DataArray as dataset
ds = var1.to_dataset(name = 'var1')
# Add second DataArray to existing dataset (ds)
ds['var2'] = var2
</code></pre>
| 2 | 2016-08-08T15:42:40Z | [
"python",
"python-xarray"
] |
Python Xarray add DataArray to Dataset | 38,826,505 | <p>Very simple question but I can't find the answer online. I have a <code>Dataset</code> and I just want to add a named <code>DataArray</code> to it. Something like <code>dataset.add({"new_array": new_data_array})</code>. I know about <code>merge</code> and <code>update</code> and <code>concatenate</code>, but my understanding is that <code>merge</code> is for merging two or more <code>Dataset</code>s and <code>concatenate</code> is for concatenating two or more <code>DataArray</code>s to form another <code>DataArray</code>, and I haven't quite fully understood <code>update</code> yet. I've tried <code>dataset.update({"new_array": new_data_array})</code> but I get the following error.</p>
<pre><code>InvalidIndexError: Reindexing only valid with uniquely valued Index objects
</code></pre>
<p>I've also tried <code>dataset["new_array"] = new_data_array</code> and I get the same error.</p>
<h1>Update</h1>
<p>I've now found out that the problem is that some of my coordinates have duplicate values, which I didn't know about. Coordinates are used as index, so Xarray gets confused (understandably) when trying to combine the shared coordinates. Below is an example that works.</p>
<pre><code>names = ["joaquin", "manolo", "xavier"]
n = xarray.DataArray([23, 98, 23], coords={"name": names})
print(n)
print("======")
m = numpy.random.randint(0, 256, (3, 4, 4)).astype(numpy.uint8)
mm = xarray.DataArray(m, dims=["name", "row", "column"], coords=[names, range(4), range(4)])
print(mm)
print("======")
n_dataset = n.rename("number").to_dataset()
n_dataset["mm"] = mm
print(n_dataset)
</code></pre>
<p>Output:</p>
<pre><code><xarray.DataArray (name: 3)>
array([23, 98, 23])
Coordinates:
* name (name) <U7 'joaquin' 'manolo' 'xavier'
======
<xarray.DataArray (name: 3, row: 4, column: 4)>
array([[[ 55, 63, 250, 211],
[204, 151, 164, 237],
[182, 24, 211, 12],
[183, 220, 35, 78]],
[[208, 7, 91, 114],
[195, 30, 108, 130],
[ 61, 224, 105, 125],
[ 65, 1, 132, 137]],
[[ 52, 137, 62, 206],
[188, 160, 156, 126],
[145, 223, 103, 240],
[141, 38, 43, 68]]], dtype=uint8)
Coordinates:
* name (name) <U7 'joaquin' 'manolo' 'xavier'
* row (row) int64 0 1 2 3
* column (column) int64 0 1 2 3
======
<xarray.Dataset>
Dimensions: (column: 4, name: 3, row: 4)
Coordinates:
* name (name) object 'joaquin' 'manolo' 'xavier'
* row (row) int64 0 1 2 3
* column (column) int64 0 1 2 3
Data variables:
number (name) int64 23 98 23
mm (name, row, column) uint8 55 63 250 211 204 151 164 237 182 24 ...
</code></pre>
<p>The above code uses <code>names</code> as the index. If I change the code a little bit, so that <code>names</code> has a duplicate, say <code>names = ["joaquin", "manolo", "joaquin"]</code>, then I get an <code>InvalidIndexError</code>.</p>
<p>Code:</p>
<pre><code>names = ["joaquin", "manolo", "joaquin"]
n = xarray.DataArray([23, 98, 23], coords={"name": names})
print(n)
print("======")
m = numpy.random.randint(0, 256, (3, 4, 4)).astype(numpy.uint8)
mm = xarray.DataArray(m, dims=["name", "row", "column"], coords=[names, range(4), range(4)])
print(mm)
print("======")
n_dataset = n.rename("number").to_dataset()
n_dataset["mm"] = mm
print(n_dataset)
</code></pre>
<p>Output:</p>
<pre><code><xarray.DataArray (name: 3)>
array([23, 98, 23])
Coordinates:
* name (name) <U7 'joaquin' 'manolo' 'joaquin'
======
<xarray.DataArray (name: 3, row: 4, column: 4)>
array([[[247, 3, 20, 141],
[ 54, 111, 224, 56],
[144, 117, 131, 192],
[230, 44, 174, 14]],
[[225, 184, 170, 248],
[ 57, 105, 165, 70],
[220, 228, 238, 17],
[ 90, 118, 87, 30]],
[[158, 211, 31, 212],
[ 63, 172, 190, 254],
[165, 163, 184, 22],
[ 49, 224, 196, 244]]], dtype=uint8)
Coordinates:
* name (name) <U7 'joaquin' 'manolo' 'joaquin'
* row (row) int64 0 1 2 3
* column (column) int64 0 1 2 3
======
---------------------------------------------------------------------------
InvalidIndexError Traceback (most recent call last)
<ipython-input-12-50863379cefe> in <module>()
8 print("======")
9 n_dataset = n.rename("number").to_dataset()
---> 10 n_dataset["mm"] = mm
11 print(n_dataset)
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/dataset.py in __setitem__(self, key, value)
536 raise NotImplementedError('cannot yet use a dictionary as a key '
537 'to set Dataset values')
--> 538 self.update({key: value})
539
540 def __delitem__(self, key):
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/dataset.py in update(self, other, inplace)
1434 dataset.
1435 """
-> 1436 variables, coord_names, dims = dataset_update_method(self, other)
1437
1438 return self._replace_vars_and_dims(variables, coord_names, dims,
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/merge.py in dataset_update_method(dataset, other)
492 priority_arg = 1
493 indexes = dataset.indexes
--> 494 return merge_core(objs, priority_arg=priority_arg, indexes=indexes)
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/merge.py in merge_core(objs, compat, join, priority_arg, explicit_coords, indexes)
373 coerced = coerce_pandas_values(objs)
374 aligned = deep_align(coerced, join=join, copy=False, indexes=indexes,
--> 375 skip_single_target=True)
376 expanded = expand_variable_dicts(aligned)
377
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/alignment.py in deep_align(list_of_variable_maps, join, copy, indexes, skip_single_target)
162
163 aligned = partial_align(*targets, join=join, copy=copy, indexes=indexes,
--> 164 skip_single_target=skip_single_target)
165
166 for key, aligned_obj in zip(keys, aligned):
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/alignment.py in partial_align(*objects, **kwargs)
122 valid_indexers = dict((k, v) for k, v in joined_indexes.items()
123 if k in obj.dims)
--> 124 result.append(obj.reindex(copy=copy, **valid_indexers))
125
126 return tuple(result)
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/dataset.py in reindex(self, indexers, method, tolerance, copy, **kw_indexers)
1216
1217 variables = alignment.reindex_variables(
-> 1218 self.variables, self.indexes, indexers, method, tolerance, copy=copy)
1219 return self._replace_vars_and_dims(variables)
1220
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xarray/core/alignment.py in reindex_variables(variables, indexes, indexers, method, tolerance, copy)
234 target = utils.safe_cast_to_index(indexers[name])
235 indexer = index.get_indexer(target, method=method,
--> 236 **get_indexer_kwargs)
237
238 to_shape[name] = len(target)
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pandas/indexes/base.py in get_indexer(self, target, method, limit, tolerance)
2080
2081 if not self.is_unique:
-> 2082 raise InvalidIndexError('Reindexing only valid with uniquely'
2083 ' valued Index objects')
2084
InvalidIndexError: Reindexing only valid with uniquely valued Index objects
</code></pre>
<p>So it's not a bug in Xarray as such. Nevertheless, I wasted many hours trying to find this bug, and I wish the error message was more informative. I hope the Xarray collaborators will fix this soon. (Put in a uniqueness check on the coordinates before attempting to merge.)</p>
<p>In any case, the method provided by my answer below still works.</p>
| 0 | 2016-08-08T10:11:17Z | 39,049,786 | <p>Thanks to your detailed report, this issue has now been fixed in the latest release of xarray (v0.8.2).</p>
<p>We fixed the behavior in two ways:</p>
<ol>
<li><p>Alignment operations between xarray objects now succeed even with non-unique indexes, as long as the non-unique indexes take on identical values on all objects.</p></li>
<li><p>If you attempt to align objects with non-unique indexes that are <em>not</em> identical, you now get an informative error message reporting the name of the index with duplicate values, e.g., <code>ValueError: cannot reindex or align along dimension 'x' because the index has duplicate values</code>.</p></li>
</ol>
| 2 | 2016-08-20T02:05:15Z | [
"python",
"python-xarray"
] |
Check for letter after a substring | 38,826,710 | <p>I'm trying to find a way to check for a letter after a sub-string. I have searched for it on the internet, including Google. So if the sub-string is: <code>sub</code>, and the text is: <code>substrings are cool</code>, it should return <code>True</code> and if the text is: <code>sub dafdgnjgf</code> it should return <code>False</code>. </p>
| -1 | 2016-08-08T10:21:14Z | 38,827,072 | <p>If you're looking for <em>any</em> valid letter after a given sub-string, you can first find the position of that sub-string inside the string with <code>str.find</code>, then check if the index after the sub-string matches <code>ascii_letters</code> from <code>string</code>:</p>
<pre><code>from string import ascii_letters
sub = 'sub'
s = 'substrings are cool'
</code></pre>
<p>Now, your check can look like this, index the string after at the position <code>str.find(sub) + len(sub)</code> i.e the position after the sub-string:</p>
<pre><code>if s[s.find(sub) + len(sub)] in ascii_letters:
print(True)
else:
print(False)
</code></pre>
<p>This prints <code>True</code> if <code>sub</code> is followed by a letter, if not:</p>
<pre><code>s = 'sub dafdgnjgf'
</code></pre>
<p>it prints <code>False</code>.</p>
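<p><em>Editor's note (not from the original answer):</em> the snippet above assumes the sub-string is present and is not at the very end of the string; <code>str.find</code> returns <code>-1</code> on a miss, and indexing one past the last character raises an <code>IndexError</code>. A defensive sketch that also handles those cases (using <code>str.isalpha</code>, which additionally accepts non-ASCII letters):</p>

```python
def letter_follows(s, sub):
    """Return True if `sub` occurs in `s` immediately followed by a letter."""
    pos = s.find(sub)
    if pos == -1:                     # sub-string not present at all
        return False
    after = pos + len(sub)
    return after < len(s) and s[after].isalpha()

print(letter_follows("substrings are cool", "sub"))  # True
print(letter_follows("sub dafdgnjgf", "sub"))        # False
print(letter_follows("ends with sub", "sub"))        # False
```

<p>Note that, like the original snippet, this only inspects the first occurrence of the sub-string.</p>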
| 3 | 2016-08-08T10:38:45Z | [
"python",
"string",
"python-3.x"
] |
Check for letter after a substring | 38,826,710 | <p>I'm trying to find a way to check for a letter after a sub-string. I have searched for it on the internet, including Google. So if the sub-string is: <code>sub</code>, and the text is: <code>substrings are cool</code>, it should return <code>True</code> and if the text is: <code>sub dafdgnjgf</code> it should return <code>False</code>. </p>
| -1 | 2016-08-08T10:21:14Z | 38,827,436 | <p>You could use a <a href="https://docs.python.org/3/library/re.html" rel="nofollow">regular expression</a> with the substring as a lookbehind, matching a character:</p>
<pre><code>>>> p = re.compile(r"(?<=sub)([a-zA-Z])")
>>> p.search("substrings are cool")
<_sre.SRE_Match at 0x7f3a50dbc300>
>>> p.search("substrings are cool").group()
's'
>>> p.search("sub dafdgnjgf")
None
</code></pre>
| 1 | 2016-08-08T10:57:42Z | [
"python",
"string",
"python-3.x"
] |
parameters constraint in numpy lstsq | 38,826,726 | <p>I'm fitting a set of data with <code>numpy.lstsq()</code>:</p>
<pre><code>numpy.linalg.lstsq(a,b)[0]
</code></pre>
<p>returns something like:</p>
<pre><code>array([ -0.02179386, 0.08898451, -0.17298247, 0.89314904])
</code></pre>
<p>Note the fitting solution is a mix of positive and negative float.</p>
<p>Unfortunately, in my physical model, the fitting solutions represent a mass: consequently I'd like to force <code>lstsq()</code> to return a set of positive values as a solution of the fitting. Is it possible to do this?</p>
<p>i.e.</p>
<pre><code>solution = {a_1, ... a_i, ... a_N} with a_i > 0 for i = {1, ..., N}
</code></pre>
| 2 | 2016-08-08T10:21:59Z | 38,828,588 | <p><em>Non-negative least squares</em> is implemented in <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.nnls.html" rel="nofollow"><code>scipy.optimize.nnls</code></a>.</p>
<pre><code>from scipy.optimize import nnls
solution = nnls(a, b)[0]
</code></pre>
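<p><em>Editor's sketch (my own example, not from the original answer):</em> a small demonstration that <code>nnls</code> keeps every coefficient non-negative even when the unconstrained <code>lstsq</code> solution would not. The matrix and target here are invented for illustration:</p>

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.RandomState(0)
a = rng.rand(10, 4)
# Build b from a mix of positive and negative coefficients plus noise,
# so the unconstrained solution contains negative entries.
b = a.dot(np.array([1.0, -0.5, 2.0, 0.3])) + 0.01 * rng.randn(10)

unconstrained = np.linalg.lstsq(a, b, rcond=None)[0]
constrained, residual_norm = nnls(a, b)

print(unconstrained)  # may contain negative entries
print(constrained)    # every entry is >= 0
```

<p>The trade-off is a (usually slightly) larger residual, which <code>nnls</code> reports as its second return value.</p>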
| 1 | 2016-08-08T11:55:34Z | [
"python",
"numpy"
] |
How to refer to types in Python? | 38,826,791 | <p>I know how to refer to some types, i.e. <code>str</code> for <code>type('')</code>, <code>int</code> for <code>type(1)</code> etc. But what about other types, such as <code>type(lambda: None)</code>?</p>
<p>I know to refer to it as <code>type(f) == type(lambda: None)</code> for comparison, but, is there any other way, except that? (No silly answer such as code-golf, use another return value for the lambda, etc.)</p>
<p><strong>Edit</strong>: I just found out how to utilize the accepted answer!</p>
<pre><code>import types
function = types.FunctionType
builtin_function_or_method = types.BuiltinFunctionType
classobj = types.ClassType
generator = types.GeneratorType
object = type
del types
</code></pre>
| -2 | 2016-08-08T10:25:24Z | 38,826,916 | <p>If you want to test if a certain value is <em>a lambda</em>:</p>
<pre><code>import types
foo = lambda: None
print(isinstance(foo, types.LambdaType))
</code></pre>
<p>See <a href="https://docs.python.org/3/library/types.html" rel="nofollow">https://docs.python.org/3/library/types.html</a>.</p>
<p>You usually use <a href="https://docs.python.org/3/library/functions.html#isinstance" rel="nofollow"><code>isinstance</code></a> for testing <em>if something is something</em>, <code>type() == type()</code> is very frowned upon.</p>
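<p><em>Editor's sketch (not part of the original answer):</em> in CPython, <code>types.LambdaType</code> is just an alias of <code>types.FunctionType</code>, so the same check accepts both lambdas and functions defined with <code>def</code>:</p>

```python
import types

# LambdaType and FunctionType are the same object in CPython
assert types.LambdaType is types.FunctionType

foo = lambda: None

def bar():
    return None

print(isinstance(foo, types.LambdaType))  # True
print(isinstance(bar, types.LambdaType))  # True
print(isinstance(len, types.LambdaType))  # False: len is a builtin, not a pure Python function
```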
| 2 | 2016-08-08T10:31:30Z | [
"python",
"types"
] |
How to refer to types in Python? | 38,826,791 | <p>I know how to refer to some types, i.e. <code>str</code> for <code>type('')</code>, <code>int</code> for <code>type(1)</code> etc. But what about other types, such as <code>type(lambda: None)</code>?</p>
<p>I know to refer to it as <code>type(f) == type(lambda: None)</code> for comparison, but, is there any other way, except that? (No silly answer such as code-golf, use another return value for the lambda, etc.)</p>
<p><strong>Edit</strong>: I just found out how to utilize the accepted answer!</p>
<pre><code>import types
function = types.FunctionType
builtin_function_or_method = types.BuiltinFunctionType
classobj = types.ClassType
generator = types.GeneratorType
object = type
del types
</code></pre>
| -2 | 2016-08-08T10:25:24Z | 38,826,919 | <p>To get the types of various built-in objects in Python 2, you can use the <code>types</code> module.</p>
<pre><code>import types
l = lambda: 0
function_type = types.FunctionType
if isinstance(l, function_type):
do_stuff()
</code></pre>
| 1 | 2016-08-08T10:31:38Z | [
"python",
"types"
] |
How to refer to types in Python? | 38,826,791 | <p>I know how to refer to some types, i.e. <code>str</code> for <code>type('')</code>, <code>int</code> for <code>type(1)</code> etc. But what about other types, such as <code>type(lambda: None)</code>?</p>
<p>I know to refer to it as <code>type(f) == type(lambda: None)</code> for comparison, but, is there any other way, except that? (No silly answer such as code-golf, use another return value for the lambda, etc.)</p>
<p><strong>Edit</strong>: I just found out how to utilize the accepted answer!</p>
<pre><code>import types
function = types.FunctionType
builtin_function_or_method = types.BuiltinFunctionType
classobj = types.ClassType
generator = types.GeneratorType
object = type
del types
</code></pre>
| -2 | 2016-08-08T10:25:24Z | 38,827,555 | <p>Checking if something is a function is different to checking if it is a callable.
Most likely you want to check if the object is callable (can I use this object like a function?). </p>
<p>A function is one of several types of callable. They are:</p>
<ol>
<li>Pure python functions</li>
<li>Methods</li>
<li>Classes / instances with <code>__call__</code> methods</li>
<li>Builtin (C) functions</li>
</ol>
<p>A pure Python function is one that is either a <code>lambda</code> or defined using a <code>def</code> statement. A method is a function that exists on a class and has been accessed via an instance -- basically a function with its first argument bound to that instance. Classes and objects can be called if their class implements a <code>__call__</code> method (all classes are callable by default, but not all objects are). Builtin functions are just functions written in C rather than Python.</p>
<p>If you want to check if something is callable, then use the <code>callable</code> function. eg.</p>
<pre><code>>>> callable(lambda: None)
True
>>> class X:
def f(self):
pass
>>> callable(X().f)
True
>>> callable(object)
True
>>> callable(len)
True
</code></pre>
<p>If you want to check if an object is one of the specific subtypes of callable, then use the <code>types</code> module.</p>
<pre><code>>>> from types import FunctionType, BuiltinFunctionType, MethodType
>>> isinstance((lambda: None), FunctionType)
True
>>> class X:
def f(self):
pass
>>> isinstance(X().f, FunctionType)
False
>>> isinstance(X.f, FunctionType) # False in Python 2.x
True
>>> isinstance(object, FunctionType)
False
>>> isinstance(len, FunctionType)
False
</code></pre>
<p>For other types, you may wish to use the <code>collections.abc</code> module. The classes defined there are abstract base classes; they check that instances or subclasses conform to the specification (or they can be used to check whether an object can act as an instance of the type). eg.</p>
<pre><code>from collections.abc import Generator
def my_generator():
yield
assert isinstance(my_generator(), Generator)
assert type(my_generator()) is not Generator
</code></pre>
| 1 | 2016-08-08T11:02:51Z | [
"python",
"types"
] |
how to get a folder name and file name in python | 38,827,110 | <p>I have a python program named <code>myscript.py</code> which would give me the list of files and folders in the path provided.</p>
<pre><code>import os
import sys
def get_files_in_directory(path):
for root, dirs, files in os.walk(path):
print(root)
print(dirs)
print(files)
path=sys.argv[1]
get_files_in_directory(path)
</code></pre>
<p>the path i provided is <code>D:\Python\TEST</code> and there are some folders and sub folder in it as you can see in the output provided below :</p>
<pre><code>C:\Python34>python myscript.py "D:\Python\Test"
D:\Python\Test
['D1', 'D2']
[]
D:\Python\Test\D1
['SD1', 'SD2', 'SD3']
[]
D:\Python\Test\D1\SD1
[]
['f1.bat', 'f2.bat', 'f3.bat']
D:\Python\Test\D1\SD2
[]
['f1.bat']
D:\Python\Test\D1\SD3
[]
['f1.bat', 'f2.bat']
D:\Python\Test\D2
['SD1', 'SD2']
[]
D:\Python\Test\D2\SD1
[]
['f1.bat', 'f2.bat']
D:\Python\Test\D2\SD2
[]
['f1.bat']
</code></pre>
<p>I need to get the output this way :</p>
<pre><code>D1-SD1-f1.bat
D1-SD1-f2.bat
D1-SD1-f3.bat
D1-SD2-f1.bat
D1-SD3-f1.bat
D1-SD3-f2.bat
D2-SD1-f1.bat
D2-SD1-f2.bat
D2-SD2-f1.bat
</code></pre>
<p>how do i get the output this way.(Keep in mind the directory structure here is just an example. The program should be flexible for any path). How do i do this.
Is there any os command for this. Can you Please help me solve this? (Additional Information : I am using Python3.4)</p>
| 0 | 2016-08-08T10:40:36Z | 38,827,188 | <p>You could try using the <code>glob</code> module instead:</p>
<pre><code>import glob
glob.glob(r'D:\Python\Test\D1\*\*\*.bat')
</code></pre>
<p>Or, to just get the filenames</p>
<pre><code>import os
import glob
[os.path.basename(x) for x in glob.glob(r'D:\Python\Test\D1\*\*\*.bat')]
</code></pre>
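<p>If the nesting depth is not fixed, a sketch using <code>os.walk</code> with <code>os.path.relpath</code> produces the <code>D1-SD1-f1.bat</code> style output at any depth (function and variable names here are my own):</p>

```python
import os

def list_files_dashed(path):
    # For each file, take its path relative to `path` and replace
    # the OS separators with '-' (e.g. 'D1-SD1-f1.bat')
    results = []
    for root, dirs, files in os.walk(path):
        for name in files:
            rel = os.path.relpath(os.path.join(root, name), path)
            results.append(rel.replace(os.sep, '-'))
    return results

for line in list_files_dashed(r'D:\Python\Test'):
    print(line)
```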
| 1 | 2016-08-08T10:45:41Z | [
"python",
"python-3.x",
"path"
] |
how to get a folder name and file name in python | 38,827,110 | <p>I have a python program named <code>myscript.py</code> which would give me the list of files and folders in the path provided.</p>
<pre><code>import os
import sys
def get_files_in_directory(path):
for root, dirs, files in os.walk(path):
print(root)
print(dirs)
print(files)
path=sys.argv[1]
get_files_in_directory(path)
</code></pre>
<p>the path i provided is <code>D:\Python\TEST</code> and there are some folders and sub folder in it as you can see in the output provided below :</p>
<pre><code>C:\Python34>python myscript.py "D:\Python\Test"
D:\Python\Test
['D1', 'D2']
[]
D:\Python\Test\D1
['SD1', 'SD2', 'SD3']
[]
D:\Python\Test\D1\SD1
[]
['f1.bat', 'f2.bat', 'f3.bat']
D:\Python\Test\D1\SD2
[]
['f1.bat']
D:\Python\Test\D1\SD3
[]
['f1.bat', 'f2.bat']
D:\Python\Test\D2
['SD1', 'SD2']
[]
D:\Python\Test\D2\SD1
[]
['f1.bat', 'f2.bat']
D:\Python\Test\D2\SD2
[]
['f1.bat']
</code></pre>
<p>I need to get the output this way :</p>
<pre><code>D1-SD1-f1.bat
D1-SD1-f2.bat
D1-SD1-f3.bat
D1-SD2-f1.bat
D1-SD3-f1.bat
D1-SD3-f2.bat
D2-SD1-f1.bat
D2-SD1-f2.bat
D2-SD2-f1.bat
</code></pre>
<p>how do i get the output this way.(Keep in mind the directory structure here is just an example. The program should be flexible for any path). How do i do this.
Is there any os command for this. Can you Please help me solve this? (Additional Information : I am using Python3.4)</p>
| 0 | 2016-08-08T10:40:36Z | 38,829,770 | <p>To get what you want, you could do the following:</p>
<pre><code>def get_files_in_directory(path):
    # Get the root dir (in your case: Test)
    rootDir = path.split('\\')[-1]
    # Walk through all subfolders/files
    for root, subfolderList, fileList in os.walk(path):
        for file in fileList:
            # Get the full path of the file
            fullPath = os.path.join(root, file)
            # Split the path and the file name again
            dirPath, file = os.path.split(fullPath)
            # Collect the subfolders between the root dir and the file,
            # walking the path in REVERSE order until we hit the root dir
            subfolders = []
            for subfolder in dirPath.split('\\')[::-1]:
                if subfolder == rootDir:
                    break
                subfolders.append(subfolder)
            # Reverse the list back, join it with '-', then append '-' + file
            print('{}-{}'.format('-'.join(subfolders[::-1]), file))

path = sys.argv[1]
get_files_in_directory(path)
</code></pre>
<p>My test folder:</p>
<pre><code>D1-SD1-f1.bat
D1-SD1-f2.bat
D1-SD2-f1.bat
D1-SD3-f1.bat
D1-SD3-f2.bat
</code></pre>
<p>It may not be the best way to do it, but it will get you what you want.</p>
| 0 | 2016-08-08T12:53:54Z | [
"python",
"python-3.x",
"path"
] |
Pandas dataframe generate column with different row info, but no apply function | 38,827,141 | <p>maybe question name is not accurate (sorry for that because I don't find any accurate word to describe my question...), let me make an example:</p>
<p>The following dataframe is income with "week_id" and "user_id":</p>
<pre><code>week_id user income
1 1 100
1 2 50
2 1 200
2 2 30
2 3 150
3 1 100
3 2 150
....
</code></pre>
<p>I want to add a new column, which contains "income" of previous week, looks like:</p>
<pre><code>week_id user income previous_week_income
1 1 100 0
1 2 50 0
2 1 200 100
2 2 30 50
2 3 150 0
3 1 100 200
3 2 150 30
....
</code></pre>
<p>It looks like to generate new column with information from other rows, other than current row.</p>
<p>I know solution with apply function, but as it's row by row, it seems to be too slow for my case ( origin dataframe may be tens of millions of rows ), I wonder other fast solution to get the result?</p>
<p>The background is to generate factor for predictive analysis, so I want to use previous week income as one variable when predict current week income.</p>
<p>Thanks in advance :)</p>
| 1 | 2016-08-08T10:42:34Z | 38,827,173 | <p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.shift.html" rel="nofollow"><code>DataFrameGroupBy.shift</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html" rel="nofollow"><code>fillna</code></a> if each <code>week_id</code> has unique <code>users</code>:</p>
<pre><code>df['previous_week_income'] = df.groupby('user')['income'].shift().fillna(0)
print (df)
week_id user income previous_week_income
0 1 1 100 0.0
1 1 2 50 0.0
2 2 1 200 100.0
3 2 2 30 50.0
4 2 3 150 0.0
5 3 1 100 200.0
6 3 2 150 30.0
</code></pre>
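<p>One caveat: <code>shift</code> picks up the previous <em>row</em> per user, which is the previous <em>week</em> only when no weeks are skipped. A stricter sketch (my own addition, using the sample column names from the question) self-merges on <code>(user, week_id - 1)</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({'week_id': [1, 1, 2, 2, 2, 3, 3],
                   'user':    [1, 2, 1, 2, 3, 1, 2],
                   'income':  [100, 50, 200, 30, 150, 100, 150]})

# Build a lookup of last week's income by shifting week_id forward by one...
prev = df.rename(columns={'income': 'previous_week_income'})
prev = prev.assign(week_id=prev['week_id'] + 1)

# ...then left-join it back on (week_id, user); users with no previous week get 0
out = df.merge(prev, on=['week_id', 'user'], how='left')
out['previous_week_income'] = out['previous_week_income'].fillna(0)
print(out)
```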
| 0 | 2016-08-08T10:45:10Z | [
"python",
"pandas"
] |
How to use TensorFlow reader and queue to read two files at the same time? | 38,827,264 | <p>My training set contains two kinds of files: training images with file names like "1.png" and label files with names like "1.label.txt".</p>
<p>I found some usage of Queue and Reader in tutorials like this:
</p>
<pre><code>filename_queue = tf.train.string_input_producer(filenames)
result.key, value = reader.read(filename_queue)
</code></pre>
<p>However, my training set contains two kinds of files, in one-to-one correspondence. How can I make use of Queue and Reader like the code above?</p>
<hr>
<p><strong>EDIT</strong></p>
<p>I am thinking about using one queue containing base names to feed another two queues, one for images and one for labels. Code like this:
</p>
<pre><code>with tf.Session() as sess:
base_name_queue = tf.train.string_input_producer(['image_names'], num_epochs=20)
base_name = base_name_queue.dequeue()
image_name = base_name + ".png"
image_name_queue = data_flow_ops.FIFOQueue(32, image_name.dtype.base_dtype)
image_name_queue.enqueue([image_name])
x = image_name_queue.dequeue()
print_op = tf.Print(image_name, [image_name])
qr = tf.train.QueueRunner(base_name_queue, [base_name_queue] * 4)
coord = tf.train.Coordinator()
enqueue_threads = qr.create_threads(sess, coord=coord, start=True)
for step in range(1000000):
if coord.should_stop():
break
print(sess.run(print_op))
coord.request_stop()
coord.join(enqueue_threads)
</code></pre>
<p>But running this code would result in an error:</p>
<blockquote>
<p>TypeError: Fetch argument of has invalid type , must be a string or Tensor. (Can not convert a FIFOQueue into a Tensor or Operation.)</p>
</blockquote>
<p>and the error point to this line:</p>
<pre><code>coord.join(enqueue_threads)
</code></pre>
<p>I think I must misunderstand how TensorFlow queue works.</p>
| 0 | 2016-08-08T10:50:14Z | 38,870,337 | <p><em>I have figured out the solution to my problem. I would like to post the answer here instead of deleting my question, hoping it will help people who are new to TensorFlow.</em></p>
<p>The answer contains two parts:</p>
<h1>Part 1: How to read files pair by pair using TensorFlow's queue</h1>
<p>The solution is simple:</p>
<ol>
<li>Use 2 queues to store the two sets of files. Note that the two sets must be ordered in the same way.</li>
<li>Dequeue from each queue and do the respective preprocessing.</li>
<li>Combine the two preprocessed tensors into one list and pass the list to <code>shuffle_batch</code>.</li>
</ol>
<p>Code here:</p>
<pre><code>base_names = ['file1', 'file2']
base_tensor = tf.convert_to_tensor(base_names)
image_name_queue = tf.train.string_input_producer(
base_tensor + '.png',
shuffle=False # Note: must set shuffle to False
)
label_queue = tf.train.string_input_producer(
base_tensor + '.label.txt',
shuffle=False # Note: must set shuffle to False
)
# use reader to read file
image_reader = tf.WholeFileReader()
image_key, image_raw = image_reader.read(image_name_queue)
image = tf.image.decode_png(image_raw)
label_reader = tf.WholeFileReader()
label_key, label_raw = label_reader.read(label_queue)
label = tf.decode_raw(label_raw, tf.uint8)
# preprocess image
processed_image = tf.image.per_image_whitening(image)
batch = tf.train.shuffle_batch([processed_image, label], 10, 100, 100)
# print batch
queue_threads = queue_runner.start_queue_runners()
print(sess.run(batch))
</code></pre>
<h1>Part 2: Queue, QueueRunner, Coordinator and helper functions</h1>
<p>A Queue really is just a queue. A queue has two methods: <code>enqueue</code> and <code>dequeue</code>. The input of <code>enqueue</code> is a <code>Tensor</code> (well, you can enqueue normal data, but it will be converted to a <code>Tensor</code> internally). The return value of <code>dequeue</code> is also a <code>Tensor</code>. So you can make a pipeline of queues like this:</p>
<pre><code>q1 = data_flow_ops.FIFOQueue(32, tf.int)
q2 = data_flow_ops.FIFOQueue(32, tf.int)
enq1 = q1.enqueue([1,2,3,4,5])
v1 = q1.dequeue()
enq2 = q2.enqueue(v1)
</code></pre>
<p>The benefit of using queues in TensorFlow is to load data asynchronously, which improves performance and saves memory. The code above is not runnable by itself, because no thread is running those operations. QueueRunner is designed to describe how to <code>enqueue</code> data in parallel, so the parameter for initializing a QueueRunner is an <code>enqueue</code> operation (the output of <code>enqueue</code>).</p>
<p>After setting up all the <code>QueueRunner</code>s, you have to start all the threads. One way is to start them when creating them:</p>
<pre><code>enqueue_threads = qr.create_threads(sess, coord=coord, start=True)
</code></pre>
<p>or, you can start all threads after all the setting up works done:</p>
<pre><code># add queue runner
queue_runner.add_queue_runner(queue_runner.QueueRunner(q, [enq]))
# start all queue runners
queue_threads = queue_runner.start_queue_runners()
</code></pre>
<p>When all the threads have started, you have to decide when to exit. Coordinator is here to do this. <code>Coordinator</code> is like a shared flag between all the running threads: if one of them finishes or runs into an error, it will call <code>coord.request_stop()</code>, and then all the threads will get <code>True</code> when calling <code>coord.should_stop()</code>. So the pattern of using <code>Coordinator</code> is:</p>
<pre><code>coord = tf.train.Coordinator()
for step in range(1000000):
if coord.should_stop():
break
print(sess.run(print_op))
coord.request_stop()
coord.join(enqueue_threads)
</code></pre>
| 0 | 2016-08-10T10:08:58Z | [
"python",
"tensorflow"
] |
Why does pandas.Dataframe.drop() returns None? | 38,827,316 | <p>Here in my code I read the data from CSV:</p>
<pre><code>data = pandas.read_csv('dataset/job_functions.csv', names=["job","category"] ,skiprows=1).dropna().reindex()
num_jobs = data["job"].size
</code></pre>
<p>Then I want to drop the rows which 'category' label does not equal to <code>i</code>:</p>
<pre><code>data = data.drop(data[data.category!=i].index,inplace = True)
print(data.head())
</code></pre>
<p>Even dropping by the list of index returns None:</p>
<pre><code>data = data.drop(data.index[[1,2,3]],inplace = True)
</code></pre>
<p>Error message:</p>
<pre>Traceback (most recent call last):
File "sample.py", line 162, in
delete_common_words(27)
File "sample.py", line 92, in delete_common_words
print(data.head())
AttributeError: 'NoneType' object has no attribute 'head'
</pre>
<p>Here is the data until I use the <code>drop()</code>:</p>
<pre><code> job category
0  офис менеджер реализация гербицидовоформлени...         2
1  менеджер отдел продажа работа с существующий...        27
2  ведущий бухгалтер работа с вендер и поставщи...         1
3  менеджер по продажа и продвижение продукт ст...        27
4  юрист проведение юридический экспертиза прое...        13
</code></pre>
| 0 | 2016-08-08T10:52:39Z | 38,827,860 | <p>It looks like you need <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p>
<pre><code>import pandas as pd
data = pd.DataFrame({'category':['a','b', 'c']})
print (data)
category
0 a
1 b
2 c
i = 'a'
print (data[data.category != i])
category
1 b
2 c
print (data[~data.category.isin(['b','c'])])
category
0 a
</code></pre>
<p>And as <a href="http://stackoverflow.com/questions/38827316/why-does-pandas-dataframe-drop-returns-none#comment65020368_38827316"><code>EdChum</code></a> explains, if you use <code>inplace=True</code> it returns <code>None</code>, so you can use:</p>
<pre><code>#omit inplace=True
data = data.drop(data[data.category!=i].index)
</code></pre>
<p>Or:</p>
<pre><code>#remove assigning
data.drop(data[data.category!=i].index,inplace = True)
</code></pre>
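<p>A quick demonstration of the root cause of the <code>NoneType</code> error — with <code>inplace=True</code> the frame is modified in place and the call returns <code>None</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({'category': ['a', 'b', 'c']})

result = df.drop(df[df.category != 'a'].index, inplace=True)
print(result)  # None -- so `data = data.drop(..., inplace=True)` loses the frame
print(df)      # df itself now contains only the 'a' row
```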
| 0 | 2016-08-08T11:17:46Z | [
"python",
"pandas",
"dataframe"
] |
Python ctypes.BigEndianStructure can't store a value | 38,827,440 | <p>I am in trouble with ctypes.BigEndianStructure. I can't get the value that I set to one the fields. My code is like this.</p>
<pre><code>import ctypes
class MyStructure(ctypes.BigEndianStructure):
_pack_ = 1
_fields_ = [
('fx', ctypes.c_uint, 7),
('fy', ctypes.c_ubyte, 1)
]
x = MyStructure()
</code></pre>
<p>It prints 0 as excepted:</p>
<pre><code>print x.fy # Prints 0
</code></pre>
<p>then I set a value to it but it still prints 0:</p>
<pre><code>x.fy = 1
print x.fy # Still prints 0
</code></pre>
| 2 | 2016-08-08T10:57:53Z | 38,827,971 | <p>I don't know why what you're doing doesn't work and it is certainly strange behavior. I think this alternative code works.</p>
<pre><code>import ctypes
class MyStructure(ctypes.BigEndianStructure):
_pack_ = 1
def __init__(self):
self.fx=ctypes.c_uint(7)
self.fy = ctypes.c_ubyte(1)
x = MyStructure()
x.fy = 7
print x.fy # prints 7
</code></pre>
<p>Or without the constructor::</p>
<pre><code>import ctypes
class MyStructure(ctypes.BigEndianStructure):
_pack_ = 1
fx = ctypes.c_uint(7)
fy = ctypes.c_ubyte(1)
x = MyStructure()
x.fy = 7
print x.fy # prints 7
</code></pre>
<p>I have personally never used the <em>fields</em> attribute so I can't speak to the odd behavior.</p>
| -1 | 2016-08-08T11:23:17Z | [
"python",
"structure",
"ctypes"
] |
How to fetch resultant of a POST request in a web-scrapper? | 38,827,476 | <p>I am trying to scrape out data from <a href="http://ngo.india.gov.in/sector_ngolist_ngo.php?psid=&records=" rel="nofollow">this website.</a> </p>
<p>There is a table in which there are different organisations listed and the name each organisation is a link to a webpage with more information about the organisation.</p>
<p>Those links, instead of being hard-coded hyperlinks, are calls to a javascript function whose result is computed when the function is called.</p>
<pre><code><a href="javascript:view_ngo('4309','','1','0')" class="bluelink11px">
BISWASUK SEVASRAM SANGHA
</a>
</code></pre>
<p>So it is not possible to scrape out information by just following the links. </p>
<p>Is there any workaround to execute the javascript function and get the HTML of the resultant webpage? I am using Python 3, and using Beautiful Soup as web scraper.</p>
| 0 | 2016-08-08T10:59:34Z | 38,829,861 | <p>First, the javascript will not be executed <em>server side</em> but always client side. Here, you should simply use the debugging facilities or a browser (Firefox function F12 is enough) to see what happens when you click on one of the links. You immediately see that the javascript code only prepares and send a POST request</p>
<p>So <code>view_ngo(a, b, c, d)</code> generates the following POST request:</p>
<pre><code>POST http://ngo.india.gov.in/view_ngo_details_ngo.php
</code></pre>
<p>With the following data:</p>
<pre><code>ngo_id=a&records_no=b&page_no=c&page_val=1&issueid=&ngo_black=d&records=
</code></pre>
<p>You can also see that it uses a session cookie, so you should take provisions for it in your scraping code.</p>
<p>Scraping could be like:</p>
<pre><code>import urllib.request
import urllib.parse
from bs4 import BeautifulSoup

cookieProcessor = urllib.request.HTTPCookieProcessor()
opener = urllib.request.build_opener(cookieProcessor)
soup = BeautifulSoup(opener.open(
    'http://ngo.india.gov.in/sector_ngolist_ngo.php?psid=&records='))
# find the relevant links and iterate through them for view_ngo(a, b, c, d)
data = urllib.parse.urlencode({'ngo_id': a, 'records_no': b, 'page_no': c,
    'page_val': '1', 'issueid': '', 'ngo_black': d, 'records': ''}).encode()
soup2 = BeautifulSoup(opener.open('http://ngo.india.gov.in/view_ngo_details_ngo.php',
    data))
</code></pre>
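<p>To fill in the "iterate through them" step, the four <code>view_ngo(...)</code> arguments can be pulled out of each link's <code>href</code> with a small regex before building the POST data (a sketch; the field names follow the request captured above):</p>

```python
import re

def parse_view_ngo(href):
    """Extract the view_ngo('a','b','c','d') arguments into POST data."""
    m = re.search(r"view_ngo\('([^']*)','([^']*)','([^']*)','([^']*)'\)", href)
    if m is None:
        return None
    ngo_id, records_no, page_no, ngo_black = m.groups()
    return {'ngo_id': ngo_id, 'records_no': records_no, 'page_no': page_no,
            'page_val': '1', 'issueid': '', 'ngo_black': ngo_black, 'records': ''}

print(parse_view_ngo("javascript:view_ngo('4309','','1','0')"))
```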
| 0 | 2016-08-08T12:58:22Z | [
"python",
"python-3.x",
"web-scraping",
"beautifulsoup"
] |
Detecting colored circle and its center using OpenCV | 38,827,505 | <p>I am trying to detect a BLUE colored CIRCLE and its CENTER. Then draw a circle on the detected circle and a very small circle on its center. But I get a few errors. (I am using OpenCV 3.1.0, Python 2.7 Anaconda 64 bits, PyCharm as an IDE) (Please help me using python code)
I run the following code:</p>
<pre><code>import cv2
import numpy as np
cap = cv2.VideoCapture(0)
if cap.isOpened():
while(True):
frame, _ = cap.read()
# blurring the frame that's captured
frame_gau_blur = cv2.GaussianBlur(frame, (3, 3), 0)
# converting BGR to HSV
hsv = cv2.cvtColor(frame_gau_blur, cv2.COLOR_BGR2HSV)
# the range of blue color in HSV
lower_blue = np.array([110, 50, 50])
higher_blue = np.array([130, 255, 255])
# getting the range of blue color in frame
blue_range = cv2.inRange(hsv, lower_blue, higher_blue)
# getting the V channel which is the gray channel
blue_s_gray = blue_range[::2]
# applying HoughCircles
circles = cv2.HoughCircles(blue_s_gray, cv2.HOUGH_GRADIENT, 1, 10, 100, 30, 5, 50)
circles = np.uint16(np.around(circles))
for i in circles[0,:]:
# drawing on detected circle and its center
cv2.circle(frame,(i[0],i[1]),i[2],(0,255,0),2)
cv2.circle(frame,(i[0],i[1]),2,(0,0,255),3)
cv2.imshow('circles', frame)
k = cv2.waitKey(5) & 0xFF
if k == 27:
break
cv2.destroyAllWindows()
else:
print "Can't find camera"
</code></pre>
<p>The error I get when I run the code is: </p>
<blockquote>
<p>OpenCV Error: Assertion failed (depth == CV_8U || depth == CV_16U || depth == CV_32F) in cv::cvtColor, file C:\builds\master_PackSlaveAddon-win64-vc12-static\opencv\modules\imgproc\src\color.cpp, line 7935
Traceback (most recent call last):
File "C:/Users/Meliodas/PycharmProjects/OpenCV_By_Examples/code_tester.py", line 11, in
hsv = cv2.cvtColor(frame_gau_blur, cv2.COLOR_BGR2HSV)
cv2.error: C:\builds\master_PackSlaveAddon-win64-vc12-static\opencv\modules\imgproc\src\color.cpp:7935: error: (-215) depth == CV_8U || depth == CV_16U || depth == CV_32F in function cv::cvtColor</p>
</blockquote>
<p>Thanks a lot in advance for your help!</p>
| 1 | 2016-08-08T11:00:47Z | 38,831,028 | <p>Change <code>frame, _ = cap.read()</code> to <code>ret,frame = cap.read()</code></p>
<pre><code>import cv2
import numpy as np
cap = cv2.VideoCapture(0)
if cap.isOpened():
while(True):
ret,frame= cap.read()
# blurring the frame that's captured
frame_gau_blur = cv2.GaussianBlur(frame, (3, 3), 0)
# converting BGR to HSV
hsv = cv2.cvtColor(frame_gau_blur, cv2.COLOR_BGR2HSV)
# the range of blue color in HSV
lower_blue = np.array([110, 50, 50])
higher_blue = np.array([130, 255, 255])
# getting the range of blue color in frame
blue_range = cv2.inRange(hsv, lower_blue, higher_blue)
# getting the V channel which is the gray channel
blue_s_gray = blue_range[::2]
# applying HoughCircles
circles = cv2.HoughCircles(blue_s_gray, cv2.HOUGH_GRADIENT, 1, 10, 100, 30, 5, 50)
circles = np.uint16(np.around(circles))
for i in circles[0,:]:
# drawing on detected circle and its center
cv2.circle(frame,(i[0],i[1]),i[2],(0,255,0),2)
cv2.circle(frame,(i[0],i[1]),2,(0,0,255),3)
cv2.imshow('circles', frame)
k = cv2.waitKey(5) & 0xFF
if k == 27:
break
cv2.destroyAllWindows()
</code></pre>
| 0 | 2016-08-08T13:52:10Z | [
"python",
"python-2.7",
"opencv",
"computer-vision",
"opencv3.1"
] |
Detecting colored circle and its center using OpenCV | 38,827,505 | <p>I am trying to detect a BLUE colored CIRCLE and its CENTER. Then draw a circle on the detected circle and a very small circle on its center. But I get a few errors. (I am using OpenCV 3.1.0, Python 2.7 Anaconda 64 bits, PyCharm as an IDE) (Please help me using python code)
I run the following code:</p>
<pre><code>import cv2
import numpy as np
cap = cv2.VideoCapture(0)
if cap.isOpened():
while(True):
frame, _ = cap.read()
# blurring the frame that's captured
frame_gau_blur = cv2.GaussianBlur(frame, (3, 3), 0)
# converting BGR to HSV
hsv = cv2.cvtColor(frame_gau_blur, cv2.COLOR_BGR2HSV)
# the range of blue color in HSV
lower_blue = np.array([110, 50, 50])
higher_blue = np.array([130, 255, 255])
# getting the range of blue color in frame
blue_range = cv2.inRange(hsv, lower_blue, higher_blue)
# getting the V channel which is the gray channel
blue_s_gray = blue_range[::2]
# applying HoughCircles
circles = cv2.HoughCircles(blue_s_gray, cv2.HOUGH_GRADIENT, 1, 10, 100, 30, 5, 50)
circles = np.uint16(np.around(circles))
for i in circles[0,:]:
# drawing on detected circle and its center
cv2.circle(frame,(i[0],i[1]),i[2],(0,255,0),2)
cv2.circle(frame,(i[0],i[1]),2,(0,0,255),3)
cv2.imshow('circles', frame)
k = cv2.waitKey(5) & 0xFF
if k == 27:
break
cv2.destroyAllWindows()
else:
print "Can't find camera"
</code></pre>
<p>The error I get when I run the code is: </p>
<blockquote>
<p>OpenCV Error: Assertion failed (depth == CV_8U || depth == CV_16U || depth == CV_32F) in cv::cvtColor, file C:\builds\master_PackSlaveAddon-win64-vc12-static\opencv\modules\imgproc\src\color.cpp, line 7935
Traceback (most recent call last):
File "C:/Users/Meliodas/PycharmProjects/OpenCV_By_Examples/code_tester.py", line 11, in
hsv = cv2.cvtColor(frame_gau_blur, cv2.COLOR_BGR2HSV)
cv2.error: C:\builds\master_PackSlaveAddon-win64-vc12-static\opencv\modules\imgproc\src\color.cpp:7935: error: (-215) depth == CV_8U || depth == CV_16U || depth == CV_32F in function cv::cvtColor</p>
</blockquote>
<p>Thanks a lot in advance for your help!</p>
| 1 | 2016-08-08T11:00:47Z | 38,873,062 | <p>I have solved the my problem and after looking up the meanings of the errors online (the one's that I got), I was able to find the solutions for them and hence I was able to solve them. If you run the following code given below you should be able to detect blue circles pretty well. Thanks a lot to the people who tried to help me to solve my problem. </p>
<p>The code is given below:</p>
<pre><code>import cv2
import numpy as np
cap = cv2.VideoCapture(0)
if cap.isOpened():
while(True):
ret, frame = cap.read()
# blurring the frame that's captured
frame_gau_blur = cv2.GaussianBlur(frame, (3, 3), 0)
# converting BGR to HSV
hsv = cv2.cvtColor(frame_gau_blur, cv2.COLOR_BGR2HSV)
# the range of blue color in HSV
lower_blue = np.array([110, 50, 50])
higher_blue = np.array([130, 255, 255])
# getting the range of blue color in frame
blue_range = cv2.inRange(hsv, lower_blue, higher_blue)
res_blue = cv2.bitwise_and(frame_gau_blur,frame_gau_blur, mask=blue_range)
blue_s_gray = cv2.cvtColor(res_blue, cv2.COLOR_BGR2GRAY)
canny_edge = cv2.Canny(blue_s_gray, 50, 240)
# applying HoughCircles
circles = cv2.HoughCircles(canny_edge, cv2.HOUGH_GRADIENT, dp=1, minDist=10, param1=10, param2=20, minRadius=100, maxRadius=120)
cir_cen = []
        if circles is not None:
# circles = np.uint16(np.around(circles))
for i in circles[0,:]:
# drawing on detected circle and its center
cv2.circle(frame,(i[0],i[1]),i[2],(0,255,0),2)
cv2.circle(frame,(i[0],i[1]),2,(0,0,255),3)
cir_cen.append((i[0],i[1]))
print cir_cen
cv2.imshow('circles', frame)
cv2.imshow('gray', blue_s_gray)
cv2.imshow('canny', canny_edge)
k = cv2.waitKey(5) & 0xFF
if k == 27:
break
cv2.destroyAllWindows()
else:
print 'no cam'
</code></pre>
| 1 | 2016-08-10T12:09:10Z | [
"python",
"python-2.7",
"opencv",
"computer-vision",
"opencv3.1"
] |
How to limit memory usage of Python pandas | 38,827,526 | <p>Currently I am using a pandas operation to merge two CSV files. It takes around 4.6 GB of RAM; I want to limit the RAM usage to 2 GB, like Java's <code>-Xmx</code> and <code>-Xms</code>.</p>
<p>Is there any way to do so?</p>
<p>Thanks in advance</p>
| -2 | 2016-08-08T11:01:35Z | 38,827,634 | <p>Use <a href="https://docs.python.org/2/library/resource.html#resource.setrlimit" rel="nofollow"><code>setrlimit</code></a>:</p>
<pre><code>import resource
rsrc = resource.RLIMIT_DATA
soft, hard = resource.getrlimit(rsrc)
print 'Soft limit starts as :', soft
resource.setrlimit(rsrc, (1024, hard)) #limit to one kilobyte
soft, hard = resource.getrlimit(rsrc)
print 'Soft limit changed to :', soft
</code></pre>
<p><strong>EDIT</strong>: Actually, I'm not sure if <code>setrlimit</code> controls the CPU or RAM usage. From the shell, however you could make use of <code>ulimit</code>:</p>
<pre><code>ulimit -v 128k
python script.py
ulimit -v unlimited
</code></pre>
<p><strong>EDIT</strong>: Please note that this is for <strong><em>Linux</em></strong> systems, and I'm not sure how to do this, or if it's possible on Windows.</p>
| 2 | 2016-08-08T11:06:47Z | [
"python",
"pandas"
] |
Get list_display in django admin to display the 'many' end of a many-to-one relationship | 38,827,608 | <p>I would like to display all pet owners (Clients) using list_display and for each owner a comma-separate list of all of their pets (Patients).</p>
<p>The foreign key is in the Patient table, such that an owner can have many pets, but a pet can only have one owner.</p>
<p>I've got the following to work but would like some advise as to whether this is an acceptable approach.</p>
<pre><code>from .models import Client, Patient
class ClientAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name', 'mobile', 'patients')
def patients(self,obj):
p = Patient.objects.filter(client_id=obj.pk)
return list(p)
</code></pre>
<p>This is what it looks like:
<a href="http://i.stack.imgur.com/b3OcJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/b3OcJ.png" alt="enter image description here"></a></p>
<p>Thanks for any guidance.</p>
<p>UPDATE:
Here's where I'm at so far:</p>
<p>Here's what I've managed to get working so far</p>
<pre><code>class ClientAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name', 'mobile', 'getpatients')
def getpatients(self, request):
c = Client.objects.get(pk=1)
p = c.patient_fk.all()
return p
</code></pre>
<p>This is following the docs re: <a href="https://docs.djangoproject.com/en/dev/topics/db/queries/#following-relationships-backward" rel="nofollow">following relationships backwards</a>.</p>
<p>Of course, the above example 'fixes' the number of client objects to just one (pk=1) so I'm not sure how I'd get the results for all of the Clients.</p>
<p>@pleasedontbelong - I've tried your code, thank you very much. I'm almost certainly doing something wrong as I'm getting an error.but So you know the FK now has </p>
<pre><code> related_name = 'patient_fk'
</code></pre>
<p>which explains why I'm not using patient_set (since FOO_set is overriden)</p>
<p>So here's what I have:</p>
<pre><code>class ClientAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name', 'mobile', 'getpatients')
def get_queryset(self, request):
qs = super(ClientAdmin, self).get_queryset(request)
return qs.prefetch_related('patient_fk')
def getpatients(self, obj):
return self.patient_fk.all()
</code></pre>
<p>The error I get is "'ClientAdmin' object has no attribute 'patient_fk'" and relates to the last line of the code above. </p>
<p>Any ideas?</p>
<p>Thanks!</p>
<p>EDIT</p>
<p>I've tried Brian's code:</p>
<pre><code>class ClientAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name', 'mobile', 'getpatients')
def getpatients(self, obj):
p = obj.patient_pk.all()
return list(p)
</code></pre>
<p>...and am getting error <code>'Client' object has no attribute 'patient_fk'</code></p>
<p>If I run my original code, it still works ok:</p>
<pre><code>class ClientAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name', 'mobile', 'getpatients')
def getpatients(self, obj):
p = Patient.objects.filter(client_id=obj.pk)
return list(p)
</code></pre>
<p>For reference, here are my classes:</p>
<pre><code>class Client(TimeStampedModel):
first_name = models.CharField(max_length=30)
last_name = models.CharField(max_length=30)
....
class Patient(TimeStampedModel):
client = models.ForeignKey(Client, on_delete=models.CASCADE, related_name='patient_fk')
name = models.CharField(max_length=30)
....
</code></pre>
| 0 | 2016-08-08T11:05:09Z | 38,827,864 | <p>if it works :+1: !!</p>
<p>few notes however: it will execute one query for each Client, so if you display 100 clients on the admin, django will execute 100 queries</p>
<p>You could maybe improve it by changing the main queryset (<a href="https://docs.djangoproject.com/en/1.8/ref/contrib/admin/#django.contrib.admin.ModelAdmin.get_queryset" rel="nofollow">like this</a>) on the admin and using <a href="https://docs.djangoproject.com/ja/1.9/ref/models/querysets/#prefetch-related" rel="nofollow">prefetch_related('patients')</a></p>
<p>should be something like:</p>
<pre><code>class ClientAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name', 'mobile', 'patients')
def get_queryset(self, request):
qs = super(ClientAdmin, self).get_queryset(request)
return qs.prefetch_related('patients') # do read the doc, maybe 'patients' is not the correct lookup for you
def patients(self,obj):
        return obj.patient_set.all() # since you have prefetched the patients I think it won't hit the database, to be tested
</code></pre>
<p>Hope this helps</p>
<h1>Note:</h1>
<p>you can get all the Patients related to a Client using the <a href="https://docs.djangoproject.com/ja/1.9/ref/models/relations/#related-objects-reference" rel="nofollow">related object reference</a>, something like:</p>
<pre><code># get one client
client = Client.objects.last()
# get all the client's patient
patients = client.patient_set.all()
</code></pre>
<p>the last line is similar to:</p>
<pre><code>patients = Patient.objects.filter(client=client)
</code></pre>
<p>finally you can override the <code>patient_set</code> name and make it prettier, read <a href="https://docs.djangoproject.com/en/1.9/topics/db/queries/#following-relationships-backward" rel="nofollow">https://docs.djangoproject.com/en/1.9/topics/db/queries/#following-relationships-backward</a></p>
<p>I haven't tested it; it would be nice to have some feedback to see if this will prevent the <a href="http://stackoverflow.com/questions/97197/what-is-the-n1-selects-issue">n+1 problem</a></p>
| 2 | 2016-08-08T11:17:56Z | [
"python",
"django",
"python-3.x",
"django-admin"
] |
Get list_display in django admin to display the 'many' end of a many-to-one relationship | 38,827,608 | <p>I would like to display all pet owners (Clients) using list_display and for each owner a comma-separated list of all of their pets (Patients).</p>
<p>The foreign key is in the Patient table, such that an owner can have many pets, but a pet can only have one owner.</p>
<p>I've got the following to work but would like some advice as to whether this is an acceptable approach.</p>
<pre><code>from .models import Client, Patient
class ClientAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name', 'mobile', 'patients')
def patients(self,obj):
p = Patient.objects.filter(client_id=obj.pk)
return list(p)
</code></pre>
<p>This is what it looks like:
<a href="http://i.stack.imgur.com/b3OcJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/b3OcJ.png" alt="enter image description here"></a></p>
<p>Thanks for any guidance.</p>
<p>UPDATE:
Here's where I'm at so far:</p>
<p>Here's what I've managed to get working so far</p>
<pre><code>class ClientAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name', 'mobile', 'getpatients')
def getpatients(self, request):
c = Client.objects.get(pk=1)
p = c.patient_fk.all()
return p
</code></pre>
<p>This is following the docs re: <a href="https://docs.djangoproject.com/en/dev/topics/db/queries/#following-relationships-backward" rel="nofollow">following relationships backwards</a>.</p>
<p>Of course, the above example 'fixes' the number of client objects to just one (pk=1) so I'm not sure how I'd get the results for all of the Clients.</p>
<p>@pleasedontbelong - I've tried your code, thank you very much. I'm almost certainly doing something wrong, as I'm getting an error. So you know, the FK now has </p>
<pre><code> related_name = 'patient_fk'
</code></pre>
<p>which explains why I'm not using patient_set (since FOO_set is overriden)</p>
<p>So here's what I have:</p>
<pre><code>class ClientAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name', 'mobile', 'getpatients')
def get_queryset(self, request):
qs = super(ClientAdmin, self).get_queryset(request)
return qs.prefetch_related('patient_fk')
def getpatients(self, obj):
return self.patient_fk.all()
</code></pre>
<p>The error I get is "'ClientAdmin' object has no attribute 'patient_fk'" and relates to the last line of the code above. </p>
<p>Any ideas?</p>
<p>Thanks!</p>
<p>EDIT</p>
<p>I've tried Brian's code:</p>
<pre><code>class ClientAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name', 'mobile', 'getpatients')
def getpatients(self, obj):
p = obj.patient_pk.all()
return list(p)
</code></pre>
<p>...and am getting error <code>'Client' object has no attribute 'patient_fk'</code></p>
<p>If I run my original code, it still works ok:</p>
<pre><code>class ClientAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name', 'mobile', 'getpatients')
def getpatients(self, obj):
p = Patient.objects.filter(client_id=obj.pk)
return list(p)
</code></pre>
<p>For reference, here are my classes:</p>
<pre><code>class Client(TimeStampedModel):
first_name = models.CharField(max_length=30)
last_name = models.CharField(max_length=30)
....
class Patient(TimeStampedModel):
client = models.ForeignKey(Client, on_delete=models.CASCADE, related_name='patient_fk')
name = models.CharField(max_length=30)
....
</code></pre>
| 0 | 2016-08-08T11:05:09Z | 38,828,289 | <pre><code>def patients(self,obj):
p = obj.patients.all()
return list(p)
</code></pre>
<p>this is assuming that in your ForeignKey you set <code>related_name='patients'</code></p>
<p>EDIT: fixed mistake
EDIT2: changed reverse_name to related_name and added '.all()'</p>
| 0 | 2016-08-08T11:39:37Z | [
"python",
"django",
"python-3.x",
"django-admin"
] |
Get list_display in django admin to display the 'many' end of a many-to-one relationship | 38,827,608 | <p>I would like to display all pet owners (Clients) using list_display and for each owner a comma-separated list of all of their pets (Patients).</p>
<p>The foreign key is in the Patient table, such that an owner can have many pets, but a pet can only have one owner.</p>
<p>I've got the following to work but would like some advice as to whether this is an acceptable approach.</p>
<pre><code>from .models import Client, Patient
class ClientAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name', 'mobile', 'patients')
def patients(self,obj):
p = Patient.objects.filter(client_id=obj.pk)
return list(p)
</code></pre>
<p>This is what it looks like:
<a href="http://i.stack.imgur.com/b3OcJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/b3OcJ.png" alt="enter image description here"></a></p>
<p>Thanks for any guidance.</p>
<p>UPDATE:
Here's where I'm at so far:</p>
<p>Here's what I've managed to get working so far</p>
<pre><code>class ClientAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name', 'mobile', 'getpatients')
def getpatients(self, request):
c = Client.objects.get(pk=1)
p = c.patient_fk.all()
return p
</code></pre>
<p>This is following the docs re: <a href="https://docs.djangoproject.com/en/dev/topics/db/queries/#following-relationships-backward" rel="nofollow">following relationships backwards</a>.</p>
<p>Of course, the above example 'fixes' the number of client objects to just one (pk=1) so I'm not sure how I'd get the results for all of the Clients.</p>
<p>@pleasedontbelong - I've tried your code, thank you very much. I'm almost certainly doing something wrong, as I'm getting an error. So you know, the FK now has </p>
<pre><code> related_name = 'patient_fk'
</code></pre>
<p>which explains why I'm not using patient_set (since FOO_set is overriden)</p>
<p>So here's what I have:</p>
<pre><code>class ClientAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name', 'mobile', 'getpatients')
def get_queryset(self, request):
qs = super(ClientAdmin, self).get_queryset(request)
return qs.prefetch_related('patient_fk')
def getpatients(self, obj):
return self.patient_fk.all()
</code></pre>
<p>The error I get is "'ClientAdmin' object has no attribute 'patient_fk'" and relates to the last line of the code above. </p>
<p>Any ideas?</p>
<p>Thanks!</p>
<p>EDIT</p>
<p>I've tried Brian's code:</p>
<pre><code>class ClientAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name', 'mobile', 'getpatients')
def getpatients(self, obj):
p = obj.patient_pk.all()
return list(p)
</code></pre>
<p>...and am getting error <code>'Client' object has no attribute 'patient_fk'</code></p>
<p>If I run my original code, it still works ok:</p>
<pre><code>class ClientAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name', 'mobile', 'getpatients')
def getpatients(self, obj):
p = Patient.objects.filter(client_id=obj.pk)
return list(p)
</code></pre>
<p>For reference, here are my classes:</p>
<pre><code>class Client(TimeStampedModel):
first_name = models.CharField(max_length=30)
last_name = models.CharField(max_length=30)
....
class Patient(TimeStampedModel):
client = models.ForeignKey(Client, on_delete=models.CASCADE, related_name='patient_fk')
name = models.CharField(max_length=30)
....
</code></pre>
| 0 | 2016-08-08T11:05:09Z | 38,917,120 | <p>This now works:</p>
<pre><code>class ClientAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name', 'mobile', 'get_patients')
def get_queryset(self, obj):
qs = super(ClientAdmin, self).get_queryset(obj)
return qs.prefetch_related('patient_fk')
def get_patients(self, obj):
return list(obj.patient_fk.all())
</code></pre>
<p>This page only needed 6 queries to display... </p>
<p><a href="http://i.stack.imgur.com/gnSGh.png" rel="nofollow"><img src="http://i.stack.imgur.com/gnSGh.png" alt="enter image description here"></a></p>
<p>...compared to my original code (below) which was running a separate query to retrieve the patients for each client (100 clients per page)</p>
<pre><code>from .models import Client, Patient
class ClientAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name', 'mobile', 'patients')
def patients(self,obj):
p = Patient.objects.filter(client_id=obj.pk)
return list(p)
</code></pre>
<p><a href="http://i.stack.imgur.com/CRZ6i.png" rel="nofollow"><img src="http://i.stack.imgur.com/CRZ6i.png" alt="enter image description here"></a></p>
<p>Here's my understanding of how and why this works (feel free to point out any errors):</p>
<p>Every model has a <strong>Manager</strong> whose default name is <strong>objects</strong> allowing us to access the database records. To pull all records from a model, we us <code>SomeModel.objects.all()</code> which - under the hood - is just the <strong>QuerySet</strong> returned by the <strong>get_queryset</strong> method of the Manager class.</p>
<p>So if we need to tweak what is returned from a Model - i.e. the QuerySet - then we need to override the method that grabs it, namely <strong>get_queryset</strong>. Our new method has the same name as the method we want to override:</p>
<pre><code> def get_queryset(self, obj):
</code></pre>
<p>Now, the above method knows nothing about how to get access to the model's data. It contains no code. To get access to the data we need to call the 'real' get_queryset method (the one we're overriding) so that we can actually get data back, tweak it (add some extra patient info), then return it. </p>
<p>To access the 'original' get_queryset method and get a QuerySet object (containing all Model data, no patients) then we use <code>super()</code>.</p>
<p><code>super()</code> gives us access to a method on a parent class.</p>
<p>For example:</p>
<p><a href="http://i.stack.imgur.com/Az6rP.png" rel="nofollow"><img src="http://i.stack.imgur.com/Az6rP.png" alt="enter image description here"></a></p>
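<p>In plain, framework-free Python the same pattern looks like this (a toy sketch, not Django code - the class and method names are made up for illustration):</p>

```python
class Base:
    def get_queryset(self):
        return ["row1", "row2"]

class Child(Base):
    def get_queryset(self):
        # call the 'real' method on the parent class, then tweak its result
        qs = super(Child, self).get_queryset()
        return qs + ["extra"]

print(Child().get_queryset())  # -> ['row1', 'row2', 'extra']
```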
<p>In our case it lets us grab ClientAdmin's <code>get_queryset()</code> method.</p>
<pre><code>def get_queryset(self, obj):
qs = super(ClientAdmin, self).get_queryset(obj)
</code></pre>
<p><code>qs</code> holds all the data in the Model in a QuerySet object. </p>
<p>To 'add in' all of the Patients objects that lie at the end of the one-to-many relationship (a Client can have many Patients) we use <code>prefetch_related()</code>:</p>
<pre><code>return qs.prefetch_related('patient_fk')
</code></pre>
<p>This performs one additional query that fetches the Patient objects for every Client in the queryset, by following the 'patient_fk' reverse relation. The joining of patients to clients is then done by Python (not SQL), so the end result is a new QuerySet - populated by just two database queries in total - containing all of the data we need to not only list all of the objects in our main Model but also include related objects from other Models.</p>
<p>So, what happens if we do <em>not</em> override <code>Manager.get_queryset()</code> method? Well, then we just get the data that is in the specific table (Clients), no info about Patients (...and 100 extra database hits):</p>
<pre><code>class ClientAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name', 'mobile', 'get_patients')
#do not override Manager.get_queryset()
#def get_queryset(self, obj):
# qs = super(ClientAdmin, self).get_queryset(obj)
# return qs.prefetch_related('patient_fk')
def get_patients(self, obj):
return list(obj.patient_fk.all())
#forces extra per-client query by following patient_fk
</code></pre>
<p>I hope this helps someone out there. Any errors in my explanation let me know and I'll correct.</p>
| 0 | 2016-08-12T11:41:27Z | [
"python",
"django",
"python-3.x",
"django-admin"
] |
How to read only number from a specific line using python script | 38,827,657 | <p>How to read only number from a specific line using python script for example </p>
<p>"1009 run test jobs" here i should read only number "1009" instead of "1009 run test jobs"</p>
| -1 | 2016-08-08T11:07:43Z | 38,827,760 | <p>a simple regexp should do:</p>
<pre><code>import re
match = re.match(r"(\d+)", "1009 run test jobs")
if match:
number = match.group()
</code></pre>
<p><a href="https://docs.python.org/3/library/re.html" rel="nofollow">https://docs.python.org/3/library/re.html</a></p>
| 0 | 2016-08-08T11:12:29Z | [
"python"
] |
How to read only number from a specific line using python script | 38,827,657 | <p>How to read only number from a specific line using python script for example </p>
<p>"1009 run test jobs" here i should read only number "1009" instead of "1009 run test jobs"</p>
| -1 | 2016-08-08T11:07:43Z | 38,827,778 | <p>Use regular expression:</p>
<pre><code>>>> import re
>>> x = "1009 run test jobs"
>>> re.sub("[^0-9]","",x)
>>> re.sub("\D","",x) #better way
</code></pre>
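<p>Note that <code>re.sub</code> strips every non-digit, so digits from anywhere in the line get concatenated together; for the example line it gives:</p>

```python
import re

x = "1009 run test jobs"
digits = re.sub(r"\D", "", x)  # remove everything that is not a digit
print(digits)       # -> '1009'
print(int(digits))  # -> 1009
```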
| 1 | 2016-08-08T11:13:41Z | [
"python"
] |
How to read only number from a specific line using python script | 38,827,657 | <p>How to read only number from a specific line using python script for example </p>
<p>"1009 run test jobs" here i should read only number "1009" instead of "1009 run test jobs"</p>
| -1 | 2016-08-08T11:07:43Z | 38,827,794 | <p>Or this if your number always comes first <code>int(line.split()[0])</code></p>
| 0 | 2016-08-08T11:14:20Z | [
"python"
] |
How to read only number from a specific line using python script | 38,827,657 | <p>How to read only number from a specific line using python script for example </p>
<p>"1009 run test jobs" here i should read only number "1009" instead of "1009 run test jobs"</p>
| -1 | 2016-08-08T11:07:43Z | 38,827,849 | <p>Or a simple check for the numbers in a string:</p>
<p><code>[int(s) for s in str.split() if s.isdigit()]</code></p>
<p>Where str is your string of text.</p>
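<p>Applied to the example line from the question, this might look like (using <code>line</code> instead of shadowing the built-in <code>str</code>):</p>

```python
line = "1009 run test jobs"
# keep only the whitespace-separated tokens made entirely of digits
numbers = [int(s) for s in line.split() if s.isdigit()]
print(numbers)  # -> [1009]
```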
| 0 | 2016-08-08T11:17:00Z | [
"python"
] |
How to read only number from a specific line using python script | 38,827,657 | <p>How to read only number from a specific line using python script for example </p>
<p>"1009 run test jobs" here i should read only number "1009" instead of "1009 run test jobs"</p>
| -1 | 2016-08-08T11:07:43Z | 38,827,880 | <p>Pretty sure there is a "more pythonic" way, but this works for me:</p>
<pre><code>s='teststri3k2k3s21k'
outs=''
for i in s:
    try:
        int(i)       # raises ValueError for non-digit characters
        outs+=i
    except ValueError:
        pass
print(outs)
</code></pre>
<p>If the number is always at the beginning of your string, you might consider something like <code>outstring = instring[0:4]</code> (a slice - note the colon - taking the first four characters for a number like "1009").</p>
| 0 | 2016-08-08T11:18:41Z | [
"python"
] |
How to read only number from a specific line using python script | 38,827,657 | <p>How to read only number from a specific line using python script for example </p>
<p>"1009 run test jobs" here i should read only number "1009" instead of "1009 run test jobs"</p>
| -1 | 2016-08-08T11:07:43Z | 38,828,188 | <p>You can do it with a regular expression. That's very easy:</p>
<pre><code>import re
regularExpression = "[^\d-]*(-?[0-9]+).*"
line = "some text -123 some text"
m = re.search(regularExpression, line)
if m:
print(m.groups()[0])
</code></pre>
<p>This regular expression extracts the first number in a text. It considers <code>'-'</code> as part of numbers. If you don't want this, change the regular expression to this one: <code>"[^\d-]*([0-9]+).*"</code></p>
| 0 | 2016-08-08T11:34:23Z | [
"python"
] |
accessing python dictionary from bash script | 38,827,679 | <p>I am invoking the bash script from python script.</p>
<p>I want the bash script to add an element to dictionary "d" in the python script</p>
<h2><code>abc3.sh</code>:</h2>
<pre><code>#!/bin/bash
rank=1
echo "plugin"
function reg()
{
if [ "$1" == "what" ]; then
python -c 'from framework import data;data(rank)'
echo "iamin"
else
plugin
fi
}
plugin()
{
echo "i am plugin one"
}
reg $1
</code></pre>
<h2>python file:</h2>
<pre><code> import sys,os,subprocess
from collections import *
subprocess.call(["./abc3.sh what"],shell=True,executable='/bin/bash')
def data(rank,check):
d[rank]["CHECK"]=check
print d[1]["CHECK"]
</code></pre>
| 0 | 2016-08-08T11:09:00Z | 38,828,116 | <p>If I understand correctly, you have a python script that runs a shell script, that in turn runs a new python script. And you'd want the second Python script to update a dictionary in the first script. That will not work like that. </p>
<p>When you run your first python script, it will create a new python process, which will interpret each instruction from your source script.</p>
<p>When it reaches the instruction <code>subprocess.call(["./abc3.sh what"],shell=True,executable='/bin/bash')</code>, it will spawn a new shell (bash) process which will in turn interpret your shell script.</p>
<p>When the shell script reaches <code>python -c <commands></code>, it invokes a new python process. This process is independent from the initial python process (even if you run the same script file).</p>
<p>Because each of these scripts will run in a different process, they don't have access to each other's data (the OS makes sure that each process is independent from every other, except through specific inter-process communication methods).</p>
<p>What you need to do: use some kind of interprocess mechanism, so that the initial python script gets data from the shell script. You may for example read data from the shell standard output, using <a href="https://docs.python.org/3/library/subprocess.html#subprocess.check_output" rel="nofollow">https://docs.python.org/3/library/subprocess.html#subprocess.check_output</a></p>
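<p>A minimal sketch of that approach (the <code>echo</code> command here is just a stand-in for <code>./abc3.sh what</code>): the parent python script captures the child's stdout and builds the dictionary itself, instead of expecting the child process to mutate it:</p>

```python
import subprocess
from collections import defaultdict

# Stand-in for: subprocess.check_output(["./abc3.sh", "what"])
# The child prints a rank and a value; the parent parses them.
out = subprocess.check_output(["sh", "-c", 'echo "1 CHECKED"'])
rank, check = out.decode().split()

# Build the dictionary on the parent side from the child's output.
d = defaultdict(dict)
d[int(rank)]["CHECK"] = check
print(d[1]["CHECK"])  # -> CHECKED
```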
| 2 | 2016-08-08T11:30:44Z | [
"python",
"bash"
] |
accessing python dictionary from bash script | 38,827,679 | <p>I am invoking the bash script from python script.</p>
<p>I want the bash script to add an element to dictionary "d" in the python script</p>
<h2><code>abc3.sh</code>:</h2>
<pre><code>#!/bin/bash
rank=1
echo "plugin"
function reg()
{
if [ "$1" == "what" ]; then
python -c 'from framework import data;data(rank)'
echo "iamin"
else
plugin
fi
}
plugin()
{
echo "i am plugin one"
}
reg $1
</code></pre>
<h2>python file:</h2>
<pre><code> import sys,os,subprocess
from collections import *
subprocess.call(["./abc3.sh what"],shell=True,executable='/bin/bash')
def data(rank,check):
d[rank]["CHECK"]=check
print d[1]["CHECK"]
</code></pre>
| 0 | 2016-08-08T11:09:00Z | 38,829,773 | <p>Let's suppose that you have a shell plugin that echoes the value:</p>
<pre><code>echo $1 12
</code></pre>
<p>The mockup python script looks like (I'm on windows/MSYS2 BTW, hence the strange paths for a Linux user):</p>
<pre><code>import subprocess
p = subprocess.Popen(args=[r'C:\msys64\usr\bin\sh.exe',"-c","C:/users/jotd/myplugin.sh myarg"],stdout=subprocess.PIPE,stderr=subprocess.PIPE)
o,e= p.communicate()
p.wait()
if len(e):
print("Warning: error found: "+e.decode())
result = o.strip()
d=dict()
d["TEST"] = result
print(d)
</code></pre>
<p>It prints the dictionary, proving that the argument has been passed to the shell and came back processed.
Note that stderr has been captured separately to avoid it being mixed up with the results; it is printed to the console if an error occurs.</p>
<pre><code>{'TEST': b'myarg 12'}
</code></pre>
| 0 | 2016-08-08T12:54:08Z | [
"python",
"bash"
] |
How to update a dataframe in Pandas Python | 38,827,835 | <p>I have the following two dataframes in pandas:</p>
<pre><code>DF1:
AuthorID1 AuthorID2 Co-Authored
A1 A2 0
A1 A3 0
A1 A4 0
A2 A3 0
DF2:
AuthorID1 AuthorID2 Co-Authored
A1 A2 5
A2 A3 6
A6 A7 9
</code></pre>
<p>I would like (without looping and comparing) to find the matching AuthorID1 and AuthorID2 pairing in DF2 that exist in DF1 and update the column values accordingly. So the result for the above two tables would be the following:</p>
<pre><code>Resulting Updated DF1:
AuthorID1 AuthorID2 Co-Authored
A1 A2 5
A1 A3 0
A1 A4 0
A2 A3 6
</code></pre>
<p>Is there a fast way to do this? As I have 7 million rows in DF1 and looping and comparing would just take forever.</p>
<p>Update: note that the last two in DF2 should not be part of the update in DF1 since they don't exist in DF1</p>
| 1 | 2016-08-08T11:16:36Z | 38,827,938 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.update.html" rel="nofollow"><code>update</code></a>:</p>
<pre><code>df1.update(df2)
print (df1)
AuthorID1 AuthorID2 Co-Authored
0 A1 A2 5.0
1 A2 A3 6.0
2 A1 A4 0.0
3 A2 A3 0.0
</code></pre>
<p>Sample:</p>
<pre><code>df1 = pd.DataFrame({'new': {0: 7, 1: 8, 2: 1, 3: 3},
'AuthorID2': {0: 'A2', 1: 'A3', 2: 'A4', 3: 'A3'},
'AuthorID1': {0: 'A1', 1: 'A1', 2: 'A1', 3: 'A2'},
'Co-Authored': {0: 0, 1: 0, 2: 0, 3: 0}})
df2 = pd.DataFrame({'AuthorID2': {0: 'A2', 1: 'A3'},
'AuthorID1': {0: 'A1', 1: 'A2'},
'Co-Authored': {0: 5, 1: 6}})
AuthorID1 AuthorID2 Co-Authored new
0 A1 A2 0 7
1 A1 A3 0 8
2 A1 A4 0 1
3 A2 A3 0 3
print (df2)
AuthorID1 AuthorID2 Co-Authored
0 A1 A2 5
1 A2 A3 6
df1.update(df2)
print (df1)
AuthorID1 AuthorID2 Co-Authored new
0 A1 A2 5.0 7
1 A2 A3 6.0 8
2 A1 A4 0.0 1
3 A2 A3 0.0 3
</code></pre>
<p>EDIT by comment:</p>
<p>I think you need filter <code>df2</code> by <code>df1</code> firstly with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isin.html" rel="nofollow"><code>isin</code></a>:</p>
<pre><code>df2 = df2[df2[['AuthorID1','AuthorID2']].isin(df1[['AuthorID1','AuthorID2']]).any(1)]
print (df2)
AuthorID1 AuthorID2 Co-Authored
0 A1 A2 5
1 A2 A3 6
df1.update(df2)
print (df1)
AuthorID1 AuthorID2 Co-Authored
0 A1 A2 5.0
1 A2 A3 6.0
2 A1 A4 0.0
3 A2 A3 0.0
</code></pre>
| 1 | 2016-08-08T11:21:55Z | [
"python",
"pandas",
"dataframe"
] |
Manipulating Azure storage containers in a Django project | 38,827,843 | <p>In a Django project, I'm uploading video files to an Azure storage via the following snippet:</p>
<pre><code>content_str = content.read()
blob_service.put_blob(
'videos',
name,
content_str,
x_ms_blob_type='BlockBlob',
x_ms_blob_content_type=content_type,
x_ms_blob_cache_control ='public, max-age=3600, s-maxage=86400'
)
</code></pre>
<p>where <code>name</code> is a random <code>uuid</code> string and <code>videos</code> is the name of the container. How do I upload the video files without specifying a container, i.e. de facto creating a unique container for every file I upload?</p>
| 0 | 2016-08-08T11:16:49Z | 38,827,950 | <blockquote>
<p>How do I upload the video files without specifying a container, i.e.
de facto creating a unique container for every file I upload?</p>
</blockquote>
<p>Each blob (a video file in your case) must belong to a container. So what you could do is create a container using <code>blob_service.create_container</code> before you call <code>blob_service.put_blob</code>. You can name the container as <code>uuid</code>.</p>
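<p>An untested sketch of that idea: derive a fresh container name from a uuid (Azure container names must be 3-63 lowercase letters, digits and hyphens, so the 32-character lowercase hex form of a uuid4 is a valid choice). The <code>blob_service</code> calls are shown only as comments because they need a live storage account:</p>

```python
import uuid

def unique_container_name():
    # uuid4().hex is 32 lowercase hex characters -> a valid container name
    return uuid.uuid4().hex

name = unique_container_name()
# blob_service.create_container(name)
# blob_service.put_blob(name, blob_name, content_str, x_ms_blob_type='BlockBlob')
print(name)
```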
| 0 | 2016-08-08T11:22:28Z | [
"python",
"azure",
"windows-azure-storage",
"azure-storage-blobs"
] |
kivy: __init__() is missing x required positional arguments | 38,827,858 | <p>I have the class Movie as follows:</p>
<pre><code>class Movie(Widget):
def __init__(self, title, image, time, description, trailer, fsk, threeD, **kwargs):
super(Movie, self).__init__(title, image, time, description, trailer, fsk, threeD, **kwargs)
title = StringProperty()
image = StringProperty()
time = StringProperty()
description = StringProperty()
trailer = StringProperty()
fsk = NumericProperty()
threeD = BooleanProperty()
</code></pre>
<p>When I run my script Python interpreter tells me this:</p>
<pre><code>TypeError: __init__() missing 7 required positional arguments: 'title', 'image', 'time', 'description', 'trailer', 'fsk', and 'threeD'
</code></pre>
<p>So what am I doing wrong? I've been struggling with this for some time already.</p>
<hr>
<p>Whole source code relevant to this issue:</p>
<pre><code>class Movie(Widget):
def __init__(self, title, image, time, description, trailer, fsk, threeD, **kwargs):
super(Movie, self).__init__(title, image, time, description, trailer, fsk, threeD, **kwargs)
title = StringProperty()
image = StringProperty()
time = StringProperty()
description = StringProperty()
trailer = StringProperty()
fsk = NumericProperty()
threeD = BooleanProperty()
class MainView(Widget):
def __init__(self, **kwargs):
super(MainView, self).__init__(**kwargs)
movies = ListProperty()
# movies = self.getMovies()
# for movie in movies:
# self.add_widget(movie)
def getMovies(self, url="http://.../"):
html = lxml.html.parse(url)
titles = html.xpath("//h5")
times = html.xpath("//td[@class='pday ptoday']/span/a")
trailers = html.xpath("//a[@data-modal-trailer-url]/@data-modal-trailer-url")
fsks = html.xpath("//tr[@data-fsk]/@data-fsk")
movies = list()
# for i in range(0, len(titles)):
# movie = Movie(titles[i].text, "images[i]", times[i].text, "", "https:" + trailers[i][:-11], fsks[i], "no")
# movies.append(movie)
return movies
</code></pre>
| 0 | 2016-08-08T11:17:45Z | 38,911,620 | <p>I've found out that the kv-lang file was the reason for this object initialization error. I don't know how to fix it yet, but I think that is another question, as this one is about why <code>__init__</code> is being called</p>
| -1 | 2016-08-12T06:49:39Z | [
"python",
"python-3.x",
"kivy"
] |
Finding the input dependencies of a function's outputs | 38,827,921 | <p>I've been working on a python program with pycparser, which is supposed to generate a JSON file with the dependencies of a given function and its outputs.
For an example function:</p>
<pre><code>int Test(int testInput)
{
int b = testInput;
return b;
}
</code></pre>
<p>Here I would expect <strong>b</strong> to be dependent on <strong>testInput</strong>. But of course it can get a lot more complicated with structs and if-statements etc. The files I'm testing also have functions in a specific form that are considered inputs and outputs, as in:</p>
<pre><code>int Test(int testInput)
{
int anotherInput = DatabaseRead(VariableInDatabase);
int b = testInput;
int c;
c = anotherInput + 1;
DatabaseWrite(c);
return b;
}
</code></pre>
<p>Here <strong>c</strong> would be dependent on <strong>VariableInDatabase</strong>, and <strong>b</strong> same as before.
I've run into a wall with this analysis in pycparser, as mostly structs and pointers are really hard for me to handle, and it seems like there'd be a better way. I've read into ASTs and CFGs, and other analysis tools like Frama-C, but I can't seem to find a clear answer as to whether this is even a thing.</p>
<p>Is there a known way to do this kind of analysis, and if so, what should I be looking into?
It's supposed to run on thousands of files and be able to output these dependencies into a JSON file, so plugins for editors don't seem like what I'm looking for.</p>
| 0 | 2016-08-08T11:20:43Z | 38,828,618 | <p>You need data flow analysis of your code, and then you want to follow the data flow backwards from a result to its sources, up to some stopping point (in your case, you stopped at a function parameter but you probably also want to stop at any global variable).</p>
<p>This is called <a href="http://en.wikipedia.org/wiki/Program_slicing" rel="nofollow">program slicing</a> in the literature.</p>
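<p>As a toy illustration of the idea (nothing like a real C slicer - just the backward walk over a hand-written dependency map for the question's second <code>Test()</code> function):</p>

```python
# Each variable maps to the variables its value was computed from,
# transcribed by hand from the example function in the question.
deps = {
    "anotherInput": {"VariableInDatabase"},
    "b": {"testInput"},
    "c": {"anotherInput"},
}

def backward_slice(var):
    """Collect every source that 'var' transitively depends on."""
    seen, work = set(), [var]
    while work:
        for src in deps.get(work.pop(), ()):
            if src not in seen:
                seen.add(src)
                work.append(src)
    return seen

print(sorted(backward_slice("c")))  # -> ['VariableInDatabase', 'anotherInput']
print(sorted(backward_slice("b")))  # -> ['testInput']
```

<p>The hard part of real slicing is computing the <code>deps</code> map itself from arbitrary C, which is exactly the data-flow problem discussed below.</p>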
<p>Computing data flows is pretty hard, especially if you have a complex language (C is fun: you can have data flows through indirectly called functions that read values; now you need indirect points-to analysis to support your data flow, and vice versa).</p>
<p>Here's a fun example:</p>
<pre><code> // ocean of functions:
...
int a(){ return b; }
...
int p(){ return q; }
...
  int foo( int (*x)(void) )
  { return (*x)(); }
</code></pre>
<p>Does foo depend on b? on q? You can't know unless you know that
foo calls a or p. But foo is handed a function pointer... and
what might that point to?</p>
<p>Using just ASTs and CFGs is necessary but not sufficient; data flow analysis algorithms are hard, especially if you have scale (as you suggest you do); you need a lot of machinery to do this that is not easy to build
[We've done this on C programs of 16 million lines]. See my essay on <a href="http://www.semdesigns.com/Products/DMS/LifeAfterParsing.html" rel="nofollow">Life After Parsing</a>.</p>
| 1 | 2016-08-08T11:57:11Z | [
"python",
"c",
"static-analysis",
"control-flow-graph",
"pycparser"
] |
Python Terminal unexpected character after line continuation character | 38,827,949 | <p>I have a file isqrt.py, containing the following code:</p>
<pre><code> from cmath import sqrt
x = -1
y = sqrt(x)
print(y)
</code></pre>
<p>I am getting the following error in my Mac Terminal:</p>
<pre><code>File "isqrt.py", line 1
{\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf210
^
SyntaxError: unexpected character after line continuation character
</code></pre>
<p>Do you know what is causing the error?</p>
| -1 | 2016-08-08T11:22:28Z | 38,828,040 | <p>Might be a bad idea to start with non-defined expressions, since the square-root of a negative number is not defined (at least when using real numbers).</p>
<p>What happens if you calculate the square-root of a positive number?</p>
| 0 | 2016-08-08T11:27:10Z | [
"python"
] |
Python Terminal unexpected character after line continuation character | 38,827,949 | <p>I have a file isqrt.py, containing the following code:</p>
<pre><code> from cmath import sqrt
x = -1
y = sqrt(x)
print(y)
</code></pre>
<p>I am getting the following error in my Mac Terminal:</p>
<pre><code>File "isqrt.py", line 1
{\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf210
^
SyntaxError: unexpected character after line continuation character
</code></pre>
<p>Do you know what is causing the error?</p>
| -1 | 2016-08-08T11:22:28Z | 38,828,091 | <p>Your error is showing you that the file you're running is not what you think it is; it's got a whole load of control characters. Seems like you've saved a file as RTF rather than plain text. Ideally, you should use a proper text editor to write Python code.</p>
| 4 | 2016-08-08T11:29:27Z | [
"python"
] |
Django MissingFileError: Path is a directory | 38,827,960 | <p>I'm trying to deploy my project static files on S3 AWS but when I collectstatic on my terminal, I get this error. I heard that it looks like I'm trying to include a static asset in my template, but have specified a directory instead of a file... and I do not understand this :/</p>
<pre><code>Traceback (most recent call last):
File "/usr/lib/python2.7/wsgiref/handlers.py", line 85, in run
self.result = application(self.environ, self.start_response)
File "/home/damian/proj1/local/lib/python2.7/site-packages/django/contrib/staticfiles/handlers.py", line 63, in __call__
return self.application(environ, start_response)
File "/home/damian/proj1/local/lib/python2.7/site-packages/whitenoise/base.py", line 57, in __call__
static_file = self.find_file(environ['PATH_INFO'])
File "/home/damian/proj1/local/lib/python2.7/site-packages/whitenoise/django.py", line 75, in find_file
return self.get_static_file(path, url)
File "/home/damian/proj1/local/lib/python2.7/site-packages/whitenoise/base.py", line 111, in get_static_file
self.add_stat_headers(headers, path, url)
File "/home/damian/proj1/local/lib/python2.7/site-packages/whitenoise/base.py", line 121, in add_stat_headers
file_stat = stat_regular_file(path)
File "/home/damian/proj1/local/lib/python2.7/site-packages/whitenoise/utils.py", line 30, in stat_regular_file
raise MissingFileError('Path is a directory: {0}'.format(path))
MissingFileError: Path is a directory: /home/damian/proj1/blog/static_in_pro/our_static
[08/Aug/2016 13:14:21] "GET / HTTP/1.1" 500 59
</code></pre>
<p>my (not all) settings:</p>
<pre><code>STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, "static_in_env", "static_root")
STATICFILES_DIRS = (
os.path.join(BASE_DIR, "static_in_pro", "our_static"),
#'/var/www/static/',
)
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, "static_in_env", "media_root")
CRISPY_TEMPLATE_PACK = 'bootstrap3'
MEDIAFILES_DIRS = (MEDIA_ROOT)
#AWS S3 STATICK FILES
AWS_HEADERS = { # see http://developer.yahoo.com/performance/rules.html#expires
'Expires': 'Thu, 31 Dec 2099 20:00:00 GMT',
'Cache-Control': 'max-age=94608000',
}
AWS_STORAGE_BUCKET_NAME = '###'
AWS_ACCESS_KEY_ID = '###'
AWS_SECRET_ACCESS_KEY = '###'
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
STATIC_URL = "https://%s/" % AWS_S3_CUSTOM_DOMAIN
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
</code></pre>
<p>Thanks for help and to indicate the correct path for the ongoing work!
Cheers</p>
| -1 | 2016-08-08T11:22:49Z | 38,828,847 | <p>If you use only <code>django-storages</code> you'll need to specify </p>
<pre><code>DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
</code></pre>
<p>instead of <code>STATICFILES_STORAGE</code></p>
| 0 | 2016-08-08T12:09:15Z | [
"python",
"django",
"amazon-web-services",
"amazon-s3"
] |
Django MissingFileError: Path is a directory | 38,827,960 | <p>I'm trying to deploy my project's static files to AWS S3, but when I run collectstatic in my terminal I get this error. I've heard it means I'm trying to include a static asset in my template but have specified a directory instead of a file, and I do not understand this :/</p>
<pre><code>Traceback (most recent call last):
File "/usr/lib/python2.7/wsgiref/handlers.py", line 85, in run
self.result = application(self.environ, self.start_response)
File "/home/damian/proj1/local/lib/python2.7/site-packages/django/contrib/staticfiles/handlers.py", line 63, in __call__
return self.application(environ, start_response)
File "/home/damian/proj1/local/lib/python2.7/site-packages/whitenoise/base.py", line 57, in __call__
static_file = self.find_file(environ['PATH_INFO'])
File "/home/damian/proj1/local/lib/python2.7/site-packages/whitenoise/django.py", line 75, in find_file
return self.get_static_file(path, url)
File "/home/damian/proj1/local/lib/python2.7/site-packages/whitenoise/base.py", line 111, in get_static_file
self.add_stat_headers(headers, path, url)
File "/home/damian/proj1/local/lib/python2.7/site-packages/whitenoise/base.py", line 121, in add_stat_headers
file_stat = stat_regular_file(path)
File "/home/damian/proj1/local/lib/python2.7/site-packages/whitenoise/utils.py", line 30, in stat_regular_file
raise MissingFileError('Path is a directory: {0}'.format(path))
MissingFileError: Path is a directory: /home/damian/proj1/blog/static_in_pro/our_static
[08/Aug/2016 13:14:21] "GET / HTTP/1.1" 500 59
</code></pre>
<p>my (not all) settings:</p>
<pre><code>STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, "static_in_env", "static_root")
STATICFILES_DIRS = (
os.path.join(BASE_DIR, "static_in_pro", "our_static"),
#'/var/www/static/',
)
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, "static_in_env", "media_root")
CRISPY_TEMPLATE_PACK = 'bootstrap3'
MEDIAFILES_DIRS = (MEDIA_ROOT)
#AWS S3 STATICK FILES
AWS_HEADERS = { # see http://developer.yahoo.com/performance/rules.html#expires
'Expires': 'Thu, 31 Dec 2099 20:00:00 GMT',
'Cache-Control': 'max-age=94608000',
}
AWS_STORAGE_BUCKET_NAME = '###'
AWS_ACCESS_KEY_ID = '###'
AWS_SECRET_ACCESS_KEY = '###'
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
STATIC_URL = "https://%s/" % AWS_S3_CUSTOM_DOMAIN
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
</code></pre>
<p>Thanks for help and to indicate the correct path for the ongoing work!
Cheers</p>
| -1 | 2016-08-08T11:22:49Z | 38,832,242 | <p>That error is from WhiteNoise, but you don't need to use WhiteNoise if you're serving your static files from S3. You should remove the WhiteNoise references from your <code>wsgi.py</code> file.</p>
| 1 | 2016-08-08T14:46:12Z | [
"python",
"django",
"amazon-web-services",
"amazon-s3"
] |
Passing variable into MySQL using Python 2.7 | 38,827,994 | <p>I've been trying to insert a string value from a variable in Python 2.7 into a MySQL statement.
I can't seem to get it to work; could someone point me in the right direction?</p>
<pre><code>import MySQLdb
country_name = raw_input("Which country would you like?\n")
dbh = MySQLdb.connect(host="localhost",
user="boole",
passwd="****",
db="mySQL_Experiment_1")
sth = dbh.cursor()
sth.execute("""SELECT name, population FROM world WHERE name=(%s)""", (country_name))
for row in sth:
print row[0], row[1]
</code></pre>
<p>It outputs:</p>
<pre><code>/usr/bin/python2.7 "/home/boole/Documents/Python Scripts/mySQL_Experiment_1/main.py"
Which country would you like?
Canada
Traceback (most recent call last):
File "/home/boole/Documents/Python Scripts/mySQL_Experiment_1/main.py", line 10, in <module>
sth.execute("""SELECT name, population FROM world WHERE name=(%s)""", (country_name))
File "/usr/local/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 187, in execute
query = query % tuple([db.literal(item) for item in args])
TypeError: not all arguments converted during string formatting
Process finished with exit code 1
</code></pre>
<p>Thanks,
Boole</p>
| 0 | 2016-08-08T11:24:16Z | 38,842,518 | <p>Try this </p>
<pre><code>cursor.execute("SELECT name, population FROM world where name = %s", [country_name])
</code></pre>
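<p>To see why the original call fails, note the line in the traceback: <code>query = query % tuple([db.literal(item) for item in args])</code>. Passing <code>(country_name)</code> passes a bare string, because parentheses around a single value do not make a tuple. A standalone sketch (no database needed) of what goes wrong and the fix:</p>

```python
# MySQLdb does roughly `query % tuple(args)` under the hood.
# tuple("Canada") explodes the string into six one-character
# arguments, but the query has only one %s placeholder.
query = "SELECT name, population FROM world WHERE name=%s"

try:
    query % tuple("Canada")   # what passing (country_name) amounts to
    failed = False
except TypeError as exc:
    failed = True
    print(exc)                # "not all arguments converted ..."

# The fix: a one-element tuple (note the trailing comma) or a list,
# so there is exactly one value for the one placeholder.
# (The real driver also quotes the value via db.literal.)
print(query % ("Canada",))
```

<p>So the original <code>sth.execute(...)</code> call also works once the parameter container is a real tuple: <code>(country_name,)</code>.</p>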
| 1 | 2016-08-09T05:10:41Z | [
"python",
"mysql",
"python-2.7"
] |
Python grammar end "return outside function" | 38,828,017 | <p>I've noticed that the Python grammar allows a return statement to appear outside a function, but I really don't understand why. I believe one could specify the grammar so that this wouldn't be allowed.</p>
<p>This is a piece of Python grammar which allows this:</p>
<pre><code>single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |
import_stmt | global_stmt | nonlocal_stmt | assert_stmt)
flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
return_stmt: 'return' [testlist]
</code></pre>
<p>Also, the interpreter reports this as a syntax error ('return' outside function), but how can the parser detect it if this isn't specified in the grammar?</p>
| 2 | 2016-08-08T11:25:37Z | 38,828,722 | <p>First, the interpreter builds the AST tree. Then, when it generates code for basic blocks by visiting the AST tree, it verifies that the return statement is inside a function.</p>
<pre><code>compiler_visit_stmt(struct compiler *c, stmt_ty s)
...
switch (s->kind) {
...
case Return_kind:
if (c->u->u_ste->ste_type != FunctionBlock)
return compiler_error(c, "'return' outside function");
</code></pre>
<p>As you can see, the semantics of the language is not defined only by its grammar.</p>
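<p>You can observe this check firing at compile time, before any bytecode runs: parsing accepts the statement, and code generation rejects it.</p>

```python
# compile() parses and compiles without executing, so the error
# below comes from the code-generation check shown above, not from
# running anything.
try:
    compile("return 1", "<demo>", "exec")
    msg = None
except SyntaxError as exc:
    msg = exc.msg

print(msg)  # 'return' outside function
```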
| 4 | 2016-08-08T12:03:07Z | [
"python",
"parsing",
"ebnf"
] |
An efficient way to save parsed XML content to Django Model | 38,828,108 | <p>This is my first question so I will do my best to conform to the question guidelines. I'm also learning how to code so please ELI5.</p>
<p>I'm working on a django project that parses XML to django models. Specifically Podcast XMLs. </p>
<p>I currently have this code in my model:</p>
<pre><code> from django.db import models
import feedparser
class Channel(models.Model):
channel_title = models.CharField(max_length=100)
def __str__(self):
return self.channel_title
class Item(models.Model):
channel = models.ForeignKey(Channel, on_delete=models.CASCADE)
item_title = models.CharField(max_length=100)
def __str__(self):
return self.item_title
radiolab = feedparser.parse('radiolab.xml')
if Channel.objects.filter(channel_title = 'Radiolab').exists():
pass
else:
channel_title= radiolab.feed.title
a = Channel.objects.create(channel_title=channel_title)
a.save()
for episode in radiolab.entries:
item_title = episode.title
channel_title = Channel.objects.get(channel_title="Radiolab")
b = Item.objects.create(channel=channel_title, item_title=item_title)
b.save()
</code></pre>
<p>radiolab.xml is a feed I've saved locally from <a href="http://feeds.wnyc.org/radiolab" rel="nofollow">Radiolab Podcast Feed.</a></p>
<p>Because this code runs whenever I python manage.py runserver, the parsed XML content is sent to my database just like I want, but this happens on every runserver, creating duplicate records.</p>
<p>I'd love some help finding a way to make this happen just once, and also a DRY mechanism for adding different feeds so they're parsed and saved to the database, preferably with the feed URL submitted via a form.</p>
| 0 | 2016-08-08T11:30:24Z | 38,828,170 | <p>If you don't want it run every time, don't put it in models.py. The only thing that belongs there are the model definitions themselves.</p>
<p>Stuff that happens in response to a user action on the site goes in a view. Or, if you want this to be done from the admin site, it should go in the admin.py file.</p>
| 0 | 2016-08-08T11:33:26Z | [
"python",
"xml",
"django",
"django-models",
"xml-parsing"
] |
Spark cartesian product | 38,828,139 | <p>I have to compare coordinates in order to get the distance. Therefore I load the data with sc.textFile() and build a cartesian product. There are about 2,000,000 lines in the text file, so 2,000,000 x 2,000,000 coordinate pairs to compare.</p>

<p>I tested the code with about 2,000 coordinates and it worked fine within seconds. But using the big file it seems to stop at a certain point, and I don't know why. The code looks as follows:</p>
<pre><code>def concat(x,y):
if(isinstance(y, list)&(isinstance(x,list))):
return x + y
if(isinstance(x,list)&isinstance(y,tuple)):
return x + [y]
if(isinstance(x,tuple)&isinstance(y,list)):
return [x] + y
else: return [x,y]
def haversian_dist(tuple):
lat1 = float(tuple[0][0])
lat2 = float(tuple[1][0])
lon1 = float(tuple[0][2])
lon2 = float(tuple[1][2])
p = 0.017453292519943295
a = 0.5 - cos((lat2 - lat1) * p)/2 + cos(lat1 * p) * cos(lat2 * p) * (1 - cos((lon2 - lon1) * p)) / 2
print(tuple[0][1])
return (int(float(tuple[0][1])), (int(float(tuple[1][1])),12742 * asin(sqrt(a))))
def sort_val(tuple):
dtype = [("globalid", int),("distance",float)]
a = np.array(tuple[1], dtype=dtype)
sorted_mins = np.sort(a, order="distance",kind="mergesort")
return (tuple[0], sorted_mins)
def calc_matrix(sc, path, rangeval, savepath, name):
data = sc.textFile(path)
data = data.map(lambda x: x.split(";"))
data = data.repartition(100).cache()
data.collect()
matrix = data.cartesian(data)
values = matrix.map(haversian_dist)
values = values.reduceByKey(concat)
values = values.map(sort_val)
values = values.map(lambda x: (x[0], x[1][1:int(rangeval)].tolist()))
values = values.map(lambda x: (x[0], [y[0] for y in x[1]]))
dicti = values.collectAsMap()
hp.save_pickle(dicti, savepath, name)
</code></pre>
<p>Even a file with about 15,000 entries doesn't work. I know the cartesian product causes O(n^2) runtime, but shouldn't Spark handle this? Or is something wrong? The only starting point is an error message, but I don't know if it relates to the actual problem:</p>
<pre><code>16/08/06 22:21:12 WARN TaskSetManager: Lost task 15.0 in stage 1.0 (TID 16, hlb0004): java.net.SocketException: Datenübergabe unterbrochen (broken pipe)
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
at org.apache.spark.api.python.PythonRDD$.org$apache$spark$api$python$PythonRDD$$write$1(PythonRDD.scala:440)
at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:452)
at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:452)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:452)
at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:280)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1741)
at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:239)
16/08/06 22:21:12 INFO TaskSetManager: Starting task 15.1 in stage 1.0 (TID 17, hlb0004, partition 15,PROCESS_LOCAL, 2408 bytes)
16/08/06 22:21:12 WARN TaskSetManager: Lost task 7.0 in stage 1.0 (TID 8, hlb0004): java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:209)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:139)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:342)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
</code></pre>
| 0 | 2016-08-08T11:31:48Z | 38,831,748 | <p>You used <code>data.collect()</code> in your code which basically calls all data into one machine. Depending on the memory on that machine, 2,000,000 lines of data might not fit very well.</p>
<p>Also, I tried to reduce the number of computations to be done by doing joins instead of using <code>cartesian</code>. (Please note that I just generated random numbers using numpy and that the format here may be different from what you have. Still, the main idea is the same.)</p>
<pre><code>import numpy as np
from numpy import arcsin, cos, sqrt
# suppose my data consists of latlong pairs
# we will use the indices for pairing up values
data = sc.parallelize(np.random.rand(10,2)).zipWithIndex()
data = data.map(lambda (val, idx): (idx, val))
# generate pairs (e.g. if i have 3 pairs with indices [0,1,2],
# I only have to compute for distances of pairs (0,1), (0,2) & (1,2)
idxs = range(data.count())
indices = sc.parallelize([(i,j) for i in idxs for j in idxs if i < j])
# haversian func (i took the liberty of editing some parts of it)
def haversian_dist(latlong1, latlong2):
lat1, lon1 = latlong1
lat2, lon2 = latlong2
p = 0.017453292519943295
def hav(theta): return (1 - cos(p * theta))/2
a = hav(lat2 - lat1) + cos(p * lat1)*cos(p * lat2)*hav(lon2 - lon1)
return 12742 * arcsin(sqrt(a))
joined1 = indices.join(data).map(lambda (i, (j, val)): (j, (i, val)))
joined2 = joined1.join(data).map(lambda (j, ((i, latlong1), latlong2)): ((i,j), (latlong1, latlong2)))
haversianRDD = joined2.mapValues(lambda (x, y): haversian_dist(x, y))
</code></pre>
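<p>As a quick sanity check of the haversine formula itself (pure Python, no Spark needed; the city coordinates below are just illustrative test values):</p>

```python
from math import asin, cos, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Same constants as in the answer: p converts degrees to radians,
    # 12742 km is the Earth's diameter.
    p = 0.017453292519943295
    a = (0.5 - cos((lat2 - lat1) * p) / 2
         + cos(lat1 * p) * cos(lat2 * p) * (1 - cos((lon2 - lon1) * p)) / 2)
    return 12742 * asin(sqrt(a))

# Paris -> London is roughly 344 km great-circle distance.
print(round(haversine_km(48.8566, 2.3522, 51.5074, -0.1278)))
```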
| 1 | 2016-08-08T14:23:02Z | [
"python",
"apache-spark",
"cartesian-product"
] |
How to find polygon vertices from edge detection images? | 38,828,143 | <p><strong>1. The problem</strong></p>
<p>Given the images of a house roof, I am trying to find the contours of the roofs. I have labelled data available (as polygon vertices), which I interpolate to create the ground-truth image shown below
<a href="http://i.stack.imgur.com/OhNKI.png" rel="nofollow"><img src="http://i.stack.imgur.com/OhNKI.png" alt="ground truth from annotations"></a></p>
<p>I use Canny, Hough lines, and LBP features to train an ML model, and the results look decent. The model output is shown in the middle, and the overlay on the test image is shown on the right.</p>
<p><a href="http://i.stack.imgur.com/ZybaY.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZybaY.png" alt="left original image, middle Model output, right: Output over-layed to original image"></a></p>
<p><strong>2. What I need.</strong> </p>
<p>The final output should really be a set of polygons, and I need to find the points on which these polygons should be drawn (see the highlighted points in the image below). So the output can be a set of n line segments, where each line segment is 2 points [(x1,y1),(x2,y2)]</p>
<p><a href="http://i.stack.imgur.com/70HAu.png" rel="nofollow"><img src="http://i.stack.imgur.com/70HAu.png" alt="marked image with vertices coded in color"></a></p>
<p><strong>3. What are my thoughts/ideas;</strong></p>
<p>a. Erosion, dilation, opening, closing, skeletonize operations</p>
<p>While these operations make the lines in the above image much neater, they don't help me find the polygon vertices I am looking for.</p>
<p>I'd like to fit (a number of) lines to the white pixels in the image (something like hough lines). </p>
<p>The intersections of these lines would give me the vertices for the polygons I am looking for.</p>
<p>I am wondering if there is a more standard/better way of accomplishing the above.</p>
| 0 | 2016-08-08T11:31:58Z | 38,834,390 | <p>I think <a href="http://docs.opencv.org/2.4/modules/imgproc/doc/feature_detection.html#houghlinesp" rel="nofollow">HoughLinesP</a> will help you in your goal. It will find line segments and output them in a vector <code>[x1,y1,x2,y2]</code> where (x,y) pairs represent the start and endpoints of line segments.</p>
<p>Each vertex should be <strong>near</strong> the end of 2 or more line segments. You go through each of the endpoints and count how many times they appear. When you've processed all the points you can eliminate any that have fewer than 2 occurrences. Of course you will need some small distance threshold for deciding that two endpoints are the same point, because of the gaps in the lines; in pseudocode: <code>dist(point1, point2) < some_delta_threshold</code></p>
<p>I'm not sure how you would find the polygons at this point, but hopefully this offers some assistance</p>
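<p>The endpoint-counting step can be sketched in plain Python. The threshold value is an assumption to tune; HoughLinesP gives you segments as <code>[x1, y1, x2, y2]</code>.</p>

```python
from math import hypot

def vertex_candidates(segments, delta=5.0):
    """Cluster segment endpoints; keep clusters hit by >= 2 segments.

    This is an O(n^2) sketch of the counting idea described above.
    """
    endpoints = [(x1, y1) for x1, y1, _, _ in segments]
    endpoints += [(x2, y2) for _, _, x2, y2 in segments]
    clusters = []  # each entry is [representative_point, count]
    for p in endpoints:
        for cluster in clusters:
            cx, cy = cluster[0]
            if hypot(p[0] - cx, p[1] - cy) < delta:
                cluster[1] += 1
                break
        else:
            clusters.append([p, 1])
    return [point for point, count in clusters if count >= 2]

# Two segments meeting at (10, 10): only the shared corner survives.
print(vertex_candidates([(0, 10, 10, 10), (10, 10, 10, 0)]))  # [(10, 10)]
```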
| 1 | 2016-08-08T16:36:47Z | [
"python",
"numpy",
"image-processing",
"signal-processing",
"skimage"
] |
How to select a group of minimum values out of a numpy array? | 38,828,180 | <p><code>fitVec = np.zeros((100, 2))</code> # initializing fitVec; the first column will be the indices and the second column will contain the values</p>
<p>After Initialization, fitVec gets assigned some values by running a function.
Final fitVec values:</p>
<pre><code>fitVec [[ 2.00000000e+01 2.42733444e+10]
[ 2.10000000e+01 2.53836270e+10]
[ 2.20000000e+01 2.65580909e+10]
[ 2.30000000e+01 2.76674886e+10]
[ 2.40000000e+01 2.88334239e+10]
[ 2.50000000e+01 3.00078878e+10]
[ 2.60000000e+01 3.11823517e+10]
[ 2.70000000e+01 3.22917494e+10]
[ 2.80000000e+01 3.34011471e+10]
[ 2.90000000e+01 3.45756109e+10]
[ 3.00000000e+01 3.57500745e+10]
[ 3.10000000e+01 3.68594722e+10]
[ 3.20000000e+01 3.79688699e+10]
[ 3.30000000e+01 3.90782676e+10]
[ 3.40000000e+01 4.02527315e+10]
[ 3.50000000e+01 4.14271953e+10]
[ 3.60000000e+01 4.25365930e+10]
[ 3.70000000e+01 4.36476395e+10]]
</code></pre>
<p>I haven't shown all of the matrix, to make it look less messy.
Now I want to select the twenty minimum values (20 rows) out of it.
I'm trying
<code>winner = np.argmin(fitVec[100,1])</code>
but it gives me only one minimum value, whereas I want 20 minimum values. How should I go about it?</p>
| 0 | 2016-08-08T11:33:57Z | 39,816,949 | <p>First off, I'd separate indices and values; no need to store them both as <code>float</code>. After that, <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html" rel="nofollow"><code>numpy.argsort</code></a> is your friend:</p>
<pre><code>import numpy
idx = numpy.arange(20, 38, dtype=int)
vals = numpy.random.rand(len(idx))
i = numpy.argsort(vals)
sorted_idx = idx[i]
sorted_vals = vals[i]
print(idx)
print(vals)
print
print(sorted_idx)
print(sorted_vals)
</code></pre>
<p>Output:</p>
<pre><code>[20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37]
[ 0.00560689 0.73380138 0.53490514 0.1221538 0.45490855 0.39076217
0.39906252 0.59933451 0.7163099 0.393409 0.15854323 0.4631854
0.92469362 0.69999709 0.67664291 0.73184184 0.52893679 0.60365631]
[20 23 30 25 29 26 24 31 36 22 27 37 34 33 28 35 21 32]
[ 0.00560689 0.1221538 0.15854323 0.39076217 0.393409 0.39906252
0.45490855 0.4631854 0.52893679 0.53490514 0.59933451 0.60365631
0.67664291 0.69999709 0.7163099 0.73184184 0.73380138 0.92469362]
</code></pre>
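<p>Applied to the <code>fitVec</code> layout from the question (a sketch with random stand-in values, since the real fitness data isn't shown), the same <code>argsort</code> idea picks out the 20 smallest rows directly:</p>

```python
import numpy as np

np.random.seed(0)
fitVec = np.column_stack([np.arange(100.0), np.random.rand(100)])

# argsort on the value column gives row order ascending by value;
# keep the first 20 rows to get the 20 smallest (index, value) pairs.
winners = fitVec[np.argsort(fitVec[:, 1])[:20]]
print(winners.shape)  # (20, 2)
```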
| 1 | 2016-10-02T12:22:13Z | [
"python",
"numpy"
] |
Select multiple sections of rows by index in pandas | 38,828,331 | <p>I have large DataFrame with GPS path and some attributes. A few sections of the path are those which I need to analyse. I would like to subset only those sections to a new DataFrame. I can subset one section at the time but the idea is to have them all and to have an original index.</p>
<p>The problem is similar to:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'A':[0,1,2,3,4,5,6,7,8,9],'B':['a','b','c','d','e','f','g','h','i','j']},
index=range(10,20,))
</code></pre>
<p>I want to get something like:</p>
<pre><code>cdf = df.loc[[11:13] & [17:20]] # SyntaxError: invalid syntax
</code></pre>
<p>desired outcome:</p>
<pre><code> A B
11 1 b
12 2 c
13 3 d
17 7 h
18 8 i
19 9 j
</code></pre>
<p>I know the example is easy with <code>cdf = df.loc[[11,12,13,17,18,19],:]</code> but in the original problem I have thousands of lines and some entries already removed, so listing the points by hand is not really an option.</p>
| 1 | 2016-08-08T11:42:05Z | 38,828,421 | <p>One possible solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a>:</p>
<pre><code>cdf = pd.concat([df.loc[11:13], df.loc[17:20]])
print (cdf)
A B
11 1 b
12 2 c
13 3 d
17 7 h
18 8 i
19 9 j
</code></pre>
<p>Another solution with <code>range</code>:</p>
<pre><code>cdf = df.ix[list(range(11,14)) + list(range(17,20))]
print (cdf)
A B
11 1 b
12 2 c
13 3 d
17 7 h
18 8 i
19 9 j
</code></pre>
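<p>With many sections, you can also build the selection from a list of <code>(start, stop)</code> label pairs instead of writing each slice out (a sketch; the bounds here are made up):</p>

```python
import pandas as pd

df = pd.DataFrame({'A': list(range(10)), 'B': list('abcdefghij')},
                  index=range(10, 20))

# .loc label slices are inclusive on both ends.
sections = [(11, 13), (17, 19)]
cdf = pd.concat([df.loc[a:b] for a, b in sections])
print(list(cdf.index))  # [11, 12, 13, 17, 18, 19]
```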
| 3 | 2016-08-08T11:46:59Z | [
"python",
"pandas",
"slice"
] |
Select multiple sections of rows by index in pandas | 38,828,331 | <p>I have large DataFrame with GPS path and some attributes. A few sections of the path are those which I need to analyse. I would like to subset only those sections to a new DataFrame. I can subset one section at the time but the idea is to have them all and to have an original index.</p>
<p>The problem is similar to:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'A':[0,1,2,3,4,5,6,7,8,9],'B':['a','b','c','d','e','f','g','h','i','j']},
index=range(10,20,))
</code></pre>
<p>I want to get something like:</p>
<pre><code>cdf = df.loc[[11:13] & [17:20]] # SyntaxError: invalid syntax
</code></pre>
<p>desired outcome:</p>
<pre><code> A B
11 1 b
12 2 c
13 3 d
17 7 h
18 8 i
19 9 j
</code></pre>
<p>I know the example is easy with <code>cdf = df.loc[[11,12,13,17,18,19],:]</code> but in the original problem I have thousands of lines and some entries already removed, so listing the points by hand is not really an option.</p>
| 1 | 2016-08-08T11:42:05Z | 38,828,498 | <p>You could use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.r_.html" rel="nofollow"><code>np.r_</code></a> to concatenate the slices:</p>
<pre><code>In [16]: df.loc[np.r_[11:13, 17:20]]
Out[16]:
A B
11 1 b
12 2 c
17 7 h
18 8 i
19 9 j
</code></pre>
<p>Note, however, that
<code>df.loc[A:B]</code> selects labels <code>A</code> through <code>B</code> with <code>B</code> included.
<code>np.r_[A:B]</code> returns an array of <code>A</code> through <code>B</code> with <code>B</code> excluded. To include <code>B</code> you would need to use <code>np.r_[A:B+1]</code>.</p>
<p>When passed a slice, such as <code>df.loc[A:B]</code>, <code>df.loc</code> ignores labels that are not in <code>df.index</code>. In contrast, when passed an array, such as <code>df.loc[np.r_[A:B]]</code>, <code>df.loc</code> may add a new row filled with NaNs for each value in the array which is not in <code>df.index</code>.</p>
<p>Thus to produce the desired result, you would need to adjust the right endpoint of the slices and use <code>isin</code> to test for membership in <code>df.index</code>:</p>
<pre><code>In [26]: df.loc[df.index.isin(np.r_[11:14, 17:21])]
Out[26]:
A B
11 1 b
12 2 c
13 3 d
17 7 h
18 8 i
19 9 j
</code></pre>
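<p>A runnable version of the combined approach on the question's toy frame:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': list(range(10)), 'B': list('abcdefghij')},
                  index=range(10, 20))

# np.r_ excludes the right endpoint, hence 14 and 20 to mimic the
# inclusive label slices 11:13 and 17:19; isin guards against labels
# that are missing from the index.
cdf = df[df.index.isin(np.r_[11:14, 17:20])]
print(list(cdf.index))  # [11, 12, 13, 17, 18, 19]
```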
| 3 | 2016-08-08T11:51:11Z | [
"python",
"pandas",
"slice"
] |
Django says that table doesn't exist while it does | 38,828,394 | <p>My ultimate goal is to deploy a Django application on a new server, and all I have is a raw disk image of the old server. I have everything set up on the new server: uwsgi, python, mysql, django etc. But let's get to my problem: when I run</p>
<pre><code>uwsgi --http :8001 --module propotolki.wsgi
</code></pre>
<p>It runs without errors but when I try to access it through browser I get the following stack trace in the logs:</p>
<pre><code>Internal Server Error: /
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", line 90, in get_response
response = middleware_method(request)
File "./apps/middleware/middleware.py", line 11, in process_request
if RedirectHandler.objects.filter(is_active=True, redirect_from=request.path).exists():
File "/usr/local/lib/python2.7/dist-packages/django/db/models/query.py", line 512, in exists
return self.query.has_results(using=self.db)
File "/usr/local/lib/python2.7/dist-packages/django/db/models/sql/query.py", line 409, in has_results
return bool(compiler.execute_sql(SINGLE))
File "/usr/local/lib/python2.7/dist-packages/django/db/models/sql/compiler.py", line 781, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/util.py", line 53, in execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python2.7/dist-packages/django/db/utils.py", line 99, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/util.py", line 53, in execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/mysql/base.py", line 124, in execute
return self.cursor.execute(query, args)
File "/usr/local/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 205, in execute
self.errorhandler(self, exc, value)
File "/usr/local/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
raise errorclass, errorvalue
ProgrammingError: (1146, "Table 'propotolki.middleware_redirecthandler' doesn't exist")
</code></pre>
<p>Here's what I get from mysql console, proving that the table <em>does</em> exist:</p>
<pre><code>mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| propotolki |
+--------------------+
4 rows in set (0.00 sec)
mysql> use propotolki;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> show tables;
+---------------------------------------+
| Tables_in_propotolki |
+---------------------------------------+
| auth_group |
| auth_group_permissions |
| auth_permission |
| auth_user |
| auth_user_groups |
| auth_user_user_permissions |
| bla_files_blafile |
| bla_files_sitefiles |
| calc_anglealum |
| calc_anglesteelwhite |
| calc_baseheight |
| calc_cellsize |
| calc_color |
| calc_outgo_bc_25 |
| calc_pendant |
| calc_price |
| calc_roofcolor |
| calc_size |
| catalog_brand |
| catalog_category |
| catalog_colortemperature |
| catalog_diffuser |
| catalog_floortype |
| catalog_lightoutput |
| catalog_order |
| catalog_orderinfo |
| catalog_product |
| catalog_product_categories |
| catalog_product_color_temperature |
| catalog_product_diffuser |
| catalog_product_floor_type |
| catalog_product_light_output |
| catalog_product_related |
| catalog_product_related_categories |
| catalog_productsliderimage |
| catalog_sessionbasket |
| chunks_chunk |
| chunks_group |
| chunks_image |
| chunks_media |
| django_admin_log |
| django_content_type |
| django_ipgeobase_ipgeobase |
| django_ipgeobase_ipgeobase_city |
| django_ipgeobase_ipgeobase_country |
| django_ipgeobase_ipgeobase_region |
| django_session |
| django_site |
| feedback_feedback |
| gallery_gallerygroup |
| gallery_galleryimage |
| left_menu_leftmenuitem |
| middleware_breadcrumbs |
| middleware_flatpages |
| middleware_redirecthandler |
| middleware_slidebar |
| propotolki.django_content_type |
| propotolki.middleware_redirecthandler |
| south_migrationhistory |
| thumbnail_kvstore |
| watson_searchentry |
+---------------------------------------+
61 rows in set (0.01 sec)
</code></pre>
<p>I am far from being a Django expert, so please ask for any info you need.
I also tried running <code>python manage.py syncdb</code>, but I get a similar error saying that other tables don't exist either:</p>
<pre><code># python manage.py syncdb
Syncing...
Creating tables ...
Traceback (most recent call last):
File "manage.py", line 9, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 399, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 242, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 285, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 415, in handle
return self.handle_noargs(**options)
File "/usr/local/lib/python2.7/dist-packages/south/management/commands/syncdb.py", line 92, in handle_noargs
syncdb.Command().execute(**options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 285, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 415, in handle
return self.handle_noargs(**options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/commands/syncdb.py", line 112, in handle_noargs
emit_post_sync_signal(created_models, verbosity, interactive, db)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/sql.py", line 216, in emit_post_sync_signal
interactive=interactive, db=db)
File "/usr/local/lib/python2.7/dist-packages/django/dispatch/dispatcher.py", line 185, in send
response = receiver(signal=self, sender=sender, **named)
File "/usr/local/lib/python2.7/dist-packages/django/contrib/auth/management/__init__.py", line 82, in create_permissions
ctype = ContentType.objects.db_manager(db).get_for_model(klass)
File "/usr/local/lib/python2.7/dist-packages/django/contrib/contenttypes/models.py", line 47, in get_for_model
defaults = {'name': smart_text(opts.verbose_name_raw)},
File "/usr/local/lib/python2.7/dist-packages/django/db/models/manager.py", line 154, in get_or_create
return self.get_queryset().get_or_create(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/django/db/models/query.py", line 373, in get_or_create
return self.get(**lookup), False
File "/usr/local/lib/python2.7/dist-packages/django/db/models/query.py", line 301, in get
num = len(clone)
File "/usr/local/lib/python2.7/dist-packages/django/db/models/query.py", line 77, in __len__
self._fetch_all()
File "/usr/local/lib/python2.7/dist-packages/django/db/models/query.py", line 854, in _fetch_all
self._result_cache = list(self.iterator())
File "/usr/local/lib/python2.7/dist-packages/django/db/models/query.py", line 220, in iterator
for row in compiler.results_iter():
File "/usr/local/lib/python2.7/dist-packages/django/db/models/sql/compiler.py", line 710, in results_iter
for rows in self.execute_sql(MULTI):
File "/usr/local/lib/python2.7/dist-packages/django/db/models/sql/compiler.py", line 781, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/util.py", line 53, in execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python2.7/dist-packages/django/db/utils.py", line 99, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/util.py", line 53, in execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/mysql/base.py", line 124, in execute
return self.cursor.execute(query, args)
File "/usr/local/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 205, in execute
self.errorhandler(self, exc, value)
File "/usr/local/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
raise errorclass, errorvalue
django.db.utils.ProgrammingError: (1146, "Table 'propotolki.django_content_type' doesn't exist")
</code></pre>
| 0 | 2016-08-08T11:45:36Z | 38,828,450 | <p>How did your table names end up with a '.' in them? It's a special character.</p>
<pre><code>| propotolki.django_content_type |
| propotolki.middleware_redirecthandler |
</code></pre>
<p>Try:</p>
<pre><code>ALTER TABLE `propotolki.middleware_redirecthandler` RENAME TO middleware_redirecthandler
</code></pre>
<p>You will need to do the same for the <code>django_content_type</code> table too.</p>
| 1 | 2016-08-08T11:48:33Z | [
"python",
"mysql",
"django",
"python-2.7",
"uwsgi"
] |
random module's randrange not working in pygame | 38,828,460 | <p>I am trying to make a game in pygame and want to add apples at random places, but the random module is not working. I tried looking online, but the person there was able to use this without any problem.
<strong>My code and the output are down below:</strong> </p>
<pre><code>import pygame
import random
pygame.init()
display_width = 1000
display_height = 500
gameDisplay = pygame.display.set_mode((display_width,display_height))
pygame.display.set_caption("SlikiSnake")
clock = pygame.time.Clock()
FPS = 15
block_size = 10
def GameLoop():
lead_x = display_width/2
lead_y = display_height/2
lead_x_change =0
lead_y_change =0
randAppleX = random.randint(0, display_width - block_size)
randAppleY = random.randint(0 ,display_height - block_size)
pygame.display.update()
gameExit = False
gameOver = False
while not gameExit:
while gameOver == True:
gameDisplay.fill(white)
message_on_screen("Game Over,press r to start again or Q to quit", black)
pygame.display.update()
for event in pygame.event.get():
if event.type ==pygame.KEYDOWN:
if event.key == pygame.K_r:
GameLoop()
if event.key == pygame.K_q:
gameExit = True
gameOver = False
if lead_x >= display_width or lead_x < 0 or lead_y >=display_height or lead_y < 0:
gameOver = True
lead_x += lead_x_change
lead_y += lead_y_change
gameDisplay.fill(random)
pygame.draw.rect(gameDisplay,red,[randAppleX,randAppleY,block-size,block_size])
pygame.draw.rect(gameDisplay,blue,[lead_x,lead_y,block_size,block_size])
clock.tick(FPS)
pygame.display.flip()
pygame.quit()
quit()
GameLoop()
</code></pre>
<p>but the error is:</p>
<p><a href="http://i.stack.imgur.com/caaPn.png" rel="nofollow"><img src="http://i.stack.imgur.com/caaPn.png" alt="http://i.stack.imgur.com/caaPn.png"></a></p>
| -5 | 2016-08-08T11:49:11Z | 38,828,617 | <ol>
<li><p>Next time write your code and errors directly in your question body, not as a screenshot.</p></li>
<li><p>You must have assigned <code>random</code> to a tuple somewhere between <code>import random</code> and the lines you posted.</p></li>
</ol>
| 1 | 2016-08-08T11:57:08Z | [
"python",
"random",
"pygame"
] |
Python threading interrupt sleep | 38,828,578 | <p>Is there a way in python to interrupt a thread when it's sleeping?
(As we can do in java)</p>
<p>I am looking for something like that.</p>
<pre><code> import threading
from time import sleep
def f():
print('started')
try:
sleep(100)
print('finished')
except SleepInterruptedException:
print('interrupted')
t = threading.Thread(target=f)
t.start()
if input() == 'stop':
t.interrupt()
</code></pre>
<p>The thread is sleeping for 100 seconds and if I type 'stop', it interrupts</p>
| 0 | 2016-08-08T11:55:11Z | 38,828,735 | <p>How about using condition objects: <a href="https://docs.python.org/2/library/threading.html#condition-objects" rel="nofollow">https://docs.python.org/2/library/threading.html#condition-objects</a></p>
<p>Instead of sleep() you use wait(<em>timeout</em>). To "interrupt" you call notify().</p>
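A minimal sketch of that idea (stdlib only; the class and method names here are my own invention, not a standard API):

```python
import threading

class InterruptibleSleep:
    """Sleep that can be cut short from another thread via a Condition."""

    def __init__(self):
        self._cond = threading.Condition()
        self._interrupted = False

    def sleep(self, timeout):
        """Block for up to `timeout` seconds; return True if interrupted early."""
        with self._cond:
            if not self._interrupted:
                self._cond.wait(timeout)
            return self._interrupted

    def interrupt(self):
        with self._cond:
            self._interrupted = True
            self._cond.notify_all()
```

The worker thread would call <code>sleeper.sleep(100)</code> instead of <code>time.sleep(100)</code>, and the main thread calls <code>sleeper.interrupt()</code> when the user types 'stop'.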
| 0 | 2016-08-08T12:03:55Z | [
"python",
"multithreading",
"thread-sleep"
] |
Calculating the stock price volatility from a 3-columns csv | 38,828,622 | <p>I am looking for a way to make the following code work:</p>
<pre><code>import pandas
path = 'data_prices.csv'
data = pandas.read_csv(path, sep=';')
data = data.sort_values(by=['TICKER', 'DATE'], ascending=[True, False])
data.columns
</code></pre>
<p>I have a 2 dimensional array with three columns, the data looks like this:</p>
<pre><code>DATE;TICKER;PRICE
20151231;A UN Equity;41.81
20151230;A UN Equity;42.17
20151229;A UN Equity;42.36
20151228;A UN Equity;41.78
20151224;A UN Equity;42.14
20151223;A UN Equity;41.77
20151222;A UN Equity;41.22
20151221;A UN Equity;40.83
20151218;A UN Equity;40.1
20091120;PCG UN Equity;42.1
20091119;PCG UN Equity;41.53
20091118;PCG UN Equity;41.86
20091117;PCG UN Equity;42.23
20091116;PCG UN Equity;42.6
20091113;PCG UN Equity;41.93
20091112;PCG UN Equity;41.6
20091111;PCG UN Equity;42.01
</code></pre>
<p>Now, I want to calculate the x-day realized volatility where x came from an input field and x should not be bigger than the number of observations.</p>
<p>The steps that need to be taken:</p>
<ul>
<li>Calculate the log return for each line</li>
<li>Take those returns and run the standard deviation on top of it</li>
<li>Multiply by the square root of 255 to normalize for per annum volatility</li>
</ul>
| -1 | 2016-08-08T11:57:34Z | 38,832,992 | <p>Apologies, it's not fully clear on the sort of output you're hoping for so I've assumed you want to enter a ticker and a period (x) and see the current volatility number. Below I have also made use of numpy, in case you don't have that library.</p>
<p>Essentially I've created a DataFrame of all the original data and then a new DF filtered for the given ticker (where the user only needs to type in the 'A' or 'PCG' part, because 'UN Equity' is assumed constant). In this new DF, after checking that your period (x) input is not too high, it will output the most recent annualised volatility value.</p>
<pre><code>import numpy as np
import pandas as pd
data = pd.read_csv('dump.csv', sep=';')
data = data.sort_values(by=['TICKER','DATE'],ascending=[True,True])
def vol(ticker, x):
df = pd.DataFrame(data)
df['pct_chg'] = df.PRICE.pct_change()
df['log_rtn'] = np.log(1 + df.pct_chg)
df_filtered = df[df.TICKER==ticker+' UN Equity']
max_x = len(df_filtered) - 1
if x > max_x:
print('Too many periods. Reduce x')
    df_filtered['vol'] = df_filtered.log_rtn.rolling(window=x).std() * (255**0.5)
print(df_filtered.vol.iloc[-1])
</code></pre>
<p>As an example, with an input of <strong>vol('PCG',6)</strong> the output is 0.187855386042 </p>
<p>Probably not the most elegant and apologies if I've misunderstood your request.</p>
| 0 | 2016-08-08T15:21:09Z | [
"python",
"pandas",
"stocks",
"yield-return",
"volatility"
] |
Modifying data in a column Python | 38,828,675 | <p>Hi guys i have a column like this,</p>
<pre><code> Start
Start = 11122001
Start = 12012014
Start = 23122001
</code></pre>
<p>And i want to remove the "Start =" and the date format into </p>
<pre><code> Start
11/12/2001
12/01/2014
23/12/2001
</code></pre>
<p>How do I do this properly?</p>
| 0 | 2016-08-08T12:00:43Z | 38,828,715 | <p>It depends on what you are trying to do.</p>
<p>If you want to remove <code>Start =</code> from each line:</p>
<pre><code>lines = [ format_date(re.sub("^Start =", '', line).strip()) for line in lines ]
</code></pre>
<p>(presuming you have your text line by line in a list). </p>
<p>To format date you need to implement the function <code>format_date</code>
which will convert dates from <code>11122001</code> to <code>11/12/2001</code>.</p>
<p>There are several ways to do this, depending on the input format.
One of the solutions:</p>
<pre><code>def format_date(x):
    if re.match('[0-9]{8}', x):
return "/".join([x[:2], x[2:4], x[4:]])
else:
return x
</code></pre>
<p>You first check whether the line matches the date expression (i.e. it looks like a date),
and if it does, rewrite it. Otherwise just return it as is.</p>
<p>Of course, you can combine the solution into one line
and not use a function at all, but in that case
it will be less clear.</p>
<p>Another, <code>map</code>-based solution:</p>
<pre><code>def format_line(x):
    x = re.sub("^Start =", '', x).strip()
    if re.match('[0-9]{8}', x):
return "/".join([x[:2], x[2:4], x[4:]])
else:
return x
map(format_line, lines)
</code></pre>
| 4 | 2016-08-08T12:02:33Z | [
"python",
"string"
] |
succeed in script but fail in rc.local | 38,828,731 | <p>I write mail.py(use webpy) to send me the ip address of each machine. </p>
<pre><code>#!/usr/bin/env python
#coding=utf-8
import web
def send_mail(send_to, subject, body, cc=None, bcc=None):
try:
web.config.smtp_server = 'xxxxx'
web.config.smtp_port = 25
web.config.smtp_username = 'xxx'
web.config.smtp_password = 'xxx'
web.config.smtp_starttls = True
send_from = 'xxx'
web.sendmail(send_from, send_to, subject, body, cc=cc, bcc=bcc)
return 1 #pass
except Exception, e:
print e
return -1 #fail
if __name__=='__main__':
print "in mail.py"
f=file('/home/spark/Desktop/ip.log')
f1=f.read()
f.close()
send_to = ['xxxx']
subject = 'xxxx'
body = 'ip:',f1
send_mail(send_to, subject, body)
</code></pre>
<p>rc.local</p>
<pre><code>bash deploy.sh &
exit 0
</code></pre>
<p>deploy.sh</p>
<pre><code>#!/usr/bin/env
cd /home/spark/Desktop
python mail.py >>deploy.log
echo "-----------------------------------------------------------"
</code></pre>
<p>I can receive email if I do 'python mail.py'. But when I put it in rc.local, I can not receive the email; the message in deploy.log outputs
[Errno -2] Name or service not known.</p>
<p>I am puzzled by this output.</p>
| 0 | 2016-08-08T12:03:40Z | 38,830,057 | <p>This might happen because <code>PATH</code> is different when <code>rc.local</code> runs. Specifically, <code>web.sendmail</code> might expect to find <code>sendmail</code> in the path, but it's not there yet. See docs <a href="http://webpy.org/cookbook/sendmail" rel="nofollow">here</a>.</p>
<p>The paths might be specific to your system. To debug this you could dump things from inside <code>rc.local</code> to a file such as <code>/tmp/rc.local.log</code> and inspect it when the system is up: e.g., <code>env >>/tmp/rc.local.log</code>.</p>
<p>Note, if you have multiple drives being mounted during startup, the drive containing <code>sendmail</code> might not be mounted yet at that point. This is a pain to deal with. To double check, add <code>mount >>/tmp/rc.local.log</code>.</p>
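For example, a debug block like the following near the top of <code>rc.local</code> captures both in one go (the log path is an arbitrary choice; adjust to taste):

```shell
# Dump the boot-time environment and mounted filesystems to a log,
# so you can inspect what rc.local actually saw once the system is up.
LOG=/tmp/rc.local.debug.log
{
    echo "=== rc.local run at $(date) ==="
    env
    mount
} >>"$LOG" 2>&1
```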
| 1 | 2016-08-08T13:08:00Z | [
"python",
"bash",
"ubuntu",
"rc"
] |
python global variable between modules | 38,828,738 | <p>I have two modules and I'm trying to modify a global variable in the first module from the second module.</p>
<p><code>app.py</code>:</p>
<pre><code>import time
glob=0;
def setval(val):
global glob
glob = val
print "glob = "+glob
def bau():
while(1):
if(glob):
print"glob is set"
else:
print"glob is unset"
time.sleep(1)
bau()
</code></pre>
<p><code>start.py</code>:</p>
<pre><code>from app import setval
app.setval(1)
</code></pre>
<p>I am not able to understand why in <code>start.py</code> the full content of <code>app.py</code> is included and not only the function that I want.</p>
<p>Second I don't understand why by running the first <code>app.py</code> and then <code>start.py</code>, that <code>start.py</code> does not modify the value of the global variable in app. </p>
| 0 | 2016-08-08T12:04:10Z | 38,828,822 | <blockquote>
<p>I not able to understand why in start.py the full content of app.py is
included and not only the function that I want.</p>
</blockquote>
<p>You misunderstand how import works. What it does is actually run the script you are importing and <em>then</em> bind to the things defined inside. If you wish to import only a function, then your script is not supposed to do anything other than declarations, i.e. remove the <code>bau()</code> line.</p>
<p>So normally you would only declare functions, classes and constants inside your scripts and in one root script you would call them.</p>
<blockquote>
<p>Second I don't understand why by running the first app.py and then
start.py, that start.py does not modify the value of the global
variable in app.</p>
</blockquote>
<p>That's because <code>setval()</code> is never reached due to the <code>bau()</code> call, i.e. <code>start.py</code> is blocked on the <code>import</code> statement.</p>
<hr>
<p>Side note: I suggest you stop using globals. Wrap everything with functions/classes and pass parameters around. Globals are very hard to control.</p>
| 4 | 2016-08-08T12:08:09Z | [
"python"
] |
python global variable between modules | 38,828,738 | <p>I have two modules and I'm trying to modify a global variable in the first module from the second module.</p>
<p><code>app.py</code>:</p>
<pre><code>import time
glob=0;
def setval(val):
global glob
glob = val
print "glob = "+glob
def bau():
while(1):
if(glob):
print"glob is set"
else:
print"glob is unset"
time.sleep(1)
bau()
</code></pre>
<p><code>start.py</code>:</p>
<pre><code>from app import setval
app.setval(1)
</code></pre>
<p>I not able to understand why in <code>start.py</code> the full content of <code>app.py</code> is included and not only the function that I want.</p>
<p>Second I don't understand why by running the first <code>app.py</code> and then <code>start.py</code>, that <code>start.py</code> does not modify the value of the global variable in app. </p>
| 0 | 2016-08-08T12:04:10Z | 38,829,109 | <p>As suggested by <a href="http://stackoverflow.com/users/645551/freakish">freakish</a>, you can use that approach.</p>
<p>Or if you want to keep it in this format that it's called by different scripts , I suggest you use enviroment variables.</p>
<p>start.py</p>
<pre><code>import os
os.environ['glob_var'] = 'any_variable'
</code></pre>
<p>app.py</p>
<pre><code>import os
print os.environ.get('glob_var', 'Not Set')
</code></pre>
| 0 | 2016-08-08T12:21:50Z | [
"python"
] |
How to use Tensorflow and Sci-Kit Learn together in one environment in PyCharm? | 38,828,829 | <p>I am using Ubuntu 16.04. I tried to install Tensorflow using Anaconda 2, but it installed an environment inside Ubuntu, so I had to create a virtual environment and then use Tensorflow. Now how can I use both Tensorflow and Sci-kit learn together in a single environment?</p>
| 1 | 2016-08-08T12:08:33Z | 39,021,770 | <p>Anaconda <code>defaults</code> doesn't provide tensorflow yet, but <code>conda-forge</code> does; <code>conda install -c conda-forge tensorflow</code> should see you right, though (for others reading!) the installed tensorflow will not work on CentOS < 7 (or other Linux distros of a similar vintage).</p>
| 1 | 2016-08-18T15:11:58Z | [
"python",
"pycharm",
"tensorflow",
"anaconda",
"ubuntu-16.04"
] |
How to use Tensorflow and Sci-Kit Learn together in one environment in PyCharm? | 38,828,829 | <p>I am using Ubuntu 16.04. I tried to install Tensorflow using Anaconda 2, but it installed an environment inside Ubuntu, so I had to create a virtual environment and then use Tensorflow. Now how can I use both Tensorflow and Sci-kit learn together in a single environment?</p>
| 1 | 2016-08-08T12:08:33Z | 39,022,103 | <p>There are many options to install, but for example if you use the pip method, you can install tensorflow's prerequisites using:</p>
<pre><code>sudo apt-get install python-pip python-dev python-virtualenv
</code></pre>
<p>then complete the process following the official instructions: <a href="https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html#pip-installation" rel="nofollow">https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html#pip-installation</a> </p>
<p>example python2.7:</p>
<pre><code>(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0rc0-cp27-none-linux_x86_64.whl
(tensorflow)$ pip install --upgrade $TF_BINARY_URL
</code></pre>
<p>and into the same virtual-environment install scikit-learn using same pip-method: <a href="http://scikit-learn.org/stable/install.html" rel="nofollow">http://scikit-learn.org/stable/install.html</a></p>
<pre><code>pip install -U scikit-learn
</code></pre>
| 0 | 2016-08-18T15:28:22Z | [
"python",
"pycharm",
"tensorflow",
"anaconda",
"ubuntu-16.04"
] |
"QThread: Destroyed while thread is still running" when run from the Windows cmd or IDLE but not from PyCharm? | 38,829,023 | <p>This is a simplified version of the program implementing PyQt multi-threading with QObject.moveToThread. Basically, I query a webpage on a separate thread and extract the HMTL content.</p>
<p>I get this problem where running the code from IDLE or the Windows command line hangs python. The Windows cmd shows "QThread: Destroyed while thread is still running". However, if I run it from Pycharm, everything works fine.</p>
<p>You can get the .ui file <a href="https://drive.google.com/open?id=0B3IIoC4r3p1CNmpuUm83bWt4QUk" rel="nofollow">here</a></p>
<p>Any ideas?</p>
<pre><code>import requests
import sys
from PyQt4 import QtGui, uic
from PyQt4.QtCore import QObject, pyqtSlot, pyqtSignal, QThread
qtCreatorFile = "window.ui"
Ui_MainWindow, QtBaseClass = uic.loadUiType(qtCreatorFile)
class HttpClient(QObject):
finished = pyqtSignal(str)
def __init__(self):
QObject.__init__(self)
@pyqtSlot()
def retrieve_page(self, url):
response = requests.get(url)
self.finished.emit(response.text)
class HtmlGetter(QtGui.QMainWindow, Ui_MainWindow):
def __init__(self):
QtGui.QMainWindow.__init__(self)
Ui_MainWindow.__init__(self)
self.setupUi(self)
self.go_button.clicked.connect(self.query_page)
def query_page(self):
http_client = HttpClient()
temp_thread = QThread()
http_client.moveToThread(temp_thread)
temp_thread.started.connect(
lambda: http_client.retrieve_page("http://www.google.com/"))
http_client.finished.connect(self.show_html)
# Terminating thread gracefully.
http_client.finished.connect(temp_thread.quit)
http_client.finished.connect(http_client.deleteLater)
temp_thread.finished.connect(temp_thread.deleteLater)
temp_thread.start()
def show_html(self, html_text):
print(html_text)
def main():
app = QtGui.QApplication(sys.argv)
window = HtmlGetter()
window.show()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
</code></pre>
| 1 | 2016-08-08T12:17:09Z | 38,832,029 | <p>I figured it out:</p>
<p>Both http_client and temp_thread have to be attributes of the HtmlGetter class. I think it's because otherwise Python discards them when exiting the function. This is the working code:</p>
<pre><code>import requests
import sys
from PyQt4 import QtGui, uic
from PyQt4.QtCore import QObject, pyqtSlot, pyqtSignal, QThread
qtCreatorFile = "window.ui"
Ui_MainWindow, QtBaseClass = uic.loadUiType(qtCreatorFile)
class HttpClient(QObject):
finished = pyqtSignal()
send_text = pyqtSignal(str)
def __init__(self):
QObject.__init__(self)
@pyqtSlot()
def retrieve_page(self, url):
response = requests.get(url)
self.send_text.emit(response.text)
self.finished.emit()
class HtmlGetter(QtGui.QMainWindow, Ui_MainWindow):
def __init__(self):
QtGui.QMainWindow.__init__(self)
Ui_MainWindow.__init__(self)
self.setupUi(self)
self.go_button.clicked.connect(self.query_page)
def query_page(self):
self.http_client = HttpClient()
self.temp_thread = QThread()
self.http_client.moveToThread(self.temp_thread)
self.temp_thread.started.connect(
lambda: self.http_client.retrieve_page("http://www.google.com/"))
self.http_client.send_text.connect(self.show_html)
# Terminating thread gracefully.
self.http_client.finished.connect(self.temp_thread.quit)
self.http_client.finished.connect(self.http_client.deleteLater)
self.temp_thread.finished.connect(self.temp_thread.deleteLater)
self.temp_thread.start()
def show_html(self, html_text):
print(html_text)
def main():
app = QtGui.QApplication(sys.argv)
window = HtmlGetter()
window.show()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
</code></pre>
| 1 | 2016-08-08T14:36:15Z | [
"python",
"multithreading",
"pyqt",
"hang",
"destroy"
] |
Retain original format of datetime64 when converting to list | 38,829,075 | <pre><code>>>>df = pd.DataFrame(index=pd.date_range(DT.datetime(2016,8,1), DT.datetime(2016,8,9)), columns=['a','b'] )
>>>df.index
DatetimeIndex(['2016-08-01', '2016-08-02', '2016-08-03', '2016-08-04',
'2016-08-05', '2016-08-06', '2016-08-07', '2016-08-08',
'2016-08-09'],
dtype='datetime64[ns]', freq='D', tz=None)
>>>df.index.values.tolist()
[1470009600000000000L,
1470096000000000000L,
1470182400000000000L,
1470268800000000000L,
1470355200000000000L,
1470441600000000000L,
1470528000000000000L,
1470614400000000000L,
1470700800000000000L]
</code></pre>
<p>Basically the datetime64[ns] format is automatically converted to long format. Is there a way that I can keep the format for those operations otherwise I need to convert it back if I wanted to access the df content. For example</p>
<pre><code>>>>df.loc[df.index.values.tolist()[3]]
</code></pre>
<p>does not work, while </p>
<pre><code>>>>df.loc[df.index.values[3]]
</code></pre>
<p>works.</p>
| 0 | 2016-08-08T12:20:11Z | 38,830,047 | <p>You can retain the original format while converting them to list by using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DatetimeIndex.date.html#pandas.DatetimeIndex.date" rel="nofollow"><code>.date</code></a> of <code>pandas.DatetimeIndex.date</code> which returns the date part of the Timestamps.</p>
<pre><code>In [14]: df.index.date.tolist()
Out[14]:
[datetime.date(2016, 8, 1),
datetime.date(2016, 8, 2),
datetime.date(2016, 8, 3),
datetime.date(2016, 8, 4),
datetime.date(2016, 8, 5),
datetime.date(2016, 8, 6),
datetime.date(2016, 8, 7),
datetime.date(2016, 8, 8),
datetime.date(2016, 8, 9)]
</code></pre>
| 2 | 2016-08-08T13:07:35Z | [
"python",
"pandas",
"python-datetime"
] |
The truth value of a Series is ambiguous | 38,829,124 | <pre><code>df['class'] = np.where(((df['heart rate'] > 50) & (df['heart rate'] < 101 )) & ((df['systolic blood pressure'] > 140 & df['systolic blood pressure'] < 160)) &
((df['dyastolic blood pressure'] > 90 & ['dyastolic blood pressure'] < 100 )) & ((df['temperature'] > 35 & df['temperature'] < 39 )) &
((df['respiratory rate'] >11 & df ['respiratory rate'] <19)) & ((df['pulse oximetry' > 95] & df['pulse oximetry' < 100] )), "excellent", "critical")
</code></pre>
<p>For this code, I am getting:</p>
<blockquote>
<p>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p>
</blockquote>
| 0 | 2016-08-08T12:22:29Z | 38,829,324 | <p>Maybe this will solve your (our) problem.</p>
<pre><code>import random
import pandas as pd
import numpy as np
heart_rate = [random.randrange(45, 125) for _ in range(500)]
blood_pressure_systolic = [random.randrange(140, 230) for _ in range(500)]
blood_pressure_dyastolic = [random.randrange(90, 140) for _ in range(500)]
temperature = [random.randrange(34, 42) for _ in range(500)]
respiratory_rate = [random.randrange(8, 35) for _ in range(500)]
pulse_oximetry = [random.randrange(95, 100) for _ in range(500)]
vitalsign = {
'heart rate' : heart_rate,
'systolic blood pressure' : blood_pressure_systolic,
'dyastolic blood pressure' : blood_pressure_dyastolic,
'temperature' : temperature,
'respiratory rate' : respiratory_rate,
'pulse oximetry' : pulse_oximetry
}
vitalsign_maxima = {
'heart rate' : (50,101),
'systolic blood pressure' : (140,160),
'dyastolic blood pressure' : (90,100),
'temperature' : (35,39),
'respiratory rate' : (11,19),
'pulse oximetry' : (95,100)
}
def is_vitalsign_excellent(name):
lower, upper = vitalsign_maxima[name]
return (lower < df[name]) & (df[name] < upper)
df = pd.DataFrame(vitalsign)
f = np.where(is_vitalsign_excellent('heart rate') &
is_vitalsign_excellent('systolic blood pressure') &
is_vitalsign_excellent('dyastolic blood pressure') &
is_vitalsign_excellent('temperature') &
is_vitalsign_excellent('respiratory rate') &
is_vitalsign_excellent('pulse oximetry'),"excellent", "critical")
</code></pre>
<p>This one should work now.</p>
| 0 | 2016-08-08T12:32:17Z | [
"python",
"conditional",
"logical-operators"
] |
Scikit-learn SVM: Reshaping X leads to incompatible shapes | 38,829,135 | <p>I try to use scikit-learn SVM to predict whether a stock from S&P500 beats the index or not.
I have the 'sample' file from which I extract the features X and the labels (beats the index or doesn't beat it) Y.</p>
<p>When I tried it the first time (without reshaping X) I got the the following depreciation error:</p>
<pre><code>DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17
and will raise ValueError in 0.19. Reshape your data either using
X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1)
if it contains a single sample.
</code></pre>
<p>Consequently I tried the reshaping of X according to the recommendation and also to some forum posts.
Now however I get the following value error that X and Y don't have the same shape.</p>
<pre><code>ValueError: X and y have incompatible shapes.
X has 4337 samples, but y has 393.
</code></pre>
<p>Below you can see the shapes of X and Y before reshaping:</p>
<pre><code>('Shape of X = ', (493, 9))
('Shape of Y = ', (493,))
</code></pre>
<p>and after reshaping:</p>
<pre><code>('Shape of X = ', (4437, 1))
('Shape of Y = ', (493,))
</code></pre>
<p>I also tried to reshape so that I get the (493,9) shape, but also this didn't work as I got the following error.</p>
<pre><code>ValueError: total size of new array must be unchanged.
</code></pre>
<p>I posted below the code to extract the features and labels from the pandas DataFrame and and the SVM analysis:</p>
<p>Feature & Label selection:</p>
<pre><code>X = np.array(sample[features].values)
X = preprocessing.scale(X)
X = np.array(X)
X = X.reshape(-1,1)
Y = sample['status'].values.tolist()
Y = np.array(Y)
Z = np.array(sample[['changemktvalue', 'benchmark']])
</code></pre>
<p>SVM testing:</p>
<pre><code>test_size = 50
invest_amount = 1000
total_invests = 0
if_market = 0
if_strat = 0
clf = svm.SVC(kernel="linear", C= 1.0)
clf.fit(X[:-test_size],Y[:-test_size])
correct_count = 0
for x in range(1, test_size+1):
if clf.predict(X[-x])[0] == Y[-x]:
correct_count += 1
if clf.predict(X[-x])[0] == 1:
invest_return = invest_amount + (invest_amount * (Z[-x][0]/100)) #zeroth element of z
market_return = invest_amount + (invest_amount * (Z[-x][1]/100)) #marketsp500 is at pos 1
total_invests += 1
if_market += market_return
if_strat += invest_return
print("Accuracy:", (float(correct_count)/test_size) * 100.00)
</code></pre>
<p>Would be great if you have any inputs on how to solve this.</p>
| 0 | 2016-08-08T12:22:51Z | 38,829,560 | <p>You should not be reshaping <code>X</code> to <code>(-1, 1)</code>. In fact the error is in your call to the <code>predict</code> method.</p>
<p>Change</p>
<pre><code>clf.predict(X[-x])[0]
</code></pre>
<p>to</p>
<pre><code>clf.predict(X[-x].reshape((-1, 9)))[0]
</code></pre>
| 1 | 2016-08-08T12:42:56Z | [
"python",
"error-handling",
"scikit-learn",
"svm"
] |
Error NoReverseMatch | 38,829,138 | <p>I keep running in to a <code>NoReverseMatch</code> error on Django 1.10, while earlier versions have no problems with it.</p>
<p>rendered template:</p>
<pre><code>{% extends "loginBase.html" %}
{% block content %}
<h1>Login:</h1>
<form class="form-horizontal" role="form" method="post" action="{% url 'django.contrib.auth.views.login' %}">
{% csrf_token %}
{% if form.errors %}
<p>Your username and password didn't match. Please try again.</p>
{% endif %}
</code></pre>
<p>urls.py</p>
<pre><code>url(r'^login/$', views.login, {'template_name': 'login.html', 'authentication_form': LoginForm}, name='login'),
</code></pre>
<p>Any ideas on what the problem might be?</p>
| 0 | 2016-08-08T12:23:05Z | 38,829,349 | <p>In Django 1.10, <a href="https://docs.djangoproject.com/en/1.10/releases/1.10/#features-removed-in-1-10" rel="nofollow">you can no longer reverse URLs using the Python dotted path</a>, e.g. '<code>django.contrib.auth.views.login</code>'.</p>
<p>You already have <code>name='login'</code> in your URL pattern,</p>
<pre><code>url(r'^login/$', views.login, {...}, name='login'),
</code></pre>
<p>so use that in the url tag:</p>
<pre><code>{% url 'login' %}
</code></pre>
| 3 | 2016-08-08T12:33:32Z | [
"python",
"django",
"django-1.10"
] |
Boolean logic in Python CGI script | 38,829,146 | <p>Does anyone know how to set an HTML checkbox to true or false with Python? I am using a Python file to parse an XML file to a list. From this list I want to check a checkbox if the text in the XML tag is 1, or leave it unchecked if the text in the XML tag is 0.</p>
<p>This is being done as a CGI file, don't ask why. It just is. I can't use any frameworks as this is for a device with a small amount of memory.</p>
<p>The list I have parses the XML file to a list, this part works. </p>
<pre><code> <label class="checkbox inline control-label"><input name="L10" value="L10" checked="checked" type="checkbox"
<span> L10 &nbsp;&nbsp;&nbsp;</span></label>
<label class="checkbox inline control-label"><input name="L05" value="1" type="checkbox" checked/>
<span> L5 &nbsp;&nbsp;&nbsp;</span></label>
</code></pre>
<p>Can I do something like:</p>
<pre><code>if config_settings.settings[11] == '1':
True
</code></pre>
<p>Or could I put the logic into the html form something like:</p>
<pre><code><label class="checkbox inline control-label"><input name="L05" if config.settings.settings[11] == '1':
<input name="L05" value="1" type="checkbox" checked/>
</code></pre>
<p>any help would be greatly appreciated.</p>
| -1 | 2016-08-08T12:23:33Z | 38,850,770 | <p>I was shown a solution by someone who was using php to do something similar. The answer that worked for me in the end was simple enough. I changed the setting in the XML file to 1 or 0 for true or false and then did this:</p>
<pre><code>if config_settings.settings[5] == '1':
print'''<html><label class="checkbox inline control-label"><input name="aWeight" value="1" type="checkbox" checked/></html>'''
else:
print'''<html><label class="checkbox inline control-label"><input name="aWeight" value="1" type="checkbox"/></html>'''
</code></pre>
<p>The HTML was inside a Python cgi script.</p>
| 0 | 2016-08-09T12:28:08Z | [
"python",
"html",
"xml",
"list"
] |
Dask error: Length of values does not match length of index | 38,829,387 | <p>I have read csv file using <a href="http://dask.pydata.org/en/latest/" rel="nofollow">dask</a> this way:</p>
<pre><code>import dask.dataframe as dd
train = dd.read_csv('act_train.csv')
</code></pre>
<p>Then I would like to apply simple logic per row , that works pretty fine in pandas:</p>
<pre><code>columns = list(train.columns)
for col in columns[1:]:
    train[col] = train[col].apply(lambda x: x if x == -1 else x.split(' ')[1])
</code></pre>
<p>Unfortunately, last line of code generates the following error: <em>Length of values does not match length of index</em> </p>
<p>What am I doing wrong?</p>
| 0 | 2016-08-08T12:34:49Z | 38,830,548 | <p>If x doesn't contain a space character, then x.split(' ') will return a list containing the single element x. </p>
<p>So, when you try to access the second element of x.split(' ') by calling
x.split(' ')[1], it will fail with the error:</p>
<p>"Length of values does not match length of index", as there is no element at index 1 in x.split(' ').</p>
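<p>At the plain-Python level the failure is just an IndexError; a guarded version of the per-column transform from the question (a sketch) only takes the second token when a space is actually present:</p>

```python
# 'type 7'.split(' ') -> ['type', '7'], but 'foo'.split(' ') -> ['foo'],
# so indexing [1] blows up on values without a space.
def second_token(x):
    if x == -1:
        return x
    parts = str(x).split(' ')
    return parts[1] if len(parts) > 1 else x

print(second_token('type 7'))  # 7
print(second_token('foo'))     # foo
```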
| 0 | 2016-08-08T13:30:30Z | [
"python",
"csv",
"dataframe",
"runtime-error",
"dask"
] |
How to compare two matrices (numpy ndarrays) element by element and get the min-value of each comparison | 38,829,428 | <p>So I want to make a comparison between two matrices (size: 98000 x 64). The comparison should be done element by element, and I want the min value of each comparison stored in a third matrix with the same dimensions. I also want the comparison done without the use of loops! </p>
<p>Here's a small example:</p>
<pre><code>a=np.array([1,2,3])
b=np.array([4,1,2])
</code></pre>
<p>A function that compares the 1 and the 4, the 2 and the 1, and the 3 and the 2, and stores the results in the vector c.</p>
<p>answer</p>
<pre><code>c=[1,1,2]
</code></pre>
<p>is there an efficient way to do this?</p>
| 1 | 2016-08-08T12:37:10Z | 38,829,754 | <p>NumPy has an element-wise <code>minimum</code> function:</p>
<pre><code>c = np.minimum(a,b)
</code></pre>
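<p>Applied to the toy arrays from the question, a quick check:</p>

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 1, 2])
c = np.minimum(a, b)   # element-wise minimum, no explicit Python loop
print(c)  # [1 1 2]
```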
| 4 | 2016-08-08T12:53:03Z | [
"python",
"numpy"
] |
Process variables from one method to another inside one class | 38,829,632 | <p>I'm wondering how to pass variables from one method to another inside one class. For example: </p>
<pre><code>class Newclas:
    def getPortalSources(self, portal):
        self.connection_source = self.config.get("portal_" + portal, 'Sources')
        self.portal = portal

    def getConnection(self, source):
        self.source = source
        self.connection_string = self.config.get('CONNECTION', self.portal + '_' + source + '_' + 'connectstring')  ## Connection
</code></pre>
<p>Until now I used something like the above: in getConnection I used the self.portal variable set by the getPortalSources method. However, it still feels a little unclear to me.</p>
<p>Just wondering if there is some other, better approach to do something like that? If so, could you give me some tips or examples?</p>
<p>For example : </p>
<pre><code>def getPortalSources(self, portal):
    self.connection_source = self.config.get("portal_" + portal, 'Sources')
    self.portal = portal

def getConnection(source):
    self.connection_string = self.config.get('CONNECTION', getPortalSources.portal + '_' + source + '_' + 'connectstring')  ## Connection
</code></pre>
<p>Of course it will not work, but I think then you got my idea.</p>
<p>Regards</p>
| -2 | 2016-08-08T12:46:44Z | 38,829,961 | <p>What I suggest is that you use constructor, or global variables.</p>
<p>I will give an example for constructor here:</p>
<pre><code>class Newclas:
    def __init__(self, portal='default_portal', source='default_source'):
        self.portal = portal
        self.source = source

    def getPortalSources(self, portal=None):
        # a default argument can't reference self, so resolve it in the body
        if portal is None:
            portal = self.portal
        self.connection_source = self.config.get("portal_" + portal, 'Sources')

    def getConnection(self, source=None):
        if source is None:
            source = self.source
        self.connection_string = self.config.get('CONNECTION', self.portal + '_' + source + '_' + 'connectstring')  ## Connection
</code></pre>
<p>So what's happening here is: when you create an object of this class, you do it like this:</p>
<pre><code>new_obj = Newclas(portal='the_portal',source='the source')
</code></pre>
<p>Passing <code>portal='the_portal', source='the source'</code> is optional; if you don't provide them, the default values are used.</p>
<p>Now when you call <code>new_obj.getConnection()</code>, it uses the source stored on the object.</p>
<p>If you call <code>new_obj.getConnection(source='some_other_source')</code>, it uses that source instead.</p>
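<p>A default like <code>portal=self.portal</code> in a method signature can't work, because default values are evaluated once, at function definition time, when no instance exists yet. A minimal illustration of the usual workaround (hypothetical names, no config involved):</p>

```python
class Demo:
    def __init__(self, portal='default_portal'):
        self.portal = portal

    def show(self, portal=None):
        # 'portal=self.portal' in the signature would raise a NameError,
        # so resolve the per-instance default inside the body instead.
        if portal is None:
            portal = self.portal
        return portal

d = Demo('the_portal')
print(d.show())                # the_portal
print(d.show('other_portal'))  # other_portal
```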
| 0 | 2016-08-08T13:03:48Z | [
"python"
] |
tensorflow.train.import_meta_graph does not work? | 38,829,641 | <p>I try to simply save and restore a graph, but the simplest example does not work as expected (this is done using version 0.9.0 or 0.10.0 on Linux 64 without CUDA using python 2.7 or 3.5.2)</p>
<p>First I save the graph like this:</p>
<pre><code>import tensorflow as tf
v1 = tf.placeholder('float32')
v2 = tf.placeholder('float32')
v3 = tf.mul(v1,v2)
c1 = tf.constant(22.0)
v4 = tf.add(v3,c1)
sess = tf.Session()
result = sess.run(v4,feed_dict={v1:12.0, v2:3.3})
g1 = tf.train.export_meta_graph("file")
## alternately I also tried:
## g1 = tf.train.export_meta_graph("file",collection_list=["v4"])
</code></pre>
<p>This creates a file "file" that is non-empty and also sets g1 to something that looks like a proper graph definition.</p>
<p>Then I try to restore this graph:</p>
<pre><code>import tensorflow as tf
g=tf.train.import_meta_graph("file")
</code></pre>
<p>This works without an error, but does not return anything at all. </p>
<p>Can anyone provide the necessary code to simply just save the graph for "v4" and completely restore it so that running this in a new session will produce the same result?</p>
| 1 | 2016-08-08T12:47:09Z | 38,834,095 | <p>To reuse a <code>MetaGraphDef</code>, you will need to record the names of interesting tensors in your original graph. For example, in the first program, set an explicit <code>name</code> argument in the definition of <code>v1</code>, <code>v2</code> and <code>v4</code>:</p>
<pre><code>v1 = tf.placeholder(tf.float32, name="v1")
v2 = tf.placeholder(tf.float32, name="v2")
# ...
v4 = tf.add(v3, c1, name="v4")
</code></pre>
<p>Then, you can use the string names of the tensors in the original graph in your call to <code>sess.run()</code>. For example, the following snippet should work:</p>
<pre><code>import tensorflow as tf
_ = tf.train.import_meta_graph("file")
sess = tf.Session()
result = sess.run("v4:0", feed_dict={"v1:0": 12.0, "v2:0": 3.3})
</code></pre>
<p>Alternatively, you can use <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/framework.html#Graph.get_tensor_by_name" rel="nofollow"><code>tf.get_default_graph().get_tensor_by_name()</code></a> to get <code>tf.Tensor</code> objects for the tensors of interest, which you can then pass to <code>sess.run()</code>:</p>
<pre><code>import tensorflow as tf
_ = tf.train.import_meta_graph("file")
g = tf.get_default_graph()
v1 = g.get_tensor_by_name("v1:0")
v2 = g.get_tensor_by_name("v2:0")
v4 = g.get_tensor_by_name("v4:0")
sess = tf.Session()
result = sess.run(v4, feed_dict={v1: 12.0, v2: 3.3})
</code></pre>
<hr>
<p><strong>UPDATE</strong>: Based on discussion in the comments, here is the complete example for saving and loading, including the variable contents. It illustrates saving a variable by doubling the value of <code>vx</code> in a separate operation. </p>
<p>Saving:</p>
<pre><code>import tensorflow as tf
v1 = tf.placeholder(tf.float32, name="v1")
v2 = tf.placeholder(tf.float32, name="v2")
v3 = tf.mul(v1, v2)
vx = tf.Variable(10.0, name="vx")
v4 = tf.add(v3, vx, name="v4")
saver = tf.train.Saver([vx])
sess = tf.Session()
sess.run(tf.initialize_all_variables())
sess.run(vx.assign(tf.add(vx, vx)))
result = sess.run(v4, feed_dict={v1:12.0, v2:3.3})
print(result)
saver.save(sess, "model_ex1")
</code></pre>
<p>Restoring:</p>
<pre><code>import tensorflow as tf
saver = tf.train.import_meta_graph("model_ex1.meta")
sess = tf.Session()
saver.restore(sess, "model_ex1")
result = sess.run("v4:0", feed_dict={"v1:0": 12.0, "v2:0": 3.3})
print(result)
</code></pre>
<p>The bottom line is that, in order to make use of a saved model, you must remember the names of at least some of the nodes (e.g. a training op, an input placeholder, an evaluation tensor, etc.). The <code>MetaGraphDef</code> stores the list of variables that are contained in the model, and helps to restore these from a checkpoint, but you are required to reconstruct the tensors/operations used in training/evaluating the model yourself.</p>
| 3 | 2016-08-08T16:18:58Z | [
"python",
"tensorflow"
] |
Summing Booleans in a Dataframe | 38,829,702 | <p>I have a non-indexed Pandas dataframe where each row consists of numeric and boolean values with some NaNs. An example row in my dataframe might look like this (with variables above): </p>
<pre><code>X_1 X_2 X_3 X_4 X_5 X_6 X_7 X_8 X_9 X_10 X_11 X_12
24.4 True 5.1 False 22.4 55 33.4 True 18.04 False NaN NaN
</code></pre>
<p>I would like to add a new variable to my dataframe, call it <code>X_13</code>, which is the number of True values in each row. So in the above case, I would like to obtain:</p>
<pre><code>X_1 X_2 X_3 X_4 X_5 X_6 X_7 X_8 X_9 X_10 X_11 X_12 X_13
24.4 True 5.1 False 22.4 55 33.4 True 18.04 False NaN NaN 2
</code></pre>
<p>I have tried <code>df['X_13'] = df['X_2'] + df['X_4'] + df['X_8'] + df['X_10']</code> and that gives me what I want unless the row contains a <code>NaN</code> in a location where a Boolean is expected. For those rows, <code>X_13</code> has the value <code>NaN</code>. </p>
<p>Sorry -- this feels like it should be absurdly simple. Any suggestions? </p>
| 1 | 2016-08-08T12:50:23Z | 38,829,815 | <p>Select boolean columns and then sum:</p>
<pre><code>df.select_dtypes(include=['bool']).sum(axis=1)
</code></pre>
<p>If you have NaNs, first fill with False's:</p>
<pre><code>df.fillna(False).select_dtypes(include=['bool']).sum(axis=1)
</code></pre>
<hr>
<p>Consider this DataFrame:</p>
<pre><code>df
Out:
a b c d
0 True False 1 True
1 False True 2 NaN
</code></pre>
<p><code>df == True</code> returns True for (0, c) as well:</p>
<pre><code>df == True
Out:
a b c d
0 True False True True
1 False True False False
</code></pre>
<p>So if you take the sum, you will get 3 instead of 2. Another important point is that boolean arrays <a href="http://pandas.pydata.org/pandas-docs/stable/gotchas.html#nan-integer-na-values-and-na-type-promotions" rel="nofollow">cannot contain NaNs</a>. So if you check the dtypes, you will see:</p>
<pre><code>df.dtypes
Out:
a bool
b bool
c int64
d object
dtype: object
</code></pre>
<p>By filling with <code>False</code>s you can have a boolean array:</p>
<pre><code>df.fillna(False).dtypes
Out:
a bool
b bool
c int64
d bool
dtype: object
</code></pre>
<p>Now you can safely sum by selecting the boolean columns.</p>
<pre><code>df.fillna(False).select_dtypes(include=['bool']).sum(axis=1)
Out:
0 2
1 1
dtype: int64
</code></pre>
| 4 | 2016-08-08T12:55:59Z | [
"python",
"pandas",
"dataframe"
] |
Django how to send the email after save the object,but send email function don't delay return HttpResponse | 38,829,705 | <p>I want to send an email after saving the data into a database, but I don't want to wait for the email to finish sending before returning the <code>HTTP</code> response. I want to return the <code>HTTP</code> response directly and then have Django send the email in the background. </p>
<pre><code>def received(request):
login=get_login(request)
received=True
cluster_list=models.Cluster.objects.all()
Asset_Type=models.CategoryOfAsset.objects.all()
if request.method=="GET":
return render(request,"received.html",locals())
if request.is_ajax():
try:
req=json.loads(request.body)
meta_data_dict=req['meta_data']
item_data_dict=req['all_item_data']['item_data']
received_or_shipment=True
insert_meta_item_to_DB(meta_data_dict,item_data_dict,received_or_shipment)
sendTemplateEmail(meta_data_dict,item_data_dict)
return HttpResponse(json.dumps('sucessful'))
except Exception, e:
logger.error(e)
</code></pre>
<p>Now the code will cause this error: </p>
<blockquote>
<p>ValueError: The view tool.views.received didn't return an
HttpResponse object</p>
</blockquote>
<p>It returned None instead.</p>
| 0 | 2016-08-08T12:50:31Z | 38,829,870 | <h2>Use a local SMTP server.</h2>
<p>Using a local SMTP server will result in the mail being queued (even if not yet delivered) almost instantly, so you are able to send your HTTP response without being held up by the delays in sending the email.</p>
<h2>Use a Task Queue</h2>
<p>In its simplest form, you can just put the email message into a table and have a cron job periodically inspect that table and send whatever messages need to be sent.</p>
<p>A slightly more sophisticated method is to use <a href="http://redis.io/commands#list" rel="nofollow">Redis</a> and have a <a href="http://stackoverflow.com/questions/32088702/using-django-for-cli-tool">django CLI</a> listen in on it. </p>
<p>An even more sophisticated(?) solution is to use a Celery Task.</p>
| 2 | 2016-08-08T12:58:58Z | [
"python",
"django",
"email"
] |
Django how to send the email after save the object,but send email function don't delay return HttpResponse | 38,829,705 | <p>I want to send an email after saving the data into a database, but I don't want to wait for the email to finish sending before returning the <code>HTTP</code> response. I want to return the <code>HTTP</code> response directly and then have Django send the email in the background. </p>
<pre><code>def received(request):
login=get_login(request)
received=True
cluster_list=models.Cluster.objects.all()
Asset_Type=models.CategoryOfAsset.objects.all()
if request.method=="GET":
return render(request,"received.html",locals())
if request.is_ajax():
try:
req=json.loads(request.body)
meta_data_dict=req['meta_data']
item_data_dict=req['all_item_data']['item_data']
received_or_shipment=True
insert_meta_item_to_DB(meta_data_dict,item_data_dict,received_or_shipment)
sendTemplateEmail(meta_data_dict,item_data_dict)
return HttpResponse(json.dumps('sucessful'))
except Exception, e:
logger.error(e)
</code></pre>
<p>Now the code will cause this error: </p>
<blockquote>
<p>ValueError: The view tool.views.received didn't return an
HttpResponse object</p>
</blockquote>
<p>It returned None instead.</p>
| 0 | 2016-08-08T12:50:31Z | 38,829,945 | <p>The simplest solution is to run it asynchronously:</p>
<pre><code>import threading
def run_async(func, args):
    threading.Thread(target=func, args=args).start()
</code></pre>
<p>and then:</p>
<pre><code>run_async([function], ([args]))
</code></pre>
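<p>A minimal self-contained demonstration, with a dummy task standing in for the email function:</p>

```python
import threading
import time

def run_async(func, args):
    threading.Thread(target=func, args=args).start()

results = []

def slow_task(x):      # stand-in for the slow email-sending call
    time.sleep(0.1)
    results.append(x)

run_async(slow_task, (42,))
print("returned immediately")  # the caller is not blocked by the sleep
time.sleep(0.3)                # give the background thread time to finish
print(results)  # [42]
```

<p>One caveat: a bare thread like this gives no retries or error reporting, so for anything production-grade the task-queue approaches from the other answer are safer.</p>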
| 1 | 2016-08-08T13:02:36Z | [
"python",
"django",
"email"
] |
Matplotlib; open plots minimized? | 38,829,797 | <p>So I've got a code that is going to make a couple plots, for which I want/have the window maximized in size. However, I'd ideally also want these windows to be minimized when they get created (mostly because it would make testing things a lot easier!) Is there any way to achieve this? I'm currently using Python 2.7 on Linux, with matplotlib version 1.3.1 using backend TkAgg.</p>
<p>If any other information is needed, just ask, and thanks in advance (even if it turns out it's not possible!).</p>
| 0 | 2016-08-08T12:55:19Z | 38,830,288 | <p>Try this after the relevant plots (note: <code>showMaximized</code> and <code>showMinimized</code> are methods of the Qt backends' window objects; under the TkAgg backend mentioned in the question, the equivalent way to minimize the window is <code>mng.window.iconify()</code>):</p>
<pre><code>mng = plt.get_current_fig_manager()
mng.window.showMaximized()
mng.window.showMinimized()
</code></pre>
| 0 | 2016-08-08T13:18:55Z | [
"python",
"linux",
"matplotlib",
"plot"
] |
Use struct in python to deserialize a byte array coming from serial | 38,829,921 | <p>I have a class with all sorts of data in it, like:</p>
<pre><code>class UARTMessage:
    Identification1 = int(0)  #byte 0
    Timestamp1 = int(0)  #bytes [1:5]
    Voltage1 = int(0)  #bytes [6:7]
    Current1 = int(0)  #bytes [8:9]
    Signal1 = int(0)  #bytes [10:11]
    Identification2 = int(0)  #byte 12
    Timestamp2 = int(0)  #bytes [13:17]
    Voltage2 = int(0)  #bytes [18:19]
    Current2 = int(0)  #bytes [20:21]
    Signal = int(0)  #bytes [22:23]
    Identification3 = int(0)  #byte 24
</code></pre>
<p>The data to fill this structure will come from a serial port. I need to deserialize the data coming from the serial into the shape of this structure. I am reading 40-byte data chunks from the serial and I need to split them up. I tried the pickle library, but it seems it's not well suited to deserializing this type of data. I found <a href="https://docs.python.org/3/library/struct.html" rel="nofollow">struct</a>, but I cannot understand how to use it properly in this case. <br>
As the comments in the structure indicate, I need to deserialize the chunks of data like this: the first byte is the identifier, bytes 1 to 5 inclusive are the timestamp, and so on....<br>
Do you have any idea how I can achieve this? <br>
Thanks</p>
| 1 | 2016-08-08T13:01:19Z | 38,833,966 | <p>First of all, we need to declare the format of the incoming bytes according to this list: <a href="https://docs.python.org/3/library/struct.html?highlight=struct#format-characters" rel="nofollow">https://docs.python.org/3/library/struct.html?highlight=struct#format-characters</a>. </p>
<pre><code>import struct
import sys

class UARTMessage:
    fmt = '@B5shhhB5shhhB'

    def __init__(self, data_bytes):
        fields = struct.unpack(self.fmt, data_bytes)
        (self.Identification1,
         self.Timestamp1,
         self.Voltage1,
         self.Current1,
         self.Signal1,
         self.Identification2,
         self.Timestamp2,
         self.Voltage2,
         self.Current2,
         self.Signal2,
         self.Identification3) = fields
        # the two 5-byte timestamps arrive as byte strings; convert them to ints
        self.Timestamp1 = int.from_bytes(self.Timestamp1, sys.byteorder)
        self.Timestamp2 = int.from_bytes(self.Timestamp2, sys.byteorder)
</code></pre>
<p>The first character of the <code>fmt</code> is the byte order. <code>@</code> is the Python default, native order (usually little-endian); if you need network big-endian, put <code>!</code>. Each subsequent character represents a data type that comes from the byte stream.</p>
<p>Next, in the initializer, I unpack bytes according to the recipe in <code>fmt</code> into a <code>fields</code> tuple. Next, I assign the values of the tuple to object attributes. Timestamp has unusual length of 5 bytes, so it requires special treatment. It is fetched as 5-bytes string (<code>5s</code> in fmt) and converted to int using <code>int.from_bytes</code> function with system default bytes order (if you need a different bytes order enter <code>'big'</code> or <code>'little'</code> as a second argument).</p>
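<p>The unusual 5-byte timestamp can be round-tripped on its own to see what that <code>5s</code>/<code>int.from_bytes</code> treatment does:</p>

```python
import struct
import sys

ts = 123456789
raw = ts.to_bytes(5, sys.byteorder)      # 5 raw bytes, as they would arrive on the wire
(field,) = struct.unpack('5s', raw)      # '5s' fetches them as a 5-byte string
recovered = int.from_bytes(field, sys.byteorder)
print(recovered)  # 123456789
```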
<p>When you want to create your structure, pass the sequence of bytes to the constructor.</p>
| 1 | 2016-08-08T16:11:44Z | [
"python",
"serialization"
] |
Selecting many rows in Qt table | 38,829,929 | <p>I am trying to create a <code>QTableView</code> in Qt which is efficient for large tables. I've managed to make the display of data efficient by defining my own abstract table model:</p>
<pre class="lang-py prettyprint-override"><code>from PyQt4 import QtCore, QtGui
from PyQt4.QtCore import Qt
class DataTableModel(QtCore.QAbstractTableModel):
    def columnCount(self, index=None):
        return 3
    def rowCount(self, index=None):
        return 10000
    def headerData(self, section, orientation, role):
        if role != Qt.DisplayRole:
            return None
        if orientation == Qt.Horizontal:
            return 'c'
        elif orientation == Qt.Vertical:
            return 'r'
    def data(self, index, role):
        if not index.isValid():
            return None
        if role == Qt.DisplayRole:
            return "({0},{1})".format(index.row(), index.column())
app = QtGui.QApplication([""])
viewer = QtGui.QTableView()
model = DataTableModel()
viewer.setModel(model)
viewer.show()
</code></pre>
<p>This works fine, because the <code>data</code> method is only called for cells that appear in the field of view of the table.</p>
<p>I now want to display an existing selection of some fraction of the rows:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
selected_rows = np.where(np.random.random(10000) > 0.5)[0]
</code></pre>
<p>I can tell the table widget about this selection by doing e.g.:</p>
<pre class="lang-py prettyprint-override"><code>smodel = viewer.selectionModel()
for row in selected_rows:
    model_index = model.createIndex(row, 0)
    smodel.select(model_index, QtGui.QItemSelectionModel.Select | QtGui.QItemSelectionModel.Rows)
</code></pre>
<p>However, this is very inefficient. It typically takes a second to select 1000-2000 rows, when in practice I have tables with millions of rows. There may be ways of speeding up this loop, but I would like to do away with the loop altogether, and instead have Qt only ask me (similarly to the data itself) for information about selections within the visible cells. Is this possible, and if so, what is the best way to achieve this?</p>
| 0 | 2016-08-08T13:01:37Z | 38,831,656 | <p>The simplest way would be to reimplement the selection model. The view queries the selection model for the selection status of each index. Alas, the <code>QItemSelectionModel</code> has a major shortcoming: you can't reimplement its <code>isSelected</code> method.</p>
<p>The best you can do is to create a fresh selection model on a model perhaps not attached to any views, then to select the items there, and finally to set the model and selection model on the view.</p>
<p>This is an API shortcoming.</p>
<p>If this is a professional project, you should be compiling your own copy of Qt anyway, under your own git version control, and it's a trivial matter to make the <code>isSelected</code> method virtual.</p>
| 0 | 2016-08-08T14:18:36Z | [
"python",
"performance",
"qt",
"pyqt"
] |
Selecting many rows in Qt table | 38,829,929 | <p>I am trying to create a <code>QTableView</code> in Qt which is efficient for large tables. I've managed to make the display of data efficient by defining my own abstract table model:</p>
<pre class="lang-py prettyprint-override"><code>from PyQt4 import QtCore, QtGui
from PyQt4.QtCore import Qt
class DataTableModel(QtCore.QAbstractTableModel):
    def columnCount(self, index=None):
        return 3
    def rowCount(self, index=None):
        return 10000
    def headerData(self, section, orientation, role):
        if role != Qt.DisplayRole:
            return None
        if orientation == Qt.Horizontal:
            return 'c'
        elif orientation == Qt.Vertical:
            return 'r'
    def data(self, index, role):
        if not index.isValid():
            return None
        if role == Qt.DisplayRole:
            return "({0},{1})".format(index.row(), index.column())
app = QtGui.QApplication([""])
viewer = QtGui.QTableView()
model = DataTableModel()
viewer.setModel(model)
viewer.show()
</code></pre>
<p>This works fine, because the <code>data</code> method is only called for cells that appear in the field of view of the table.</p>
<p>I now want to display an existing selection of some fraction of the rows:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
selected_rows = np.where(np.random.random(10000) > 0.5)[0]
</code></pre>
<p>I can tell the table widget about this selection by doing e.g.:</p>
<pre class="lang-py prettyprint-override"><code>smodel = viewer.selectionModel()
for row in selected_rows:
    model_index = model.createIndex(row, 0)
    smodel.select(model_index, QtGui.QItemSelectionModel.Select | QtGui.QItemSelectionModel.Rows)
</code></pre>
<p>However, this is very inefficient. It typically takes a second to select 1000-2000 rows, when in practice I have tables with millions of rows. There may be ways of speeding up this loop, but I would like to do away with the loop altogether, and instead have Qt only ask me (similarly to the data itself) for information about selections within the visible cells. Is this possible, and if so, what is the best way to achieve this?</p>
| 0 | 2016-08-08T13:01:37Z | 38,831,667 | <p>You should use the second overloaded version of <code>select</code>, the one that accepts a <a href="http://doc.qt.io/qt-4.8/qitemselection.html" rel="nofollow"><code>QItemSelection</code></a> instead of a single index.</p>
<p>The <code>QItemSelection</code> is able to select <em>ranges</em> of rows by providing the two arguments to the constructor:</p>
<pre><code>QItemSelection(start_index, stop_index)
</code></pre>
<p>moreover you can <a href="http://doc.qt.io/qt-4.8/qitemselection.html#merge" rel="nofollow"><code>merge</code></a> the items to become a single selection:</p>
<pre><code>selection.merge(other_selection, flags)
</code></pre>
<p>This suggests the following approach:</p>
<ol>
<li>Sort the indices of the rows you want to select</li>
<li>Use <code>itertools.groupby</code> to group together consecutive rows</li>
<li>Use <code>createIndex</code> to get the <code>QModelIndex</code> of all start-end indices of these groups</li>
<li>Create the <code>QItemSelection</code> objects for each group of rows</li>
<li>merge all <code>QItemSelection</code>s into a single <code>QItemSelection</code></li>
<li>Perform the selection over your model.</li>
</ol>
<p>Note that you want to sort the rows <em>by index</em>, not by their values.</p>
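<p>The consecutive-row grouping in step 2 can be sketched in pure Python; each resulting (first, last) pair then maps onto one <code>QItemSelection(model.createIndex(first, 0), model.createIndex(last, 0))</code>:</p>

```python
from itertools import groupby

def consecutive_ranges(rows):
    """Yield (first, last) pairs for each run of consecutive row indices.

    'rows' must already be sorted by index.
    """
    for _, group in groupby(enumerate(rows), key=lambda pair: pair[1] - pair[0]):
        block = [row for _, row in group]
        yield block[0], block[-1]

print(list(consecutive_ranges([1, 2, 3, 7, 8, 10])))  # [(1, 3), (7, 8), (10, 10)]
```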
| 1 | 2016-08-08T14:19:05Z | [
"python",
"performance",
"qt",
"pyqt"
] |
Selecting many rows in Qt table | 38,829,929 | <p>I am trying to create a <code>QTableView</code> in Qt which is efficient for large tables. I've managed to make the display of data efficient by defining my own abstract table model:</p>
<pre class="lang-py prettyprint-override"><code>from PyQt4 import QtCore, QtGui
from PyQt4.QtCore import Qt
class DataTableModel(QtCore.QAbstractTableModel):
    def columnCount(self, index=None):
        return 3
    def rowCount(self, index=None):
        return 10000
    def headerData(self, section, orientation, role):
        if role != Qt.DisplayRole:
            return None
        if orientation == Qt.Horizontal:
            return 'c'
        elif orientation == Qt.Vertical:
            return 'r'
    def data(self, index, role):
        if not index.isValid():
            return None
        if role == Qt.DisplayRole:
            return "({0},{1})".format(index.row(), index.column())
app = QtGui.QApplication([""])
viewer = QtGui.QTableView()
model = DataTableModel()
viewer.setModel(model)
viewer.show()
</code></pre>
<p>This works fine, because the <code>data</code> method is only called for cells that appear in the field of view of the table.</p>
<p>I now want to display an existing selection of some fraction of the rows:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
selected_rows = np.where(np.random.random(10000) > 0.5)[0]
</code></pre>
<p>I can tell the table widget about this selection by doing e.g.:</p>
<pre class="lang-py prettyprint-override"><code>smodel = viewer.selectionModel()
for row in selected_rows:
    model_index = model.createIndex(row, 0)
    smodel.select(model_index, QtGui.QItemSelectionModel.Select | QtGui.QItemSelectionModel.Rows)
</code></pre>
<p>However, this is very inefficient. It typically takes a second to select 1000-2000 rows, when in practice I have tables with millions of rows. There may be ways of speeding up this loop, but I would like to do away with the loop altogether, and instead have Qt only ask me (similarly to the data itself) for information about selections within the visible cells. Is this possible, and if so, what is the best way to achieve this?</p>
| 0 | 2016-08-08T13:01:37Z | 38,877,904 | <p>If you want to display only some selected rows, as opposed to displaying everything and selecting some rows, then QSortFilterProxyModel could help:</p>
<pre><code>from PyQt4 import QtCore, QtGui
from PyQt4.QtCore import Qt
import numpy as np
class FilterProxy(QtGui.QSortFilterProxyModel):
    afilter = set(np.where(np.random.random(10000) > 0.5)[0])

    def updateFilter(self, new_filter):
        self.afilter = new_filter
        self.invalidateFilter()

    def filterAcceptsRow(self, row, parent):
        if not self.afilter:
            return True
        return row in self.afilter

class DataTableModel(QtCore.QAbstractTableModel):
    def columnCount(self, index=None):
        return 3
    def rowCount(self, index=None):
        return 10000
    def headerData(self, section, orientation, role):
        if role != Qt.DisplayRole:
            return None
        if orientation == Qt.Horizontal:
            return 'c'
        elif orientation == Qt.Vertical:
            return 'r'
    def data(self, index, role):
        if not index.isValid():
            return None
        if role == Qt.DisplayRole:
            return "({0},{1})".format(index.row(), index.column())

class MyWindow(QtGui.QMainWindow):
    def __init__(self):
        super(MyWindow, self).__init__()
        self.viewer = QtGui.QTableView()
        self.setCentralWidget(self.viewer)
        self.action = QtGui.QAction("Filter x > 0.5", self)
        self.action.triggered.connect(self.updateFilter)
        self.addToolBar("Filter").addAction(self.action)
        self.model = DataTableModel()
        self.proxyModel = FilterProxy(self.viewer)
        self.proxyModel.setDynamicSortFilter(True)
        self.proxyModel.setSourceModel(self.model)
        self.viewer.setModel(self.proxyModel)

    def updateFilter(self):
        new_max = np.random.rand(1)[0]
        new_filter = set(np.where(np.random.random(10000) > new_max)[0])
        self.action.setText("Filter x > {} N = {}".format(new_max, len(new_filter)))
        self.proxyModel.updateFilter(new_filter)

app = QtGui.QApplication([""])
viewer = MyWindow()
viewer.show()
app.exec_()
</code></pre>
| 0 | 2016-08-10T15:36:21Z | [
"python",
"performance",
"qt",
"pyqt"
] |
Dealing with NSRect in python | 38,829,970 | <p>I have a function:</p>
<pre><code>def getMediaBox(doc, pageNum):
page = CGPDFDocumentGetPage(doc, pageNum)
return CGPDFPageGetBoxRect(page, kCGPDFMediaBox)
</code></pre>
<p>which returns:</p>
<pre><code><NSRect origin=<NSPoint x=0.0 y=0.0> size=<NSSize width=499.0 height=709.0>>
</code></pre>
<p>Is this a data type that python can do anything with? What do the <> brackets mean? Ideally, I want to query and test the size numbers.</p>
<p>Interestingly, CoreGraphics seems to accept nested lists, like [[0,0],[499,709]] when it expects an NSRect.</p>
<p>Many thanks: your help is greatly appreciated.</p>
| 0 | 2016-08-08T13:04:11Z | 38,873,499 | <p>You can query the NSRect object with the following lines:</p>
<pre><code>x = CGRectGetWidth(mediaBox)
y = CGRectGetHeight(mediaBox)
</code></pre>
| 0 | 2016-08-10T12:30:43Z | [
"python",
"osx",
"core-graphics"
] |
Groupby an numpy.array based on groupby of a pandas.DataFrame with the same length | 38,830,134 | <p>I have a numpy.array <code>arr</code> and a pandas.DataFrame <code>df</code>.</p>
<p><code>arr</code> and <code>df</code> have the same shape <code>(x,y)</code>. </p>
<p>I need to group by one column of <code>df</code> and apply the transformation of the impacted rows on <code>arr</code> which have the same shape. </p>
<p>To be clear, here is a toy example:</p>
<pre class="lang-py prettyprint-override"><code>arr =
0 1 12 3
2 5 45 47
3 19 11 111
df =
A B C D
0 0 1 2 3
1 4 5 6 7
2 4 9 10 11
</code></pre>
<p>I want to group <code>df</code> by <code>A</code> and compute the mean, but instead of transforming <code>df</code> I want <code>arr</code> to be transformed. </p>
<p>So I get something like:</p>
<pre class="lang-py prettyprint-override"><code> arr =
0 1 12 3
(2+3)/2 (5+19)/2 (45+11)/2 (47+111)/2
</code></pre>
<p>Is that possible? With no expensive loops?</p>
<p>Thanks in advance</p>
| 2 | 2016-08-08T13:11:50Z | 38,830,253 | <p>It looks like you need to first create a <code>DataFrame</code> from <code>arr</code>, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> column <code>A</code> and aggregate with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.mean.html" rel="nofollow"><code>mean</code></a>. Last, convert it to a <code>numpy</code> array via <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.values.html" rel="nofollow"><code>values</code></a>:</p>
<pre><code>print (pd.DataFrame(arr).groupby(df.A).mean().values)
[[ 0. 1. 12. 3. ]
[ 2.5 12. 28. 79. ]]
</code></pre>
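<p>A fully self-contained reproduction with the toy data from the question:</p>

```python
import numpy as np
import pandas as pd

arr = np.array([[0, 1, 12, 3],
                [2, 5, 45, 47],
                [3, 19, 11, 111]])
df = pd.DataFrame({'A': [0, 4, 4], 'B': [1, 5, 9],
                   'C': [2, 6, 10], 'D': [3, 7, 11]})

# group the rows of arr by df.A (alignment is by the shared index)
# and average each group of rows
result = pd.DataFrame(arr).groupby(df.A).mean().values
print(result)
# first row unchanged (its own group), second is the mean of rows 1 and 2
```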
| 2 | 2016-08-08T13:17:03Z | [
"python",
"arrays",
"pandas",
"numpy",
"dataframe"
] |
Rearrange position of words in string conditionally | 38,830,165 | <p>I've spent the last few months developing a program that my company is using to clean and geocode addresses on a large scale (~5,000/day). It is functioning adequately well, however, there are certain address formats that I see daily that are causing issues for me.</p>
<p>Addresses with a format such as this <code>park avenue 1</code> are causing issues with my geocoding. My thought process to tackle this issue is as follows:</p>
<ol>
<li>Split the address into a list</li>
<li>Find the index of my delimiter word in the list. The delimiter words are words such as <code>avenue, street, road, etc</code>. I have a list of these delimiters called <code>patterns</code>.</li>
<li>Check to see if the word immediately following the delimiter is composed of digits with a length of 4 or less. If the number has a length of higher than 4 it is likely to be a zip code, which I do not need. If it's less than 4 it will most likely be the house number.</li>
<li>If the word meets the criteria that I explained in the previous step, I need to move it to the first position in the list.</li>
<li>Finally, I will put the list back together into a string.</li>
</ol>
<p>Here is my initial attempt at putting my thoughts into code:</p>
<pre><code>patterns = ['my list of delimiters']
address = 'park avenue 1' # this is an example address
address = address.split(' ')
for pattern in patterns:
location = address.index(pattern) + 1
if address[location].isdigit() and len(address[location]) <= 4:
# here is where i'm getting a bit confused
# what would be a good way to go about moving the word to the first position in the list
address = ' '.join(address)
</code></pre>
<p>Any help would be appreciated. Thank you folks in advance.</p>
| 0 | 2016-08-08T13:12:54Z | 38,830,304 | <p>Make the string <code>address[location]</code> into a list by wrapping it in brackets, then concatenate the other pieces.</p>
<pre><code>address = [address[location]] + address[:location] + address[location+1:]
</code></pre>
<p>An example:</p>
<pre><code>address = ['park', 'avenue', '1']
location = 2
address = [address[location]] + address[:location] + address[location+1:]
print(' '.join(address)) # => '1 park avenue'
</code></pre>
| 1 | 2016-08-08T13:19:26Z | [
"python"
] |
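The slicing trick from the answer can be folded into the asker's loop as a complete function. The `patterns` set here is an assumed stand-in for the asker's full delimiter list:

```python
patterns = {"avenue", "street", "road", "lane"}  # assumed delimiter words

def move_number_to_front(address):
    """Move a short house number that follows a delimiter word to the front."""
    words = address.split()
    for i, word in enumerate(words[:-1]):
        nxt = words[i + 1]
        # Same criteria as the question: all digits, length 4 or less.
        if word in patterns and nxt.isdigit() and len(nxt) <= 4:
            # Same slicing trick as the answer: [number] + everything else.
            return " ".join([nxt] + words[:i + 1] + words[i + 2:])
    return address

print(move_number_to_front("park avenue 1"))      # 1 park avenue
print(move_number_to_front("main street 12345"))  # main street 12345 (zip-like, unchanged)
```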
Rearrange position of words in string conditionally | 38,830,165 | <p>I've spent the last few months developing a program that my company is using to clean and geocode addresses on a large scale (~5,000/day). It is functioning adequately well, however, there are certain address formats that I see daily that are causing issues for me.</p>
<p>Addresses with a format such as this <code>park avenue 1</code> are causing issues with my geocoding. My thought process to tackle this issue is as follows:</p>
<ol>
<li>Split the address into a list</li>
<li>Find the index of my delimiter word in the list. The delimiter words are words such as <code>avenue, street, road, etc</code>. I have a list of these delimiters called <code>patterns</code>.</li>
<li>Check to see if the word immediately following the delimiter is composed of digits with a length of 4 or less. If the number has a length of higher than 4 it is likely to be a zip code, which I do not need. If it's less than 4 it will most likely be the house number.</li>
<li>If the word meets the criteria that I explained in the previous step, I need to move it to the first position in the list.</li>
<li>Finally, I will put the list back together into a string.</li>
</ol>
<p>Here is my initial attempt at putting my thoughts into code:</p>
<pre><code>patterns = ['my list of delimiters']
address = 'park avenue 1' # this is an example address
address = address.split(' ')
for pattern in patterns:
location = address.index(pattern) + 1
if address[location].isdigit() and len(address[location]) <= 4:
# here is where i'm getting a bit confused
# what would be a good way to go about moving the word to the first position in the list
address = ' '.join(address)
</code></pre>
<p>Any help would be appreciated. Thank you folks in advance.</p>
| 0 | 2016-08-08T13:12:54Z | 38,831,062 | <p>Here's a modified version of your code. It uses simple list slicing to rearrange the parts of the address list.</p>
<p>Rather than using a <code>for</code> loop to search for a matching road type it uses set operations.</p>
<p>This code isn't perfect: it won't catch "numbers" like 12a, and it won't handle weird street names like "Avenue Road".</p>
<pre><code>road_patterns = {'avenue', 'street', 'road', 'lane'}
def fix_address(address):
address_list = address.split()
road = road_patterns.intersection(address_list)
if len(road) == 0:
print("Can't find a road pattern in ", address_list)
elif len(road) > 1:
print("Ambiguous road pattern in ", address_list, road)
else:
road = road.pop()
index = address_list.index(road) + 1
if index < len(address_list):
number = address_list[index]
if number.isdigit() and len(number) <= 4:
address_list = [number] + address_list[:index] + address_list[index + 1:]
address = ' '.join(address_list)
return address
addresses = (
'42 tobacco road',
'park avenue 1 a',
'penny lane 17',
'nonum road 12345',
'strange street 23 london',
'baker street 221b',
'37 gasoline alley',
'83 avenue road',
)
for address in addresses:
fixed = fix_address(address)
print('{!r} -> {!r}'.format(address, fixed))
</code></pre>
<p><strong>output</strong></p>
<pre><code>'42 tobacco road' -> '42 tobacco road'
'park avenue 1 a' -> '1 park avenue a'
'penny lane 17' -> '17 penny lane'
'nonum road 12345' -> 'nonum road 12345'
'strange street 23 london' -> '23 strange street london'
'baker street 221b' -> 'baker street 221b'
Can't find a road pattern in ['37', 'gasoline', 'alley']
'37 gasoline alley' -> '37 gasoline alley'
Ambiguous road pattern in ['83', 'avenue', 'road'] {'avenue', 'road'}
'83 avenue road' -> '83 avenue road'
</code></pre>
| 1 | 2016-08-08T13:54:04Z | [
"python"
] |
sympy.geometry Point class is working slow | 38,830,236 | <p>I have a code which reads unstructured mesh. I wrote wrappers around geometric entities of <code>sympy.geometry</code> such as:</p>
<pre><code>class Point:
def __init__(self, x, y, parent_mesh):
self.shape = sympy.geometry.Point(x,y)
self.parent_mesh = parent_mesh
self.parent_cell = list()
</code></pre>
<p>Everything works fine but initialization of <code>sympy.geometry.Point</code> takes a lot of time for each <code>Point</code>. Actually, the code did not finish execution for thousands of points. Similar code written in C++ finished in a few seconds. Without it the code is fast enough (I removed it and timed). I read that a possible reason could be that <code>sympy.geometry</code> converts floating point numbers to rationals for precision. Is there a way (flag) to speed up <code>sympy.geometry</code> as I do not need exact precision?</p>
| 1 | 2016-08-08T13:16:13Z | 38,830,325 | <p>Take a look at the <a href="http://docs.sympy.org/latest/modules/geometry/points.html#module-sympy.geometry.point" rel="nofollow"><code>Point</code> class documentation</a>, specifically, in one of the first examples:</p>
<blockquote>
<p>Floats are automatically converted to Rational unless the evaluate flag is <code>False</code>.</p>
</blockquote>
<p>So, you could pass a flag named <code>evaluate</code> during initialization of your <code>Point</code> classes:</p>
<pre><code>self.shape = sympy.geometry.Point(x,y, evaluate=False)
</code></pre>
<p>which apparently signals what you're after.</p>
| 3 | 2016-08-08T13:20:20Z | [
"python",
"python-3.x",
"sympy"
] |
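The effect of the flag can be seen directly; this sketch compares the two construction modes (the coordinates are arbitrary):

```python
import sympy
import sympy.geometry as geom

# evaluate=False skips the float-to-Rational conversion that the answer
# identifies as the expensive step.
p_fast = geom.Point(0.5, 0.25, evaluate=False)   # coordinates stay as Floats
p_exact = geom.Point(0.5, 0.25)                  # coordinates become Rationals

print(p_fast.x, p_exact.x)
```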
How to fill matplotlib bars with a gradient? | 38,830,250 | <p>I would be very interested in filling the bars of a matplotlib/seaborn barplot with different gradients, exactly as done here (not with matplotlib, as far as I understood):
<a href="http://i.stack.imgur.com/fOGrG.png" rel="nofollow"><img src="http://i.stack.imgur.com/fOGrG.png" alt="enter image description here"></a></p>
<p>I have also checked this related topic <a href="http://stackoverflow.com/questions/22081361/pyplot-vertical-gradient-fill-under-curve">Pyplot: vertical gradient fill under curve?</a>.</p>
<p>Is this only possible via gr-framework:
<a href="http://i.stack.imgur.com/FJqmp.png" rel="nofollow"><img src="http://i.stack.imgur.com/FJqmp.png" alt="enter image description here"></a>
or are there alternative strategies?</p>
| -1 | 2016-08-08T13:16:55Z | 38,833,551 | <p>I am using seaborn <a href="https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.barplot.html" rel="nofollow">barplot</a> with the <code>palette</code> option. Imagine you have a simple dataframe like:</p>
<pre><code>df = pd.DataFrame({'a':[1,2,3,4,5], 'b':[10,5,2,4,5]})
</code></pre>
<p>using seaborn:</p>
<pre><code>sns.barplot(df['a'], df['b'], palette='Blues_d')
</code></pre>
<p>you can obtain something like:</p>
<p><a href="http://i.stack.imgur.com/Rdb83.png" rel="nofollow"><img src="http://i.stack.imgur.com/Rdb83.png" alt="enter image description here"></a></p>
<p>then you can also play with the <code>palette</code> option and a <code>colormap</code>, adding a gradient according to some data, like:</p>
<pre><code>sns.barplot(df['a'], df['b'], palette=cm.Blues(df['b']*10))
</code></pre>
<p>obtaining:</p>
<p><a href="http://i.stack.imgur.com/EaFAi.png" rel="nofollow"><img src="http://i.stack.imgur.com/EaFAi.png" alt="enter image description here"></a></p>
<p>Hope that helps.</p>
| 1 | 2016-08-08T15:48:56Z | [
"python",
"matplotlib",
"bar-chart",
"gradient",
"seaborn"
] |
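The palette approach colors each bar uniformly. For a true vertical gradient *within* each bar, like the first screenshot in the question, a common plain-matplotlib trick is to draw each bar as an `imshow` gradient limited to the bar's extent. A sketch with made-up values:

```python
import io

import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt
import numpy as np

values = [10, 5, 2, 4, 5]               # assumed data, not from the post
grad = np.linspace(0, 1, 256).reshape(-1, 1)  # a 256x1 vertical gradient

fig, ax = plt.subplots()
for x, v in enumerate(values):
    # Each "bar" is the gradient image stretched to the bar's rectangle.
    ax.imshow(grad, extent=(x - 0.4, x + 0.4, 0, v),
              origin="lower", aspect="auto", cmap="Blues")
ax.set_xlim(-0.5, len(values) - 0.5)
ax.set_ylim(0, max(values) * 1.05)

buf = io.BytesIO()
fig.savefig(buf, format="png")
print(len(buf.getvalue()), "bytes written")
```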
Removing \xa0 from string in a list | 38,830,308 | <p>I have a list with a bunch of words:</p>
<pre><code>lista = ['Jeux Olympiques De Rio\xa02016', 'Sahara Ray', 'Michael Phelps', 'Amber Alert']
</code></pre>
<p>I tried to replace the <code>'\xa0'</code>:</p>
<pre><code>for element in listor:
element = element.replace('\xa0',' ')
</code></pre>
<p>But it didn't work. Also, when I <code>print</code> the elements, it prints:</p>
<pre><code>print(lista[0])
Jeux Olympiques De Rio 2016
</code></pre>
<p>Does anyone have an idea on how to solve this?</p>
| 1 | 2016-08-08T13:19:36Z | 38,830,388 | <pre><code>for index, element in enumerate(listor):
listor[index] = element.replace('\xa0',' ')
</code></pre>
<p>Now you're replacing the string within the list rather than trying to change the string itself. Previously, you were assigning a new string to the same variable name, which doesn't change the previous string but just changes the string that the <code>element</code> variable is pointing to. With this, you'll actually be overwriting the list element with a new string.</p>
| 2 | 2016-08-08T13:23:21Z | [
"python",
"string",
"python-3.x"
] |
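The in-place pattern from the answer, runnable end to end with the question's own list values:

```python
listor = ['Jeux Olympiques De Rio\xa02016', 'Sahara Ray', 'Michael Phelps']

# Overwrite each list slot so the change sticks, instead of rebinding the
# loop variable as in the question's original loop.
for index, element in enumerate(listor):
    listor[index] = element.replace('\xa0', ' ')

print(listor[0])  # Jeux Olympiques De Rio 2016
```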
Removing \xa0 from string in a list | 38,830,308 | <p>I have a list with a bunch of words:</p>
<pre><code>lista = ['Jeux Olympiques De Rio\xa02016', 'Sahara Ray', 'Michael Phelps', 'Amber Alert']
</code></pre>
<p>I tried to replace the <code>'\xa0'</code>:</p>
<pre><code>for element in listor:
element = element.replace('\xa0',' ')
</code></pre>
<p>But it didn't work. Also, when I <code>print</code> the elements, it prints:</p>
<pre><code>print(lista[0])
Jeux Olympiques De Rio 2016
</code></pre>
<p>Does anyone have an idea on how to solve this?</p>
| 1 | 2016-08-08T13:19:36Z | 38,830,447 | <p>Just use a list comprehension to replace the ending if a string contains <code>'\xa0'</code>:</p>
<pre><code>res = [elem if '\xa0' not in elem else elem.replace('\xa0', '') for elem in lista]
</code></pre>
<p>Your current approach merely re-assigns a name (<code>element</code>) over and over without actually modifying the list <code>lista</code>. The list comprehension will create a new list with elements from the original list, i.e:</p>
<pre><code>for elem in lista
</code></pre>
<p>and replace all strings that contain <code>\xa0</code>, i.e.:</p>
<pre><code>elem if '\xa0' not in elem else elem.replace('\xa0', '')
</code></pre>
| 2 | 2016-08-08T13:25:28Z | [
"python",
"string",
"python-3.x"
] |
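Since `str.replace` is a no-op when the target substring is absent, the conditional in the comprehension above is optional. A simplified sketch (replacing with a space, as the question intended):

```python
lista = ['Jeux Olympiques De Rio\xa02016', 'Michael Phelps']

# replace() leaves strings without '\xa0' untouched, so no condition is needed.
res = [elem.replace('\xa0', ' ') for elem in lista]
print(res)  # ['Jeux Olympiques De Rio 2016', 'Michael Phelps']
```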