Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
35,394,675 | 2016-02-14T17:17:00.000 | 3 | 1 | 1 | 0 | python,package,pypi | 35,530,716 | 2 | false | 0 | 0 | Upon searching pypi.python.org for pi3d I have found that when you go to the pi3d v2.9 page there is now a large bold warning saying that it isn't the latest version and gives a link to v2.10 which was probably put there between the time you asked this question and now. However the fact that the v2.10 was not listed fo... | 2 | 9 | 0 | I maintain the pi3d package which is available on pypi.python.org. Prior to v2.8 the latest version was always returned by a search for 'pi3d'. Subsequently v2.7 + v2.8 then v2.7 + v2.8 + v2.9 were listed. These three are still listed even though I am now at v2.10. i.e. the latest version is NOT listed and it requires ... | on pypi.python.org what would cause hidden old versions to be returned by explicit search | 0.291313 | 0 | 0 | 269 |
35,395,843 | 2016-02-14T19:01:00.000 | 0 | 0 | 0 | 1 | python,openshift | 35,442,172 | 1 | true | 0 | 1 | Bryan has answered the question. Tkinter will not work with WSGI. A web framework such as Django must be used. | 1 | 0 | 0 | I would like to deploy a Python3 app that uses tkinter on OpenShift. I added the following to setup.py: install_requires=["Tcl==8.6.4"]. When I ran git push I received the following error:
Could not find suitable distribution for Requirement.parse('Tcl==8.6.4').
Can anyone provide the correct syntax, distribution p... | Using Tkinter with Openshift | 1.2 | 0 | 0 | 105 |
35,397,377 | 2016-02-14T20:07:00.000 | 0 | 0 | 0 | 0 | python,themes,freeze,python-idle | 35,397,402 | 1 | false | 0 | 0 | Turns out one way is manually deleting the faulty theme. This allows the Configure IDLE menu to open. Whoops. | 1 | 2 | 0 | So, recently I was using the Python theme function for the IDLE program itself. I downloaded three themes and built my own one, which is selected now. The problem is, I forgot to set colours for the blinker and highlighting, which is hugely problematic. When I went to see if I could change back to the default setting,... | Python freezes when configuring IDLE | 0 | 0 | 0 | 312 |
35,398,139 | 2016-02-14T21:21:00.000 | 1 | 0 | 1 | 0 | python,memory,nonetype | 35,398,203 | 2 | false | 0 | 0 | The memory location of None is statically allocated. It is set, when python is compiled. So different versions of CPython has different ids. | 1 | 2 | 0 | When I type id(None) into a Python interpreter, I get 9545840. I can open another terminal and do the same thing, and I get the same result even if the first terminal has been closed, so apparently None has been assigned a place in memory that has been reserved. When is that memory location decided on? Is it somethi... | None memory location | 0.099668 | 0 | 0 | 117 |
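The behavior described in the row above can be checked directly — within a single CPython process, `id(None)` never changes, because None is a singleton allocated when the interpreter starts (the exact number varies across builds):

```python
# None is a singleton; its address is fixed for the life of the process.
first = id(None)
also = id(None)

assert first == also   # same object every time it is referenced
assert None is None    # identity, not just equality
print(first)           # the number itself differs between CPython builds
```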
35,399,162 | 2016-02-14T23:09:00.000 | 0 | 0 | 1 | 0 | ipython,ubuntu-15.10 | 35,399,319 | 1 | false | 0 | 0 | Did you try to install the traitlets dependency? pip install traitlets | 1 | 0 | 0 | I try to open IPython Notebook but I get this message:
Traceback (most recent call last):
File "/usr/local/bin/ipython", line 5, in
from pkg_resources import load_entry_point
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3080, in
@_call_aside
File "/usr/lib/python2.7/dist-packages/pkg_... | ipython notebook don't open | 0 | 0 | 0 | 114 |
35,404,323 | 2016-02-15T08:12:00.000 | 0 | 0 | 0 | 0 | python,pygal | 35,423,410 | 1 | false | 0 | 0 | Never mind, I got it sorted, I was using the wrong graph type | 1 | 0 | 1 | How can I plot multiple data series with different number of elements and have them fill the graph along the x axis?
At the moment if I graph a = [1,2,3,4,5] and b = [1,2,3] the b lines only covers half the graph. Is this possible or do I need to somehow combine the graphs after plotting/rendering them? | pygal data series of different lengths | 0 | 0 | 0 | 110 |
35,407,514 | 2016-02-15T10:56:00.000 | 0 | 0 | 1 | 1 | python,c++,linux,gdb,arm | 35,412,607 | 1 | false | 0 | 0 | You are probably missing library headers (something like python3-dev).
To install it on Ubuntu or similar start by sudo apt-get install python3-dev.
Or if you don't plan to use python scripting in gdb, you can configure with "--without-python".
As far as I can tell you are also not configuring gdb correctly. You can le... | 1 | 2 | 0 | I want to debug applications on devices; I prefer gdb (ARM version) over gdb with gdbserver, because there is a dashboard, a visual interface for GDB in Python.
It must cooperate with gdb (ARM version) on the device, so I need to cross-compile an ARM version of gdb with Python; the command used is shown bel... | GDB cross-compilation for arm | 0 | 0 | 0 | 1,845 |
35,411,265 | 2016-02-15T13:56:00.000 | 0 | 1 | 1 | 0 | python,logging,optimization | 35,420,774 | 2 | false | 0 | 0 | Use logger.debug('%s', myArray) rather than logger.debug(myArray). The first argument is expected to be a format string (as all the documentation and examples show) and is not assumed to be computationally expensive. However, as @dwanderson points out, the logging will actually only happen if the logger is enabled for ... | 1 | 2 | 1 | I'm optimizing a Python program that performs some sort of calculation. It uses NumPy quite extensively. The code is sprinkled with logger.debug calls (logger is the standard Python log object).
When I run cProfile I see that Numpy's function that converts an array to string takes 50% of the execution time. This is sur... | Python logger.debug converting arguments to string without logging | 0 | 0 | 0 | 2,623 |
35,414,707 | 2016-02-15T16:47:00.000 | 0 | 0 | 0 | 0 | python,rethinkdb | 35,417,780 | 1 | false | 1 | 0 | The easiest thing to do would be to denormalize your data so that your changefeed only has to look at one table. | 1 | 0 | 0 | I use rethinkdb changefeed and I need to catch event from one table with condition from another: first table contains some information, second table contains info about user and I need catch change in first table by the specific user.
I tryed join tables and use changefeed with it, but it not works good.
Are there way... | How use changefeed with 2 tables? | 0 | 0 | 0 | 35 |
35,421,803 | 2016-02-16T00:52:00.000 | 0 | 0 | 0 | 0 | python,database,filesystems,document-oriented-db | 35,421,941 | 2 | false | 0 | 0 | A DODB sounds like a much more reliable and professional solution. Besides you can add stored procedures thinking in the future and besides most databases offer text search capabilities. Backups are also easier, instead of using an incremental tar command, you can use the native DB backup tools.
I'm fan of CouchDB a... | 1 | 2 | 0 | I'm working in a Python program which has to access data that is currently stored in plain text files. Each file represents a cluster of data points that will be accessed together. I don't need to support different queries, the only thing I need is to retrieve and copy to memory cluster of data as fast as possible.
I'm... | Document-oriented databases vs plain text files | 0 | 1 | 0 | 367 |
35,422,002 | 2016-02-16T01:14:00.000 | 1 | 0 | 0 | 0 | python,django,forms,model,message | 35,422,055 | 2 | false | 1 | 0 | I assume that you will have some view which will render a page on which a user of your site will be able to read the unread notifications. So I think you can simply add a bool field unread to the notifications model. This field is set to true when there is a new notification. After the user renders the page with unread notifications, thi... | 1 | 1 | 0 | I want to develop a notification system with Django. I have a button (and a count of unread messages) that shows all messages to the user, after which the counter returns to zero. How can my database detect that the user has already read the messages and reset the counter? I don't think that I can emulate this with fo... | How can I model this behavior in Django? | 0.099668 | 0 | 0 | 105 |
35,422,495 | 2016-02-16T02:14:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine,google-cloud-datastore,google-console-developer | 35,578,109 | 1 | false | 1 | 0 | If you check the little question-mark near the statistics summary it says the following:
Statistics are generated every 24-48 hours. In addition to the kinds used for your entities, statistics also include system kinds, such as those related to metadata. System kinds are not visible in your kinds menu. Note that stati... | 1 | 0 | 0 | I recently deployed my app on GAE.
In my Datastore page of Google Cloud Console, in the Dashboard Summary, it shows that I have 75 entities. However, when I click on the Entities tab, it shows I have 2 entities of one kind and 3 entities of another kind. I remember creating these entities. I'm just curious where the 75... | Google App Engine Console shows more entities than I created | 0.197375 | 0 | 0 | 70 |
35,428,278 | 2016-02-16T09:14:00.000 | 1 | 0 | 1 | 0 | python,performance,python-2.7,python-3.x,design-patterns | 35,433,954 | 1 | false | 0 | 0 | Your question is quite broad, so I can't give you an exact answer. However, what I would generally do here is to run a linter like flake8 over the whole codebase to show you where you have unused imports and if you have references in your files to things that you haven't imported. It won't tell you if a whole file is n... | 1 | 0 | 0 | In a legacy system, we have created an init module which loads information and is used by various modules (via import statements). It's a big module which consumes a lot of memory and takes a long time to process, and some of the information is not needed or has never been used. There are two proposed solutions.
Can we determine in Python who is using this mo... | Determine usage/creation of object and data member into another module | 0.197375 | 0 | 1 | 41 |
35,434,188 | 2016-02-16T13:41:00.000 | 0 | 0 | 1 | 0 | python,strip | 35,435,311 | 2 | false | 0 | 0 | So, what is the memory on your target system? Unless you have less than 220MB RAM or so for the whole process, I think str.strip is what you should use there.
One could interactively consume the 1GB file to create a stripped 100MB part - but that would be cost intensive - having to hold up to the full 100MB in an inte... | 1 | 0 | 0 | I have a large string, >100mb in size.
I want to remove leading and trailing white space.
What is a simple and memory efficient way to do this?
Consider the following problem:
A 1Gb file will be partitioned for parallel processing.
This file is divided into 10 equal parts, each 100 Mb long.
A large part of these files ... | What is a simple and memory efficient way strip whitespace from a large string in Python | 0 | 0 | 0 | 228 |
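As the answer above notes, `str.strip` is usually the right tool here; its main cost is that it returns a new string, so you briefly hold roughly two copies of the data:

```python
# str.strip returns a new string (the original is unchanged), so peak
# memory is about input size + stripped size - acceptable for ~100 MB
# unless the machine is extremely memory-constrained.
chunk = " \t\n" + "x" * 10 + "  \n"
cleaned = chunk.strip()
print(repr(cleaned))  # 'xxxxxxxxxx'

# If only one end can carry whitespace, lstrip/rstrip do half the work.
print(repr(chunk.rstrip()))
```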
35,436,599 | 2016-02-16T15:32:00.000 | 1 | 0 | 0 | 0 | python,scikit-learn,linear-regression | 35,438,322 | 3 | false | 0 | 0 | There is a linear classifier sklearn.linear_model.RidgeClassifer(alpha=0.) that you can use for this. Setting the Ridge penalty to 0. makes it do exactly the linear regression you want and set the threshold to divide between classes. | 1 | 1 | 1 | I trained a linear regression model(using sklearn with python3),
my train set was with 94 features and the class of them was 0 or 1..
than i went to check my linear regression model on the test set and it gave me those results:
1.[ 0.04988957] its real value is 0 on the test set
2.[ 0.00740425] its real value is 0 on t... | can i make linear regression predict like a classification? | 0.066568 | 0 | 0 | 2,479 |
35,437,458 | 2016-02-16T16:11:00.000 | 1 | 0 | 0 | 0 | python,mongodb,mongoengine | 35,648,604 | 1 | true | 1 | 0 | Mongoengine do not rebuild index automaticly. Mongoengine track changes in models (btw dont work if you add sparse to your filed(if field dont have unique options)) and then fire the ensureIndex in mongoDB. But when its fire - make sure you delete oldest index version manualy(Mongoengine doesn't) in mongoDB.
The probl... | 1 | 3 | 0 | When Mongoengine rebuild(update) a information about indexes? I mean, if a added or change some field (added uniques or sparse option to filed) or added some meta info in model declaration.
So question is:
When mongoengine update it?
How do they track changes? | When Mongoengine rebuild indexes? | 1.2 | 1 | 0 | 349 |
35,437,656 | 2016-02-16T16:19:00.000 | 0 | 0 | 0 | 0 | python,django,excel,django-import-export | 35,437,885 | 2 | false | 1 | 0 | An easy fix would be adding an apostrophe (') at the beginning of each number when exporting with import-export. This way Excel will recognize those numbers as text. | 1 | 1 | 0 | I am faced with the following problem: when I generate .csv files in Python using django-import-export, even though the field is a string, the leading zeros are omitted when I open the file in Excel, e.g. 000123 > 123.
This is a problem, because if I'd like to display a zipcode I need the zeros the way they are. I can cover i... | Django import-export leading zeros for numerical values in excel | 0 | 0 | 0 | 281 |
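The apostrophe workaround from the answer above can be sketched like this (Excel then treats each cell as text and keeps the leading zeros; the zip-code values are made up for illustration):

```python
import csv
import io

zipcodes = ["000123", "02139", "90210"]

# Prefix each value with an apostrophe so Excel imports it as text
# rather than coercing it to a number and dropping leading zeros.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["zipcode"])
for z in zipcodes:
    writer.writerow(["'" + z])

print(buf.getvalue())
```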
35,440,612 | 2016-02-16T18:43:00.000 | 4 | 0 | 0 | 1 | python,django,macos | 35,442,348 | 1 | true | 1 | 0 | This is what version control is for. Sign up for an account at Github, Bitbucket, or Gitlab, and push your code there. | 1 | 5 | 0 | I'm developing a Django project on my MacBook Pro, constantly paranoid about my house burning down, someone stealing my MacBook, hard drive failure, or other things that are unlikely but catastrophic if they occur.
How can I create or get automatic backup every 1 hour from my OS X directory where the Django project is to ... | Create regular backups from OS X to the cloud | 1.2 | 0 | 0 | 35 |
35,441,310 | 2016-02-16T19:22:00.000 | 1 | 0 | 0 | 0 | python-2.7,csv,pandas | 35,441,725 | 1 | true | 0 | 0 | Upgrading pandas from 0.15.1 to 0.17.1 resolved this issue. | 1 | 0 | 1 | I need to save to csv, but have date values in the series that are below 1900 (ie Mar 1 1899), which is preventing this from happening. I get ValueError: year=1899 is before 1900; the datetime strftime() methods require year >= 1900. It seems a little absurd for a function like this to work only for dates above 1900s, ... | pandas to_csv on dataframe with a column that has dates below 1900 | 1.2 | 0 | 0 | 136 |
35,442,576 | 2016-02-16T20:42:00.000 | 0 | 0 | 1 | 0 | java,python,scala | 35,442,831 | 3 | false | 0 | 0 | Scala is made because of this. It mixes functional and OOP language features, so you can create methods by themselves, without creating a class to contain them.
Java doesn't have this feature, methods can't be created outside a class. Java is very object oriented, everything (except the primitives) extends object.
Some... | 1 | 0 | 0 | During 3 years of my working career, I have been working with databases, data, etc. It was only during the last year that I started working with Python for some data analysis. Now i got interested in all the Big Data ecosystem and Python gets me far enough, yet.
However, recently I chose to learn Scala as my second pro... | Why do i need to create a class in Java program? | 0 | 0 | 0 | 176 |
35,446,029 | 2016-02-17T01:00:00.000 | 0 | 1 | 1 | 0 | python,pytest | 44,379,571 | 1 | false | 0 | 0 | You should just update to newer pytest. Looks like this problem was fixed in pytest=2.9.0. | 1 | 1 | 0 | I execute py.test like this : py.test -s -f, -f is looponfail mode and -s is --capture=no mode.
But print() output appears only when a test fails. If all tests succeed, no print() output from any of the code is shown.
How can I enable print() output even in looponfail mode?
Python 3.4
Py.test 2.7.2 | How could I enable print statement in pytest looponfail mode? | 0 | 0 | 0 | 118 |
35,447,087 | 2016-02-17T03:01:00.000 | 3 | 0 | 0 | 0 | python,django,caching,django-models,django-views | 35,447,745 | 1 | true | 1 | 0 | You ask about "caching" which is a really broad topic, and the answer is always a mix of opinion, style and the specific app requirements. Here are a few points to consider.
If the data is per user, you can cache it per user:
from django.core.cache import cache
cache.set(request.user.id,"foo")
cache.get(request.user.i... | 1 | 1 | 0 | I want to use caching in Django and I am stuck up with how to go about it. I have data in some specific models which are write intensive. records will get added continuously to the model. Each user has some specific data in the model similar to orders table.
Since my model is write intensive I am not sure how effecti... | update existing cache data with newer items in django | 1.2 | 0 | 0 | 1,927 |
35,451,340 | 2016-02-17T08:23:00.000 | 1 | 0 | 0 | 0 | python,web-scraping,scrapy | 35,451,490 | 1 | true | 1 | 0 | Looks like you are trying the command scrapy startproject stack inside python interactive shell.
Run the same command directly on bash shell, and not inside python shell.
And you don't need import scrapy command to create a scrapy project. | 1 | 4 | 0 | I'm learning scrapy to create a crawler that could crawl website and get back the results, however on creating a new project, it is returning an error.
I tried creating a folder manually, but again it returned an error.
Any idea how to resolve this.
SyntaxError: invalid syntax
import scrapy
scrapy startproject stack | Scrapy: Create Project returning error | 1.2 | 0 | 0 | 4,043 |
35,451,564 | 2016-02-17T08:37:00.000 | 2 | 0 | 0 | 1 | python,c,macos,core-foundation,mach | 35,452,078 | 1 | true | 0 | 0 | You can't. Mac OS X does not keep track of this information in the way you're looking for -- opening an application from another application does not establish a relationship of any sort between those applications. | 1 | 2 | 0 | I'd like to create a daemon (base on script or some lower level language) that calculates statistics on all opened applications according to their initiating process. The problem is that the initiating process does not always equivalent to the actual parent process.
For instance, When I press an hyperlink from Microsof... | Running processes in OS X, Find the initiator process | 1.2 | 0 | 0 | 81 |
35,453,451 | 2016-02-17T10:02:00.000 | 1 | 0 | 0 | 0 | python,django | 35,453,880 | 2 | false | 1 | 0 | Add some boolean field (answered, is_answered, etc.) and check on every "Response" click whether the message is already answered.
Hope it helps. | 1 | 0 | 0 | I am writing a mini-CRM system in which two users can log in at the same time and answer received messages. However, the problem is that they might respond to the same message, because messages only disappear when someone clicks the "Response" button. Is there any suggestion for how to lock the system? | Lock the system | 0.099668 | 0 | 0 | 52 |
35,454,970 | 2016-02-17T11:08:00.000 | 1 | 0 | 1 | 1 | python,testing,analysis,cuckoo | 35,478,371 | 1 | true | 0 | 0 | I was able to fix this issue just by changing the configuration file "virtualbox.conf". in this configuration file it says that the virtual machine as [cuckoo1] (title of the virtual machine configuration).
Since my virtual machine name is "windows_7" i have to change [cuckoo1] to windows_7. That is why cuckoo don't g... | 1 | 2 | 0 | I have installed cuckoo sandbox in ubuntu environment with windows7 32 bit as guest os. I have followed the instructions given in their website.The vm is named windows_7. I have edited the "machine" and "label" field properly in "virtualbox.conf".
But when I try to start the cuckoo executing "sudo python cuckoo.py" it... | Cuckoo sandbox: shows "Configuration details about machine windows_7 are missing" error | 1.2 | 0 | 0 | 1,540 |
35,457,300 | 2016-02-17T12:52:00.000 | 2 | 0 | 0 | 1 | python,google-app-engine,debugging,breakpoints | 35,457,609 | 2 | false | 1 | 0 | As often happens with these things, writing this question gave me a couple of ideas to try. I was using the Personal edition ... so I downloaded the professional edition ... and it all worked fine.
Looks like I'm paying $95 instead of $45 when the 30 day trial runs out. | 2 | 2 | 0 | I'm new to Python, Wing IDE and Google cloud apps.
I've been trying to get Wing IDE to stop at a breakpoint on the local (Windows 7) Google App Engine. I'm using the canned guestbook demo app and it launches fine and responds as expected in the web browser.
However breakpoints are not working. I'm not sure if this is i... | Wing IDE not stopping at break point for Google App Engine | 0.197375 | 0 | 0 | 544 |
35,457,300 | 2016-02-17T12:52:00.000 | 5 | 0 | 0 | 1 | python,google-app-engine,debugging,breakpoints | 42,961,127 | 2 | false | 1 | 0 | I have copied the wingdbstub.py file (from debugger packages of Wing ide) to the folder I am currently running my project on and used 'import wingdbstub' & initiated the debug process. All went well, I can now debug modules. | 2 | 2 | 0 | I'm new to Python, Wing IDE and Google cloud apps.
I've been trying to get Wing IDE to stop at a breakpoint on the local (Windows 7) Google App Engine. I'm using the canned guestbook demo app and it launches fine and responds as expected in the web browser.
However breakpoints are not working. I'm not sure if this is i... | Wing IDE not stopping at break point for Google App Engine | 0.462117 | 0 | 0 | 544 |
35,457,531 | 2016-02-17T13:03:00.000 | 0 | 0 | 0 | 1 | java,python,web-services,python-requests,legacy | 35,458,539 | 2 | false | 1 | 0 | Maybe you could add a man-in-the-middle: a socket server that receives the Unix strings, parses them into a sys-2 type of message, and sends it to sys-2. That could be an option to avoid rewriting all calls between the two systems. | 2 | 0 | 0 | I have a legacy web application sys-1 written in CGI that currently uses a TCP socket connection to communicate with another system, sys-2. Sys-1 sends out the data in the form of a Unix string. Now sys-2 is upgrading to a Java web service, which in turn requires us to upgrade. Is there any way to upgrade involving minimal c... | Python requests vs java webservices | 0 | 0 | 0 | 254 |
35,457,531 | 2016-02-17T13:03:00.000 | 0 | 0 | 0 | 1 | java,python,web-services,python-requests,legacy | 35,458,442 | 2 | true | 1 | 0 | Is there any way to upgrade involving minimal changes to the existing legacy code.
The solution mentioned, adding a conversion layer outside of the application, would have the least impact on the existing code base (in that it does not change the existing code base).
Can anyone advise if this method works
Would writ... | 2 | 0 | 0 | I have a legacy web application sys-1 written in CGI that currently uses a TCP socket connection to communicate with another system, sys-2. Sys-1 sends out the data in the form of a Unix string. Now sys-2 is upgrading to a Java web service, which in turn requires us to upgrade. Is there any way to upgrade involving minimal c... | Python requests vs java webservices | 1.2 | 0 | 0 | 254 |
35,460,433 | 2016-02-17T15:09:00.000 | 0 | 0 | 0 | 1 | python,debugging,gdb,qt-creator,opensuse | 35,525,248 | 1 | true | 0 | 0 | Works fine if "Run in terminal" unchecked or terminal changed back from konsole to xterm (works in konsole previously - weird). | 1 | 1 | 0 | See that when trying to debug my program in Qt Creator "Application Output" pane:
Debugging starts
Debugging has failed
Debugging has finished
Or freezes after
Debugging starts
Was able to run previously. Any way to fix this or to discover the problem?
Qt Creator 3.5.1, gcc 4.8.5, gdb 7.9.1, Python 2.7.9
P.S.... | Qt Creator failed to start gdb in latest openSUSE | 1.2 | 0 | 0 | 116 |
35,463,019 | 2016-02-17T16:59:00.000 | 0 | 1 | 0 | 1 | python,linux,process | 35,463,485 | 1 | false | 0 | 0 | Would running a filter in htop be quick enough?
Run htop, Press F5 to enter tree mode, then F4 to filter, and type in python... it should show all the python processes as they open/close | 1 | 2 | 0 | I am on Linux and wish to find the process spawned by a Python command.
Example: shutil.copyfile.
How do I do so?
Generally I have just read the processes from the terminal with ps however this command completes nearly instantaneously so I cannot do that for this without some lucky timing.
htop doesn't show the info... | How to find the name of a process spawned by Python? | 0 | 0 | 0 | 155 |
35,464,113 | 2016-02-17T17:51:00.000 | 0 | 0 | 1 | 1 | python,macos,import,terminal | 35,464,530 | 1 | true | 0 | 0 | That error means that there is no 'intelhex' on your Python path. The contents of /usr/local/bin should not matter (those are executable files but are not the Python modules). Are you sure that you installed the package and are loading it from the same Python site packages location you installed it to? | 1 | 0 | 0 | I am using the terminal on a MacBook Pro.
Trying to use intelhex in my code. I have downloaded intelhex using
sudo pip install intelhex
Success
pip list
shows intelhex installed
run my code and receive this error:
Traceback (most recent call last):
File "./myCode.py", line 20, in
from intelhex import I... | Intelhex - import error - macOSX Terminal | 1.2 | 0 | 0 | 1,912 |
35,464,138 | 2016-02-17T14:21:00.000 | 1 | 0 | 0 | 0 | python,numpy,memory,arrays | 35,464,658 | 2 | false | 0 | 0 | By definition you cannot append anything to an array because when the array is declared in memory it has to reserve as much space as it is going to need.
What you can do is either declare an array with the known geometry and initial values and then rewrite the new values per row, keeping a counter of the rows "append... | 1 | 0 | 1 | Is there a way to create a 3D numpy array by appending 2D numpy arrays? What I currently do is append my 2D numpy arrays to an initialized list of predetermined 2D numpy arrays, i.e., List=[np.zeros((600,600))]. After appending all my 2D numpy arrays I use numpy.dstack to create the 3D numpy array. I think this is not a v...
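The list-then-stack pattern discussed in the row above — collect the 2-D arrays in a plain Python list and build the 3-D array once at the end — might look like this (the shapes are illustrative):

```python
import numpy as np

# Appending to a Python list is cheap; growing a NumPy array copies it
# each time, so the single stack at the end is the efficient step.
planes = [np.full((4, 5), i, dtype=np.float64) for i in range(3)]

volume = np.dstack(planes)   # stacks along a new third axis -> (4, 5, 3)
print(volume.shape)

# np.stack(planes) puts the new axis first instead -> (3, 4, 5)
print(np.stack(planes).shape)
```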
35,466,165 | 2016-02-17T19:40:00.000 | 0 | 0 | 0 | 0 | python,list,pyodbc,netezza,executemany | 35,599,759 | 2 | false | 0 | 0 | Netezza is good for bulk loads, where executeMany() inserts number of rows in one go. The best way to load millions of rows is "nzload" utility which can be scheduled by vbscript, Excel Macro from Windows or Shell script from Linux. | 1 | 1 | 0 | I have about million records in a list that I would like to write to a Netezza table. I have been using executemany() command with pyodbc, which seems to be very slow (I can load much faster if I save the records to Excel and load to Netezza from the excel file). Are there any faster alternatives to loading a list with... | Loading data to Netezza as a list is very slow | 0 | 1 | 0 | 806 |
35,466,429 | 2016-02-17T19:53:00.000 | 7 | 0 | 1 | 0 | python,python-3.x,opencv | 44,714,952 | 5 | false | 0 | 0 | For anyone who would like to install OpenCV on Python 3.5.1,use this library called
opencv-contrib-python
This library works for Python 3.5.1 | 2 | 10 | 1 | I have searched quite a bit regarding this and I've tried some of these methods myself but I'm unable to work with OpenCV.So can anyone of you help me install OpenCV for python 3.5.1?
I'm using anaconda along with Pycharm in windows
Or is this not possible and i have to use python 2.7?
Thanks in advance | OpenCV for Python 3.5.1 | 1 | 0 | 0 | 79,914 |
35,466,429 | 2016-02-17T19:53:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,opencv | 63,226,795 | 5 | false | 0 | 0 | For OS:Windows 10 and Python version: 3.5.1 & 3.6, this worked for me
pip install opencv-contrib-python | 2 | 10 | 1 | I have searched quite a bit regarding this and I've tried some of these methods myself but I'm unable to work with OpenCV.So can anyone of you help me install OpenCV for python 3.5.1?
I'm using anaconda along with Pycharm in windows
Or is this not possible and i have to use python 2.7?
Thanks in advance | OpenCV for Python 3.5.1 | 0.039979 | 0 | 0 | 79,914 |
35,472,785 | 2016-02-18T04:23:00.000 | 1 | 0 | 0 | 0 | r,python-2.7,machine-learning,prediction,h2o | 41,201,223 | 3 | false | 0 | 0 | I have tried to use many of the default methods inside H2O with time series data. If you treat the system as a state machine where the state variables are a series of lagged prior states, it's possible, but not entirely effective as the prior states don't maintain their causal order. One way to alleviate this is to as... | 1 | 1 | 1 | We have hourly time series data having 2 columns, one is the timestamp and other is the error rate. We used H2O deep-learning model to learn and predict future error-rate but looks like it requires at least 2 features (except timestamp) for creating the model.
Is there any way h2o can learn this type of data (time, val... | Can we predict time-series single-dimensional data using H2O? | 0.066568 | 0 | 0 | 2,186 |
35,479,437 | 2016-02-18T10:54:00.000 | 0 | 0 | 1 | 0 | python,pip | 38,482,667 | 1 | false | 0 | 0 | Make sure of two things:
The pip version is the same in the offline server and in the online one.
To find out: pip -V
To update (if needed): pip install --upgrade pip
The python version is the same in both virtual enviroments or servers.
To find out: python (the header will have the version info)
In my case I ... | 1 | 2 | 0 | In order to make packages installed offline, I use the -d (or --download) option to pip install. For instance, pip install --download dependencies -r requirements.txt will download the packages for all required dependencies mentioned in requirements.txt to dependencies dir (but will not install them). Then I use pip in... | Offline installation for pip packages fails with error "Could not find a version that satisfies the requirement" | 0 | 0 | 0 | 2,118 |
35,484,772 | 2016-02-18T14:54:00.000 | 0 | 0 | 1 | 0 | python,c++,windows | 35,495,227 | 1 | false | 0 | 0 | You could have your python executable call the c++ executable and have the executable take in command line arguments. So basically in python have the service main code and a few basic cases that will call into a normal c++ executable. Not extremely efficient, but it works | 1 | 0 | 0 | Can I combine an executable with another executable (Windows Service Program) and run this program as a logical service?
By combining, I mean to form a single executable.
I want to write a Windows Service, and I've followed some tutorials that show how to do it using C++, i.e. writing the Service Program (in Windows) ... | Can I combine an executable with another executable (Windows Service Program) and run this program as a logical service? | 0 | 0 | 0 | 75 |
35,484,844 | 2016-02-18T14:57:00.000 | 3 | 0 | 1 | 0 | python,module,ptvs | 42,005,862 | 3 | false | 0 | 0 | I just wanted to add the below in addition to the verified answer, for a very specific scenario.
I was recently asked to fix the same problem that the OP was experiencing for a work machine, which had recently had the user accounts migrated over to a new domain.
Setup:
Visual Studio 2013
PTVS 2.2.30718
Anaconda 3.5
Bas... | 1 | 8 | 0 | In Visual Studio with PTVS I have two separate Python projects, one contains a Python source file named lib.py for use as a library of functions and the other is a main that uses the functions in the library. I am using an import statement in the main to reference the functions in the library project but get the follow... | PTVS: How to reference or use Python source code in one project from a second project | 0.197375 | 0 | 0 | 7,145 |
35,485,629 | 2016-02-18T15:29:00.000 | 0 | 0 | 1 | 0 | python,pandas | 35,486,318 | 2 | false | 0 | 0 | figured it out. specified the data type on import with dtype = {"phone" : str, "other_phone" : str}) | 1 | 0 | 1 | I'm using pandas to input a list of names and phone numbers, clean that list, then export it. When I export the list, all of the phone numbers have '.0' tacked on to the end. I tried two solutions:
A: round()
B: converting to integer then converting to text (which has worked in the past)
For some reason when I tried A,... | Removing decimals on export python | 0 | 0 | 0 | 79 |
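The accepted fix can be sketched with pandas; the column names and sample data below are invented for illustration. A numeric column containing a missing value is read as float64, which is what produces the trailing `.0` on export; forcing `str` on import preserves the raw digits.

```python
import io

import pandas as pd

# A phone column with a missing value: read naively, pandas infers float64,
# so 5551234567 would round-trip to "5551234567.0" on export.
csv_data = "name,phone\nAlice,5551234567\nBob,\n"

# Forcing str on import (as in the answer) keeps the digits as text.
df = pd.read_csv(io.StringIO(csv_data), dtype={"phone": str})
print(df["phone"].iloc[0])  # prints 5551234567, no trailing .0
```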
35,488,268 | 2016-02-18T17:22:00.000 | 4 | 0 | 0 | 0 | python-3.x,google-search | 35,488,561 | 2 | false | 0 | 0 | Often when searching for Python stuff, I add the search term "python" anyway because many names refer to entirely different things in the world as well. Using "python3" here appears to solve your problem. I also feel it a lot less unobtrusive than the hacks you describe. | 1 | 17 | 0 | I like to use google when I'm searching for documentation on things related to python. Many times what I am looking for turns out to be in the official python documentation on docs.python.org. Unfortunately, at time of writing, the docs for the python 2.x branch tend to rank much higher on google than the 3.x branch, a... | How to make google search results default to python3 docs | 0.379949 | 0 | 1 | 710 |
35,489,583 | 2016-02-18T18:29:00.000 | 0 | 0 | 0 | 0 | python,networkx,graphlab | 35,491,984 | 2 | false | 0 | 0 | Here is the first cut at porting from NetworkX to GraphLab. However, iterating appears to be very slow.
temp1 = cc['component_id']
temp1.remove_column('__id')
id_set = set()
id_set = temp1['component_id']
for item in id_set:
nodeset = cc_out[cc_out['component_id'] == item]['__id'] | 1 | 0 | 0 | What is the GraphLab equivalent to the following NetworkX code?
for nodeset in nx.connected_components(G):
In GraphLab, I would like to obtain a set of Vertex IDs for each connected component. | NetworkX to GraphLab Connected Component Conversion | 0 | 0 | 1 | 156 |
35,492,556 | 2016-02-18T21:14:00.000 | 12 | 0 | 0 | 0 | python,numpy,machine-learning,computer-vision,scikit-learn | 35,492,991 | 1 | true | 0 | 0 | In sklearn you can do this only for linear kernel and using SGDClassifier (with appropiate selection of loss/penalty terms, loss should be hinge, and penalty L2). Incremental learning is supported through partial_fit methods, and this is not implemented for neither SVC nor LinearSVC.
Unfortunately, in practise fitting... | 1 | 7 | 1 | I have two data set with different size.
1) Data set 1 is with high dimensions 4500 samples (sketches).
2) Data set 2 is with low dimension 1000 samples (real data).
I suppose that "both data set have the same distribution"
I want to train an non linear SVM model using sklearn on the first data set (as a pre-training )... | How to update an SVM model with new data | 1.2 | 0 | 0 | 4,307 |
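The incremental route the answer describes — a linear SVM via `SGDClassifier` with hinge loss and L2 penalty, updated through `partial_fit` — can be sketched as below. The toy arrays stand in for the sketch and real data sets and are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Linear SVM trained incrementally: hinge loss + L2 penalty, per the answer.
clf = SGDClassifier(loss="hinge", penalty="l2", random_state=0)

# "Pre-train" on the first (synthetic) data set ...
X1 = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y1 = np.array([0, 0, 1, 1])
clf.partial_fit(X1, y1, classes=np.array([0, 1]))  # classes required on first call

# ... then update the same model with the second data set,
# without refitting from scratch.
X2 = np.array([[0.2, 0.1], [1.1, 0.9]])
y2 = np.array([0, 1])
clf.partial_fit(X2, y2)

print(clf.predict([[0.0, 0.1], [1.0, 1.0]]))
```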
35,493,291 | 2016-02-18T22:00:00.000 | 0 | 0 | 0 | 1 | python,client,rethinkdb,failover,rethinkdb-python | 35,513,048 | 1 | false | 0 | 0 | Below is my opinion on how I set things up.
When the local proxy crashes, it should be restarted by a process monitor like systemd.
I don't use the RethinkDB local proxy. I run HAProxy in TCP mode locally on every app server to forward to RethinkDB, with Consul Template so that when a RethinkDB node joins the cluster... | 1 | 0 | 0 | I have:
4 servers running a single RethinkDB instance in cluster (4 shards / 3 replicas tables)
2 application servers (tornado + RethinkDB proxy)
The clients connect only to their local proxy.
How to specify both the local + the other proxy so that the clients could fail over to the other proxies when their local pr... | RethinkDB clients connection failover between proxies | 0 | 0 | 0 | 102 |
35,493,485 | 2016-02-18T22:12:00.000 | 1 | 1 | 0 | 0 | python,email,alert,splunk | 35,514,817 | 2 | true | 0 | 0 | well stated @IvanStarostin
The script should always be located in : $SPLUNK_HOME/bin/scripts or in $SPLUNK_HOME/etc//bin/scripts in case of an app.
When an alert triggers you can select a script to be run in the following way:
Run the desired search and then click Save as Alert. Configure how often should your search r... | 1 | 0 | 0 | I have a Splunk query which returns several JSON results and that I want to save as alert, sending regular emails to a list of people.
I have created a Python script which takes as input some JSONS like the ones from the Splunk logs and beautifies the results.
How can I configure the Splunk alert so that the users ge... | How to configure Python script to change body for Splunk email alert? | 1.2 | 0 | 0 | 1,055 |
35,493,845 | 2016-02-18T22:35:00.000 | 2 | 0 | 1 | 0 | python,elasticsearch,kibana | 35,494,855 | 1 | true | 0 | 0 | Make sure the field is mapped as a date field in ES and not a text field, that is likely your issue.
The field name doesn't matter, other than when you tell kibana about your index make sure you pick the correct field.
The ISO/XML datetime format is the default for ES, but it can be changed in the mapping if you needed... | 1 | 0 | 0 | I am sending data to elastic-search from a python script , it works fine and i am able to view it in Kibana. Now i want to insert the timestamps for each record/document so that i can get some plot in Kibana , based on the time information , for example , the number of documents submitted per five minutes , etc.
When ... | Time information in elasticsearch data | 1.2 | 0 | 0 | 434 |
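Since Elasticsearch's default date mapping accepts ISO-8601 strings (as the answer notes), one minimal way to stamp each document from Python is shown below; the field name `@timestamp` is just a common convention, not something the question mandates.

```python
from datetime import datetime, timezone

# Elasticsearch's default date format parses ISO-8601, so an isoformat()
# string is enough for the field to be mapped (and plotted) as a date.
doc = {
    "message": "example record",
    "@timestamp": datetime.now(timezone.utc).isoformat(),
}
print(doc["@timestamp"])
```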
35,495,530 | 2016-02-19T01:07:00.000 | 0 | 0 | 0 | 1 | python,dll,memory-leaks,ctypes,dllexport | 35,613,787 | 1 | true | 0 | 0 | I ended up writing a program in C without dynamic memory allocation to test the library. The leak is indeed in one of the functions I'm calling, not the Python program. | 1 | 0 | 0 | I've written an abstraction layer in Python for a piece of commercial software that has an API used for accessing the database back end. The API is exposed via a Windows DLL, and my library is written in Python.
My Python package loads the necessary libraries provided by the application, initializes them, and creates a... | Diagnosing memory leak from Windows DLL accessed in Python with ctypes | 1.2 | 0 | 0 | 217 |
35,495,874 | 2016-02-19T01:44:00.000 | 1 | 0 | 0 | 0 | python,django,authorization,mechanicalturk | 35,503,013 | 1 | true | 1 | 0 | Every request from AWS will include additional URL parameters: workerId, assignmentId, hitId. That's probably the easiest way to identify a request coming from MTurk. There may be headers, as well, but they're not documented anywhere. | 1 | 0 | 0 | I have a django application that I want to host a form on to use as the template for an ExternalHit on Amazon's Mechanical Turk. I've been trying to figure out ways that I can make it so only mturk is authorized to view this document.
One idea I've been considering is looking at the request headers and confirming that ... | What options are there for verifying that mturk is requesting my ExternalQuestion and not a 3rd party? | 1.2 | 0 | 1 | 59 |
35,496,055 | 2016-02-19T02:05:00.000 | 2 | 0 | 1 | 0 | python,anaconda | 35,496,124 | 1 | true | 0 | 0 | No, you shouldn't need to uninstall anything. Anaconda, including its own Python distribution, lives in a separate directory. Anaconda adjusts the paths to make this work, so if some things relied on specifics of your old Python paths, those may break, but that's about all. | 1 | 3 | 0 | I had some issues with matplotlib in virtualenvironments on Python and was recommended to uninstall 3.5 to install anaconda as a result. If so, do I need to pip uninstall everything (both globally and on my user) I see from pip freeze as well as everything I've installed with brew? Or will Anaconda be able to utilize w... | Do I need to uninstall Python 3.5 before installing Anaconda on OSX? | 1.2 | 0 | 0 | 2,468 |
35,496,145 | 2016-02-19T02:15:00.000 | 7 | 0 | 1 | 0 | python,arrays,algorithm,big-o | 35,496,208 | 5 | false | 0 | 0 | There is a very simple-looking solution that is O(n): XOR elements of your sequence together using the ^ operator. The end value of the variable will be the value of the unique number.
The proof is simple: XOR-ing a number with itself yields zero, so since each number except one contains its own duplicate, the net resu... | 2 | 3 | 0 | For example, if L = [1,4,2,6,4,3,2,6,3], then we want 1 as the unique element. Here's pseudocode of what I had in mind:
initialize a dictionary to store number of occurrences of each element: ~O(n),
look through the dictionary to find the element whose value is 1: ~O(n)
This ensures that the total time complexity then... | Find the unique element in an unordered array consisting of duplicates | 1 | 0 | 0 | 1,157 |
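The XOR trick from the answer, applied to the question's example list — pairs cancel because `x ^ x == 0` and XOR is commutative:

```python
from functools import reduce
from operator import xor

def find_unique(nums):
    # XOR all elements; every duplicated value cancels itself out,
    # leaving only the element that appears once.  O(n) time, O(1) extra space.
    return reduce(xor, nums, 0)

print(find_unique([1, 4, 2, 6, 4, 3, 2, 6, 3]))  # -> 1
```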
35,496,145 | 2016-02-19T02:15:00.000 | 1 | 0 | 1 | 0 | python,arrays,algorithm,big-o | 35,496,242 | 5 | false | 0 | 0 | Your outlined algorithm is basically correct, and it's what the Counter-based solution by @BrendanAbel does. I encourage you to implement the algorithm yourself without Counter as a good exercise.
You can't beat O(n) even if the array is sorted (unless the array is sorted by the number of occurrences!). The unique elem... | 2 | 3 | 0 | For example, if L = [1,4,2,6,4,3,2,6,3], then we want 1 as the unique element. Here's pseudocode of what I had in mind:
initialize a dictionary to store number of occurrences of each element: ~O(n),
look through the dictionary to find the element whose value is 1: ~O(n)
This ensures that the total time complexity then... | Find the unique element in an unordered array consisting of duplicates | 0.039979 | 0 | 0 | 1,157 |
35,497,392 | 2016-02-19T04:36:00.000 | 0 | 1 | 0 | 0 | python,django,iis,fastcgi,gdal | 35,591,876 | 1 | true | 1 | 0 | Solved it by restart the machine | 1 | 0 | 0 | I set up a django website via IIS manager, which is working fine, then I add a function by using GDAL libs and the function is working fine.
And also it is fine if I run this website by using CMD with this command
python path\manage.py runserver 8000
But it cannot run via IIS
I got error is DLL load failed: The specifi... | How to setup FastCGI setting of IIS with GDAL libs | 1.2 | 0 | 0 | 134 |
35,505,089 | 2016-02-19T12:12:00.000 | 0 | 1 | 0 | 0 | python,amazon-web-services,aws-lambda | 42,859,631 | 2 | false | 1 | 0 | You also have to include the query string parameter in the section Resources/Method Request. | 1 | 0 | 0 | I am creating an api with AWS API Gateway with Lambda functions. I want to be able to make an API call with the following criteria:
In the method request of the API i have specified the Query String: itemid
I want to be able to use this itemid value within my lambda function
I am using Python in Lambda
I have tried p... | AWS Lambda parameter passing | 0 | 0 | 0 | 1,250 |
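One common way the `itemid` value reaches Python is through the Lambda proxy integration, where API Gateway places query string parameters under `event["queryStringParameters"]`; with a mapping template (the answer's Resources/Method Request route), the event shape is instead whatever the template defines. A minimal handler sketch under the proxy-integration assumption:

```python
import json

def handler(event, context):
    # With Lambda proxy integration, API Gateway places ?itemid=... here.
    params = event.get("queryStringParameters") or {}
    item_id = params.get("itemid")
    return {"statusCode": 200, "body": json.dumps({"itemid": item_id})}

# Simulated API Gateway event for GET /items?itemid=42
print(handler({"queryStringParameters": {"itemid": "42"}}, None))
```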
35,507,732 | 2016-02-19T14:32:00.000 | 0 | 0 | 1 | 1 | python,azure,queue | 35,508,696 | 2 | false | 0 | 0 | One possible strategy could be to use Webjobs. Webjobs can execute Python scripts and run on a schedule. Let's say that you run a Webjob every 5 minutes, the Python script can pool the queue, do some processing and post the results back to you API. | 1 | 0 | 0 | I'm trying to define an architecture where multiple Python scripts need to be run in parallel and on demand. Imagine the following setup:
script requestors (web API) -> Service Bus queue -> script execution -> result posted back to script requestor
To this end, the script requestor places a script request message on th... | Using Microsoft Azure to run "a bunch of Python scripts on demand" | 0 | 0 | 0 | 634 |
35,508,255 | 2016-02-19T14:58:00.000 | -1 | 0 | 1 | 0 | python,regex | 35,508,334 | 3 | false | 0 | 0 | Something like: [^a-zA-Z\-](ios)[^a-zA-Z\-]
Might however be problematic at the beginning or the end of a line | 1 | 5 | 0 | After some search this seems more difficult than I thought: I am trying to write a regular expression in Python to find a word which is not surrounded by other letters or dashes.
In the following examples, I am trying to match ios:
It seems carpedios
I like "ios" because they have blue products
I like carpedios and io... | Find word not surrounded by alpha char | -0.066568 | 0 | 0 | 297 |
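The line-boundary problem the answer flags can be avoided with zero-width lookarounds, which assert "no letter or dash" on either side without consuming characters; the pattern below is a suggested variant, not the answer's own:

```python
import re

# Lookbehind/lookahead match without consuming neighbors, so occurrences
# at the very start or end of the string are handled correctly.
pattern = re.compile(r'(?<![A-Za-z-])ios(?![A-Za-z-])')

print(bool(pattern.search('I like "ios" because they have blue products')))  # True
print(bool(pattern.search('It seems carpedios')))                            # False
print(bool(pattern.search('ios at the start of a line')))                    # True
```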
35,509,019 | 2016-02-19T15:35:00.000 | 1 | 0 | 1 | 1 | python,linux | 35,509,182 | 1 | false | 0 | 0 | Look at
setuptools
distutils
These are classical tools for python packaging | 1 | 3 | 0 | I have created a simple software with GUI. It has several source files. I can run the project in my editor. I think it is ready for the 1.0 release. But I don't know how to create a setup/installer for my software.
The source is in python. Environment is Linux(Ubuntu). I used an external library which does not come wit... | How to create setup/installer for my Python project which has dependencies? | 0.197375 | 0 | 0 | 137 |
35,514,183 | 2016-02-19T20:16:00.000 | 1 | 0 | 1 | 1 | python,redis,celery | 35,532,031 | 2 | true | 0 | 0 | If you need to preserve the Python-native data structure, I'd recommend using one of the serialization modules such as cPickle, which will preserve the structure but won't be readable outside of Python. | 1 | 4 | 0 | I noticed this when using the delay() function to asynchronously send tasks. If I queue a task such as task.delay(("tuple",)), celery will store the argument as ["tuple"] and later the function will get the list back and not the tuple. Guessing this is because the data is being stored as JSON.
This is fine for tuple... | Python celery - tuples in arguments converted to lists | 1.2 | 0 | 0 | 951 |
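The root cause is visible with the `json` module alone: JSON has no tuple type, so celery's default JSON serializer necessarily hands lists back. The answer's pickle suggestion avoids this by staying in Python's own format.

```python
import json
import pickle

# JSON arrays are the only sequence type, so tuples become lists on the
# round trip -- exactly what happens to task.delay(("tuple",)) arguments.
args = (("tuple",),)
round_tripped = json.loads(json.dumps(args))
print(round_tripped)  # [['tuple']]

# A pickle round trip preserves the tuple (at the cost of interoperability).
print(pickle.loads(pickle.dumps(args)))  # (('tuple',),)
```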
35,514,886 | 2016-02-19T20:57:00.000 | 1 | 0 | 1 | 0 | python,pyqt,pyqt4,python-3.4,large-data | 35,515,404 | 2 | false | 0 | 1 | You might consider the HDF5 format, which can access using h5py, pytables, or other python packages. Depending on the dataformat, HDF5 could enable you to access the data on the HD in an efficient manner, which in practice means that you can save memory. The downside is that it requires some effort on your side as a ... | 1 | 1 | 0 | I'm working on an application using Python (3.4) and PyQt. The goal of the program is to manage and analyze large amount of data - up to ~50 binary files, which might be of total size up to 2-3 GB. When I tried to load a couple files into the program, it stops responding during loading and then takes ~1.5GB RAM just to... | Python - managing large data | 0.099668 | 0 | 0 | 367 |
35,516,720 | 2016-02-19T23:07:00.000 | 1 | 0 | 0 | 0 | python,django,performance | 35,516,808 | 1 | false | 1 | 0 | Yes, any content within the {% if condition %} and {% endif %} tags will not be sent to the client if condition evaluates to False. It is not hidden via CSS; the content will simply not exist in the response.
It will also reduce the size of your HTTP response. | 1 | 0 | 0 | {% if foo == 1 %}
<-- blah blah blah -->>
{% endif %}
If the if block above evaluates the false, would the content inside the if block still be rendered to the client but hidden instead?
If not, is this method an acceptable way to reduce page load? | Django: Can template tags prevent content to be rendered to the clients? | 0.197375 | 0 | 0 | 59 |
35,516,849 | 2016-02-19T23:18:00.000 | 1 | 0 | 0 | 1 | apache-kafka,kafka-python | 35,533,318 | 1 | true | 0 | 0 | I'm actually not sure for Kafka 0.9, haven't yet had the need to go over the new design thoroughly, but AFAIK this wasn't possible in v8.
It certainly wasn't possible with the low-level consumer, but I also think that, if you assign more threads than you have partitions in the high-level consumer, only one thread per p... | 1 | 0 | 0 | For context, I am trying to transfer our python worker processes over to a kafka (0.9.0) based architecture, but I am confused about the limitations of partitions with respect to the consumer threads. Will having multiple consumers on a partition cause the other threads on the same partition to wait for the current thr... | Mulitple Python Consumer Threads on a Single Partition with Kafka 0.9.0 | 1.2 | 0 | 0 | 793 |
35,516,906 | 2016-02-19T23:22:00.000 | 1 | 0 | 1 | 0 | python,authentication,jupyter,jupyter-notebook,jupyterhub | 35,571,314 | 2 | false | 0 | 0 | there is a login hook in the config. You can write your own authentication there. | 1 | 1 | 0 | I've setup a Jupyter Notebook server with appropriate password and SSL so it is accessed via HTTPS. However, I'm looking now for a way to enforce a two factor authentication with username and password for loging in. The current Jupyter Notebook server only asks for a password and I hence have to create a shared one (no... | two factor authentication with username and password for a Jupyter Notebook server | 0.099668 | 0 | 0 | 2,499 |
35,522,767 | 2016-02-20T11:45:00.000 | 1 | 0 | 0 | 0 | python,numpy,scikit-learn | 37,745,749 | 1 | false | 0 | 0 | I ran into the same problem today and have now solved it.
It happened because I had installed NumPy manually, and then used pip to install the other packages.
Solution:
Find the old version of NumPy.
You can import numpy and print its path.
Delete that folder.
Use pip to install it again. | 1 | 0 | 1 | I have been trying to install and use scikit-learn and nltk. However, I get the following error while importing anything:
Traceback (most recent call last):
File "", line 1, in
File "/usr/local/lib/python2.7/site-packages/sklearn/init.py", line 57, in
from .base import clone
File "/usr/local/lib/p... | Installation of nltk and scikit-learn | 0.197375 | 0 | 0 | 474 |
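The "find the old version" step in the answer can be done from Python itself; printing the imported package's location shows which copy is actually resolved on `sys.path`:

```python
import os

import numpy

# The directory printed here is the copy of NumPy that Python actually
# imports; a stale manual install elsewhere on sys.path is the one to delete.
numpy_dir = os.path.dirname(numpy.__file__)
print(numpy_dir)
print(numpy.__version__)
```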
35,524,022 | 2016-02-20T13:37:00.000 | 4 | 0 | 0 | 1 | python,django,shell,subprocess,gunicorn | 35,524,148 | 1 | true | 1 | 0 | 1) The user who runs gunicorn has no permission to run .sh files
2) Your .sh file is not marked executable
3) Try using the full path to the file
Also, which error do you get when trying to run it in production? | 1 | 1 | 0 | I have a django 1.9 project deployed using gunicorn with a view that contains the line
subprocess.call(["xvfb-run ./stored/all_crawlers.sh "+outputfile+" " + url], shell=True, cwd= path_to_sh_file)
which runs fine with ./manage.py runserver
but fails on deployment and (deployed with gunicorn and wsgi).
Any Suggestion how ... | Django deployed project not running subprocess shell command | 1.2 | 0 | 0 | 309 |
35,531,367 | 2016-02-21T01:51:00.000 | 1 | 0 | 0 | 0 | python,pandas | 35,531,393 | 4 | false | 0 | 0 | Try this method:
Create a duplicate data set.
Use .mode() to find the most common value.
Pop all items with that value from the set.
Run .mode() again on the modified data set. | 1 | 0 | 1 | So I'm generating a summary report from a data set. I used .describe() to do the heavy work but it doesn't generate everything I need i.e. the second most common thing in the data set.
I noticed that if I use .mode() it returns the most common value, is there an easy way to get the second most common? | In pandas, how to get 2nd mode | 0.049958 | 0 | 0 | 2,825 |
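The duplicate-and-pop loop above can be replaced by a single frequency count; this sketch uses the standard library's `Counter` (ties are broken by first occurrence, and at least two distinct values are assumed). In pandas itself, `s.value_counts().index[1]` gives the same result, since `value_counts` sorts by descending frequency.

```python
from collections import Counter

def second_mode(values):
    # most_common(2) returns the two highest-frequency (value, count) pairs.
    (first, _), (second, _) = Counter(values).most_common(2)
    return second

print(second_mode(["a", "a", "a", "b", "b", "c"]))  # -> b
```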
35,534,039 | 2016-02-21T08:39:00.000 | 0 | 0 | 0 | 0 | python,django,django-rest-framework | 38,318,621 | 1 | false | 1 | 0 | One simple way would be to just reset the index in transform_dataframe method.
df = df.reset_index()
This would just add a new index and set your old index as a column, included in the output. | 1 | 0 | 1 | Backgroud
I am using django-rest-pandas for serving json & xls.
Observation
When I hit url with format=xls, I get complete data in the downloaded file. But for format=josn, the index field of dataframe is not part of the records.
Question
How can I make django-rest-pandas to include dataframe's index field in json re... | Include django rest pandas dataframe Index field in json response | 0 | 0 | 0 | 363 |
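The `reset_index` suggestion can be demonstrated with plain pandas JSON serialization (django-rest-pandas serializes the same frame); the frame below is invented for illustration:

```python
import json

import pandas as pd

df = pd.DataFrame({"value": [10, 20]},
                  index=pd.Index(["a", "b"], name="key"))

# Record-oriented JSON drops the index entirely...
print(json.loads(df.to_json(orient="records")))

# ...but after reset_index the old index is an ordinary column and survives.
records = json.loads(df.reset_index().to_json(orient="records"))
print(records)  # [{'key': 'a', 'value': 10}, {'key': 'b', 'value': 20}]
```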
35,534,170 | 2016-02-21T08:56:00.000 | 5 | 0 | 0 | 0 | python,django,refactoring | 35,534,947 | 1 | true | 1 | 0 | This is not necessary since django will pick up only the updated files, and the whole idea of collectstatic is that you don't have to manually manage the static files.
However, if the old files do take a lot of space, once in a while you can delete all the files and directories in the static directory, and then run collect... | 1 | 3 | 0 | Is there any automated way to remove (or at least mark) unused (non-referenced) files located in the /static/ folder and its sub-folders in a Django project? | Django: removing non-used files | 1.2 | 0 | 0 | 1,427
35,535,422 | 2016-02-21T11:19:00.000 | 0 | 1 | 0 | 0 | python | 59,456,007 | 6 | false | 0 | 0 | When working with Python projects it's always a good idea to create a so-called virtual environment; this keeps your modules more organized and reduces import errors.
For example, let's assume that you have a script.py which imports multiple modules including pypiwin32.
Here are the steps to solve your problem:
... | 3 | 6 | 0 | I've just installed Python for the first time and I'm trying to reference the win32com module however, whenever I try to import it I get the message "no module name win32com".
Any ideas? | No module named win32com | 0 | 0 | 0 | 17,931 |
35,535,422 | 2016-02-21T11:19:00.000 | 8 | 1 | 0 | 0 | python | 35,535,450 | 6 | true | 0 | 0 | As it is not built into Python, you will need to install it.
pip install pywin32 | 3 | 6 | 0 | I've just installed Python for the first time and I'm trying to reference the win32com module however, whenever I try to import it I get the message "no module name win32com".
Any ideas? | No module named win32com | 1.2 | 0 | 0 | 17,931 |
35,535,422 | 2016-02-21T11:19:00.000 | 1 | 1 | 0 | 0 | python | 59,476,830 | 6 | false | 0 | 0 | This will work as well
python -m pip install pywin32 | 3 | 6 | 0 | I've just installed Python for the first time and I'm trying to reference the win32com module however, whenever I try to import it I get the message "no module name win32com".
Any ideas? | No module named win32com | 0.033321 | 0 | 0 | 17,931 |
35,538,946 | 2016-02-21T16:53:00.000 | 1 | 0 | 1 | 0 | python,cython | 35,573,200 | 1 | true | 0 | 1 | I guess you can just cast to void *, pass it into your container, then convert back to your extension type. It's up to you to ensure you still hold a reference to it so the pointer does not become invalid. | 1 | 0 | 0 | I have a multithreaded cython application and would like to pass an extension type between threads that holds a pointer to a thread-safe circular buffer that also makes various calculations.
Is there any way to make a C++ container handle an extension type? | Gil-less container for cython extension type | 1.2 | 0 | 0 | 143
35,539,077 | 2016-02-21T17:03:00.000 | 1 | 0 | 0 | 0 | python,matplotlib,dct | 41,886,452 | 1 | false | 0 | 0 | Are you willing to use a library outside of numpy, scipy, and matplotlib?
If so you can use skimage.color.rgb2yiq() from the scikit-image library. | 1 | 1 | 1 | In matlab we have rgb2ntsc() function to get YIQ components of a RGB image. Is there a similar function available in python (numpy ,matplotlib or scipy libraries)?
Also to apply discrete cosine transform (compress it) we can use dct2() , in matlab , is there a similar function in python? | how to convert a RGB image to its YIQ components in python , and then apply dct transform to compress it? | 0.197375 | 0 | 0 | 1,415 |
35,542,519 | 2016-02-21T21:45:00.000 | 2 | 0 | 1 | 0 | python,nltk | 35,542,694 | 1 | false | 0 | 0 | You haven't given us much to go on. But let's assume you have a paragraph of text. Here's one I just stole from a Yelp review:
What a beautiful train station in the heart of New York City. I've grown up seeing memorable images of GCT on newspapers, in movies, and in magazines, so I was well aware of what the interior ... | 1 | 0 | 0 | Given words like "romantic" or "underground", I'd like to use python to go through a list of text data and retrieve entries that contain those words and associated words such as "girlfriend" or "hole-in-the-wall".
It's been suggested that I work with NLTK to do this, but I have no idea where to start and I know nothin... | finding word associations using natural language processing | 0.379949 | 0 | 0 | 2,265 |
35,544,448 | 2016-02-22T01:32:00.000 | 3 | 0 | 1 | 0 | python,pygame | 35,544,509 | 1 | true | 0 | 1 | You should only call pygame.init() and pygame.quit() on 1 and same file. this the main file where your game loop runs.
You will need other scripts for different things but all those you can just import in this main file where game loop runs.
If you find this confusing checkout some pygame projects on github that will... | 1 | 0 | 0 | I am programming a game with multiple script files and I am wondering, on the files that I have used pygame.init(), do I have to call pygame.quit() at the end of the file? | Python & Pygame - Does pygame.quit() have to be written at the end of every .py file? | 1.2 | 0 | 0 | 62 |
35,544,800 | 2016-02-22T02:18:00.000 | 1 | 0 | 1 | 0 | python,excel,combinations | 35,557,793 | 1 | false | 0 | 0 | I was told to use Pandas to get at each of the individual states in the excel file.
I then use a dictionary structure to store the per-state values and look up the states parsed above against it. | 1 | 0 | 0 | I have a file that has a column with names, and another with comma separated US licenses, for example, AZ,CA,CO,DC,HI,IA,ID; but any combination of 50 states is possible. I have another file that has a certain value attached to each state, for example AZ=4, CA=30, DC=23, and so on for all 50.
I need to add up the amou... | How do I parse out all US states from comma separated strings in Python from an excel file. | 0.197375 | 0 | 0 | 52 |
35,544,961 | 2016-02-22T02:39:00.000 | 1 | 0 | 0 | 1 | python,unix,lsof | 35,546,294 | 1 | false | 0 | 0 | If you know the PID (eg. 12345) of the process, you can determine the entire argv array by reading the special file /proc/12345/cmdline. It contains the argv array separated by NUL (\0) characters. | 1 | 3 | 0 | I currently have a python script that accomplishes a very useful task in a large network. What I do is use lsof -iTCP -F and a few other options to dump all listening TCP sockets. I am able to get argv[0], this is not a problem. But to get the full argv value, I need to then run a ps, and map the PIDs together, and the... | Is there any way for lsof to show the entire argv array instead of just argv[0] | 0.197375 | 0 | 0 | 150 |
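The `/proc` read described in the answer, sketched in Python (Linux only; this is a suggested helper, not part of the question's script):

```python
import os

def full_argv(pid):
    # /proc/<pid>/cmdline holds the argv array separated by NUL bytes,
    # with a trailing NUL that would produce an empty final element.
    with open("/proc/%d/cmdline" % pid, "rb") as f:
        raw = f.read()
    return [arg.decode() for arg in raw.split(b"\0") if arg]

# Demonstrate on our own process.
print(full_argv(os.getpid()))
```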
35,545,822 | 2016-02-22T04:27:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,flask,amazon-elastic-beanstalk | 35,546,210 | 1 | true | 1 | 0 | My best guess is that adding if __name__ == '__main__' didn't fix anything, but it coincidentally happened to work that time. | 1 | 1 | 0 | I was setting up a simple flask app on AWS with Elastic Beanstalk, but had a bug that would result in a timeout error when visiting the page
ERROR: The operation timed out. The state of the environment is
unknown.
when running 'eb create'). Ultimately I fixed it by inserting the standard if __name__ == '__main__':... | if __name__ == "__main__" condition with flask/Elastic Beanstalk | 1.2 | 0 | 0 | 1,304 |
35,549,309 | 2016-02-22T08:50:00.000 | 1 | 0 | 0 | 0 | python,mysql,django | 35,551,670 | 2 | false | 1 | 0 | You should delete the migrations folder inside your app folder. You should also delete the database file, if there is one (for SQLite there is a file called db.sqlite3 in the root project folder, but I'm not sure how this works for MySQL). Then run makemigrations and migrate. | 1 | 1 | 0 | I'm trying to reset my django database so I've run manage.py sqlflush and run that output in MySQL.
I've then run manage.py flush. I think this should clear everything.
I've then run manage.py makemigrations which seemed to identify all tables that would need building but when I run manage.py migrate it says nothing ... | Rebuilding Django development server database | 0.099668 | 0 | 0 | 3,441 |
35,551,326 | 2016-02-22T10:31:00.000 | 27 | 0 | 0 | 0 | python,tensorflow,tensorboard | 43,568,782 | 3 | false | 0 | 0 | You should provide a port flag (--port=6007).
But I am here to explain how you can find it and other flags without any documentation. Almost all command line tools have a flag -h or --help which shows all possible flags this tool allows.
By running it you will see information about a port flag and that --logdir allows... | 1 | 63 | 1 | Is there a way to change the default port (6006) on TensorBoard so we could open multiple TensorBoards? Maybe an option like --port="8008"? | Tensorflow Tensorboard default port | 1 | 0 | 0 | 72,323 |
35,552,667 | 2016-02-22T11:39:00.000 | 0 | 0 | 0 | 0 | python,scipy,interpolation,nan | 35,576,424 | 2 | false | 0 | 0 | Spline fitting/interpolation is global, so it's likely that even a single nan is messing up the whole mesh. | 2 | 0 | 1 | I have a gridded velocity field that I want to interpolate in Python. Currently I'm using scipy.interpolate's RectBivariateSpline to do this, but I want to be able to define edges of my field by setting certain values in the grid to NaN. However, when I do this it messes up the interpolation of the entire grid, effecti... | Ignoring NaN when interpolating grid in Python | 0 | 0 | 0 | 776 |
35,552,667 | 2016-02-22T11:39:00.000 | 1 | 0 | 0 | 0 | python,scipy,interpolation,nan | 35,552,764 | 2 | false | 0 | 0 | All languages that implement floating point correctly (which includes python) allow you to test for a NaN by comparing a number with itself.
x is not equal to x if, and only if, x is NaN.
You'll be able to use that to filter your data set accordingly. | 2 | 0 | 1 | I have a gridded velocity field that I want to interpolate in Python. Currently I'm using scipy.interpolate's RectBivariateSpline to do this, but I want to be able to define edges of my field by setting certain values in the grid to NaN. However, when I do this it messes up the interpolation of the entire grid, effecti... | Ignoring NaN when interpolating grid in Python | 0.099668 | 0 | 0 | 776 |
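The self-comparison test from the answer looks like this in Python; `math.isnan` is the explicit stdlib equivalent, and the list comprehension shows the filtering step applied to made-up sample values:

```python
import math

nan = float("nan")

# NaN is the only float that compares unequal to itself.
print(nan != nan)       # True
print(math.isnan(nan))  # True -- the explicit stdlib check

# Filtering a data set via self-comparison, as the answer suggests:
values = [3.1, float("nan"), 2.7]
finite = [v for v in values if v == v]
print(finite)  # [3.1, 2.7]
```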
35,555,798 | 2016-02-22T14:11:00.000 | 3 | 0 | 1 | 0 | python,spyder | 35,555,927 | 1 | true | 0 | 0 | Simply removing the line which was an issue and starting Spyder did the trick. Spyder rebuilt the spyder.ini file upon running spyder.exe. | 1 | 1 | 0 | I'm working with WinPython and Spyder, and somehow spyder wouldn't start. It would briefly flash an error message of which the relevant line is: ConfigParser.ParsingError: File contains parsing errors: D:\progs\WinPython-64bit-2.7.10.3\settings\.spyder\spyder.ini [line 431]: u'_/switch to'.
Then delving into that file... | Replace corrupted spyder.ini file (with winpython 64) | 1.2 | 0 | 0 | 1,152 |
35,560,068 | 2016-02-22T17:29:00.000 | 8 | 0 | 1 | 0 | python | 35,560,186 | 2 | true | 0 | 0 | You are missing that Python's division (from Python 3 on) is by default a float division, so you have reduced precision in that. Force the integer division by using // instead of / and you will get the same result. | 1 | 5 | 0 | I'm trying to execute next code int((226553150 * 1023473145) / 5) and python3 gives me an answer 46374212988031352. But ruby and swift give me an answer 46374212988031350.
What do I miss? | python 3 long ints and multiplication | 1.2 | 0 | 0 | 575 |
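The answer can be checked directly with the question's numbers: Python 3 ints are exact, but `/` returns a 64-bit float, which near 4.6 × 10^16 can only land on multiples of 8 — hence the ...352.

```python
n = 226553150 * 1023473145  # exact -- Python ints have arbitrary precision

# True division goes through a float, which carries ~15-17 significant digits.
print(int(n / 5))   # 46374212988031352

# Floor division stays in exact integer arithmetic.
print(n // 5)       # 46374212988031350
```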
35,561,072 | 2016-02-22T18:26:00.000 | 0 | 0 | 1 | 0 | python,loops,python-3.x,functional-programming | 35,561,218 | 3 | false | 0 | 0 | In a recursive function there are two main components:
The recursive call
The base case
The recursive call is when you call the function from within itself, and the base case is where the function returns/stops calling itself.
For your recursive call, you want nfactorial(n-1), because this is essentially the definiti... | 1 | 2 | 0 | So we just started learning about loops and got this assignment
def factorial_cap(num): For positive integer n, the factorial of n (denoted as n!), is the product
of all positive integers from 1 to n inclusive. Implement the function that returns the smallest
positive... | Starting loops with python | 0 | 0 | 0 | 124 |
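The assignment text is truncated above; reading it as "return the smallest positive n with n! >= num", a loop-based sketch (matching the starting-loops theme, with the recursive version left as the exercise) might look like:

```python
def factorial_cap(num):
    # Grow n! one factor at a time until it reaches num.
    n, fact = 1, 1
    while fact < num:
        n += 1
        fact *= n
    return n

print(factorial_cap(20))  # -> 4, since 3! = 6 < 20 <= 24 = 4!
print(factorial_cap(1))   # -> 1
```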
35,561,176 | 2016-02-22T18:32:00.000 | 0 | 0 | 0 | 1 | python,celery | 47,490,225 | 2 | false | 0 | 0 | Regarding the AttributeError message, adding a backend config setting similar to below should help resolve it:
app = Celery('tasks', broker='pyamqp://guest@localhost//', backend='amqp://') | 1 | 0 | 0 | I'm using Celery with RabbitMQ as the broker and redis as the result backend. I'm now manually dispatching tasks to the worker. I can get the task IDs as soon as I sent the tasks out. But actually Celery worker did not work on them. I cannot see the resulted files on my disk. And later when I want to use AsyncResult to... | Celery worker not consuming task and not retrieving results | 0 | 0 | 0 | 1,693 |
35,565,733 | 2016-02-22T23:07:00.000 | 1 | 0 | 1 | 0 | python,selenium,phantomjs | 46,802,328 | 4 | false | 1 | 0 | If you were able to execute it in the terminal, just restart PyCharm and it will synchronize the environment variables from the system. (You can check under "Run" => "Edit Configurations".) | 1 | 4 | 0 | Note: PhantomJS runs in the PyCharm environment, but not in IDLE
I have successfully used PhantomJS in Python in the past, but I do not know what to do to revert to that set up.
I am receiving this error in Python (2.7.11): selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable needs to be in PATH.
I... | PhantomJS was placed in path and can execute in terminal, but PATH error in Python | 0.049958 | 0 | 1 | 8,984 |
35,567,020 | 2016-02-23T01:14:00.000 | 0 | 0 | 0 | 1 | python,linux,sockets | 37,220,484 | 1 | true | 0 | 0 | So it turns out the problem came from the provided websocket module from google cloud sdk. It has a bug where after 8192 bytes it will not continue to read from the socket. This can be fixed by supplying the websocket library maintained by Hiroki Ohtani earlier on your PYTHONPATH than the google cloud sdk. | 1 | 1 | 0 | I've created a docker image based on Ubuntu 14.04 which runs a python websocket client to read from a 3rd party service that sends variable length JSON encoded strings down. I find that the service works well until the encoded string is longer than 8192 bytes and then the JSON is malformed, as everything past 8192 byte... | Websocket client on linux cuts off response after 8192 bytes | 1.2 | 0 | 1 | 91 |
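The root cause here turned out to be a library bug, but an 8192-byte cutoff is also the classic symptom of treating a single recv() as a whole message. A self-contained sketch of the general fix, looping until the expected number of bytes has arrived (the helper name is made up for illustration):

```python
import socket
import threading

def recv_exactly(sock, n):
    """Read exactly n bytes: recv() may return fewer bytes than asked
    for (often one buffer, e.g. 8192), so keep reading until done."""
    chunks = []
    remaining = n
    while remaining:
        chunk = sock.recv(min(remaining, 4096))
        if not chunk:
            raise ConnectionError("socket closed before message was complete")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

# Demonstrate with a local socket pair and a payload much larger than 8192.
a, b = socket.socketpair()
payload = b"x" * 100000
sender = threading.Thread(target=a.sendall, args=(payload,))
sender.start()                        # send in the background so recv can drain
data = recv_exactly(b, len(payload))
sender.join()
a.close()
b.close()
print(len(data))  # 100000
```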
35,570,376 | 2016-02-23T06:28:00.000 | 1 | 1 | 1 | 0 | python,eclipse,ide | 35,571,869 | 1 | true | 0 | 0 | The 'Import Existing Projects into Workspace' wizard has a 'Copy projects into workspace' check box on the first page. Unchecking this option will make Eclipse work on the original files. | 1 | 0 | 0 | I have used the "import existing project" option to import an existing project into the workspace. However, Eclipse actually makes copies of the original files and creates a new project.
So, if I make a change to a file, it only affects the copied file in the workspace. The original file is untouched.
My question is how do I ... | eclipse modify imported project files | 1.2 | 0 | 0 | 98 |
35,571,862 | 2016-02-23T07:59:00.000 | 2 | 1 | 0 | 1 | python,pip,freebsd | 35,946,582 | 2 | false | 0 | 0 | The assumption that powerful and high-profile existing python tools use a lot of different python packages almost always holds true. We use FreeBSD in our company for quite some time together with a lot of python based tools (web frameworks, py-supervisor, etc.) and we never ran into the issue that a certain tool would... | 1 | 5 | 0 | The development environment, we use, is FreeBSD. We are evaluating Python for developing some tools/utilities. I am trying to figure out if all/most python packages are available for FreeBSD.
I tried using CentOS/Ubuntu and it was fairly easy to install Python as well as packages (using pip). On FreeBSD, it was not ... | Is Python support for FreeBSD as good as for say CentOS/Ubuntu/other linux flavors? | 0.197375 | 0 | 0 | 982 |
35,574,857 | 2016-02-23T10:26:00.000 | 2 | 0 | 0 | 0 | python,django,rest,django-rest-framework | 35,583,466 | 1 | false | 1 | 0 | "Normal" Django views (usually) return HTML pages.
Django-Rest-Framework views (usually) return JSON.
I am assuming you are looking for something more like a single-page application.
In this case you will have a main view that will be the bulk of the HTML page. This will be served from "standard" Django view returni... | 1 | 0 | 0 | I want to use Django REST framework for my new project but I am not sure if I can do it efficiently. I would like to be able to integrate easily classical Django app in my API. However I don't know how I can proceed to make them respect the REST framework philosophy. Will I have to rewrite all the views or is there a m... | How to use classic Django app with Django REST framework? | 0.379949 | 0 | 0 | 422 |
35,575,425 | 2016-02-23T10:49:00.000 | 12 | 0 | 1 | 1 | python-2.7,debugging,gdb | 47,475,156 | 2 | true | 0 | 0 | I have the same issue, with gdb 8.0.1 compiled on Ubuntu 14.04 LTS.
Turns out the installation misses the necessary Python files. One indication was that "make install" stopped complaining about makeinfo being missing - although I did not change any of the .texi sources.
My fix was to go into the build area, into gdb/d...
Python Exception Installation error: gdb.execute_unwinders function is missing
What is this? How do I rectify it? | GDB Error Installation error: gdb.execute_unwinders function is missing | 1.2 | 0 | 0 | 10,678 |
35,576,509 | 2016-02-23T11:36:00.000 | 0 | 0 | 1 | 0 | python,jupyter,jupyter-notebook | 64,781,833 | 3 | false | 0 | 0 | Click F11, to view the Jupyter Notebook in Full Screen Mode. Click F11 once more, to come out of Full Screen Mode. | 1 | 15 | 0 | I'm doing a bit of choropleth map plotting in a Jupyter notebook (with Folium), and I was just wondering if there's any way of making an output cell fullscreen? It would just make the map a bit easier to view. If not, is there an easy way of modifying the maximum height of an output cell? | Making a Jupyter notebook output cell fullscreen | 0 | 0 | 0 | 10,495 |
35,577,179 | 2016-02-23T12:06:00.000 | 0 | 0 | 1 | 0 | python,numpy | 48,353,933 | 1 | false | 0 | 0 | It may be that you have installed pip for a lower version of Python. To check, first look up your default Python version with:
$python
Now check for your linked version of python with pip
$pip --version
Now see if the two python versions match.
If they don't match, then you need to upgrade your pip...
But when I write "import numpy" in the Python shell (3.5.1), I get the error ImportError: No module named 'numpy'.
Can anyone advise on this?
Regards, Arpan Ghose | getting error for importing numpy at Python 3.5.1 | 0 | 0 | 0 | 429 |
35,577,248 | 2016-02-23T12:10:00.000 | 1 | 0 | 0 | 0 | django,python-3.4 | 35,579,891 | 3 | false | 1 | 0 | With python manage.py these commands are listed:
Available subcommands:
[auth]
changepassword
createsuperuser
[django]
check
compilemessages
createcachetable
dbshell
diffsettings
dumpdata
flush
inspectdb
loaddata
makemessages
makemigrations
migrate
sendtestemail
shell
showmigrations
sqlflush
sqlmigrate
sqlsequencerese... | 2 | 10 | 0 | When running this command:
python manage.py validate
I was faced with this error:
Unknown command: 'validate'
What should I do now?
For more explanations:
Linux
Virtualenv
Python 3.4.3+
Django (1, 9, 2, 'final', 0) | django "python manage.py validate" error : unknown command 'validate' | 0.066568 | 0 | 0 | 7,322 |
35,577,248 | 2016-02-23T12:10:00.000 | 0 | 0 | 0 | 0 | django,python-3.4 | 35,597,374 | 3 | false | 1 | 0 | With this command:
pip install Django==1.8.2
the problem will be solved.
Django 1.9 removed some commands, including validate (its replacement is python manage.py check). | 2 | 10 | 0 | When running this command:
python manage.py validate
I was faced with this error:
Unknown command: 'validate'
What should I do now?
For more explanations:
Linux
Virtualenv
Python 3.4.3+
Django (1, 9, 2, 'final', 0) | django "python manage.py validate" error : unknown command 'validate' | 0 | 0 | 0 | 7,322 |
35,580,213 | 2016-02-23T14:29:00.000 | 0 | 0 | 1 | 0 | python,visual-studio-2015 | 36,571,734 | 1 | true | 0 | 0 | Use 'anaconda' for install many typical packages.
after run your program wihtout error. | 1 | 2 | 0 | I program with python 3.4 and vs 2015.
When I want to add numpy to vs 2015 in pip window I see fllow problem.
"Unable to find vcvarsall.bat"
Can anyone help me? | Unable to find vcvarsall.bat in python 3.4 and vs 2015 | 1.2 | 0 | 0 | 490 |
35,581,528 | 2016-02-23T15:28:00.000 | 1 | 0 | 0 | 0 | python,multithreading,pandas,geopandas | 35,583,196 | 3 | false | 0 | 0 | I am assuming you have already implemented GeoPandas and are still having difficulties?
You can improve this by further hashing your coords data, similar to how Google hashes its search data. Some databases already provide support for these types of operations (e.g. MongoDB). Imagine if you took the first (left) digit... | 1 | 2 | 1 | I have about a million rows of data with lat and lon attached, and more to come. Even now, reading the data from the SQLite file (I read it with pandas, then create a point for each row) takes a lot of time.
Now, I need to make a spatial join over those points to get a zip code for each one, and I really want to optimise t... | Fastest approach for geopandas (reading and spatialJoin) | 0.066568 | 1 | 0 | 939 |
35,581,528 | 2016-02-23T15:28:00.000 | 1 | 0 | 0 | 0 | python,multithreading,pandas,geopandas | 35,786,998 | 3 | true | 0 | 0 | As it turned out, the most convenient solution in my case is to use the pandas.read_sql function with a specific chunksize parameter. In this case, it returns a generator of data chunks, which can be effectively fed to mp.Pool().map() along with the job;
In this (my) case the job consists of 1) reading geoboundaries, 2) s... | 2 | 2 | 1 | I have about a million rows of data with lat and lon attached, and more to come. Even now, reading the data from the SQLite file (I read it with pandas, then create a point for each row) takes a lot of time.
Now, I need to make a spatial join over those points to get a zip code for each one, and I really want to optimise t... | Fastest approach for geopandas (reading and spatialJoin) | 1.2 | 1 | 0 | 939 |
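The chunk-then-map pattern the accepted answer describes can be sketched with the standard library alone. The names here are illustrative: in the real case pd.read_sql(query, conn, chunksize=N) plays the role of the generator, and mp.Pool().map can replace the thread pool, since both expose the same map interface:

```python
from concurrent.futures import ThreadPoolExecutor

def read_in_chunks(rows, chunksize):
    # Stand-in for pd.read_sql(..., chunksize=...): yields successive
    # slices instead of materialising one huge result set.
    for start in range(0, len(rows), chunksize):
        yield rows[start:start + chunksize]

def job(chunk):
    # In the real case: 1) read the geo boundaries, 2) spatial-join the
    # chunk's points against them.  Here we just tag each "point".
    return [("zip", point) for point in chunk]

points = list(range(10))  # pretend these are (lat, lon) rows
with ThreadPoolExecutor() as pool:
    results = list(pool.map(job, read_in_chunks(points, chunksize=3)))

print([len(r) for r in results])  # [3, 3, 3, 1]
```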
35,583,348 | 2016-02-23T16:48:00.000 | 0 | 0 | 0 | 0 | python,html,css,jinja2 | 35,588,407 | 1 | false | 1 | 0 | You can use Flask, and put your CSS stylesheet in a folder named "static", at the root of your project. Call this file "style.css". | 1 | 0 | 0 | I am relatively new to Jinja and templating and have been struggling to get this sorted for some time now.
Here's my layout of folders:
templates
base
content
form
styles
newstyle
I have a base template with blockhead/block sidebar/block content/block form layout. I extend it to my content template which has lots ... | How to include template (with CSS) onto a template which is being rendered? | 0 | 0 | 0 | 116 |