How do I enable local modules when running a python script as a cron tab? | 31,189,359 | 0 | 1 | 1,207 | 0 | python,cron,beautifulsoup,crontab | ~/.local paths (populated by pip install --user) are available automatically; that is, it is enough that the cron job runs as the corresponding user.
To configure an arbitrary path, you could set the PYTHONPATH environment variable in the crontab. Do not modify sys.path inside your script. | 0 | 1 | 0 | 1 | 2015-07-02T12:52:00.000 | 2 | 0 | false | 31,185,207 | 0 | 0 | 0 | 1 | I just wrote a small python script that uses BeautifulSoup in order to extract some information from a website.
Everything runs fine whenever the script is run from the command line. However, when run as a cron job, the server returns this error:
Traceback (most recent call last):
File "/home/ws/undwv/mindfactory.py"... |
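The PYTHONPATH suggestion can be sketched as a crontab fragment; the library directory and schedule below are illustrative assumptions (only the script path comes from the traceback above):

```crontab
# Environment assignments at the top of a crontab apply to every job below it.
PYTHONPATH=/home/ws/undwv/lib

# Run the scraper every 10 minutes; redirect output so import errors are visible.
*/10 * * * * /usr/bin/python /home/ws/undwv/mindfactory.py >> /home/ws/undwv/cron.log 2>&1
```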
What is the maximum number of workers and concurrency that can be configured in celery? | 31,204,413 | 2 | 1 | 1,364 | 0 | python,celery,celery-task | That's like asking 'how long is a piece of string' and I'm sure there isn't a single simple answer. Certainly it will be more than 8 threads, with a useful upper limit at the maximum concurrent I/O tasks needed, maybe determined by the number of remote users of your service that the I/O tasks are communicating with. Pr... | 0 | 1 | 0 | 0 | 2015-07-03T10:13:00.000 | 1 | 0.379949 | false | 31,204,230 | 0 | 0 | 0 | 1 | If I'm scheduling I/O-bound tasks in celery and my server spec is a quad core with 8GB RAM, how many workers and how much concurrency can I use?
If CPU-bound processes are advised to use 4 workers and a concurrency of 8 on a quad-core processor, what is the recommendation for an I/O-bound process?
In my task I will be performing API calls, man... |
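To illustrate the answer's point that I/O-bound work tolerates far more concurrency than the CPU count suggests, a common starting point is one worker with a green-thread pool; the app name, pool choice, and number here are assumptions for illustration, not recommendations from the answer:

```shell
# One worker process, 100 lightweight concurrent tasks (requires the gevent
# package); raise --concurrency toward your peak number of in-flight API calls.
celery -A proj worker --pool=gevent --concurrency=100
```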
subprocess.Popen() executing Python script but not writing to a file | 31,207,419 | 1 | 0 | 572 | 0 | python,pipe,subprocess,popen | The absolute path of Python in self.runcmd should do the trick!
Try using the absolute path of the file when opening it in write mode. | 0 | 1 | 0 | 0 | 2015-07-03T10:56:00.000 | 1 | 1.2 | true | 31,205,122 | 0 | 0 | 0 | 1 | I am trying to run a Python program from inside another Python program using these commands:
subprocess.call(self.runcmd, shell=True);
subprocess.Popen(self.runcmd, shell=True); and
self.runcmd = " python /home/john/createRecordSet.py /home/john/sampleFeature.dish "
Now the script runs fine, but the file it's supposed... |
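A minimal sketch of both suggestions combined; the child script body is inlined with -c purely for demonstration, and the output path is illustrative:

```python
import os
import subprocess
import sys
import tempfile

# Use an absolute interpreter path (sys.executable) and pass arguments as a
# list: no shell, and no dependence on the caller's PATH. Have the child write
# to an absolute path so its working directory doesn't matter either.
out_path = os.path.join(tempfile.gettempdir(), "record_set.txt")

subprocess.call([sys.executable, "-c",
                 "import sys; open(sys.argv[1], 'w').write('done')",
                 out_path])

with open(out_path) as f:
    print(f.read())  # -> done
```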
HiveMQ and IoT control | 31,220,724 | 1 | 0 | 257 | 0 | python,gpio,messagebroker,iot,hivemq | Start HiveMQ with the following: ./bin/run.sh &
Yes, it is possible to subscribe to two topics from the same application, but you need to create separate subscribers within your python application. | 0 | 1 | 0 | 1 | 2015-07-03T13:32:00.000 | 2 | 0.099668 | false | 31,208,102 | 0 | 0 | 0 | 1 | I recently installed HiveMQ on an Ubuntu machine and everything works fine. Being new to Linux (I am more of a Windows guy), I am stuck on the following question.
I started HiveMQ with the command ./bin/run.sh. A window opens and confirms that HiveMQ is running. Great!!! I started this with putty, and when I close the put... |
Using Homebrew python instead of system provided python | 31,891,599 | 0 | 1 | 504 | 0 | python,homebrew | This happened to me when I installed Python 2.7.10 using brew. My PATH was set to /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin and which python returned /usr/local/bin/python (which is symlinked to Python 2.7.10.)
Problem went away when I closed and restarted Terminal application. | 0 | 1 | 0 | 0 | 2015-07-03T14:55:00.000 | 1 | 0 | false | 31,209,635 | 0 | 0 | 0 | 1 | I used Homebrew to install python, the version is 2.7.10, and the system provided version is 2.7.6. My PATH environment variable is set to /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin", so my terminal DOES know to look at the Homebrew bin folder first!
However, when I run python, it still defaults to 2.7.6, the syste... |
Python alternative to os.kill with a return code? | 31,216,258 | 2 | 0 | 1,632 | 0 | python,linux | This question is based on a mistaken understanding of how kill -9 PID behaves (or kill with any other signal -- even though -9 can't be overridden by a process's signal handler, it can still be delayed if, for instance, the target is in a blocking syscall).
Thus: kill -9 "$pid", in shell, doesn't tell you when the sign... | 0 | 1 | 0 | 0 | 2015-07-04T02:13:00.000 | 2 | 0.197375 | false | 31,216,203 | 1 | 0 | 0 | 2 | Is there an alternative to the os.kill function in Python 3 that will give me a return code? I'd like to verify that a process and its children actually do get killed before restarting them.
I could probably put a kill -0 loop afterwards or do a subprocess.call(kill -9 pid) if I had to but I'm curious if there's a mor... |
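The kill -0 idea from the question can be done without a subprocess call; a POSIX-only sketch that kills a spawned child and verifies it is gone before any restart:

```python
import os
import signal
import subprocess
import sys

# Sketch: kill a child we spawned and confirm it is really gone before a
# restart. For our own children, wait() is the reliable check; for an
# arbitrary PID, signal 0 probes existence without sending anything.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])

os.kill(proc.pid, signal.SIGKILL)  # returns None; "no exception" means "sent"
proc.wait()                        # blocks until the child is actually dead

def pid_alive(pid):
    try:
        os.kill(pid, 0)            # the "kill -0" trick, in-process
    except ProcessLookupError:
        return False
    return True

print(pid_alive(proc.pid))  # -> False, once the child has been reaped
```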
Python alternative to os.kill with a return code? | 31,216,218 | 0 | 0 | 1,632 | 0 | python,linux | os.kill() sends a signal to the process. The return code will still be sent to the parent process. | 0 | 1 | 0 | 0 | 2015-07-04T02:13:00.000 | 2 | 0 | false | 31,216,203 | 1 | 0 | 0 | 2 | Is there an alternative to the os.kill function in Python 3 that will give me a return code? I'd like to verify that a process and its children actually do get killed before restarting them.
I could probably put a kill -0 loop afterwards or do a subprocess.call(kill -9 pid) if I had to but I'm curious if there's a mor... |
mac os X 10.6.8 python 2.7.10 issues with direct typing of two-bytes utf-8 characters | 31,230,459 | 0 | 2 | 61 | 0 | python-2.7,utf-8,interactive-mode | I've found a partial solution to that issue: in the terminal.app settings, checking the 'escape non-ascii input' option lets python grab any utf-8 char; unfortunately, it prevents using them at the tcsh prompt as before; yet bash sees them as it should...
goodbye, tcsh! | 0 | 1 | 0 | 0 | 2015-07-05T12:32:00.000 | 1 | 0 | false | 31,230,376 | 0 | 0 | 0 | 1 | I'm running Mac OS X 10.6.8;
I had been using python 2.5.4 for 8 years and had NO problems, nor did I with python 2.6 and python 3.1;
but I recently had to install python 2.7.10, which has become the default interpreter, and now there are issues when the interpreter is running and I need to enter express... |
How to downgrade python version on CentOS? | 31,235,259 | -1 | 1 | 8,690 | 0 | python,python-2.7,centos,sha | You can always install a different version of Python using make altinstall, and then run it either in a virtual environment, or just invoke the versioned command (e.g. python2.6).
A considerable amount of CentOS is written in Python so changing the core version will most likely break some functions. | 0 | 1 | 0 | 0 | 2015-07-05T21:14:00.000 | 2 | 1.2 | true | 31,235,059 | 0 | 0 | 0 | 1 | I have a dedicated web server which runs CentOS 6.6
I am running some script that uses Python SHA module and I think that this module is deprecated in the current Python version.
I am considering downgrading my Python installation so that I can use this module.
Is there a better option? If not, how should I do it?
These ... |
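For reference, the install-alongside flow the answer alludes to is the make altinstall target (version and paths illustrative), which leaves the system interpreter untouched:

```shell
# Build an alternate interpreter from source without replacing /usr/bin/python:
./configure
make
sudo make altinstall   # installs e.g. /usr/local/bin/python2.6
```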
How can we wire up cluster-based software using chef? | 31,248,915 | 2 | 5 | 348 | 0 | python,automation,chef-infra,orchestration | If you have a chef server, you can do a search for the node that runs the ambari-server recipe. Then you use the IP of that machine. Alternatively, you can use a DNS name for the ambari-server, and then update your DNS entry to point to the new server when it is available.
Other options include using confd with etcd, or... | 0 | 1 | 0 | 0 | 2015-07-06T08:54:00.000 | 3 | 0.132549 | false | 31,241,531 | 0 | 0 | 0 | 1 | As part of a platform setup orchestration we are using our python package to install various software packages on a cluster of machines in cloud.
We have the following scenario:
Out of the many software packages, one is Ambari (which helps in managing a hadoop platform).
it works as follows - 'n' number of cluster machines... |
Launch Python script from Swift App | 31,248,808 | 3 | 1 | 3,016 | 0 | python,swift,nstask | This should work:
system("python EXECUTABLE_PATH")
Josh | 0 | 1 | 0 | 0 | 2015-07-06T12:49:00.000 | 1 | 1.2 | true | 31,246,335 | 0 | 0 | 0 | 1 | I'm new to swift and I'm trying to run a Python file from it.
I already have the full path to the file, and my attempts with NSTask have failed so far.
Now I'm somehow stuck launching the python executable with the path to the script as a parameter :-/ I already thought of just creating an .sh file with the appropriate command... |
Bad CPU type in executable when doing arch -i386 pip2 install skype4py | 31,281,242 | 0 | 1 | 1,867 | 0 | python,macos,segmentation-fault,skype4py | Ok, I was not able to solve the problem with Skype4Py on Mac OS. But perhaps it will be useful for someone to know that I have found a replacement: a Ruby gem called skype. It works well on Mac OS. So, if you want to send a message from a script or anything else, just run gem install skype and start to write some ruby code ... | 0 | 1 | 0 | 0 | 2015-07-06T23:26:00.000 | 3 | 0 | false | 31,257,354 | 1 | 0 | 0 | 1 | I have a problem with the Skype4Py lib on Mac OS. As I know from the documentation on github, on macos skype4py must be installed with a specific arch. But when I try arch -i386 pip2 install skype4py I get the error message Bad CPU type in executable. I am not an experienced macos user (this has been via remote control in team viewer...
Async Tasks for Django and Gunicorn | 31,272,086 | 1 | 1 | 420 | 0 | python,django,multithreading,celery | I'm assuming you don't want to wait because you are using an external service (outside of your control) for sending email. If that's the case then setup a local SMTP server as a relay. Many services such as Amazon SES, SendGrid, Mandrill/Mailchimp have directions on how to do it. The application will only have to wait ... | 0 | 1 | 0 | 0 | 2015-07-07T12:24:00.000 | 1 | 1.2 | true | 31,268,494 | 0 | 0 | 1 | 1 | I have a use case where I have to send_email to user in my views. Now the user who submitted the form will not receive an HTTP response until the email has been sent . I do not want to make the user wait on the send_mail. So i want to send the mail asynchronously without caring of the email error. I am using using cel... |
How to modify crontab to run python script? | 31,286,520 | 0 | 0 | 902 | 0 | python,linux,crontab,redhat | Thank you all, guys, but I did a little research and found a solution. First, test sudo python to see if it works with the module; if not, make an alias for sudo and put it inside /etc/bashrc [ to make it a system-wide alias ]: alias sudo='sudo env PATH=$PATH LD_LIBRARY_PATH=$LD_LIBRARY_PAT... | 0 | 1 | 0 | 1 | 2015-07-07T16:48:00.000 | 1 | 1.2 | true | 31,274,717 | 0 | 0 | 0 | 1 | I am using the redhat linux platform
I was wondering why, when I use a python script inside crontab to run every 2 minutes, it won't work, even though when I monitor the crond logs using
tail /etc/sys/cron it shows that it called the script , tried to add the path of python , [ I am using python2.6 -- so the path would be /u... |
How to set buffer size in pypcap | 31,293,136 | 0 | 2 | 840 | 0 | python,packet,sniffer | I studied the source code of pypcap and as far as I could see there was no way to set the buffer size from it.
Because pypcap uses the libpcap library, I changed the default buffer size in the source code of libpcap and reinstalled it from source. That seems to have solved the problem.
Tcpdump sets the buffer size b... | 0 | 1 | 1 | 0 | 2015-07-08T09:59:00.000 | 2 | 1.2 | true | 31,289,288 | 0 | 0 | 0 | 1 | I created a packet sniffer using the pypcap Python library (in Linux). Using the .stats() method of the pypcap library, I see that from time to time few packets get dropped by the Kernel when the network is busy. Is it possible to increase the buffer size for the pypcap object so that less packets get dropped (like it ... |
How should a Twisted AMP Deferred be cancelled? | 31,305,323 | 2 | 1 | 186 | 0 | python,twisted,deferred,asynchronous-messaging-protocol | No. There is no way, presently, to cancel an AMP request.
You can't cancel AMP requests because there is no way defined in AMP at the wire-protocol level to send a message to the remote server telling it to stop processing. This would be an interesting feature-addition for AMP, but if it were to be added, you would n... | 0 | 1 | 1 | 0 | 2015-07-08T22:15:00.000 | 1 | 1.2 | true | 31,304,788 | 0 | 0 | 0 | 1 | I have a Twisted client/server application where a client asks multiple servers for additional work to be done using AMP. The first server to respond to the client wins -- the other outstanding client requests should be cancelled.
Deferred objects support cancel() and a canceller function may be passed to the Deferred'...
No module named google.protobuf | 45,141,001 | 20 | 32 | 96,128 | 0 | python,installation,protocols,protocol-buffers,deep-dream | Locating the google directory in the site-packages directory (for the proper latter directory, of course) and manually creating an (empty) __init__.py resolved this issue for me.
(Note that within this directory is the protobuf directory but my installation of Python 2.7 did not accept the new-style packages so the __i... | 0 | 1 | 0 | 0 | 2015-07-09T05:24:00.000 | 9 | 1 | false | 31,308,812 | 0 | 0 | 0 | 5 | I am trying to run Google's deep dream. For some odd reason I keep getting
ImportError: No module named google.protobuf
after trying to import protobuf. I have installed protobuf using sudo install protobuf. I am running python 2.7 OSX Yosemite 10.10.3.
I think it may be a deployment location issue but i cant find an... |
No module named google.protobuf | 31,325,403 | 2 | 32 | 96,128 | 0 | python,installation,protocols,protocol-buffers,deep-dream | According to your comments, you have multiple versions of python.
What could have happened is that you installed the package with the pip of another python;
pip is actually a link to a script that downloads and installs your package.
Two possible solutions:
go to $(PYTHONPATH)/Scripts and run pip from that folder; that way you ensure
you u... | 0 | 1 | 0 | 0 | 2015-07-09T05:24:00.000 | 9 | 0.044415 | false | 31,308,812 | 0 | 0 | 0 | 5 | I am trying to run Google's deep dream. For some odd reason I keep getting
ImportError: No module named google.protobuf
after trying to import protobuf. I have installed protobuf using sudo install protobuf. I am running python 2.7 OSX Yosemite 10.10.3.
I think it may be a deployment location issue but i cant find an... |
No module named google.protobuf | 45,384,713 | 0 | 32 | 96,128 | 0 | python,installation,protocols,protocol-buffers,deep-dream | In my case, MacOS has the permission control.
sudo -H pip3 install protobuf | 0 | 1 | 0 | 0 | 2015-07-09T05:24:00.000 | 9 | 0 | false | 31,308,812 | 0 | 0 | 0 | 5 | I am trying to run Google's deep dream. For some odd reason I keep getting
ImportError: No module named google.protobuf
after trying to import protobuf. I have installed protobuf using sudo install protobuf. I am running python 2.7 OSX Yosemite 10.10.3.
I think it may be a deployment location issue but i cant find an... |
No module named google.protobuf | 52,287,475 | 3 | 32 | 96,128 | 0 | python,installation,protocols,protocol-buffers,deep-dream | When I run pip install protobuf, I get the error:
Cannot uninstall 'six'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
If you have the same problem as me, run the following commands.
pip install --igno... | 0 | 1 | 0 | 0 | 2015-07-09T05:24:00.000 | 9 | 0.066568 | false | 31,308,812 | 0 | 0 | 0 | 5 | I am trying to run Google's deep dream. For some odd reason I keep getting
ImportError: No module named google.protobuf
after trying to import protobuf. I have installed protobuf using sudo install protobuf. I am running python 2.7 OSX Yosemite 10.10.3.
I think it may be a deployment location issue but i cant find an... |
No module named google.protobuf | 46,490,849 | 0 | 32 | 96,128 | 0 | python,installation,protocols,protocol-buffers,deep-dream | I had this problem too when I had a google.py file in my project files.
It is quite easy to reproduce.
main.py: import tensorflow as tf
google.py: print("Protobuf error due to google.py")
Not sure if this is a bug and where to report it. | 0 | 1 | 0 | 0 | 2015-07-09T05:24:00.000 | 9 | 0 | false | 31,308,812 | 0 | 0 | 0 | 5 | I am trying to run Google's deep dream. For some odd reason I keep getting
ImportError: No module named google.protobuf
after trying to import protobuf. I have installed protobuf using sudo install protobuf. I am running python 2.7 OSX Yosemite 10.10.3.
I think it may be a deployment location issue but i cant find an... |
AWS ETL with python scripts | 31,363,788 | 0 | 1 | 671 | 0 | python,amazon-web-services,amazon-s3,amazon-emr,amazon-data-pipeline | First thing you want to do is to set 'termination protection' on - on the EMR cluster -as soon as it is launched by Data Pipeline. (this can be scripted too).
Then you can log on to the 'Master instance'. This is under 'hardware' pane under EMR cluster details. (you can also search in EC2 console by cluster id). ... | 0 | 1 | 0 | 0 | 2015-07-10T16:41:00.000 | 1 | 0 | false | 31,346,102 | 0 | 0 | 0 | 1 | I am trying to create a basic ETL on AWS platform, which uses python.
In an S3 bucket (let's call it "A") I have lots of raw log files, gzipped.
What I would like to do is to have it periodically (=data pipeline) unzipped, processed by a python script which will reformat the structure of every line, and output it to anot... |
python astonishing IOError on windows creating files - Errno 13 Permission | 31,347,743 | 0 | 0 | 786 | 0 | python,windows,csv | Damn it, it's already working! It was like saying I cannot find my glasses while wearing them.
Thanks Brian, that wasn't the error. The problem was that my code used the Ubuntu separator, even though the full path to the csv output file was completely correct. I replaced it with os.sep, and start... | 0 | 1 | 0 | 0 | 2015-07-10T17:58:00.000 | 2 | 1.2 | true | 31,347,339 | 0 | 0 | 0 | 1 | I have to run my python script on windows too, and that is when the problems began.
Here I'm scraping locally saved html files, and then saving .csv versions with the data I want. I ran it on my ubuntu box over +100k files with no problems. But when I run it on windows, it says:
IOError: [Errno 13] Permission denied ... |
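The fix described above generalizes to building every path with os.path.join, so the same script runs on both Ubuntu and Windows; the directory and file names here are illustrative:

```python
import os
import tempfile

# os.path.join uses the platform's separator ('/' on Linux, '\\' on Windows),
# so no separator is ever hard-coded.
out_dir = os.path.join(tempfile.gettempdir(), "scrape_output", "csv")
os.makedirs(out_dir, exist_ok=True)

out_file = os.path.join(out_dir, "page1.csv")
with open(out_file, "w") as f:
    f.write("col1,col2\n")

print(os.path.exists(out_file))  # -> True
```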
Docker not responding to CTRL+C in terminal | 31,355,539 | 1 | 22 | 14,239 | 0 | linux,centos,docker,ipython-notebook | @maybeg's answer already explains very well why this might be happening.
Regarding stopping the unresponsive container, another solution is to simply issue a docker stop <container-id> in another terminal. As opposed to CTRL-C, docker stop does not send a SIGINT but a SIGTERM signal, to which the process might react d... | 0 | 1 | 0 | 0 | 2015-07-10T21:10:00.000 | 5 | 0.039979 | false | 31,350,335 | 0 | 0 | 0 | 1 | Having an issue with Docker at the moment; I'm using it to run an image that launches an ipython notebook on startup. I'm looking to make some edits to ipython notebook itself, so I need to close it after launch.
However, hitting CTRL+C in the terminal just inputs "^C" as a string. There seems to be no real way of usin... |
kill a shell created by vim when ctrl-C doesn't work | 31,391,726 | 1 | 3 | 371 | 0 | python,shell,unix,vim | When you do :! in Vim, you effectively put Vim into background and the running process, in this case py.test, gets the focus. That means you can't tell Vim to kill the process for you since Vim is not getting keystrokes from you.
Ctrl-Z puts Vim into background while running py.test because Vim is the parent process of... | 0 | 1 | 0 | 1 | 2015-07-13T04:56:00.000 | 1 | 0.197375 | false | 31,375,628 | 0 | 0 | 0 | 1 | I'm writing some threaded python code in vim. When I run my tests, with
:! py.test test_me.py
Sometimes they hang and cannot be killed with ctrl-C. So I have to background vim (actually the shell the tests are running in) and pkill py.test. Is there a better way to kill the hanging test suite?
I tried mapping :map ,k:... |
GAE middlewares for modules? | 31,397,559 | 0 | 0 | 59 | 0 | google-app-engine,middleware,google-app-engine-python | The way I approached such a scenario (in a python-only project, don't know about php) was to use a custom handler (inheriting webapp2.RequestHandler, which I was already using for session support). In its customized dispatch() method the user info is collected and stored in the handler object itself.
The implementation of the ... | 0 | 1 | 0 | 0 | 2015-07-13T08:08:00.000 | 1 | 0 | false | 31,378,288 | 0 | 0 | 1 | 1 | Assume that I have few modules on my GAE project (say A, B, C). They shares the users database and sessions.
For example: module A will manage the login/logout actions (through cookies), module B,C will handle other actions. FYI, those modules are developed in both PHP and Python.
Now, I do not want to make user & sess... |
Accept "Content-Encoding: gzip" in Tornado | 31,399,949 | 0 | 2 | 1,594 | 0 | python,tornado | The only way is to change the parse_body_arguments function in the tornado.httputil module. Otherwise, remove Content-Encoding from the request headers. | 0 | 1 | 0 | 0 | 2015-07-14T06:56:00.000 | 2 | 0 | false | 31,399,735 | 0 | 0 | 0 | 2 | I'm processing requests in Tornado that come with a Content-Encoding: gzip header on the request body. The problem is that Tornado shows a warning:
[W 150713 17:22:11 httputil:687] Unsupported Content-Encoding: gzip
I'm doing the unzip operation inside the code and it works like a charm but I'd like to get rid of the ... |
Accept "Content-Encoding: gzip" in Tornado | 31,408,024 | 4 | 2 | 1,594 | 0 | python,tornado | You must opt in to handling of gzipped requests by passing decompress_request=True to the HTTPServer constructor (or Application.listen). | 0 | 1 | 0 | 0 | 2015-07-14T06:56:00.000 | 2 | 0.379949 | false | 31,399,735 | 0 | 0 | 0 | 2 | I'm processing requests in Tornado that come with a Content-Encoding: gzip header on the request body. The problem is that Tornado shows a warning:
[W 150713 17:22:11 httputil:687] Unsupported Content-Encoding: gzip
I'm doing the unzip operation inside the code and it works like a charm but I'd like to get rid of the ... |
Is there any parallel way of accessing Netcdf files in Python | 31,568,148 | 2 | 5 | 1,241 | 0 | python,io,parallel-processing,netcdf | It's too bad PyPnetcdf is not a bit more mature. I see hard-coded paths and abandoned domain names. It doesn't look like it will take a lot to get something compiled, but then there's the issue of getting it to actually work...
in setup.py you should change the library_dirs_list and include_dirs_list to point to the... | 0 | 1 | 0 | 0 | 2015-07-15T03:16:00.000 | 2 | 0.197375 | false | 31,420,879 | 0 | 0 | 0 | 1 | Is there any way of doing parallel IO for Netcdf files in Python?
I understand that there is a project called PyPNetCDF, but apparently it's old, not updated and doesn't seem to work at all. Has anyone had any success with parallel IO with NetCDF in Python at all?
Any help is greatly appreciated |
travis setup heroku command on Windows 7 64 bit | 31,445,471 | 0 | 1 | 456 | 0 | python,ruby,windows,heroku,travis-ci | If the Heroku Toolbelt wasn't added to the $PATH environment variable during installation, here are some steps to check:
Check if Heroku toolbelt is set in PATH variable. If not, cd to your Heroku toolbelt installation folder, then click on the address bar and copy it.
Go to the Control Panel, then click System an... | 0 | 1 | 0 | 1 | 2015-07-15T04:55:00.000 | 1 | 0 | false | 31,421,793 | 0 | 0 | 1 | 1 | Hi there I'm trying to deploy my python app using Travis CI but I'm running into problems when I run the "travis setup heroku" command in the cmd prompt.
I'm in my project's root directory, there is an existing ".travis.yml" file in that root directory.
I've also installed ruby correctly and travis correctly because wh... |
How to copy first 100 files from a directory of thousands of files using python? | 31,427,309 | -1 | 1 | 2,371 | 0 | python | You may try to read the directory directly (as a file) and pick data from there. How successful that would be depends on the filesystem you are on. First try the ls or dir commands to see which returns faster: os.listdir() or that funny little program. You'll see that both are in trouble. The key here is just that ... | 0 | 1 | 0 | 0 | 2015-07-15T09:28:00.000 | 2 | -0.099668 | false | 31,426,536 | 1 | 0 | 0 | 1 | I have a huge directory that keeps getting updated all the time. I am trying to list only the latest 100 files in the directory using python. I tried using os.listdir(), but when the size of the directory approaches 100,000 files, it seems as though listdir() crashes (or I have not waited long enough). I only need the fir...
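A sketch of a cheaper approach than a fully sorted listing: os.scandir streams directory entries without building one huge list, and heapq.nlargest keeps only the top n by modification time (Python 3.5+; function name is illustrative):

```python
import heapq
import os

def newest_files(path, n=100):
    """Return the paths of the n most recently modified files, newest first."""
    entries = ((e.stat().st_mtime, e.path)
               for e in os.scandir(path) if e.is_file())
    # nlargest keeps a bounded heap of size n, so memory stays small even for
    # a directory with 100,000+ entries.
    return [p for _, p in heapq.nlargest(n, entries)]
```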
/usr/include folder missing in mac | 53,036,986 | 4 | 1 | 3,414 | 0 | python,xcode,macos,terminal | Try on 10.14:
sudo installer -pkg /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg -target / | 0 | 1 | 0 | 0 | 2015-07-15T14:37:00.000 | 2 | 0.379949 | false | 31,433,422 | 0 | 0 | 0 | 1 | I've tried pretty much everything on stackoverflow and other forums to get the /usr/include/ folder on my mac (currently using OS X 10.9.5)
Re-installed Xcode and command line tools (actually, command line tools wasn't one of the downloads available - so I'm guessing it was already downloaded)
tried /Applications/Ins... |
Where should I put the .pdbrc file on windows so that it is globally visible? | 35,230,270 | 1 | 2 | 274 | 0 | python | If putting the file in C:\Users\<your_user> doesn't work, additionally try setting your HOME environment variable to C:\Users\<your_user>. Worked for me.
Thanks to @WayneWerner for the solution. | 0 | 1 | 0 | 0 | 2015-07-15T18:39:00.000 | 2 | 0.099668 | false | 31,438,478 | 1 | 0 | 0 | 2 | I am using .pdbrc to store my debugging alias. And I want it to be available globally. Where should this file be on windows? |
Where should I put the .pdbrc file on windows so that it is globally visible? | 31,438,577 | 1 | 2 | 274 | 0 | python | After several tries, I found it.
You can put it in C:\users\your_win_user\.pdbrc | 0 | 1 | 0 | 0 | 2015-07-15T18:39:00.000 | 2 | 0.099668 | false | 31,438,478 | 1 | 0 | 0 | 2 | I am using .pdbrc to store my debugging alias. And I want it to be available globally. Where should this file be on windows? |
Where to run python file on Remote Debian Sever | 31,448,678 | 0 | 0 | 85 | 0 | python,debian,remote-server,directory-structure | Basically you're stuffed.
Your problem is:
You have a script, which produces no error messages, no logging, and no other diagnostic information other than a single timestamp, on an output file.
Something has gone wrong.
In this case, you have no means of finding out what the issue was. I suggest any of the following... | 0 | 1 | 0 | 1 | 2015-07-16T07:30:00.000 | 1 | 0 | false | 31,447,971 | 0 | 0 | 0 | 1 | I have written a python script that is designed to run forever. I load the script into a folder that I made on my remote server which is running debian wheezy 7.0. The code runs , but it will only run for 3 to 4 hours then it just stops, I do not have any log information on it stopping.I come back and check the running... |
Access local files from locally running http server | 66,746,290 | 0 | 3 | 3,484 | 0 | python,web | Keep the files in the folder which you want to access from localhost.
In the command prompt, go to that location and type:
python -m http.server 8080
Now type localhost:8080 in the browser; you will be able to access the files in that folder.
If you want to use some js files for particular html files, then:
<script src="http://localho... | 0 | 1 | 0 | 0 | 2015-07-16T20:56:00.000 | 4 | 0 | false | 31,464,366 | 0 | 0 | 0 | 1 | I want to access files in my local machine by using urls. For example "file:///usr/local/home/thapaliya/constants.py". What would be the best way to achieve this? |
Installation of py2 | 31,472,969 | -2 | 1 | 83 | 0 | python,python-2.7,py2exe | Just unpack this exe with a tool like 7-Zip and you can run py2exe from the resulting folder. | 0 | 1 | 0 | 0 | 2015-07-17T09:42:00.000 | 1 | -0.379949 | false | 31,472,881 | 1 | 0 | 0 | 1 | I want to transform a python script into an executable file. That is why I want to install py2exe.
When I try to install the file "py2exe-0.6.9.win32-py2.7.exe", I get the message "Python version 2.7 required, which was not found in the registry"
I suspect that py2exe is not finding my python.exe file (it asks me python di... |
Computing an index that accounts for score and date within Google App Engine Datastore | 31,478,203 | 2 | 0 | 68 | 0 | python,google-app-engine,google-bigquery,google-cloud-datastore,google-prediction | Such a system is often called "frecency", and there's a number of ways to do it. One way is to have votes 'decay' over time; I've implemented this in the past on App Engine by storing a current score and a last-updated; any vote applies an exponential decay to the score based on the last-updated time, before storing bo... | 0 | 1 | 0 | 0 | 2015-07-17T14:11:00.000 | 1 | 1.2 | true | 31,477,842 | 0 | 0 | 1 | 1 | I'm working on an Google App Engine (python) based site that allows for user generated content, and voting (like/dislike) on that content.
Our designer has, rather nebulously, spec'd that the front page should be a balance between recent content and popular content, probably with the assumption that these are just cre... |
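One way to sketch the decay-on-vote idea from the answer; the half-life constant and vote weights are assumptions for illustration, not values from the source:

```python
import time

# "Frecency" sketch: decay the stored score by elapsed time before adding
# each new vote, so old popularity fades and recent votes dominate.
HALF_LIFE = 7 * 24 * 3600.0  # assumed: seconds for a score to lose half its value

def apply_vote(score, last_updated, vote_weight, now=None):
    """Decay `score` from `last_updated` to `now`, then add the vote."""
    now = time.time() if now is None else now
    decay = 0.5 ** ((now - last_updated) / HALF_LIFE)
    return score * decay + vote_weight, now

score, ts = apply_vote(0.0, 0.0, 1.0, now=0.0)         # first vote
score, ts = apply_vote(score, ts, 1.0, now=HALF_LIFE)  # one half-life later
print(score)  # -> 1.5: the old vote decayed to 0.5, plus the new 1.0
```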
Running python in the background and feeding data | 31,479,959 | 0 | 0 | 49 | 0 | python,linux,ssh,sftp | You can try and use a client-server or sockets approach. Your remote PC has a server running listening to commands or data coming into. Your client or local computer can send commands on the port and ip that the remote PC is listening to. The server then parses the data coming in, looks at whatever commands you have de... | 0 | 1 | 0 | 1 | 2015-07-17T15:49:00.000 | 1 | 0 | false | 31,479,763 | 0 | 0 | 0 | 1 | I have this python setup using objects that can perform specific tasks for I2C protocol. Rather than having the script create objects and run a single task when run from a command line command, is there a way to have the objects 'stay alive' in the background and somehow feed the program new data from the command line?... |
subprocess.popen detached from master (Linux) | 31,484,229 | 0 | 3 | 5,358 | 0 | python,linux,subprocess,popen | Launch the subprocesses with nohup. | 0 | 1 | 0 | 0 | 2015-07-17T18:27:00.000 | 4 | 0 | false | 31,482,397 | 0 | 0 | 0 | 1 | I am trying to open a subprocess but have it be detached from the parent script that called it. Right now, if I call subprocess.Popen and the parent script crashes, the subprocess dies as well.
I know there are a couple of options for windows but I have not found anything for *nix.
I also don't need to call this using su... |
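A POSIX-only sketch of detaching without shelling out to nohup: start_new_session=True runs setsid() in the child, so it survives the parent and ignores the parent terminal's signals:

```python
import subprocess
import sys

# The child prints its own session id so we can verify the detachment.
proc = subprocess.Popen(
    [sys.executable, "-c", "import os; print(os.getsid(0))"],
    start_new_session=True,   # child becomes its own session leader (setsid)
    stdout=subprocess.PIPE,
)
child_sid = int(proc.stdout.read())
proc.wait()

# A session leader's session id equals its own pid:
print(child_sid == proc.pid)  # -> True
```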
How do I redirect and pass my Google API data after handling it in my Oauth2callback handler on Google App Engine | 31,496,649 | 1 | 0 | 37 | 0 | python-2.7,google-app-engine,oauth-2.0 | I think I found a better way of doing it, I just use the oauth callback to redirect only with no data, and then on the redirect handler I access the API data. | 0 | 1 | 1 | 0 | 2015-07-18T23:34:00.000 | 1 | 0.197375 | false | 31,496,583 | 0 | 0 | 1 | 1 | My Oauth2Callback handler is able to access the Google API data I want - I want to know the best way to get this data to my other handler so it can use the data I've acquired.
I figure I can add it to the datastore, or also perform redirect with the data. Is there a "best way" of doing this? For a redirect is there... |
osx - dyld: Library not loaded Reason: image not found - Python Google Speech Recognition API | 31,508,159 | 0 | 4 | 4,422 | 0 | python-2.7,pycharm,speech | Figured it out - I just forgot to install Homebrew | 0 | 1 | 0 | 0 | 2015-07-19T01:44:00.000 | 2 | 0 | false | 31,497,217 | 0 | 0 | 0 | 1 | When I try to use the Google Speech Rec API I get this error message. Any help?
dyld: Library not loaded: /usr/local/Cellar/flac/1.3.1/lib/libFLAC.8.dylib
Referenced from: /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/speech_recognition/flac-mac
Reason: image not found
I'm using PyCharm.... |
How can I set number of parameters from jenkins running Python script? | 31,500,756 | 0 | 0 | 366 | 0 | python-2.7,jenkins | Does Jenkins also provide the information about 'which users', or just the 'number of users', so that you would have to get the 'which users' on your own? I don't have a Jenkins installation with administrative access, so I cannot check this myself. | 0 | 1 | 0 | 0 | 2015-07-19T08:35:00.000 | 1 | 0 | false | 31,499,363 | 0 | 0 | 0 | 1 | I am running a Python job from Jenkins... now my question is as follows:
I am setting number of users as an external parameter, for example I am passing this command:
python /home/py_version/single_run.py $number_of_users
I want to be able to set a way to choose which users (in this case, user ids) from the jenkin...
PIP install unable to find ffi.h even though it recognizes libffi | 31,508,671 | 3 | 90 | 93,372 | 0 | python,linux,pip | You need to install the development package for libffi.
On RPM based systems (Fedora, Redhat, CentOS etc) the package is named libffi-devel.
Not sure about Debian/Ubuntu systems, I'm sure someone else will pipe up with that. | 0 | 1 | 0 | 0 | 2015-07-20T03:54:00.000 | 8 | 0.07486 | false | 31,508,612 | 1 | 0 | 0 | 3 | I have installed libffi on my Linux server as well as correctly set the PKG_CONFIG_PATH environment variable to the correct directory, as pip recognizes that it is installed; however, when trying to install pyOpenSSL, pip states that it cannot find file 'ffi.h'. I know both thatffi.h exists as well as its directory, so... |
PIP install unable to find ffi.h even though it recognizes libffi | 31,508,663 | 266 | 90 | 93,372 | 0 | python,linux,pip | You need to install the development package as well.
libffi-dev on Debian/Ubuntu, libffi-devel on Redhat/Centos/Fedora. | 0 | 1 | 0 | 0 | 2015-07-20T03:54:00.000 | 8 | 1 | false | 31,508,612 | 1 | 0 | 0 | 3 | I have installed libffi on my Linux server as well as correctly set the PKG_CONFIG_PATH environment variable to the correct directory, as pip recognizes that it is installed; however, when trying to install pyOpenSSL, pip states that it cannot find file 'ffi.h'. I know both thatffi.h exists as well as its directory, so... |
PIP install unable to find ffi.h even though it recognizes libffi | 38,077,173 | 24 | 90 | 93,372 | 0 | python,linux,pip | To add to mhawke's answer: Debian/Ubuntu-based systems usually use "-dev" where RPM-based systems use "-devel"
So, for Ubuntu it will be apt-get install libffi libffi-dev
RHEL, CentOS, Fedora (up to v22) yum install libffi libffi-devel
Fedora 23+ dnf install libffi libffi-devel
OSX/MacOS (assuming homebrew is... | 0 | 1 | 0 | 0 | 2015-07-20T03:54:00.000 | 8 | 1 | false | 31,508,612 | 1 | 0 | 0 | 3 | I have installed libffi on my Linux server as well as correctly set the PKG_CONFIG_PATH environment variable to the correct directory, as pip recognizes that it is installed; however, when trying to install pyOpenSSL, pip states that it cannot find file 'ffi.h'. I know both thatffi.h exists as well as its directory, so... |
gdb within emacs: python commands (py and pi) | 31,729,095 | 1 | 1 | 810 | 0 | python,emacs,gdb,gud | I am going to go out on a limb and say this is a bug in gud mode. The clue is the -interpreter-exec line in the error.
What happens here is that gud runs gdb in a special "MI" ("Machine Interface") mode. In this mode, commands and their responses are designed to be machine-, rather than human-, readable.
To let GUIs ... | 0 | 1 | 0 | 1 | 2015-07-20T10:58:00.000 | 2 | 0.099668 | false | 31,514,741 | 0 | 0 | 0 | 1 | I want to debug a c++ program using gdb. I use the pi and the py commands to evaluate python commands from within gdb, which works fine when I invoke gdb from the command line. However, when I invoke gdb from within emacs using M-x gdb and then gdb -i=mi file_name, the following errors occur:
the pi command correctly ... |
cx_Oracle, and Library paths | 31,525,508 | 0 | 1 | 841 | 0 | python,oracle | Well, that was pretty simple. I just had to add it to the .bashrc file in my root directory. | 0 | 1 | 0 | 1 | 2015-07-20T17:30:00.000 | 1 | 1.2 | true | 31,522,754 | 0 | 0 | 0 | 1 | Pretty new to all this so I apologize if I butcher my explanation. I am using python scripts on a server at work to pull data from our Oracle database. Problem is whenever I execute the script I get this error:
Traceback (most recent call last):
File "update_52w_forecast_from_oracle.py", line 3, in
import cx_Ora... |
Is there any way to get the full command line that's executed when using subprocess.call? | 31,528,729 | 2 | 0 | 56 | 0 | python,python-2.7 | If you're not using shell=True, there isn't really a "command line" involved. subprocess.Popen is just passing your argument list to the underlying execve() system call.
Similarly, there's no escaping, because there's no shell involved and hence nothing to interpret special characters and nothing that is going to attem... | 0 | 1 | 0 | 0 | 2015-07-20T23:50:00.000 | 1 | 0.379949 | false | 31,528,166 | 0 | 0 | 0 | 1 | I'm using subprocess.call where you just give it an array of argumets and it will build the command line and execute it.
First of all is there any escaping involved? (for example if I pass as argument a path to a file that has spaces in it, /path/my file.txt will this be escaped? "/path/my file.txt")
And is there any w... |
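The no-escaping behaviour is easy to verify: pass a path containing a space as one list element and it arrives as a single argument, with no quoting needed. A small sketch using the external echo binary:

```python
import subprocess

# Each list element becomes exactly one argv entry; nothing is escaped
# or re-parsed, because no shell is involved.
out = subprocess.check_output(["echo", "/path/my file.txt"])
print(out.decode().strip())  # /path/my file.txt
```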
Is os.listdir() deterministic? | 31,535,279 | 0 | 13 | 3,004 | 0 | python | It will probably depend on file system internals. On a typical unix machine, I would expect the order of items in the return value from os.listdir to be in the order of the details in the directory's "dirent" data structure (which, again, depends on the specifics of the file system).
I would not expect a directory to ... | 0 | 1 | 0 | 0 | 2015-07-21T08:58:00.000 | 3 | 0 | false | 31,534,583 | 1 | 0 | 0 | 1 | From Python's doc, os.listdir() returns
a list containing the names of the entries in the directory given by
path. The list is in arbitrary order.
What I'm wondering is, is this arbitrary order always the same/deterministic? (from one machine to another, or through time, provided the content of the folder is the s... |
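Because the order is unspecified, the portable fix is to sort explicitly whenever the order matters. A small sketch (Python 3):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    for name in ("b.txt", "a.txt", "c.txt"):
        open(os.path.join(d, name), "w").close()
    entries = sorted(os.listdir(d))  # deterministic on every machine

print(entries)  # ['a.txt', 'b.txt', 'c.txt']
```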
Why do multiple processes slow down? | 31,543,489 | 0 | 1 | 1,550 | 0 | python,qt,io,hard-drive,child-process | There are no guarantees as to fairness of I/O scheduling. What you're describing seems rather simple: the I/O scheduler, whether intentionally or not, gives a boost to new processes. Since your disk is tapped out, the order in which the processes finish is not under your control. You're most likely wasting a lot of dis... | 0 | 1 | 0 | 0 | 2015-07-21T10:44:00.000 | 2 | 0 | false | 31,536,863 | 1 | 0 | 0 | 1 | Not sure this is the best title for this question but here goes.
Through python/Qt I started multiple processes of an executable. Each process is writing a large file (~20GB) to disk in chunks. I am finding that the first process to start is always the last to finish and continues on much, much longer than the other pr... |
Tornado gzip compressed response for a specific RequestHandler | 31,539,885 | 2 | 2 | 1,628 | 0 | python,tornado | In that handler's initialize() method, call self.transforms.append(tornado.web.GZipContentEncoding) | 0 | 1 | 0 | 0 | 2015-07-21T11:27:00.000 | 1 | 0.379949 | false | 31,537,752 | 0 | 0 | 0 | 1 | How can I serve compressed responses only for a single RequestHandler from my Tornado application? |
PTVS using os.system fails | 31,542,784 | 1 | 0 | 69 | 0 | python,visual-studio,azure,ptvs | After adding the PATH environment variable, all I needed to do was close Visual Studio and open it again. For anyone who struggled with the same issue, just close the program and it might work! | 0 | 1 | 0 | 0 | 2015-07-21T11:30:00.000 | 1 | 1.2 | true | 31,537,841 | 1 | 0 | 0 | 1 | I am having an issue with Visual Studio.
I have everything set up in my project in the Python Environments including Platformio, which I would like to use.
When I do
os.system("platformio init") it fails and produces this error:
'platformio' is not recognized as an internal or external command, operable program or batc... |
Run python script with droneapi without terminal | 31,924,023 | 1 | 2 | 617 | 0 | python-2.7,dronekit-python,dronekit | I think Sony Nguyen is asking about running vehicle_state.py outside the Mavproxy command prompt, just like running the .py file normally.
I'm also looking for a solution. | 0 | 1 | 0 | 0 | 2015-07-21T13:24:00.000 | 3 | 0.066568 | false | 31,540,347 | 0 | 0 | 0 | 1 | I managed to run examples in the command prompt after running mavproxy.py and loading droneapi. But when I double click on my script, it throws "'local_connect' is not defined". It runs in the terminal as described above, but I cannot run it with a double click. So my question is: Is there any way to run the script using d...
Allow user other than root to restart supervisorctl process? | 31,541,908 | 0 | 6 | 3,601 | 0 | python,supervisord | Maybe you should try restarting your supervisord process using user stavros. | 0 | 1 | 0 | 0 | 2015-07-21T14:18:00.000 | 2 | 0 | false | 31,541,685 | 0 | 0 | 0 | 1 | I have supervisord run a program as user stavros, and I would like to give the same user permission to restart it using supervisorctl. Unfortunately, I can only do it with sudo, otherwise I get a permission denied error in socket.py. How can I give myself permission to restart supervisord processes?
Apache Kafka: Can I set the offset manually | 31,580,503 | 0 | 0 | 294 | 0 | python,twitter,apache-kafka | Don't see how that would be possible, but instead you can:
Use Kafka's API to obtain an offset that is earlier than a given time (getOffsetBefore). Note that the granularity depends on your storage file size IIRC and thus you can get an offset that is quite a bit earlier than the time you specified
Keep a timestamp in... | 0 | 1 | 0 | 0 | 2015-07-23T07:00:00.000 | 1 | 0 | false | 31,580,276 | 0 | 0 | 0 | 1 | So I'm using Apache Kafka as a message queue to relay a Twitter Stream to my consumers. If I want to go back, I want to have a value (offset) which I can send Kafka. So, for eg, if I want to go back one day, I have no idea what the offset would be for that.
Hence, can I set the offset manually? Maybe a linux/epoch tim... |
How do I build the latest Python 2 for Windows? | 33,733,485 | 1 | 0 | 132 | 0 | python,python-2.7 | Since nobody answered, I'll post what I found here.
These instructions are for an 'offline' build machine, e.g. download/obtain everything you need prior to setting up the build environment. I don't connect my build machines to the internet. The instructions assume you downloaded the 2.7.10 PSF source release. This ... | 0 | 1 | 0 | 0 | 2015-07-23T09:02:00.000 | 1 | 1.2 | true | 31,582,768 | 1 | 0 | 0 | 1 | I mean all of it, starting from all sources, and ending up with the .MSI file on the Python website. This includes building the distutils wininst*.exe files. I have found various READMEs that get me some of the way, but no comprehensive guide. |
Python socket with PACKET_MMAP | 31,597,835 | 1 | 1 | 178 | 0 | python,sockets,networking | So it looks like buffer or memoryview will do the trick. There are some discrepancies in the sites I found regarding whether Python 2.7 supports this, so I will have to test it out to make sure | 0 | 1 | 0 | 0 | 2015-07-23T16:37:00.000 | 1 | 0.197375 | false | 31,593,267 | 0 | 0 | 0 | 1 | Is there a PACKET_MMAP or similar flag for python sockets? I know in C one can use a zero-copy/circular buffer with the previously mentioned flag to avoid having to copy buffers from kernel space to user space but I cannot find anything similar in the python documentation.
Thanks for any input on docs or code to look into. |
Does os.path.sep affect the tarfile module? | 31,600,239 | -2 | 2 | 242 | 0 | python,windows,tarfile | A quick test tells me that a (forward) slash is always used.
In fact, the tar format stores the full path of each file as a single string, using slashes (try looking at a hex dump), and python just reads that full path without any modification. Likewise, at extraction time python hard-replaces slashes with the local s... | 0 | 1 | 0 | 0 | 2015-07-24T00:08:00.000 | 1 | 1.2 | true | 31,600,127 | 1 | 0 | 0 | 1 | Is the path separator employed inside a Python tarfile.TarFile object a '/' regardless of platform, or is it a backslash on Windows?
I basically never touch Windows, but I would kind of like the code I'm writing to be compatible with it, if it can be. Unfortunately I have no Windows host on which to test. |
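A quick in-memory round trip supports the answer above: member names come back with forward slashes, because that is what the tar format stores.

```python
import io
import tarfile

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    # Member names are stored in the archive with '/' separators,
    # independent of the platform's os.path.sep.
    tar.addfile(tarfile.TarInfo(name="dir/file.txt"), io.BytesIO(b""))
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    print(tar.getnames())  # ['dir/file.txt']
```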
Compiling a unix make file for windows | 31,606,702 | 1 | 0 | 69 | 0 | python,c,unix,gcc | Answer to your first paragraph: Use MinGW for the compiler (google it, there is a -w64 version if you need that) and MSYS for a minimal environment including shell tools the Makefile could need. | 0 | 1 | 0 | 0 | 2015-07-24T09:17:00.000 | 1 | 0.197375 | false | 31,606,659 | 0 | 0 | 0 | 1 | I have a c-program which includes a make file that works fine on unix systems. Although I would like to compile the program for windows using this make file, how can i go around doing that?
Additionally I have python scripts that call this c-program using ctypes, I don't imagine I will have to much of an issue getting ... |
Google Analytics Management API - Insert method - Insufficient permissions HTTP 403 | 31,866,981 | 0 | 2 | 480 | 1 | api,python-2.7,google-analytics,insert,http-error | The problem was I was using a service account when I should have been using an installed application. I did not need a service account since I had access using my own credentials. That did the trick for me! | 0 | 1 | 0 | 0 | 2015-07-24T23:46:00.000 | 2 | 0 | false | 31,621,373 | 0 | 0 | 1 | 1 | I am trying to add users to my Google Analytics account through the API but the code yields this error:
googleapiclient.errors.HttpError: https://www.googleapis.com/analytics/v3/management/accounts/**accountID**/entityUserLinks?alt=json returned "Insufficient Permission">
I have Admin rights to this account - MANAGE US... |
How to rollback a python application | 31,622,554 | 0 | 0 | 181 | 0 | python,google-app-engine,rollback | In the windows command prompt, reference your python executable:
eg:
[cmd]
cd C:\Program Files (x86)\Google\google_appengine (ie: [GAE dir])
C:\Python27\python.exe appcfg.py rollback [deploy dir] | 0 | 1 | 0 | 0 | 2015-07-25T02:27:00.000 | 1 | 1.2 | true | 31,622,256 | 0 | 0 | 0 | 1 | I am running on Windows 8 and I was recently uploading an application using the standard Google App Engine launcher but it froze mid way and when I closed it and reopened it and tried to upload again it would say a transaction is already in progress for this application and that I would need to rollback the application... |
Collecting results from celery worker with asyncio | 43,289,761 | 2 | 2 | 2,413 | 0 | python,celery,python-asyncio | I implemented an on_finish function in the celery worker to publish a message to Redis;
the main app then uses aioredis to subscribe to the channel; once it is notified, the result is ready | 0 | 1 | 0 | 0 | 2015-07-26T11:30:00.000 | 2 | 0.197375 | false | 31,636,454 | 0 | 0 | 0 | 1 | I have a Python application which offloads a number of processing tasks to a set of celery workers. The main application then has to wait for results from these workers. As and when a result is available from a worker, the main application processes it and schedules more workers to be executed.
I woul... |
Default values for PyCharm Terminal? | 43,356,885 | 1 | 2 | 1,300 | 0 | python,path,terminal,pycharm | I came across this error too in PhpStorm, to fix it simply navigate through to...
Preferences > Tools > Terminal
Under 'Application Settings' click [...] at the end of Shell path and open the .bash profile.
This should grey out the Shell path to '/bin/bash'
You can now launch Terminal. | 0 | 1 | 0 | 1 | 2015-07-27T02:54:00.000 | 2 | 0.099668 | false | 31,644,298 | 0 | 0 | 0 | 2 | I accidentally changed the "Shell path" specified in the Terminal setting for PyCharm and now I am getting this error:
java.io.IOException:Exec_tty error:Unkown reason
I replaced the default value with the string returned by echo $PATH which is:
/usr/local/cuda-7.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/b... |
Default values for PyCharm Terminal? | 31,661,642 | 1 | 2 | 1,300 | 0 | python,path,terminal,pycharm | The default value is the value of the $SHELL environment variable, which is normally /bin/bash. | 0 | 1 | 0 | 1 | 2015-07-27T02:54:00.000 | 2 | 1.2 | true | 31,644,298 | 0 | 0 | 0 | 2 | I accidentally changed the "Shell path" specified in the Terminal setting for PyCharm and now I am getting this error:
java.io.IOException:Exec_tty error:Unkown reason
I replaced the default value with the string returned by echo $PATH which is:
/usr/local/cuda-7.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/b... |
How can I ensure cron job runs only on one host at any time | 31,645,469 | -1 | 0 | 416 | 0 | python,database,cron,crontab,distributed | simple way:
- start the cron job before the needed time (for example, two minutes early)
- force synchronize time (using ntp or ntpdate) (optional paranoid mode)
- wait till expected time, run job | 0 | 1 | 0 | 0 | 2015-07-27T05:11:00.000 | 1 | -0.197375 | false | 31,645,343 | 0 | 0 | 1 | 1 | I have a django management command run as a cron job and it is set on multiple hosts to run at the same time. What is the best way to ensure that cron job runs on only one host at any time? One approach is to use db locks as the cron job updates a MySQL db but I am sure there are better(django or pythonic) approaches t... |
Using mpi4py (or any python module) without installing | 31,676,562 | 1 | 2 | 283 | 0 | python,python-2.7,numpy,mpi4py | Did you try pip install --user mpi4py?
However, I think the best solution would be to just talk to the people in charge of the cluster and see if they will install it. It seems pretty useless to have a cluster without mpi4py installed. | 0 | 1 | 0 | 0 | 2015-07-28T11:38:00.000 | 1 | 0.197375 | false | 31,675,214 | 0 | 1 | 0 | 1 | I have some parallel code I have written using numpy and mpi4py modules. Till now I was running it on my laptop but now I want to attack bigger problem sizes by using the computing clusters at my university. The trouble is that they don't have mpi4py installed. Is there anyway to use the module by copying the necessar... |
Python Script on Google App Engine, which scrapes only updates from a website | 31,717,519 | 1 | 1 | 65 | 0 | python,google-app-engine,web-scraping | Doesn't the website have RSS or API or something?
Anyway, you could store the list of scraped news titles (might not be unique though) / IDs / URLs as entity IDs in the datastore right after you send them to your email & just before sending the email you would first check whether the news IDs exist in the datastore wit... | 0 | 1 | 0 | 0 | 2015-07-30T06:42:00.000 | 1 | 0.197375 | false | 31,716,833 | 0 | 0 | 1 | 1 | I am hosting a Python script on Google App Engine which uses bs4 and mechanize to scrap news section of a website, it runs every 2 hours and sends an email to me all the news.
The problem is, I want only the latest news to be sent as mail. As of now it sends me all the news present every time.
I am storing all the news... |
install HDF5 and pytables in ubuntu | 31,719,735 | 11 | 25 | 76,957 | 0 | python,ubuntu-14.04,hdf5,pytables | Try to install libhdf5-7 and python-tables via apt | 0 | 1 | 0 | 0 | 2015-07-30T09:02:00.000 | 4 | 1.2 | true | 31,719,451 | 1 | 0 | 0 | 1 | I am trying to install tables package in Ubuntu 14.04 but sems like it is complaining.
I am trying to install it using PyCharm and its package installer, however seems like it is complaining about HDF5 package.
However, seems like I cannnot find any hdf5 package to install before tables.
Could anyone explain the proced... |
Handling a linux system shutdown operation "gracefully" | 31,732,143 | 0 | 1 | 214 | 0 | python,linux,signals,shutdown | When Linux is shutting down (and this is slightly dependent on what kind of init scripts you are using), it first sends SIGTERM to all processes to shut them down, and then I believe it will try SIGKILL to force them to close if they're not responding to SIGTERM.
Please note, however, that your script may not receive the ... | 0 | 1 | 0 | 1 | 2015-07-30T19:01:00.000 | 1 | 0 | false | 31,731,980 | 0 | 0 | 0 | 1 | I'm developing a python script that runs as a daemon in a linux environment. If and when I need to issue a shutdown/restart operation to the device, I want to do some cleanup and log data to a file to persist it through the shutdown.
I've looked around regarding Linux shutdown and I can't find anything detailing which,... |
Python Process Terminated due to "Low Swap" When Writing To stdout for Data Science | 31,735,713 | 1 | 1 | 70 | 0 | python,memory,amazon-web-services,subprocess | the process gets terminated on my mac due to "Low Swap" which I believe refers to lack of memory
Swap space is disk-backed storage used to extend your main memory (RAM).
When a user reads a file, it is put into main memory (caches and RAM). When it's done, it is removed.
However, when a user writes to a file, changes need to be recorded. One probl... | 0 | 1 | 0 | 0 | 2015-07-30T23:06:00.000 | 1 | 1.2 | true | 31,735,552 | 0 | 0 | 0 | 1 | I'm new to python so I apologize for any misconceptions.
I have a python file that needs to read/write to stdin/stdout many many times (hundreds of thousands) for a large data science project. I know this is not ideal, but I don't have a choice in this case.
After about an hour of running (close to halfway completed),... |
Killing a daemon process through cron job/runnit | 31,787,179 | 0 | 0 | 48 | 0 | python-2.7,cron | The best way out is to create this daemon as a daemon thread so it automatically gets killed when the parent process is killed | 0 | 1 | 0 | 1 | 2015-07-31T05:45:00.000 | 1 | 1.2 | true | 31,738,875 | 0 | 0 | 0 | 1 | I have a python file which starts 2 threads - one is a daemon process, the other does other stuff. Now what I want is: if my second thread is stopped, the first one should also stop. I was suggested to do so by cron job/runnit.. I am completely new to these, so can you please help me achieve the goal
Thanks |
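If the helper really can live as a thread, marking it as a daemon thread means the interpreter will not wait for it at exit. A minimal sketch (the worker body is a placeholder):

```python
import threading
import time

def worker():
    while True:          # placeholder for the real daemon loop
        time.sleep(0.1)

t = threading.Thread(target=worker, daemon=True)
t.start()
print(t.daemon, t.is_alive())  # True True
```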
Will my database connections have problems? | 31,741,461 | 0 | 0 | 49 | 0 | python,sql,django,celery | The only time you are going to run into issues using a db with celery is when you use the database as the backend for celery, because it will continuously poll the db for tasks. If you use a normal broker you should not have issues. | 0 | 1 | 0 | 0 | 2015-07-31T07:09:00.000 | 2 | 0 | false | 31,740,127 | 0 | 0 | 1 | 2 | In my django project, I am using celery to run a periodic task that will check a URL that responds with a json and update my database with some elements from that json.
Since requesting from the URL is limited, the total process of updating the whole database with my task will take about 40 minutes and I will run th... |
Will my database connections have problems? | 31,740,391 | 0 | 0 | 49 | 0 | python,sql,django,celery | While requesting information from your database you are reading your database. And in your celery task you are writing data into your database. You can write only once at a time, but read as many times as you want, as there is no lock on the database while reading. | 0 | 1 | 0 | 0 | 2015-07-31T07:09:00.000 | 2 | 0 | false | 31,740,127 | 0 | 0 | 1 | 2 | In my django project, I am using celery to run a periodic task that will check a URL that responds with a json and update my database with some elements from that json.
Since requesting from the URL is limited, the total process of updating the whole database with my task will take about 40 minutes and I will run th... |
How to setup Pycharm and JDK on ubuntu | 31,741,486 | 3 | 0 | 3,954 | 0 | java,python,ubuntu | When you have downloaded a package from Oracle site, unpack it and copy its contents into for example /usr/lib/jvm/jdk1.8.0_51/.
Then, type following commands:
sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.8.0_51/bin/java" 1
sudo update-alternatives --install "/usr/bin/javac" "javac" "/us... | 0 | 1 | 0 | 0 | 2015-07-31T07:50:00.000 | 2 | 0.291313 | false | 31,740,878 | 1 | 0 | 0 | 1 | I am going to develop some functionality using python and I need to setup pycharm but it depends on some dependencies like open JDK of oracle.
How can I set up these two?
Python will not execute Java program: 'java' is not recognized | 31,745,847 | 0 | 1 | 1,290 | 0 | python,python-2.7,command-line,subprocess | Give the absolute path of the java location.
On my system the path is C:\Program Files\Java\jdk1.8.0_45\bin\java.exe | 0 | 1 | 0 | 0 | 2015-07-31T12:01:00.000 | 2 | 0 | false | 31,745,699 | 0 | 0 | 1 | 2 | I am trying to get Python to call a Java program using a command that works when I enter it into the command line.
When I have Python try it with subprocess or os.system, it says:
'java' is not recognized as an internal or external command, operable
program or batch file.
From searching, I believe it is because whe... |
Python will not execute Java program: 'java' is not recognized | 61,620,608 | 0 | 1 | 1,290 | 0 | python,python-2.7,command-line,subprocess | You have to set the PATH variable to point to the java location.
import os
os.environ["PATH"] += os.pathsep + os.pathsep.join([java_env])
java_env will be a string containing the directory to java.
(tested on python 3.7) | 0 | 1 | 0 | 0 | 2015-07-31T12:01:00.000 | 2 | 0 | false | 31,745,699 | 0 | 0 | 1 | 2 | I am trying to get Python to call a Java program using a command that works when I enter it into the command line.
When I have Python try it with subprocess or os.system, it says:
'java' is not recognized as an internal or external command, operable
program or batch file.
From searching, I believe it is because whe... |
Worker role and web role counterpart in GAE | 31,790,837 | 1 | 1 | 97 | 0 | python,google-app-engine,azure,web-applications | Yes, there is. Look at backend and frontend instances. Your question is too broad to go into more detail. In general, the backend type of instance is used for long-running tasks, but you could also do everything in the frontend instance. | 0 | 1 | 0 | 0 | 2015-08-03T14:34:00.000 | 2 | 0.099668 | false | 31,790,076 | 0 | 0 | 1 | 1 | I am currently working with MS Azure. There I have a worker role and a web role. In the worker role I start an infinite loop to process some data continuously. The web role performs the interaction with the client. There I use an MVC framework, which on the server side is written in C# and on the client side in Javascript.
Now... |
Starting a python script at boot - Raspbian | 31,791,309 | 0 | 0 | 344 | 0 | python,linux,arm,raspberry-pi,init.d | Ah bah, let's just give a quick answer.
After creating a script in /etc/init.d, you need to add a soft-link to the directory /etc/rc2.d, such as sudo ln -s /etc/init.d/<your script> /etc/rc2.d/S99<your script>. Assuming, of course, that you run runlevel 2. You can check that with the command runlevel.
The S means the s... | 0 | 1 | 0 | 1 | 2015-08-03T14:37:00.000 | 1 | 1.2 | true | 31,790,133 | 0 | 0 | 0 | 1 | I have a python script. This script is essentially my own desktop/UI. However, I would like to replace the default Raspbian (Raspberry Pi linux distro) desktop enviroment with my own version. How would I go about:
Disabling the default desktop and
Launching my python script (fullscreen) at startup?
This is on the Ras... |
What does app configuration mean? | 31,796,794 | 1 | 1 | 3,561 | 0 | python-2.7,google-app-engine,web-applications,configuration,app.yaml | To "configure your app," generally speaking, is to specify, via some mechanism, parameters that can be used to direct the behavior of your app at runtime. Additionally, in the case of Google App Engine, these parameters can affect the behavior of the framework and services surrounding your app.
When you specify these p... | 0 | 1 | 0 | 0 | 2015-08-03T16:27:00.000 | 2 | 0.099668 | false | 31,792,302 | 0 | 0 | 1 | 1 | I am working on Google App Engine (GAE) which has a file called (app.yaml). As I am new to programming, I have been wondering, what does it mean to configure an app? |
Is the application code visible to others when it is run? | 31,794,311 | 1 | 0 | 75 | 0 | python,flask | No. The code won't be viewable. Server side code is not accessible unless you give someone access or post it somewhere public. | 0 | 1 | 0 | 0 | 2015-08-03T18:22:00.000 | 2 | 1.2 | true | 31,794,152 | 0 | 0 | 1 | 1 | I don't want other people to see my application code. When I host my application, will others be able to see the code that is running? |
How to install python smtplib module in ubuntu os | 35,091,800 | 7 | 13 | 58,280 | 0 | python,module,smtplib | I will tell you a probable reason why you might be getting an error like 'Error: no module smtplib'.
I had created a program named email.py.
Now, email is a module in Python, and because of that it started giving errors for smtplib as well.
Then I had to delete the generated email.pyc file and rename email.py to mymail.py.
After that no error of smt... | 0 | 1 | 0 | 1 | 2015-08-03T20:27:00.000 | 4 | 1 | false | 31,796,174 | 0 | 0 | 0 | 1 | I tried to install python module via pip, but it was not successful.
Can anyone help me install the smtplib Python module on Ubuntu 12.10?
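A quick way to diagnose the shadowing described in the answer is to check where the interpreter actually found the module; if `__file__` points into your project instead of the standard library, a local file (or a stale .pyc) is masking it:

```python
import email  # a stdlib module commonly shadowed by a local email.py

# If this prints a path inside your project directory, rename your
# file and delete any leftover email.pyc sitting next to it.
print(email.__file__)
```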
How Do I Turn Off Python Error Checking in vim? (vim terminal 7.3, OS X 10.11 Yosemite) | 31,800,107 | 2 | 1 | 421 | 0 | python,macos,vim,osx-yosemite | Vim doesn't check Python syntax out of the box, so a plugin is probably causing this issue.
Not sure why an OS upgrade would make a Vim plugin suddenly start being more zealous about things, of course, but your list of installed plugins (however you manage them) is probably the best place to start narrowing down your p... | 0 | 1 | 0 | 1 | 2015-08-04T01:01:00.000 | 1 | 1.2 | true | 31,799,087 | 0 | 0 | 0 | 1 | Overview
After upgrading to 10.11 Yosemite, I discovered that vim (on the terminal) highlights a bunch of errors in my python scripts that are actually not errors.
e.g.
This line:
from django.conf.urls import patterns
gets called out as an [import-error] Unable to import 'django.conf.urls'.
This error is not true becau... |
Share objects between celery tasks | 31,877,500 | 1 | 3 | 2,958 | 0 | python,celery,fileparsing | Using Memcached sounds like a much easier solution - a task is for processing, memcached is for storage - why use a task for storage?
Personally I'd recommend using Redis over memcached.
An alternative would be to try ZODB - it stores Python objects natively. If your application really suffers from serialization overhe... | 0 | 1 | 0 | 0 | 2015-08-04T08:58:00.000 | 1 | 1.2 | true | 31,804,892 | 1 | 0 | 0 | 1 | I have got a program that handle about 500 000 files {Ai} and for each file, it will fetch a definition {Di} for the parsing.
For now, each file {Ai} is parsed by a dedicated celery task and each time the definition file {Di} is parsed again to generate an object. This object is used for the parsing of the file {Ai} (... |
Query total CPU usage of all instances of a process on Linux OS | 31,830,627 | 0 | 0 | 114 | 0 | python,c++,c,linux | Here is the only way to do that I can think of. It looks a bit confusing, but if you follow the steps it is very simple:
If I want to select total cpu use of Google Chrome process:
$ps -e -o pcpu,comm | grep chrome | awk '{ print $1 }' | paste -sd+ |
bc -l | 0 | 1 | 0 | 1 | 2015-08-05T03:02:00.000 | 1 | 0 | false | 31,822,714 | 0 | 0 | 0 | 1 | I have a python server that forks itself once it receives a request. The python service has several C++ .so objects it can call into, as well as the python process itself.
My question is, in any one of these processes, I would like to be able to see how much CPU all instances of this server are currently using. So ... |
testing celery job that runs each night | 31,877,460 | 0 | 0 | 158 | 0 | python,testing,celery | To facilitate testing you should first run the task from ipython to verify that it does what it should.
Then to verify scheduling you should change the celerybeat schedule to run in the near future, and verify that it does in fact run.
Once you have verified functionality and schedule you can update the celerybeat sch... | 0 | 1 | 0 | 1 | 2015-08-05T09:46:00.000 | 1 | 0 | false | 31,828,928 | 0 | 0 | 0 | 1 | I have a periodical celery job that is supposed to run every night at midnight. Of course I can just run the system and leave it overnight to see the result. But I can see that it's not going to be very efficient in terms of solving potential problems and energy.
In such situation, is there a trick to make the testing... |
Maximum Beaglebone Black UART baud? | 33,552,144 | 6 | 5 | 5,044 | 0 | python,pyserial,beagleboneblack,uart,baud-rate | The AM335x technical reference manual (TI document spruh73) gives the baud rate limits for the UART sub-system in the UART section (section 19.1.1, page 4208 in version spruh73l):
Baud rate from 300 bps up to 3.6864 Mbps
The UART modules each have a 48MHz clock to generate their timing. They can be configured in on... | 0 | 1 | 0 | 1 | 2015-08-05T21:03:00.000 | 3 | 1.2 | true | 31,842,785 | 0 | 0 | 0 | 2 | I have been looking around for UART baud rates supported by the Beaglebone Black (BB). I can't find it in the BB system reference manual or the datasheet for the sitara processor itself. I am using pyserial and the Adafruit BBIO library to communicate over UART.
Does this support any value within reason or is it more s... |
Maximum Beaglebone Black UART baud? | 31,902,876 | 0 | 5 | 5,044 | 0 | python,pyserial,beagleboneblack,uart,baud-rate | The BBB reference manual does not contain any information on Baud Rate for UART but for serial communication I usually prefer using value of BAUDRATE = 115200, which works in most of the cases without any issues. | 0 | 1 | 0 | 1 | 2015-08-05T21:03:00.000 | 3 | 0 | false | 31,842,785 | 0 | 0 | 0 | 2 | I have been looking around for UART baud rates supported by the Beaglebone Black (BB). I can't find it in the BB system reference manual or the datasheet for the sitara processor itself. I am using pyserial and the Adafruit BBIO library to communicate over UART.
Does this support any value within reason or is it more s... |
Generating maximum wifi activity through 1 computer | 31,860,813 | 1 | 7 | 93 | 0 | python,linux,curl,wifi,bandwidth | Simply sending packets as fast as possible to a random destination (that is not localhost) should work.
You'll need to use UDP (otherwise you need a connection acknowledgement before you can send data).
cat /dev/urandom | pv | nc -u 1.1.1.1 9123
pv is optional (but nice).
You can also use /dev/zero, but there may be a risk... | 0 | 1 | 0 | 0 | 2015-08-06T15:57:00.000 | 1 | 1.2 | true | 31,860,476 | 0 | 0 | 0 | 1 | I need to generate a very high level of wifi activity for a study to see if very close proximity to a transceiver can have a negative impact on development of bee colonies.
I have tried to write an application which spawns several web-socket server-client pairs to continuously transfer mid-sized files (this approach h... |
Sharing a resource (file) across different python processes using HDFS | 31,934,576 | 2 | 5 | 132 | 0 | python,hdfs,race-condition,ioerror | (Setting aside that it sounds like HDFS might not be the right solution for your use case, I'll assume you can't switch to something else. If you can, take a look at Redis, or memcached.)
It seems like this is the kind of thing where you should have a single service that's responsible for computing/caching these result... | 0 | 1 | 0 | 0 | 2015-08-06T16:05:00.000 | 1 | 1.2 | true | 31,860,630 | 1 | 0 | 0 | 1 | So I have some code that attempts to find a resource on HDFS...if it is not there it will calculate the contents of that file, then write it. And next time it goes to be accessed the reader can just look at the file. This is to prevent expensive recalculation of certain functions
However...I have several processes ru... |
How to load IPython shell with PySpark | 66,149,862 | 1 | 33 | 26,058 | 0 | python,apache-spark,ipython,pyspark | Tested with spark 3.0.1 and python 3.7.7 (with ipython/jupyter installed)
To start pyspark with IPython:
$ PYSPARK_DRIVER_PYTHON=ipython pyspark
To start pyspark with jupyter notebook:
$ PYSPARK_DRIVER_PYTHON=jupyter PYSPARK_DRIVER_PYTHON_OPTS=notebook pyspark | 0 | 1 | 0 | 0 | 2015-08-06T17:36:00.000 | 8 | 0.024995 | false | 31,862,293 | 0 | 0 | 0 | 1 | I want to load IPython shell (not IPython notebook) in which I can use PySpark through command line. Is that possible?
I have installed Spark-1.4.1. |
Celery worker stops consuming from a specific queue while it consumes from other queues | 32,602,968 | 1 | 0 | 1,570 | 0 | python,django,rabbitmq,celery | I found the problem in my code,
So in one of my tasks I was opening a connection to parse using urllib3, and that connection was hanging.
After moving that portion into an async task, things are working fine now. | 0 | 1 | 0 | 0 | 2015-08-06T19:12:00.000 | 1 | 1.2 | true | 31,863,996 | 0 | 0 | 1 | 1 | I am using rabbitmq as the broker, and there is strange behaviour that happens only in my production environment: randomly, my celery stops consuming messages from one queue while it keeps consuming from the others.
This leads to a pileup of messages in the queue; if I restart celeryd, everything starts to work fine.
"/... |
flask application deployment: rabbitmq and celery | 31,885,764 | 1 | 0 | 603 | 0 | python,deployment | I don't see why you couldn't deploy on the same node (that's essentially what I do when I'm developing locally), but if you want to be able to rapidly scale you'll probably want them to be separate.
I haven't used rabbitmq in production with celery, but I use redis as the broker and it was easy for me to get redis as... | 0 | 1 | 0 | 0 | 2015-08-07T14:01:00.000 | 1 | 0.197375 | false | 31,879,606 | 0 | 0 | 1 | 1 | My web app is using celery for async job and rabbitmq for messaging, etc. The standard stuff. When it comes to deployment, are rabbitmq and celery normally deployed in the same node where the web app is running or separate? What are the differences? |
How to run a PyQt4 app with sudo privileges in Ubuntu and keep the normal user style | 42,756,312 | 0 | 0 | 1,223 | 0 | python,linux,qt,ubuntu,pyqt | This is a hacky solution.
Install qt4-qtconfig: sudo apt-get install qt4-qtconfig
Run sudo qtconfig or gksudo qtconfig.
Change GUI Style to GTK+.
I have Ubuntu and programmed a GUI app with Qt Designer 4 and PyQt4. The program works fine running python main.py in a terminal.
Last week I made an update, and now the program needs sudo privileges to start. So I type sudo python main.py.
But Oh my GODDDDDDD. What an un... |
Twisted unexpected connection lost | 32,285,162 | 0 | 2 | 860 | 0 | python,twisted | The only way to support cross-platform detection of an unexpected disconnection (unplug) is to implement an application-level ping message that pings clients at a specific interval.
When a client close their application or calls the abortConnection method, I get the connectionLost event normally but when the client disconnects unexpectedly, I don't get the disconnect event, therefore, I can't remove the disconnect... |
Running ApScheduler in Gunicorn Without Duplicating Per Worker | 31,929,832 | 0 | 7 | 1,182 | 0 | python,uwsgi,gunicorn,apscheduler | I'm not aware of any way to do this with either, at least not without some sort of RPC. That is, run APScheduler in a separate process and then connect to it from each worker. You may want to look up projects like RPyC and Execnet to do that. | 0 | 1 | 0 | 0 | 2015-08-10T02:22:00.000 | 1 | 0 | false | 31,910,812 | 0 | 0 | 1 | 1 | The title basically says it all. I have gunicorn running my app with 5 workers. I have a data structure that all the workers need access to that is being updated on a schedule by apscheduler. Currently apscheduler is being run once per worker, but I just want it run once, period. Is there a way to do this? I've tried us...
"make" builds wrong python version | 31,931,647 | 1 | 2 | 364 | 0 | linux,python-2.7,build,compilation,mod-wsgi | I'll document this here as the fix, also to hopefully get a comment from Graham as to why this might be needed;
Changing
make
to
LD_RUN_PATH=/usr/local/lib make
was the answer, but I had to use this for building both python2.7.10 and mod_wsgi. Without using LD_RUN_PATH on mod_wsgi I still got the dreaded;
[warn] mod_... | 0 | 1 | 0 | 0 | 2015-08-11T00:04:00.000 | 1 | 0.197375 | false | 31,931,087 | 1 | 0 | 0 | 1 | System : SMEServer 8.1 (CentOS 5.10) 64bit, system python is 2.4.3
There is an alt python at /usr/local/bin/python2.7 (2.7.3) which was built some time ago.
Goal : build python2.7.10, mod_wsgi, django. First step is python 2.7.10 to replace the (older and broken) 2.7.3
What happens:
When I build the latest 2.7 python a...
linux switch between anaconda python 3.4 and 2.7 | 31,965,393 | 0 | 0 | 259 | 0 | python,python-2.7,python-3.4,anaconda | I think I can answer my own question. Python 2.7 seems to be the default. If I activate 3.x with
source activate py3k
I need to reboot to go back to 2.7, which, being the default, happens automatically (running source deactivate should also drop back to the default environment without a reboot).
If anyone knows a cleaner way, please let me know. | 0 | 1 | 0 | 0 | 2015-08-11T12:23:00.000 | 1 | 0 | false | 31,941,685 | 1 | 0 | 0 | 1 | I do most of my work in Python 2.7, but I've recently encountered some tutorials that require 3.4. Fine. I checked and Anaconda allows installation of both under Linux (Fedora 22 to be precise). However, now I seem to be stuck in 3.4. I followed the Anaconda directions, entering:
conda create -n py3k python=3 anac... |
what is a robust way to execute long-running tasks/batches under Django? | 31,952,520 | 1 | 1 | 1,698 | 0 | python,django,batch-processing | I'm not sure how your celery configuration makes it unstable, but it sounds like it's still the best fit for your problem. I'm using redis as the queue system and it works better than rabbitmq in my own experience. Maybe you can try it and see if it improves things.
Otherwise, just use cron as a driver to run periodic tasks.... | 0 | 1 | 0 | 0 | 2015-08-11T21:31:00.000 | 1 | 1.2 | true | 31,952,327 | 0 | 0 | 1 | 1 | I have a Django app that is intended to be run on Virtualbox VMs on LANs. The basic user will be a savvy IT end-user, not a sysadmin.
Part of that app's job is to connect to external databases on the LAN, run some python batches against those databases and save the results in its local db. The user can then explore t... |
why does elastic beanstalk not update? | 31,955,222 | 2 | 2 | 2,185 | 0 | python,amazon-web-services,amazon-elastic-beanstalk,pyramid | Are you committing your changes before deploying?
eb deploy will deploy the HEAD commit.
You can do eb deploy --staged to deploy staged changes. | 0 | 1 | 0 | 1 | 2015-08-12T02:14:00.000 | 1 | 0.379949 | false | 31,954,968 | 0 | 0 | 1 | 1 | I'm new to the world of AWS, and I just wrote and deployed a small Pyramid application. I ran into some problems getting set up, but after I got it working, everything seemed to be fine. However, now, my deployments don't seem to be making a difference in the environment (I changed the index.pt file that my root url ro... |