Dataset schema - each record below lists these 22 fields in this order:

Title: string (lengths 15 to 150)
A_Id: int64 (2.98k to 72.4M)
Users Score: int64 (-17 to 470)
Q_Score: int64 (0 to 5.69k)
ViewCount: int64 (18 to 4.06M)
Database and SQL: int64 (0 to 1)
Tags: string (lengths 6 to 105)
Answer: string (lengths 11 to 6.38k)
GUI and Desktop Applications: int64 (0 to 1)
System Administration and DevOps: int64 (1 to 1)
Networking and APIs: int64 (0 to 1)
Other: int64 (0 to 1)
CreationDate: string (lengths 23 to 23)
AnswerCount: int64 (1 to 64)
Score: float64 (-1 to 1.2)
is_accepted: bool (2 classes)
Q_Id: int64 (1.85k to 44.1M)
Python Basics and Environment: int64 (0 to 1)
Data Science and Machine Learning: int64 (0 to 1)
Web Development: int64 (0 to 1)
Available Count: int64 (1 to 17)
Question: string (lengths 41 to 29k)
Vim plugins don't always load?
17,131,966
1
3
938
0
python,vim,plugins
All folders in the rtp (runtimepath) option need to have the same folder structure as your $VIMRUNTIME ($VIMRUNTIME is usually /usr/share/vim/vim{version}). So each should have the same subdirectory names, e.g. autoload, doc, plugin (whichever you need; having the same names is key), and the plugins should be in their corresponding subdirectory. Let's say /path/to/dir (in your case ~/.vim) is in your rtp: vim will look for global plugins in /path/to/dir/plugin, for file-type plugins in /path/to/dir/ftplugin, for syntax files in /path/to/dir/syntax, for help files in /path/to/dir/doc, and so on. vim only looks for a couple of recognized subdirectories† in /path/to/dir. If you have some unrecognized subdirectory name in there (like /path/to/dir/plugins), vim won't see it. † "recognized" here means that a subdirectory of the same name can be found in /usr/share/vim/vim{version} or wherever you have vim installed.
0
1
0
1
2013-06-15T23:28:00.000
4
0.049958
false
17,128,878
0
0
0
1
I was trying to install autoclose.vim to Vim. I noticed I didn't have a ~/.vim/plugin folder, so I accidentally made a ~/.vim/plugins folder (notice the extra 's' in plugins). I then added au FileType python set rtp += ~/.vim/plugins to my .vimrc, because from what I've read, that will allow me to automatically source the scripts in that folder. The plugin didn't load for me until I realized my mistake and took out the extra 's' from 'plugins'. I'm confused because this new path isn't even defined in my runtime path. I'm basically wondering why the plugin loaded when I had it in ~/.vim/plugin but not in ~/.vim/plugins?
Benchmarking System performance of Python System
17,139,897
1
0
780
0
python,linux,benchmarking,inotify
I would try and remove as many other processes as possible in order to get a repeatable benchmark. For example, I would set up a separate, dedicated server with an NFS mount to the directories. This server would only run inotify and the Python script. For simple server measurements, I would use top or ps to monitor CPU and memory. The real test is how quickly your script "drains" the directories, which depends entirely on your process. You could profile the script and see where it's spending the time.
0
1
0
1
2013-06-16T23:09:00.000
1
0.197375
false
17,138,569
0
0
0
1
I'm looking at using inotify to watch about 200,000 directories for new files. On creation, the script watching will process the file and then it will be removed. Because it is part of a more compex system with many processes, I want to benchmark this and get system performance statistics on cpu, memory, disk, etc while the tests are run. I'm planning on running the inotify script as a daemon and having a second script generating test files in several of the directories (randomly selected before the test). I'm after suggestions for the best way to benchmark the performance of something like this, especially the impact it has on the Linux server it's running on.
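A minimal sketch of the profiling step suggested in the answer above; process_pending_files is a hypothetical stand-in for the watcher's real "drain the directories" pass:

```python
import cProfile
import pstats

def process_pending_files():
    # Placeholder for the watcher's real drain pass.
    pass

# Profile one drain pass to see where the time goes.
cProfile.run("process_pending_files()", "drain.prof")

stats = pstats.Stats("drain.prof")
stats.sort_stats("cumulative").print_stats(10)  # ten biggest time sinks
```

For system-level numbers alongside this, sampling top or ps during the run (as the answer says) is usually enough.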
Preventing Embedded Python from Running Shell Commands?
17,153,614
0
0
114
0
c++,python,shell
One option is to remove all the modules that allow running arbitrary shell commands, i.e.: subprocess.py*, os.py*... and include only the modules that the end users are allowed to have immediate access to.
0
1
0
0
2013-06-17T17:38:00.000
2
1.2
true
17,153,483
0
0
0
2
I want to embed Python 3.x in our C++ application to allow scripting of maintenance and other tasks. It so far does everything we need, including file manipulation. The problem is that to meet some specifications (like PCI), we aren't allowed to arbitrarily run shell commands, such as with subprocess.call or popen. Is there a way to prevent these and similar calls from working in embedded Python?
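A minimal sketch of the module-removal idea from the answer above, run by the embedding host before any user script; note the caveat in the next answer that a determined user can often work around such measures:

```python
import sys

# Poison the shell-capable modules: a sys.modules entry of None makes a
# subsequent "import name" raise ImportError inside the embedded interpreter.
for blocked in ("subprocess", "os", "commands", "popen2"):
    sys.modules[blocked] = None
```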
Preventing Embedded Python from Running Shell Commands?
17,153,992
0
0
114
0
c++,python,shell
Unless your application is really locked down, I don't think you can prevent someone from loading their own Python module from an arbitrary directory (with import), so I don't think you can prevent execution of arbitrary code as long as you have Python embedded.
0
1
0
0
2013-06-17T17:38:00.000
2
0
false
17,153,483
0
0
0
2
I want to embed Python 3.x in our C++ application to allow scripting of maintenance and other tasks. It so far does everything we need, including file manipulation. The problem is that to meet some specifications (like PCI), we aren't allowed to arbitrarily run shell commands, such as with subprocess.call or popen. Is there a way to prevent these and similar calls from working in embedded Python?
Parse what you google search
24,497,812
0
0
205
0
python,google-chrome
A few options you might consider, with their advantages and disadvantages: (1) The URL: advantage: as Chris mentioned, accessing the URL and manually changing it is an option, and it should be easy to write a script for this (I can send you my perl script if you want). Disadvantage: I am not sure if you can do it; I made a perl script for that before, but it didn't work because Google states that you can't use its services outside the Google interface. You might face the same problem. (2) Google's search API: advantage: a popular choice with good documentation; it should be a safe choice. Disadvantage: Google's restrictions. (3) Researching other search engines: advantage: they might not have the same restrictions as Google, and you might find some that let you play around more and have more freedom in general. Disadvantage: you're not going to get results that are as good as Google's. (A sketch of the URL-parsing piece follows this record.)
0
1
1
0
2013-06-17T21:03:00.000
3
0
false
17,156,844
0
0
0
1
I'd like to write a script (preferably in Python, but other languages are not a problem) that can parse what you type into a Google search. Suppose I search 'cats'; then I'd like to be able to parse the string cats and, for example, append it to a .txt file on my computer. So if my searches were 'cats', 'dogs', 'cows' then I could have a .txt file like so: cats dogs cows. Anyone know any APIs that can parse the search bar and return the string inputted? Or some object that I can cast into a string? EDIT: I don't want to make a chrome extension or anything, but preferably a python (or bash or ruby) script I can run in terminal that can do this. Thanks
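To illustrate the URL option from the answer above: the search terms live in the q parameter of a Google results URL, so a script that is handed such URLs can recover and log them with the standard library alone (Python 2, matching the question's era; the log file name is a placeholder):

```python
import urlparse

def log_search_terms(url, logfile="searches.txt"):
    # https://www.google.com/search?q=cats  ->  "cats"
    query = urlparse.parse_qs(urlparse.urlparse(url).query)
    terms = query.get("q", [""])[0]
    if terms:
        with open(logfile, "a") as f:
            f.write(terms + "\n")
    return terms

print(log_search_terms("https://www.google.com/search?q=cats"))  # cats
```

Getting the URL out of the browser in the first place is the part to which the answer's caveats apply.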
Is there a onSessionClose in WampServerProtocol?
17,166,189
1
1
156
0
python,autobahn
There is no WAMP-specific session close (since WAMP does not have a closing handshake separate from WebSocket). You can use the onClose hook. Another point you might have a look at: the recommended way of accessing databases from Twisted applications is via twisted.enterprise.adbapi, which automatically manages a database connection pool on a background thread pool - independent of frontend protocol instances (like WAMP protocol instances). (A sketch of both points follows this record.) Disclaimer: I am the original author of Autobahn and work for Tavendo.
0
1
0
0
2013-06-18T04:54:00.000
1
1.2
true
17,160,797
0
0
1
1
I'm using Autobahn Python to make a WAMP server. I open up a database connection in onSessionOpen of my subclass of WampServerProtocol, and of course need to close it when the connection closed. However, I can't find a session close handler in either the tutorials or the docs.
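A rough sketch of the two suggestions in the answer: overriding onClose and sharing one twisted.enterprise.adbapi pool instead of opening a connection per session. The WAMPv1 import path and the onClose signature are assumptions based on autobahn's API of that era:

```python
from twisted.enterprise import adbapi
from autobahn.wamp import WampServerProtocol  # old WAMPv1 path (assumed)

# One pool per process, managed on a background thread pool.
dbpool = adbapi.ConnectionPool("sqlite3", "app.db", check_same_thread=False)

class MyServerProtocol(WampServerProtocol):
    def onSessionOpen(self):
        # No per-session connection needed; queries go through dbpool.
        pass

    def onClose(self, wasClean, code, reason):
        # WebSocket-level close hook: release per-session resources here.
        WampServerProtocol.onClose(self, wasClean, code, reason)
```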
Realtime data processing with Python
17,173,870
1
2
1,294
0
python,real-time,tornado
It really depends on what you want to do with the Tweets. Simply reading a stream of Tweets has not been an issue that I've seen. In fact that can be done on an AWS Micro Instance. I even run more advanced regression algorithms on the real-time feed. The scalability problem arises if you try to process a set of historical Tweets. Since Tweets are produced so fast, processing historical Tweets can be very slow. That's when you should try to parallelize.
0
1
0
0
2013-06-18T15:53:00.000
1
0.197375
false
17,173,464
0
0
0
1
I am working on a project which is going to consume data from the Twitter Stream API and count certain hashtags. But I have difficulties in understanding what kind of architecture I need in my case. Should I use Tornado, or are there more suitable frameworks for this?
Running a Celery worker in unittest
18,316,377
1
6
774
0
python,unit-testing,integration-testing,celery
I'm not sure it's worthwhile to explicitly test the transportation mechanism (i.e. the sending of the task parameters through Celery) in a unit test. Personally, I would write my test as follows (it can be split up into several unit tests): (1) use the code from project B to generate a task with sample parameters; (2) encode the task parameters using the same method used by Celery (i.e. pickling the parameters or encoding them as JSON); (3) decode the task parameters again, checking that no corruption occurred; (4) call the task function, making sure that it produces the correct result; (5) perform the same encoding/decoding sequence for the results of the task function. Using this method, you will be able to test that the task generation works as intended and that the encoding and decoding of the task parameters and results work as expected. (A sketch follows this record.) If necessary, you can still independently test the functioning of the transportation mechanism using a system test.
0
1
0
1
2013-06-19T02:20:00.000
1
0.197375
false
17,181,923
0
0
1
1
I have the following setup: Django-Celery project A registers task foo Project B: Uses Celery's send_task to call foo Project A and project B have the same configuration: SQS, msgpack for serialization, gzip, etc. Each project lives on a different github repository I've unit-tested calls to "foo" in project A, without using Celery at all, just foo(1,2,3) and assert the result. I know that it works. I've unit-tested that send_task in project B sends the right parameters. What I'm not testing, and need your advise on is the integration between the two projects. I would like to have a unittest that would: Start a worker in the context of project A Send a task using the code of project B Assert that the worker started in the first step gets the task, with the parameters I sent in the second step, and that the foo function returned the expected result. It seems to be possible to hack this by using python's subprocess and parsing the output of the worker, but that's ugly. What's the recommended approach to unit-testing in cases like this? Any code snippet you could share? Thanks!
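A compact sketch of the test sequence from the answer above, using JSON to mimic the broker serialization; foo and its expected result are placeholders for the real task:

```python
import json
import unittest

def foo(a, b, c):
    # Placeholder for project A's task function.
    return a + b + c

class TaskRoundTripTest(unittest.TestCase):
    def test_parameters_survive_serialization(self):
        params = [1, 2, 3]
        decoded = json.loads(json.dumps(params))  # mimic broker encoding
        self.assertEqual(decoded, params)

    def test_task_result_survives_serialization(self):
        result = foo(1, 2, 3)
        self.assertEqual(result, 6)
        self.assertEqual(json.loads(json.dumps(result)), 6)

if __name__ == "__main__":
    unittest.main()
```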
how to remove task from celery with redis broker?
61,655,803
0
15
22,805
0
python,celery,celery-task,celeryd
Try removing the .state file; if you are using a beat worker (celery worker -B), then remove the schedule file as well.
0
1
0
0
2013-06-19T06:25:00.000
5
0
false
17,184,244
0
0
0
2
I have added some incorrect tasks to Celery with a Redis broker, and now I want to remove them, but I can't find any way to do this. Are there commands or an API for this?
how to remove task from celery with redis broker?
61,655,939
3
15
22,805
0
python,celery,celery-task,celeryd
The simplest way is to use celery control revoke [id1 [id2 [... [idN]]]] (do not forget to pass the -A project.application flag too), where id1 to idN are task IDs. However, it is not guaranteed to succeed every time you run it, for valid reasons... Sure, Celery has an API for it. Here is an example of how to do it from a script: res = app.control.revoke(task_id, terminate=True). In the example above, app is an instance of the Celery application. On some rare occasions the control command above will not work, in which case you have to instruct the Celery worker to kill the worker process: res = app.control.revoke(task_id, terminate=True, signal='SIGKILL'). (A runnable sketch follows this record.)
0
1
0
0
2013-06-19T06:25:00.000
5
0.119427
false
17,184,244
0
0
0
2
I have added some incorrect tasks to Celery with a Redis broker, and now I want to remove them, but I can't find any way to do this. Are there commands or an API for this?
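The calls from the second answer, assembled into a runnable sketch; the broker URL and task id are placeholders:

```python
from celery import Celery

app = Celery("proj", broker="redis://localhost:6379/0")  # placeholder broker

task_id = "d9078da5-9915-40a0-bfa1-392c7bde42ed"  # placeholder task id

# Revoke the task; terminate=True also kills it if it is already running.
app.control.revoke(task_id, terminate=True)

# If the worker ignores the terminate request, escalate to SIGKILL.
app.control.revoke(task_id, terminate=True, signal="SIGKILL")
```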
How do you cleanly remove Python when it was installed with 'make altinstall'?
17,201,432
13
13
18,817
0
python,linux,installation
As far as I know, there's no automatic way to do this. But you can go into /usr/local and delete bin/pythonX and lib/pythonX (and maybe lib64/pythonX). But more generally, why bother? The whole point of altinstall is that many versions can live together, so you don't need to delete them. For your tests, what you should do is use virtualenv to create a new, clean environment with whichever Python version you want to use. That lets you keep all your altinstalled Python versions and still have a clean environment for tests. Also do the same (use virtualenv) for development. Then your altinstalled Pythons don't have site packages; they just stay as clean, pristine references.
0
1
0
0
2013-06-19T17:40:00.000
1
1.2
true
17,197,743
1
0
0
1
How do you cleanly remove Python when it was installed with make altinstall? I'm not finding an altuninstall or such in the makefile, nor does this seem to be a common question. In this case I'm working with Python 2.7.x in Ubuntu, but I expect the answer would apply to earlier and later versions/sub-versions. Why? I'm doing build tests of various Python sub-versions and would like to do those tests cleanly, so that there are no "leftovers" from other versions. I could wipe out everything in /usr/local/lib and /usr/local/bin but there may be other things there I'd like not to remove, so having a straightforward way to isolate the Python components for removal would be ideal.
python - Getting Pip to use MinGW32 on Windows?
19,987,999
0
8
5,583
0
python,pip,mingw32
I've struggled with this issue for a while, and found that sometimes pip seems to ignore MSYS_HOME and can't find pydistutils.cfg, in which case the only recourse seems to be manually copying the pydistutils.cfg into your virtualenv right before running pip install. It sure would be great if someone could definitively figure out why it sometimes finds this file and sometimes does not. Environment variables seem excessively finicky in MinGW.
0
1
0
0
2013-06-19T19:18:00.000
2
0
false
17,199,544
1
0
0
1
This has been asked and answered a few times, but as you'll see none of the previous answers work for me -- I feel like something has changed to make all the old answers outdated. Or at the very least, I'm in some kind of edge case: On a Windows 7 box, I've installed MinGW32 and Python 2.7 (32-bit version) (I've also tried this with Python 2.6 and got the same results). The Path environment variable is set correctly. I've edited cygwincompiler.py to remove references to -mno-cygwin. And I've put the correct distutils.cfg file into C:\Python27\Lib\distutils. To be clear: distutils.cfg contains [build] compiler=mingw32 on two lines. I've also (just to be safe) put pydistutils.cfg in my %HOME% directory. And put setup.cfg in my current directory when running pip. They have the same content as distutils.cfg. I know this is all working because pip install cython and pip install pycrypto both compile successfully. However, mysteriously, some packages still give me the unable to find vcvarsall.bat error. Two examples are: pyproj and numpy. It's as if sometimes pip knows to use the MinGW compiler and sometimes it doesn't. Moreover, if I use the MSYS shell that comes with MinGW then magically pip install numpy succeeds. But pip install pyproj still fails with an unable to find vcvarsall.bat. I've tried this out on several machines, all with the exact same results. Anybody have any idea what's going on here? Why would pip know to use mingw32 to compile some C modules and not others? Also, why does pip install numpy work inside the MSYS shell but not inside the cmd shell? BONUS: Many, many older answers suggest installing Visual Studio 2008 as a way of solving a vcvarsall.bat error. But as of this past May, Microsoft is no longer distributing this software. Does anyone know of a place where one can still download VS2008? I ask because it's possible that being able to use vcvarsall.bat instead of MinGW would solve this problem.
will python code written in windows work in linux?
17,201,159
5
4
7,110
0
python,windows
Mostly, yes, as long as you keep to using the tools Python provides you and don't write code that is platform specific. Use os.path.join() to build paths, for example. Open files in binary mode when you deal with binary data, text mode when it's text data. Etc. Python code itself is platform agnostic; the interpreter on Linux can read python code written on Windows just fine and vice versa. The general rule of thumb is to pay attention to the documentation of the modules you are using; any platform specific gotchas should be documented.
0
1
0
0
2013-06-19T20:54:00.000
2
1.2
true
17,201,129
0
0
0
1
I would like to write some Python code in Windows using QtPy. But before I do that, I'd like to know that I can reuse the code I write. I understand that the compiled program won't work due to different platforms, but will there be any issues with regard to the *.py files I write on Windows vs Linux? I've been trying to install QtPy on my Mint installation and I just don't know what the problem is, which is why I want to go this route. I'd also like my code to work on the Raspberry Pi. Could you guys advise me to this end? Thanks!
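A tiny illustration of the portability rules from the answer above (file names are placeholders):

```python
import os

# Build paths with os.path.join instead of hard-coding '\\' or '/'.
config_path = os.path.join("data", "settings.ini")

# Text data in text mode, so Python handles platform line endings...
with open(config_path, "r") as f:
    text = f.read()

# ...and binary data in binary mode, so nothing gets translated.
image_path = os.path.join("data", "logo.png")
with open(image_path, "rb") as f:
    raw = f.read()
```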
Open root owned system files for reading with python
17,207,385
1
0
746
0
python,file-io,permissions
One option would be to do this somewhat sensitive "su" work in a background process that is disconnected from the web. Likely running via cron, this script would take the root-owned log files and possibly change them to a format that the web-side code could deal with easily, like loading them into a database, or merely unzip them and place them in a different location with slightly more relaxed permissions. Then the web-side code could easily access the data without having to jump through the "su" hoops. From my perspective this plan does not violate your contractual rules: the web server config, permissions, etc. remain intact. (A sketch follows this record.)
0
1
0
1
2013-06-20T07:05:00.000
2
1.2
true
17,207,280
0
0
0
1
The business case: the app server (Ubuntu/nginx/postgresql/python) that I use writes gzipped system log files as root to /var/log, and I need to present data from these log files to users' browsers. My approach: I need to do a fair bit of searching and string manipulation server side, so I have a python script that deals with the opening and processing and then returns a nicely formatted JSON result set. The python (cgi) script is then called using ajax from the web page. My problem: the script works perfectly when called from the command line as SU, but (...obviously) the file opening method I'm using (gzip.open(filename)) fails when invoked as user www-data by the webserver. Other useful info: the app server concerned is (contractually rather than physically) a bit of a black box - I have SU access, I can write scripts, I can read anything, but I can't change file permissions, add additional python libs, or mess with config. The subset of users who would use this log extract also have the SU password, so they could be presented with a login dialog that I could pass to the script. Given the restrictions I have, how would you go about it?
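A minimal sketch of the cron-side script the answer describes, running as root so that gzip.open succeeds, and leaving a copy where the web user can read it (both paths are placeholders):

```python
import gzip
import shutil

SRC = "/var/log/app.log.gz"              # root-owned, gzipped
DST = "/var/local/log-extracts/app.log"  # readable by www-data

# Root's crontab runs this; the web-side CGI script then only reads DST
# (or a database this script loads) and never needs elevated permissions.
with gzip.open(SRC, "rb") as src:
    with open(DST, "wb") as dst:
        shutil.copyfileobj(src, dst)
```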
What are the libguestfs basic packages
17,213,744
1
0
304
0
python,linux
It depends on the distro you are using and the version of libguestfs, but let's assume Fedora/RHEL/Debian and you're using libguestfs ≥ 1.18. In that case for local mount functionality you will only need the basic library package, called libguestfs on Fedora-like or libguestfs0 on Debian-like distros. You may also want the fusermount tool which is part of FUSE. If you're using guestfish, then you'll need the tools package. On Fedora you can just depend on /usr/bin/guestfish which does the Right Thing. On Debian it's in a package called guestfish. If you're using libguestfs through bindings (eg. from Python) then you should also depend upon the bindings package, eg. python-libguestfs (Fedora) or python-guestfs (Debian).
0
1
0
0
2013-06-20T08:34:00.000
1
1.2
true
17,208,910
0
0
0
1
I have created an installer for a Linux tool, and this tool depends on libguestfs. The question is: what are the minimum required libguestfs packages I need to install in order for my tool to work?
Python "in memory DB style" data types
17,218,735
1
1
221
0
python,types,raspberry-pi
There are multiple cache layers between a Python program and a database. In particular, the Linux disk block cache may keep your database in core depending on patterns of usage. Therefore, you should not assume that writing to a database and reading back is necessarily slower than some home-brew cache that you'd put in your application. And code that you write to prematurely optimize your DB is going to be infinitely more buggy than code you don't write. For the workload as you've specified it, MySQL strikes me as a little heavyweight relative to SQLite, but you may have unstated reasons to require it.
0
1
0
0
2013-06-20T16:00:00.000
3
1.2
true
17,218,398
0
0
0
2
I am creating a weather station using a Raspberry Pi. I have a mySQL database set up for the different sensors (temp, humidity, pressure, rain, etc) and am now getting to processing the wind sensors. I have a python program that watches the GPIO pins for the anemometer and counts the pulses to calculate the wind speed. It also reads from a wind vane, processed through an ADC, to get the direction. For the other sensors I only process them every few minutes and dump the data directly to the DB. Because I have to calculate a lot of things from the wind sensor data, I don't necessarily want to write to the DB every 5 seconds and then have to read back the past 5 minutes of data to calculate the current speed and direction. I would like to collect the data in memory, do the processing, then write the finalized data to the DB. The sensor reading is something like: datetime, speed, direction 2013-6-20 09:33:45, 4.5, W 2013-6-20 09:33:50, 4.0, SW 2013-6-20 09:33:55, 4.3, W The program is calculating data every 5 seconds from the wind sensors. I would like to write data to the DB every 5 minutes. Because the DB is on an SD card I obviously don't want to write to the DB 60 times, then read it back to process it, then write it to the permanent archival DB every 5 minutes. Would I be better off using a list of lists? Or a dictionary of tuples keyed by datetime? {datetime.datetime(2013, 6, 20, 9, 33, 45, 631816): ('4.5', 'W')} {datetime.datetime(2013, 6, 20, 9, 33, 50, 394820): ('4.0', 'SW')} {datetime.datetime(2013, 6, 20, 9, 33, 55, 387294): ('4.3', 'W')} For the latter, what is the best way to update a dictionary? Should I just dump it to a DB and read it back? That seems like an excessive amount of read/writes a day for so little data.
Python "in memory DB style" data types
17,218,570
0
1
221
0
python,types,raspberry-pi
In my general experience, it is easier to work with a dictionary keyed by datetime; a list of lists can get very confusing, very quickly. I'm not certain, however, how best to update a dictionary. It could be that my Python is rusty, but it would seem to me that dumping to a DB and reading back is a bit redundant, though it may just be that your statement was a smidgen unclear. Is there any way you can dump to a variable inside of your program? If not, dumping to the DB and reading back may be your only option... but again, my Python is a bit rusty. That said, while I don't want to be a Programmaticus Takeitovericus, I was wondering if you've ever looked into XML for data storage? I ended up swapping to it because I found it was easier to work with than a database, and it involved far less reading and writing. I don't know your project specs, so this suggestion may be pointless to you altogether. (A sketch of the dictionary approach follows this record.)
0
1
0
0
2013-06-20T16:00:00.000
3
0
false
17,218,398
0
0
0
2
I am creating a weather station using a Raspberry Pi. I have a mySQL database set up for the different sensors (temp, humidity, pressure, rain, etc) and am now getting to processing the wind sensors. I have a python program that watches the GPIO pins for the anemometer and counts the pulses to calculate the wind speed. It also reads from a wind vane, processed through an ADC, to get the direction. For the other sensors I only process them every few minutes and dump the data directly to the DB. Because I have to calculate a lot of things from the wind sensor data, I don't necessarily want to write to the DB every 5 seconds and then have to read back the past 5 minutes of data to calculate the current speed and direction. I would like to collect the data in memory, do the processing, then write the finalized data to the DB. The sensor reading is something like: datetime, speed, direction 2013-6-20 09:33:45, 4.5, W 2013-6-20 09:33:50, 4.0, SW 2013-6-20 09:33:55, 4.3, W The program is calculating data every 5 seconds from the wind sensors. I would like to write data to the DB every 5 minutes. Because the DB is on an SD card I obviously don't want to write to the DB 60 times, then read it back to process it, then write it to the permanent archival DB every 5 minutes. Would I be better off using a list of lists? Or a dictionary of tuples keyed by datetime? {datetime.datetime(2013, 6, 20, 9, 33, 45, 631816): ('4.5', 'W')} {datetime.datetime(2013, 6, 20, 9, 33, 50, 394820): ('4.0', 'SW')} {datetime.datetime(2013, 6, 20, 9, 33, 55, 387294): ('4.3', 'W')} For the latter, what is the best way to update a dictionary? Should I just dump it to a DB and read it back? That seems like an excessive amount of read/writes a day for so little data.
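A small sketch of the in-memory buffering both answers circle around: readings keyed by datetime and flushed to the database as one batch every five minutes. db_write is a placeholder for the real INSERT logic:

```python
import datetime

readings = {}  # {datetime: (speed, direction)}

def record(speed, direction):
    readings[datetime.datetime.now()] = (speed, direction)

def flush():
    # One DB write per window instead of 60 writes plus reads on the SD card.
    batch = sorted(readings.items())
    # db_write(batch)  # placeholder for the actual INSERT statements
    readings.clear()
    return batch

record(4.5, "W")
record(4.0, "SW")
print(flush())  # [(timestamp, (4.5, 'W')), (timestamp, (4.0, 'SW'))]
```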
How to make my Python module available system wide on Linux?
63,105,985
0
14
38,144
0
python,linux,module,installation
You could also have a folder that contains all your global modules, e.g. my_modules/, then locate the site-packages/ folder of the Python version you are using (you will find it easily with the command python -m site). Now cd to this folder and create a file with a .pth extension. In this file, add the absolute path to the folder that contains all the modules you want to make available system-wide, and save it. Your module should be available now. (A sketch follows this record.)
0
1
0
0
2013-06-21T13:37:00.000
6
0
false
17,236,675
0
0
0
1
I made myself a little module which I happen to use quite a lot. Whenever I need it I simply copy it to the folder in which I want to use it. Since I am lazy, I wanted to install it so that I can call it from anywhere, even the interactive prompt. So I read a bit about installing here, and concluded I needed to copy the file over to /usr/local/lib/python2.7/site-packages. That, however, doesn't seem to do anything. Does anybody know where I need to copy my module for it to work system-wide?
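A sketch of the .pth mechanism from the answer above; the module folder is a placeholder, and this assumes a system interpreter rather than a virtualenv:

```python
import os
import site

# Locate site-packages (the same information `python -m site` prints).
site_dir = site.getsitepackages()[0]

# A .pth file holds one absolute path per line; Python appends each
# listed folder to sys.path at interpreter startup.
pth_path = os.path.join(site_dir, "my_modules.pth")
with open(pth_path, "w") as f:
    f.write("/home/me/my_modules\n")  # placeholder module folder
```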
Python programming with CodeRunner
17,243,806
0
1
1,192
0
python,wxpython,coderunner
CodeRunner should work fine with Python 2.7, but you'll have to make sure that a compatible version of wxPython is installed in your site-packages. To verify that wxPython works correctly, you should start up a python instance in your terminal and run some test code (i.e., import the library and run some of the functions). If this works correctly, then CodeRunner should be able to run that code accordingly (because it uses the same process to run your apps).
0
1
0
0
2013-06-21T20:24:00.000
4
0
false
17,243,749
1
0
0
1
I'm using a program called CodeRunner to write Python code. I want to install wxPython, but how do I figure out what version of Python I'm using? When I do python -v in Terminal it says I'm on 2.7, but is that the same version that CodeRunner is using? Is there a way for me to find out?
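The verification the answer describes, as a short script to run both in Terminal and inside CodeRunner; if the two sys.executable values differ, wxPython may be installed for one interpreter but not the other:

```python
import sys

print(sys.version)     # interpreter version
print(sys.executable)  # which binary is actually running this

# Succeeds only if wxPython is installed for this interpreter;
# otherwise it raises ImportError.
import wx
print(wx.version())
```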
How can I display out-of-range ascii characters?
17,254,020
0
1
527
0
python,ascii,less-unix
Is the remote machine perhaps set to Unicode (modern Linux distros are)? If so, make sure you are running PuTTY with the Unicode setting too.
0
1
0
0
2013-06-22T18:19:00.000
2
0
false
17,253,919
1
0
0
1
I'm connected to a Linux machine using PuTTY. On the Linux machine, I'm running a python script that takes a list of characters and prints each character, together with its index, in order. Some of the characters in my list fall outside the range of printable ascii characters. These irregular characters are corrupting my output. Sometimes they simply don't appear, while other times they actually delete large chunks of valid text. I thought I could correct this by turning off buffering, but the problem still occurs when I run the script using the python -u flag. Interestingly, this problem does not occur when I pipe my input to the less reader. In less, irregular characters show up like this: <A9>, <A7>, ^V, ^@, etc. No chunks of text are missing. I'm not sure where my problem lies. Is there a way to configure my terminal so that unpiped output will still show irregular characters?
How to seamlessly maintain code of django celery in a multi node environment
17,270,294
2
1
582
0
python,django,celery,django-celery,celery-task
For this type of situation I have in the past made an egg of all of my Celery task code that I can simply rsync or copy in some fashion to my worker nodes. This way you can edit your Celery code in a single project that can be used both in your Django app and on your worker nodes. So, in summary: create a web-app-celery-tasks project, make it into an installable egg, and have a web-app package that depends on the celery-tasks egg. (A sketch follows this record.)
0
1
0
0
2013-06-24T05:52:00.000
1
1.2
true
17,268,766
0
0
1
1
I have a Django application which uses django-celery, celery and rabbitmq for offline, distributed processing. Now the setup is such that I need to run the celery tasks (and in turn celery workers) in other nodes in the network (different from where the Django web app is hosted). To do that, as I understand I will need to place all my Django code in these separate servers. Not only that, I will have to install all the other python libraries which the Django apps require. This way I will have to transfer all the django source code to all possible servers in the network, install dependencies and run some kind of an update system which will sync all the sources across nodes. Is this the right way of doing things? Is there a simpler way of making the celery workers run outside the web application server where the Django code is hosted ? If indeed there is no way other than to copy code and replicate in all servers, is there a way to copy only the source files which the celery task needs (which will include all models and views - not so small a task either)
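A minimal sketch of the packaging idea above: a setup.py for the suggested web-app-celery-tasks project, which both the Django app and the worker nodes install (version and dependency list are assumptions):

```python
from setuptools import setup, find_packages

setup(
    name="web-app-celery-tasks",   # the shared task code, nothing else
    version="0.1.0",
    packages=find_packages(),
    install_requires=[
        "celery",                  # plus whatever the tasks themselves import
    ],
)
```

Built once (python setup.py bdist_egg or sdist), the artifact can be rsynced or pip-installed on each worker node, and the web app simply declares it as a dependency.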
Confused about DBus
17,271,574
1
1
460
0
python,dbus
"So does this mean that my website needs to run a DBUS service to allow me to call methods from it into my program?" A dbus background process (a daemon) would run on your web server, yes. In fact dbus provides two daemons: one is a system daemon, which permits objects to receive system information (printer availability, for example), and the second is a general application-to-application IPC daemon. It is the second daemon that you would use for different applications to communicate. "I am coding in Python, so I am not sure if I can run a Python script on my website that would allow me to run a DBUS service." There is no problem using Python; dbus has bindings for many languages (e.g. Java, Perl, Ruby, C++, Python), and dbus objects can be mapped to Python objects. "The most logical solution would be to run a single DBUS service that somehow imports methods from different programs and can be queried by others who want to run those methods. Is that possible?" Correct: dbus provides a mechanism by which a client process creates one or more dbus objects, which allow that process to offer services to other dbus-aware processes. (A client-side sketch follows this record.)
0
1
0
0
2013-06-24T08:22:00.000
2
0.099668
false
17,270,936
0
0
1
1
Ok, so, I might be missing the plot a bit here, but would really like some help. I am quite new to development etc. and have now come to a point where I need to implement DBus (or some other inter-program communication). I am finding the concept a bit hard to understand though. My implementation will be to use an HTML website to change certain variables to be used in another program, thus allowing for the program to be dynamically changed in its working. I am doing this on a raspberry PI using Raspbian. I am running a webserver to host my website, and this is where the confusion comes in. As far as I understand, DBus runs a service which allows you to call methods from a program in another program. So does this mean that my website needs to run a DBUS service to allow me to call methods from it into my program? To complicate things a bit more, I am coding in Python, so I am not sure if I can run a Python script on my website that would allow me to run a DBUS service. Would it be better to use JavaScript? For me, the most logical solution would be to run a single DBUS service that somehow imports method from different programs and can be queried by others who want to run those methods. Is that possible? Help would be appreciated! Thank you in advance!
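A tiny client-side sketch with the python-dbus bindings mentioned in the answer; the bus name, object path, interface, and method are all hypothetical:

```python
import dbus

# Connect to the per-user (session) daemon described in the answer.
bus = dbus.SessionBus()

# Look up a remote object by well-known bus name and object path.
remote = bus.get_object("org.example.Variables", "/org/example/Variables")
iface = dbus.Interface(remote, dbus_interface="org.example.Variables")

# Call a method the other program exports, e.g. to change a variable.
iface.SetVariable("threshold", 42)
```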
Is there an easy way to tell the difference in network settings between two systems running Fedora 12?
17,279,971
0
0
73
0
python,linux,http,proxy,urllib2
Permanent network settings are stored in various files in /etc/networking and /etc/network-scripts. You could use diff to compare what's in those files between the system. However, that's just the network stuff (static v.s. dynamic, routes, gateways, iptables firewalls, blah blah blah). If there's no differences there, you'll have to start expanding the scope of your search.
0
1
1
0
2013-06-24T16:02:00.000
1
0
false
17,279,906
0
0
0
1
I'm having a pretty unique problem. I'm using the python module urllib2 in order to get http responses from a local terminal. At first, urllib2 would only work with non-local addresses (i.e. google.com, etc.) and not local webservers. I eventually deduced that urllib2 was not respecting the no_proxy environment variable. If I manually erased the other proxy env variables in the code (i.e. set http_proxy to ''), then it seemed to fix it for my CentOS 6 box. However, I have a second machine running Fedora 12 that needs to run the same python script, and I cannot for the life of me get urllib2 to connect to the local terminal. If I set http_proxy to '' then I can't access anything at all - not google, not the local terminal. However, I have a third machine running Fedora 12 and the fix that I found for CentOS 6 works with that one. This leads me to my question. Is there an easy way to tell the difference between Fedora 12 Box#1 (which doesn't work) and Fedora 12 Box#2 which does? Maybe there's a list of linux config files that could conceivably affect the functionality of urllib2? I know /etc/environment can affect it with proxy-related environment variables and I know the routing tables could affect it. What else am I missing? Note: - Pinging the terminal with both boxes works. Urllib2 can only fetch http responses from the CentOS box and Fedora 12 Box#2, currently. Info: I've tested this with Python 2.6.2 Python 2.6.6 Python 2.7.5 on all three boxes. Same results each time.
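As an aside on the workaround the question describes (blanking proxy variables in code): urllib2 can instead be given an empty ProxyHandler, which makes one opener ignore the proxy environment entirely without mutating it; the URL is a placeholder:

```python
import urllib2

# An opener built with an explicit, empty ProxyHandler never consults
# http_proxy/no_proxy, so local requests go direct.
opener = urllib2.build_opener(urllib2.ProxyHandler({}))
response = opener.open("http://192.168.0.50/status")  # placeholder URL
print(response.read())
```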
building python packages on one linux distro and running them from another
17,291,000
0
1
36
0
python,linux,scientific-computing
There should not be any problem if you change the way you package it. Ubuntu and CentOS need different types of packaging. Build your package the same way normal CentOS packages are built, and then use it on CentOS.
0
1
0
0
2013-06-25T06:52:00.000
2
0
false
17,290,936
1
0
0
2
Is it problematic to build python packages (numpy, scipy, matplotlib, h5py,...) on one linux distro (ubuntu) and run them from another distro (centos)? I am asking this because our computing cluster has centos machines while my pc is ubuntu.
building python packages on one linux distro and running them from another
17,291,237
0
1
36
0
python,linux,scientific-computing
Use distutils to package as eggs and specify the dependencies, and you should not have too many problems - the .pyo files that are zipped into the eggs work fine across platforms. You might like to take a look at pypiserver to set up a local PyPI that you can pip from.
0
1
0
0
2013-06-25T06:52:00.000
2
0
false
17,290,936
1
0
0
2
Is it problematic to build python packages (numpy, scipy, matplotlib, h5py,...) on one linux distro (ubuntu) and run them from another distro (centos)? I am asking this because our computing cluster has centos machines while my pc is ubuntu.
Calling command exe with python
17,301,654
1
1
218
0
python,pipe
Take a look at subprocess's communicate() and pipe examples. (A sketch follows this record.)
0
1
0
0
2013-06-25T15:36:00.000
2
0.099668
false
17,301,571
0
0
0
1
I am trying to run a command-line exe from Python while passing in parameters. I have looked at a few other questions, and the reason why mine is different is that I first want to call a cmd exe program while passing in some parameters, then I have to wait about 10 seconds for the exe to prompt me for a username and then a password, and then I want to pipe the output out to a file. So is there a way to pass more arguments to a process that was already started? And how do I make a cmd exe stay open? As soon as I call it, the process dies. Thanks
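A sketch of what the answer points to: subprocess with pipes, answering the program's username/password prompts through stdin and writing the captured output to a file. The executable name, flag, and prompt order are assumptions from the question:

```python
import subprocess

proc = subprocess.Popen(
    ["tool.exe", "--some-parameter"],   # placeholder command
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)

# communicate() writes stdin, waits for the process to finish, and returns
# (stdout, stderr) - the process stays open until it has read both answers.
out, err = proc.communicate("myuser\nmypassword\n")

with open("output.txt", "w") as f:
    f.write(out)
```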
Using only relative path instead of full path
17,302,657
1
0
98
0
python,windows,relative-path
Just try '.\\file_name' as your path. Two issues: . means the current directory (.. is up one), and you need to escape the \ as \\ if using Windows file separators. (A sketch follows this record.)
0
1
0
0
2013-06-25T16:15:00.000
1
0.197375
false
17,302,404
1
0
0
1
I'm a very new Python user, on Python 2.6.2, and my question is simple. I want to have only the relative path "\file_name" in an input file instead of the full path like "c:\folder_a\folder_b\file_name", but when I use the relative path in my input files I get the error "Windows Error [Error 2]: The system cannot find the file specified..."; otherwise my code works fine. What do I need to do or change so the system can use the relative path? Since I'm running the script from the same folder, such as "c:\folder_a\folder_b>python script_name" in the command terminal, it seems the relative path alone should work.
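A quick illustration of the two points in the answer above:

```python
import os

# '.' is the current directory ('..' is one level up), and a backslash
# must be doubled inside a normal string literal.
rel = '.\\file_name'

# The portable spelling sidesteps the escaping question entirely.
rel = os.path.join('.', 'file_name')

print(os.path.abspath(rel))  # what the relative path resolves to
```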
celery read files from different computers
17,370,082
1
1
162
0
python,celery
This question lacks details, but I guess you need a distributed file system with which both of your computers can work. There are plenty of solutions: GridFS in MongoDB, HDFS in Hadoop. You can also try a simpler solution like SSHFS, in which case one of your servers mounts the other server's file system. rsync can clone a remote directory if you are not worried about consistency.
0
1
0
0
2013-06-25T23:33:00.000
1
0.197375
false
17,309,279
0
0
0
1
I have a program that outputs files. I use Celery to produce the files on two parallel computers, so the output files are distributed across the two machines. How can I write a program that reads the files from both of these computers?
running python jobs in sequence
45,596,425
0
1
598
0
python,job-scheduling
If you want a cron equivalent on Windows, the Task Scheduler is there. Write a batch file with the commands to run the Python programs one by one, and register it with the Task Scheduler. It works. (A pure-Python alternative is sketched after this record.)
0
1
0
0
2013-06-27T14:53:00.000
3
0
false
17,346,488
1
0
0
1
I have to run a list of Python jobs one by one, each after the successful completion of the previous one. How can I accomplish this in a development environment? (I know I can use a scheduler in a production environment.) For example: module1.py module2.py module3.py module4.py module5.py I need to run module1.py, then after its successful completion trigger module2, then module3... I have heard of the cron scheduler; can I install it in a Windows environment and set it up? Also, I'm on a Windows environment and use PyDev to develop my applications.
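For the development environment, a plain Python driver is enough: run each module in order with subprocess and stop at the first failure (module names taken from the question):

```python
import subprocess
import sys

MODULES = ["module1.py", "module2.py", "module3.py",
           "module4.py", "module5.py"]

for script in MODULES:
    code = subprocess.call([sys.executable, script])
    if code != 0:
        sys.exit("%s failed with exit code %d; stopping." % (script, code))
    print("%s completed successfully." % script)
```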
How to authenticate a user in a RESTful api and get their user id? (Tornado)
17,350,408
0
1
513
0
python,rest,authentication,tornado,userid
I am assuming that your authentication function talks to a database and that each page in you app hits the database one or more times. With that in mind, you should probably just authenticate each request. Many cloud/web applications have multiple database queries per page and run just fine. So when performance does get to be problem in your app (it probably won't for a long time), you'll likely already have an average of n queries per page where n is greater than 1. You can either work on bringing down that average or work on making those queries faster.
0
1
0
0
2013-06-27T16:17:00.000
1
0
false
17,348,253
0
0
1
1
I would like to maintain statelessness, but I also don't want to call my login function on each authenticated request. Would using Tornado's secure cookie functionality be feasible for storing the user id in each request for a mobile app? I'm trying to keep performance in mind, so although basic HTTP authentication would work, I don't want to call a login function on each request just to get the user's id.
Long-running I/O-bound processes in AppEngine: tasks or threads?
17,354,787
0
2
193
0
python,google-app-engine,asynchronous
It depends on how long the "interaction" takes. App Engine has a limit of 60 seconds per HTTP request. If your external systems send data periodically, then I would advise grabbing the data in small chunks to respect the 60-second limit. Aggregate those into blobs and then process the data periodically using tasks.
0
1
0
0
2013-06-27T18:31:00.000
1
1.2
true
17,350,684
0
0
1
1
My Python AppEngine app interacts with slow external systems (think receiving data from narrow-band connections). Half-hour-long interactions are a norm. I need to run 10-15 of such interactions in parallel. My options are background tasks and "background threads" (not plain Python threads). Theoretically they look about the same. I'd stick with tasks since background threads don't run on the local development server. Are there any significant advantages of one approach over the other?
Failing to define the Python interpreter for PyDev in Eclipse
20,921,210
1
0
288
0
python,eclipse,pydev
I had this problem too; it turns out I had used "&" in the path of my Eclipse folder. I renamed the folder using just normal characters, and PyDev installed fine. I believe the path to the Eclipse folder has to consist of plain characters, without anything like "&".
0
1
0
0
2013-06-27T18:58:00.000
1
0.197375
false
17,351,154
1
0
0
1
I installed Python 32bit on W7. I then "installed" Eclipse 32bit. I successfully added PyDev to Eclipse. I then go to PyDev->Interpreter-Python, and click on "new", browse to C:\Python27\python.exe, click ok, and get the following error: Error getting info on interpreter. Common reasons include -Using an unsupported version -Specifying and invalid interpreter Reasons: See error log for details. Log: org.xml.sax.SAXParseException; lineNumber: 4; columnNumber: 23; The reference to entity "g" must end with the ';' delimiter. Any ideas how to fix this? Thanks!
Django Dynamic Scraper Project does not run on windows even though it works on Linux
17,374,282
0
0
194
0
python,django,web-scraping,scraper,scraperwiki
Step 1: download the django-dynamic-scraper-0.3.0-py2.7.tar.gz file. Step 2: unzip it and change the name of the folder to django-dynamic-scraper-0.3.0-py2.7.egg. Step 3: paste the folder into C:\Python27\Lib\site-packages.
0
1
0
0
2013-06-28T11:53:00.000
1
0
false
17,364,120
0
0
1
1
I am trying to make a project with django-dynamic-scraper. I have tested it on Linux and it runs properly. When I try to run the command syncdb on Windows, I get this error: python : WindowsError: [Error 3] The system cannot find the path specified: 'C:\Python27\lib\site-packages\django_dynamic_scraper-0.3.0-py2.7.egg\dynamic_scraper\migrations/.' At line:1 char:1 + python manage.py syncdb + ~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : NotSpecified: (WindowsError: [...migrations/.':String) [], RemoteException + FullyQualifiedErrorId : NativeCommandError The admin server runs properly with the command python manage.py runserver. Kindly guide me on how I can remove this error.
Celery Logs into file
46,843,345
10
28
63,725
0
python,celery
If you want to log everything, pass a log file to the worker with -f celery.logs. You can also specify different log levels; for example, if you want to log only warnings and errors, add --loglevel=warning -f celery.logs.
0
1
0
0
2013-06-28T14:05:00.000
2
1
false
17,366,579
0
0
0
1
Can someone please tell me how to get Celery task debug details into a log file? I have a requirement to log the details of each Celery task into a .log file. Can you please make some suggestions on how this can be done without impacting the performance of the task?
Enthought Canopy 1.1 giving error icui18n: cannot open shared object file: No such file or directory
20,845,995
2
4
6,614
0
python,enthought,icu,canopy
Sorry for the late response (we are monitoring the enthought tag; please use it to get our attention more quickly). These Unable to load library icui18n warning messages are spurious and shouldn't affect the usability of Canopy. We have turned these warnings off in version 1.2, which is coming out during the first week of January. Please post again if you see issues.
0
1
0
0
2013-06-28T21:54:00.000
2
0.197375
false
17,374,262
1
0
0
1
I have tried to get enthought canopy and follow the procedure. However, when I tried to run ./canopy, it gave this error: Unable to load library icui18n "Cannot load library icui18n: (icui18n: cannot open shared object file: No such file or directory)". I cannot sudo because I am using the university's supercomputing account, no permission to do so. Any advice?
is there a way to synchronize write to a file among different processes (not threads)
17,376,105
0
4
1,617
0
python-2.7,locking,shared-file
Just a thought... couldn't you put a 'lock' file in the same directory as the file you're trying to write to? Your distributed processes check for this lock file; if it exists, they sleep for a bit and try again, and when the process that currently has the file open finishes, it deletes the lock file. So, in the simple case of two processes A and B: process A checks for the lock file, and if it doesn't exist, it creates the lock file and does what it needs to with the shared file; after it's done, it deletes the lock file. If process A finds the lock file already present, that means process B has the file, so sleep and try again later... rinse, repeat. (A sketch follows this record.)
0
1
0
0
2013-06-29T02:14:00.000
2
0
false
17,376,033
1
0
0
1
My project requires being run on several different physical machines which share a file system. One problem arising out of this is how to synchronize writes to a common single file. With threads, that can easily be achieved with locks; however, my program consists of processes distributed on different machines, and I have no idea how to synchronize those. In theory, any way to check whether a file is currently open, or any lock-like solution, will do, but I just cannot crack this by myself. A Python way would be particularly appreciated.
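A minimal Python sketch of the lock-file protocol from the answer, using os.O_CREAT | os.O_EXCL so that creating the lock is atomic (dependable on local filesystems and NFSv3+; both paths are placeholders):

```python
import errno
import os
import time

LOCK_PATH = "/shared/data.txt.lock"   # lives next to the shared file

def acquire_lock(poll_interval=0.5):
    while True:
        try:
            # O_CREAT | O_EXCL fails if the file already exists, so only
            # one process can win the race to create the lock.
            fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise
            time.sleep(poll_interval)  # someone else holds the lock

def release_lock():
    os.remove(LOCK_PATH)

acquire_lock()
try:
    with open("/shared/data.txt", "a") as f:  # placeholder shared file
        f.write("one synchronized line\n")
finally:
    release_lock()
```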
python run in powershell with script give 'non-utf' error
17,384,352
2
0
1,256
0
python,powershell
It seems your file starts with a Unicode BOM (byte order mark). Try saving your file as UTF-8 without a BOM.
0
1
0
0
2013-06-29T20:13:00.000
1
0.379949
false
17,384,280
1
0
0
1
I am a Python beginner, trying to run Python from PowerShell on Vista. Calling a simple script with python vc.py gives the error: "File "vc.py", line 1 SyntaxError: Non-UTF-8 code starting with '\xff' ..." where vc.py is: import sys print sys.version It does work when I invoke instead: cat vc.py | python. The problem with this latter approach is that it is giving us problems with the raw_input function.
Running Python script from cmd line but start with import in code
17,406,315
6
0
1,408
0
python,command-line,import
Two solutions here: you can run the script through the interpreter explicitly, like this: python my_program.py, or add this at the top of the file: #!/usr/bin/env python, which tells the shell to hand the script to Python instead of interpreting it itself.
0
1
0
0
2013-07-01T13:57:00.000
1
1.2
true
17,406,252
0
0
0
1
I am running a Python script from the Linux command line, and the script itself, on its first line, imports several modules. I got an error message and searched online. Here is a reply from the author of the Python script: it appears that you are running dexseq_count.py as if it were a shell script, rather than from Python. As a consequence, the first line of the script is interpreted as the Linux command 'import' rather than as Python code, leading to the error you report. I am curious why the first line of import has been misinterpreted by Linux, and how I can solve this problem; I have to run the script from the command line rather than from inside Python. Thanks so much!
Good location to store .py files on Mac
17,407,429
1
0
498
0
python,macos,python-2.7
The standard directory which is already searched by python depends on the version of python. For the Apple installed python 2.7 it is /Library/Python/2.7/site-packages the README in that directory says This directory exists so that 3rd party packages can be installed here. Read the source for site.py for more details.
0
1
0
1
2013-07-01T14:46:00.000
1
1.2
true
17,407,276
1
0
0
1
I've written some python modules that I'd like to be able to import anytime on Mac OS X. I've done some googling and I've gotten some mixed responses so I'd like to know what the "best" practice is for storing those files safely. I'm running Python2.7 and I want to make sure I don't mess with the Mac install of Python or anything like that. Thanks for the help
Where does console output go when Eclipse PyUnit Test Runner configured to use Nose
19,227,424
2
2
969
0
python,eclipse,nose,python-unittest
I eventually found in the Preferences > PyDev > PyUnit menu that adding -s to the Parameters for test running stopped this. The parameter prevents the capture of stdout that nose does by default. The alternate --nocapture parameter should work too.
0
1
0
1
2013-07-01T16:18:00.000
1
1.2
true
17,409,127
0
0
0
1
I'm using Eclipse / PyDev and PyUnit on OSX for development. It was recommended to me that I use Nose to execute our suite of tests. When I configure Nose as the test runner, however, output from the interactive console (either standalone or during debugging) disappears. I can type commands but do not see any output. Is this normal, or am I missing some configuration?
starting a python 3.3. script at ubuntu startup
17,413,212
0
0
1,070
0
python,ubuntu
The shebang line #!/usr/bin/python3 should work if sh, bash, etc. is trying to launch your script. If it is being run from another script as python myscript.py, you'll have to find that script and get it to launch yours using python3 myscript.py.
0
1
0
0
2013-07-01T20:21:00.000
1
1.2
true
17,412,982
0
0
0
1
The standard Python version on Ubuntu 13.04 is Python 2.7. I know that I can run a version 3.3 script by calling python3.3 or python3 in the terminal instead of just python, which starts version 2.7, e.g. python3 myscript.py. But now I have a version 3.3 script in the system start routine and can only give the path to the file. The system recognizes it as a Python script (via the shebang #!/usr/bin/python3). But how do I get it opened with the correct version? It is opened with the standard Python install, so it won't work or even show up.
How can I get Python to watch a USB port for any device?
17,516,390
0
0
1,021
0
python,windows,usb,pyusb
What about polling? Create a Python app that enumerates a list of attached USB devices every couple of seconds or so. Keep a list/dictionary of your initially detected devices, and compare to that to determine what was attached/detached since your last polling iteration. This isn't the best approach, and enumerating all the devices takes a short while, so not too sure this would be the most CPU efficient method.
0
1
0
0
2013-07-01T22:44:00.000
1
0
false
17,414,855
0
0
0
1
On a windows OS, how can I get python to detect if anything is plugged in to a specific USB location on the computer. For example "Port_#0002.Hub_#0003" I've tried pyUSB which worked fine for detecting a specific device, but I couldn't seem to figure out how to just check a specific port/hub location for any kind of device.
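A polling sketch along the lines of the answer, built on the PyUSB 1.0 API the question already uses; it reports attach/detach events globally rather than for one specific hub/port location:

```python
import time
import usb.core  # PyUSB

def attached_devices():
    # Snapshot of (vendor id, product id) pairs for everything attached.
    return set((d.idVendor, d.idProduct) for d in usb.core.find(find_all=True))

seen = attached_devices()
while True:
    time.sleep(2)  # polling interval; enumeration itself takes a moment
    now = attached_devices()
    for vendor, product in now - seen:
        print("attached: vendor=0x%04x product=0x%04x" % (vendor, product))
    for vendor, product in seen - now:
        print("detached: vendor=0x%04x product=0x%04x" % (vendor, product))
    seen = now
```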
Testing concurrent access in GAE
17,441,520
1
0
221
0
google-app-engine,python-2.7,app-engine-ndb
Please vote if it solves your problem :) GAE works like this: you can have multiple instances of the program with separate code spaces - one instance has no access to another instance. You can have multiple threads in a program instance if you mark your code as thread-safe - each thread has access to the same code/memory (the counter in your case), so you need locking to avoid conflicts. Memcache is synchronized - an updated value is available to all programs and their threads, with no concurrent races - so you can read the most recent cache value and track whether it changed during your update. How to simulate concurrent access to a piece of code? You should not simulate it; you should use explicit locking at the thread or program level, since concurrent races are very hard to simulate - which program or thread wins a race is undefined in every environment, be it Linux, Windows, or Python.
0
1
0
0
2013-07-03T05:27:00.000
1
0.197375
false
17,440,323
0
0
1
1
Is it possible to simulate concurrent access to a piece of code in Google App Engine? I am trying to unit test a piece of code that increments a counter. It is possible that the code will be used by different instances of the app concurrently and although I have made the datastore access sections transactional and also used memcache cas I would feel better if there was some way to test it. I have tried setting up background threads but Testbed seems to be creating a new environment for each thread.
When I use Mylyn with PyDev I see the context in the task but python files are not shown in any explorer
17,459,126
2
2
1,155
0
python,eclipse,pydev,mylyn
The problem seems to be the "PyDev Navigator Content". To solve this, make sure the Mylyn focus is disabled for the explorer view: in the View Menu -> Customize View, on the "Content" tab, disable "PyDev Navigator Content". Now the Python files are shown correctly when the Mylyn context is enabled. Note: this works in both the "PyDev Package Explorer" and the "Project Explorer". You may also add *.pyc to Preferences -> Mylyn -> Context -> Resources - this will prevent the .pyc files from appearing in your context when they are compiled automatically.
0
1
0
0
2013-07-03T22:22:00.000
1
0.379949
false
17,459,068
1
0
0
1
I use Mylyn 3.9.0 with PyDev 2.7.5 and the PyDev Mylyn integration 0.4.0. Mylyn seems to build up contexts correctly (I can see the context tree in the task/context tab), but the Python files are shown neither in the "PyDev Package Explorer" nor in the "Project Explorer". What could prevent the Python files from appearing? Uninstalling the PyDev Mylyn integration did not help.
How to revert back to original Ubuntu Python installation?
17,461,770
1
0
751
0
python,ubuntu
Just never build anything like this using ./configure && make && make install on systems with a package manager; it may (and will) cause unforeseen consequences. To uninstall what you have installed without the package manager, cd to the folder with the makefile and run make uninstall. If you want to install another version of Python alongside your existing version, find an appropriate package and use update-alternatives to choose the default one.
0
1
0
0
2013-07-04T04:07:00.000
1
0.197375
false
17,461,720
1
0
0
1
I downloaded and built a new installation of Python on my Ubuntu 12.04 system without realizing that Ubuntu already comes with it installed. Whatever I did messed things up as one of the modules I need is no longer working. Is there a way to revert back to the original install? Thx!
How to programatically validate WPA passphrase on Linux?
17,960,144
0
1
1,144
0
linux,python-2.7,wifi,wpa
Well, a not-so-straightforward (yet the only possible) way to go about fulfilling your needs would be initiating a four-way handshake with the AP. Since you're coding in Python, Scapy would be your best option for crafting EAPOL message packets. You'll have to know the structure of the EAPOL packets, though, and fully implement it in your code. You'll also have to recode, in Python, the functions for key generation, most (if not all) of which are PRFs (Pseudo-Random Functions); alternatively, you could import ready-compiled DLLs to do the encoding for you. However, it would be enough to manage only the first three messages of the four-way handshake: if, after several connection attempts, the AP doesn't send the third key message, then the MIC (Message Integrity Check) from the STA didn't match the one generated by the AP, and the password is thus invalid. Otherwise, it is valid. Note: wpa_supplicant follows the same procedure for authentication and connection; however, it continues on to obtain extra information like an IP address and whatnot... That's why I said it's the only possible way.
0
1
0
0
2013-07-05T08:09:00.000
1
1.2
true
17,484,086
0
0
0
1
I'm trying to validate the user's input of SSID and WPA Passphrase for a WPA connection. My program is a Python program running on an embedded Linux platform. I can validate an Access Point with SSID exists by parsing the output of a iwlist scan subprocess. Validating the Passphrase, however, is less straight forward. So far, the only solution I've come up with is to parse the output of wpa_supplicant -Dwext -iwlan0 -c/tmp/wpa_supplicant.conf looking for "pre-shared key may be incorrect" or the kernel message "OnDeAuth Reason code(15)" (which means WLAN_REASON_4WAY_HANDSHAKE_TIMEOUT according to the wpa_supplicant source). Interpreting a handshake timeout as an invalid Passphrase seems plain wrong. Besides that, that approach requires waiting for some output from a subprocess and assumes the absence of error messages means the Passphrase is valid. Googling around this just returns me a lot of questions and advice on how to hack a WPA connection! There's no wpa_cli or iwevent in the yum repository for my target platform and I'm unsure how to go about getting a third-party python package running on my target. Question: What's the simplest way of validating the Wifi WPA Passphrase?
Installing python (same version) on accident twice
17,488,424
0
0
609
0
python,scipy,reinstall
How about sudo port uninstall python27?
0
1
0
0
2013-07-05T10:10:00.000
1
1.2
true
17,486,322
0
1
0
1
I accidentally installed python 2.7 again on my mac (mountain lion), when trying to install scipy using macports: sudo port install py27-scipy ---> Computing dependencies for py27-scipy ---> Dependencies to be installed: SuiteSparse gcc47 cctools cctools-headers llvm-3.3 libffi llvm_select cloog gmp isl gcc_select ld64 libiconv libmpc mpfr libstdcxx ppl glpk zlib py27-nose nosetests_select py27-setuptools python27 bzip2 db46 db_select gettext expat ncurses libedit openssl python_select sqlite3 py27-numpy fftw-3 swig-python swig pcre I am still using my original install of python (and matplotlib and numpy etc), without scipy. How do I remove this new version? It is taking up ~2Gb space.
Celery task executing code from another programming language
17,991,236
1
3
1,334
0
python,ipc,celery
I've found the solution. It's a variation on Solution 2: use Thrift for RPC to the actual job code. The code is written in the target language and a Thrift IDL describes it to the Thrift compiler, which can generate both client and server. The client is obviously Python code and the server is in the target language. Any similar alternative to Thrift will do, like other RPC code generators. Thanks for all the answers, I hope this ends up helping someone someday.
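For illustration, a minimal sketch of the Python client side, assuming a hypothetical JobService generated by the Thrift compiler from your IDL (module name, port, and the add method are all assumptions):

from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from myjob import JobService  # hypothetical module generated by the Thrift compiler

transport = TTransport.TBufferedTransport(TSocket.TSocket('localhost', 9090))
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = JobService.Client(protocol)

transport.open()
result = client.add(2, 2)  # hypothetical method defined in the IDL
transport.close()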
0
1
0
0
2013-07-05T11:33:00.000
2
1.2
true
17,487,923
0
0
0
1
I'm trying to call compiled/interpreted code from a Celery task. The code is written in something other than Python. I want to know if there is a better solution to the problem than the ones I'm thinking of. Solution 1. Start another process and execute/interpret the piece of code I'm interested in. This has the overhead of creating and killing a process. For a very small task, that overhead may be too high. Solution 2. Use a Listener process that can execute code from a target language. It could listen on a local socket for function signatures (aka add(2,2), execute and return the result on the same socket. The listener could also implement something like a process/thread pool to handle multiple tasks efficiently. Solution 3 (thanks to AndrewS). Building a worker process (connected to the broker). It implies rewriting the Celery worker into the target language. This is the most expensive version of the three in terms of development effort.
Run a vim command from a python script
17,502,075
1
1
876
0
python,linux,bash,vim
You should be using Vim's Python API, not an external Python script; see :h python. You can access all that info directly through its functions, and you can evaluate a Vim command with vim.command() to interface with the clipboard. There are other ways to get at the clipboard using e.g. PyGTK, or perhaps more directly through python-xlib, but they would probably be more difficult.
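A minimal sketch of what that looks like from inside Vim (the vim module only exists in Vim's embedded Python, so this runs via :python or a Vim plugin, not as an external script):

import vim

# Put "path:lineno" of the current buffer into the system clipboard register
vim.command('let @+ = expand("%") . ":" . line(".")')
location = vim.eval('@+')  # read the value back into Python for further processing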
0
1
0
0
2013-07-06T04:49:00.000
2
0.099668
false
17,499,757
0
0
0
1
I have configured a keyboard shortcut using xbindkeys to run a Python script. Now, while editing any Vim file, if the user presses that keyboard shortcut, I want my Python script to run this command to put the path and line number into the system clipboard: :let @+=expand("%") . ':' . line(".") Then I want my script to copy that path from the system clipboard and process it further. Can you please suggest a good solution for this? Thanks in advance
Shortcut to python IDLE on mac os
17,507,189
2
0
3,751
0
python,macos,python-idle
You can use a shell script to do what you need on OS X. To open a Python interpreter in Terminal: first create a file called python.command, then open the file in any text editor and paste in a #!/bin/bash line followed by python. If you want to open the IDLE standalone app instead, repeat those steps but (making sure your path is correct) paste in a #!/bin/bash line followed by open -a /Applications/Python\ 2.7/IDLE.app
0
1
0
0
2013-07-06T20:49:00.000
2
1.2
true
17,507,005
1
0
0
2
I want to create a shortcut for IDLE on my mac so that I don't need to go through the terminal. Is there a way to do this? I was trying Automator but I'm not that familiar with it. Thanks.
Shortcut to python IDLE on mac os
17,507,212
-1
0
3,751
0
python,macos,python-idle
You can use Spotlight to open IDLE: press CMD (Apple) + SPACE, type "idle" and hit ENTER. This is my preferred way to open any program on the Mac.
0
1
0
0
2013-07-06T20:49:00.000
2
-0.099668
false
17,507,005
1
0
0
2
I want to create a shortcut for IDLE on my mac so that I don't need to go through the terminal. Is there a way to do this? I was trying Automator but I'm not that familiar with it. Thanks.
Appengine errors not appearing in logs
17,520,267
0
0
63
0
python,google-app-engine,error-handling
When I was taking some screenshots as tony asked in the comments, I found the solution. These errors are all HEAD requests. Since my app doesn't support them, they generate a 405 HTTP response code, which is shown on the dashboard as an error, but in the logs they don't get the error icon. They just seem to be fine at first sight.
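For illustration, those 405s could be avoided in a webapp2 handler by answering HEAD requests directly; a minimal sketch (handler and route names are hypothetical):

import webapp2

class MainHandler(webapp2.RequestHandler):
    def get(self):
        self.response.write('hello')

    def head(self):
        # Serve HEAD by running GET and discarding the body
        self.get()
        self.response.clear()

app = webapp2.WSGIApplication([('/', MainHandler)])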
0
1
0
0
2013-07-07T19:36:00.000
1
1.2
true
17,515,574
0
0
1
1
I have the following problem: I can see some mysterious errors on the App Engine Dashboard, but when I go to the logs I can't find any relevant entries. Otherwise the URIs work fine when I request them. If I click on the links on the dashboard, which take me to the logs with a prefilled regexp filter, the logs are empty. I only have one guess: when a request takes longer to load and the user closes the browser window/tab before the page has been loaded, these kinds of errors are generated but not logged. But I can't prove this assumption; the guess is based on what I sometimes see when developing locally with the SDK. I use the Python SDK and only have one live version of the app. Do you maybe have any clues what happens here? Thanks.
Testing Python Packages
17,518,734
2
1
78
0
python
You can use pip install -e <path-to-package> to install a package in editable mode. Then you can make changes to the source code and not have to install it again. This is best done, as always, in a virtualenv, so it is isolated from the rest of your system.
0
1
0
1
2013-07-08T03:00:00.000
1
1.2
true
17,518,614
1
0
0
1
I have the source code to a Python package that is typically installed using pip or easy_install. How do I locally install the code after I've made changes? I'd like to be able to run commands in the terminal as if I had installed it with pip, and then have it pick up code changes without reinstalling so I can try it again.
CFD monitoring program
17,549,425
3
1
366
0
c++,python,linux
If it were me, I'd try to change the CFD code to be a library instead of an application, and then I'd expose it to Python. Then I'd write a Python script that would invoke the library and get the results, iterating as needed. If the CFD code doesn't take very long to run a single iteration, this will be more efficient than launching the CFD standalone program over and over. And perhaps more importantly it will allow exchange of rich data between the CFD code and the supervisor, rather than only text files.
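A minimal sketch of the library approach using ctypes, assuming the CFD code has been built as a hypothetical shared library exposing a run_iteration() function that returns the current residual:

import ctypes

libcfd = ctypes.CDLL('./libcfd.so')            # hypothetical shared library
libcfd.run_iteration.restype = ctypes.c_double

# Supervisor loop: run iterations and watch convergence directly,
# instead of parsing the solver's log file
residual = libcfd.run_iteration()
while residual > 1e-6:
    residual = libcfd.run_iteration()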
0
1
0
0
2013-07-09T12:27:00.000
1
1.2
true
17,548,159
0
0
0
1
I run a lot of computational fluid dynamics (CFD) calculations. For many reasons, I would like to write a program which will monitor the output of the log file given by the CFD solver and adjust its control parameters accordingly. I have a few ideas but would like to ask for advice as to what would be the best way to do this. My thoughts: could run the program constantly and import the convergence parameters at a fixed time interval or when the log file changes could use some system or platform specific utilities to monitor the CFD process the CFD runs in parallel on the same machine so (probably) will need a way to control the parallel processes (the CFD code I'm using is OpenFOAM which utilises OpenMPI to parallelise its processes) For completeness, I run on Ubuntu 12.04 and would prefer the program to be written in C/C++ or alternatively Python. Thanks a lot
Where does stuff "print" to when not running application from terminal?
17,566,504
5
4
1,553
0
python,macos,stdout,py2app
Where the stdout and stderr streams are redirected to depends on how you run the application, and on which OSX release you are using. When you run the application from the Terminal ("MyApp.app/Contents/MacOS/MyApp") the output ends up in the terminal window. When you run the application by double clicking, or using the open(1) command, the output ends up in Console.app when using OSX before 10.8 and is discarded on OSX 10.8. I have a patch that redirects output to the logs that Console.app reads even for OSX 10.8, but that is not in a released version at this point in time. P.S. I'm the maintainer of py2app.
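If you want to see the debug output regardless of OSX release, a common workaround (not part of py2app itself) is to redirect stdout/stderr to a file early in your app; a minimal sketch with a hypothetical log path:

import os
import sys

# Redirect all print output and tracebacks to a log file you control
log = open(os.path.expanduser('~/Library/Logs/MyApp.log'), 'a')  # hypothetical path
sys.stdout = log
sys.stderr = log
print('debugging output now lands in the log file')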
0
1
0
0
2013-07-09T16:13:00.000
3
0.321513
false
17,553,182
1
0
0
1
So I have a python application that is being bundled into a .app using py2app. I have some debugging print statements in there that would normally print to stdout if I was running the code in the terminal. If I just open the bundled .app, obviously I don't see any of this output. Is any of this actually being printed somewhere even though I don't see it?
How can I make a Python virtualenv meant for another platform?
17,554,381
4
2
1,454
0
python,virtualenv
Virtualenvs are not a packaging mechanism. There is no reason a virtualenv should ever leave the computer it was created on. It won't work, the virtualenv is 100% specific to your OS, CPU architecture, Python version, etc. There are a number of solutions for packaging. The old and still current way is specifying dependencies in setup.py, and running setup.py install on the target machine. Note that this can happen inside a virtualenv, you just have to create the virtualenv and run setup.py in there. Both virtualenv and the standard library venv in 3.3 provide ways of doing this automatically after virtualenv creation. If you absolutely must create a binary distribution (e.g. because you need an extension module and the end user doesn't have a compiler), you need an egg or a wheel or one of the .py-to-binary converters (py2exe, PyInstaller, cx_Freeze, etc.). You will need access to an OS X machine to create that. And at least the wheel and the egg are usually installed anyway, so using them doesn't save you any of this hassle. That's because they are formats for binary distribution, their primary purpose is pushing the build step from the end user to the developers, not to remove the installation step. In summary: Just create a script which creates a virtualenv and installs your application as well as the required libraries.
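A minimal sketch of the setup.py approach, with hypothetical package and dependency names:

from setuptools import setup

setup(
    name='myapp',                        # hypothetical package name
    version='0.1',
    packages=['myapp'],
    install_requires=['requests>=1.0'],  # hypothetical dependency list
)

On the target machine you would then create a virtualenv and run python setup.py install (or pip install .) inside it, letting the dependencies be resolved there.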
0
1
0
0
2013-07-09T17:04:00.000
2
1.2
true
17,554,093
1
0
0
1
I'm writing a program on a computer running Ubuntu with an x86-64 processor that needs to run on a computer running OS X with an x86 processor. I'm probably not going to be able to do any kind of library installation, so a venv is pretty much the only option I know of. How can I make one targeted for that platform? If I can't, is there a better way to ship the libraries with the program?
infinite error message python
17,585,244
0
0
100
0
python,cmd,warnings,indefinite
Try opening and storing information from one file at a time. We don't have enough information to understand what is wrong with your code; we really don't have much more than "I tried to open 185 fits files" and "too many open files".
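A minimal sketch of that approach with pyfits, copying the data you need and closing each file so handles are not left open (filenames is a hypothetical list of your 185 paths):

import pyfits  # or astropy.io.fits

data = []
for name in filenames:
    hdulist = pyfits.open(name)
    data.append(hdulist[0].data.copy())  # copy the array so the file can be closed
    hdulist.close()                      # release the file handle immediately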
0
1
0
0
2013-07-11T03:35:00.000
1
0
false
17,584,635
0
0
0
1
I am in deep trouble at the moment. After every letter I type on my python command prompt in Linux, I get an error message: sys:1: GtkWarning: Attempting to store changes into `/u/rnayar/.recently-used.xbel', but failed: Failed to create file '/u/rnayar/.recently-used.xbel.L6ETZW': Too many open files Hence I can type nothing on python, and the prompt is stuck. I tried to open 185 fits files, containing some data, and feed in some of that data into an array. I cannot abandon the command window, because I already have significant amounts of information stored on it. Does anybody know how I can stop the error message and get it working as usual?
How does one make a twistedmatrix subprocess continue processing after the client disconnects?
17,590,814
1
1
70
0
python,multithreading,subprocess,twisted
There's nothing in Twisted's child-process support that will automatically kill the child process when any particular TCP client disconnects. The behavior you're asking about is basically the default behavior.
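A minimal sketch of that default behavior: the ProcessProtocol keeps running after the TCP client goes away, and only relays output while a client reference is set (the wiring that sets and clears self.client is assumed to live in your TCP protocol's connectionMade/connectionLost):

from twisted.internet import protocol, reactor

class JobProtocol(protocol.ProcessProtocol):
    def __init__(self):
        self.client = None  # set to the TCP protocol while a client is connected

    def outReceived(self, data):
        if self.client is not None:
            self.client.transport.write(data)  # relay only while connected

    def processEnded(self, reason):
        pass  # the subprocess ran to completion either way

proc = JobProtocol()
reactor.spawnProcess(proc, 'somecmd', ['somecmd'])  # hypothetical command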
0
1
0
0
2013-07-11T08:37:00.000
1
1.2
true
17,588,779
0
0
0
1
I'm creating a Twisted TCP server that needs to make a subprocess command line call and relay the results to the client while still connected. But the subprocess needs to continue running until it is done, even after the client disconnects. Is it possible to do this? If so, please point me in the right direction. It's all new to me. Thanks in advance!
Automate multiple installers
17,596,459
1
1
409
0
python,automation,installation
"what i meant is to combine all the installers into 1 single big installer." I am not sure if you mean making one MSI out of several. If you have built the MSIs, this is possible to work out, but in most situations there were reasons for the separation. For now I assume, as the others do, that you want a setup which combines all the MSI setups into one, e.g. with a packing/self-extracting part, but probably with some logic of its own. This is a very common setup pattern; some call it a "bootstrapper". Unfortunately the maturity of most bootstrapping tools is by far not comparable to the MSI creation tools, so most companies I know write their own bootstrapper with the dialogs and the control logic they want. This can be a very expensive job. If your requirements are modest, it may sound like a simple job: just starting a number of processes one after another. But what about a seamless progress bar, what about uninstallation (single or bundled), what about repair and modify, and what if one of them fails or needs a reboot, also with respect to repair/uninstall/modify/update? And so on. As mentioned, one of the first issues of bundling several setups into one is deciding how many and which uninstall entries the user shall see, and whether it is OK that your bootstrapper does not create its own combined one. If this is not an issue for you, then you have a chance of finding an easy solution. I know at least three tools for bootstrappers (some call them suites or bundles), which I can only mention here: WiX has something called "Burn" (Google for WiX Burn and you will find it; I haven't used it yet, so I can't comment on it). InstallShield Premier, which is not really what most people call a cheap product, allows setup "Suites", which is the same idea; I don't want to comment on the quality here. And in the Windows SDK there is (or has been?) a kind of template of a setup.exe that shows how to start installation of an MSI from a program; I have never really looked into that example, so I can't tell you more about it.
0
1
0
0
2013-07-11T13:14:00.000
3
0.066568
false
17,594,382
1
0
0
2
I have written a small Python script that I want to share with other users. (I want to keep it as a script rather than an exe so that users can edit the code if they need to.) My script uses several external libraries that don't come with basic Python, but the other users don't have Python and the required libraries installed on their PCs. So, for convenience, I am wondering if there's any way to automate the installation process for installing Python and the external libraries they need. To make things more clear, what I mean is to combine all the installers into one single big installer. For your information, all the installers are Windows x86 MSI installers and there are about 5 or 6 of them. Is this possible? Could there be any drawbacks to doing this? EDIT: All the users are using Windows XP Pro 32 bit and Python 2.7.
Automate multiple installers
17,594,560
1
1
409
0
python,automation,installation
I would suggest using NSIS. You can bundle all the MSI installers (including Python) into one executable, and install them in "silent mode" in whatever order you want. NSIS also has a great script generator you can download. Also, you might be interested in ActivePython. It comes with pip and automatically adds everything to your path, so you can just pip install most of your dependencies from a batch script.
0
1
0
0
2013-07-11T13:14:00.000
3
1.2
true
17,594,382
1
0
0
2
I have written a small Python script that I want to share with other users. (I want to keep it as a script rather than an exe so that users can edit the code if they need to.) My script uses several external libraries that don't come with basic Python, but the other users don't have Python and the required libraries installed on their PCs. So, for convenience, I am wondering if there's any way to automate the installation process for installing Python and the external libraries they need. To make things more clear, what I mean is to combine all the installers into one single big installer. For your information, all the installers are Windows x86 MSI installers and there are about 5 or 6 of them. Is this possible? Could there be any drawbacks to doing this? EDIT: All the users are using Windows XP Pro 32 bit and Python 2.7.
Django runserver bound to 0.0.0.0, how can I get which IP took the request?
17,599,320
0
6
10,672
0
python,django,manage.py
If your goal is to ensure the load balancer is working correctly, I suppose it's not an absolute requirement to do this in the application code. You can use a network packet analyzer that can listen on a specific interface (say, tcpdump -i <interface>) and look at the output.
0
1
0
0
2013-07-11T13:44:00.000
2
0
false
17,595,066
0
0
1
1
I'm running a temporary Django app on a host that has lots of IP addresses. When using manage.py runserver 0.0.0.0:5000, how can the code see which of the many IP addresses of the machine was the one actually hit by the request, if this is even possible? Or to put it another way: My host has IP addresses 10.0.0.1 and 10.0.0.2. When runserver is listening on 0.0.0.0, how can my application know whether the user hit http://10.0.0.1/app/path/etc or http://10.0.0.2/app/path/etc? I understand that if I was doing it with Apache I could use the Apache environment variables like SERVER_ADDR, but I'm not using Apache. Any thoughts? EDIT More information: I'm testing a load balancer using a small Django app. This app is listening on a number of different IPs and I need to know which IP address is hit for a request coming through the load balancer, so I can ensure it is balancing properly. I cannot use request.get_host() or the request.META options, as they return what the user typed to hit the load balancer. For example: the user hits http://10.10.10.10/foo and that will forward the request to either http://10.0.0.1/foo or http://10.0.0.2/foo - but request.get_host() will return 10.10.10.10, not the actual IPs the server is listening on. Thanks, Ben
Python web crawler multithreading and multiprocessing
17,601,331
0
0
763
0
python,multithreading,performance,multiprocessing,web-crawler
Look into grequests; it doesn't do actual multi-threading or multiprocessing, but it scales much better than both.
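A minimal sketch of the grequests pattern, which issues many HTTP requests concurrently on gevent greenlets (the URLs are placeholders):

import grequests

urls = ['http://example.com/page%d' % i for i in range(100)]
pending = (grequests.get(u) for u in urls)
responses = grequests.map(pending, size=20)  # 'size' caps concurrent requests
for r in responses:
    if r is not None:
        print('%s %s' % (r.url, r.status_code))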
0
1
1
0
2013-07-11T18:51:00.000
1
0
false
17,601,124
0
0
1
1
Brief idea: my web crawler has 2 main jobs, a collector and a crawler. The collector collects all of the URL items for each site and stores the non-duplicated URLs. The crawler grabs the URLs from storage, extracts the needed data and stores it back. 2 machines: a bot machine -> 8 cores, physical Linux OS (no VM on this machine); a storage machine -> MySQL with clustering (VM for clustering), 2 databases (url and data); the url database on port 1 and the data database on port 2. Objective: crawl 100 sites and try to reduce the bottlenecks. First case: the collector *requests (urllib) all sites, collects the URL items for each site and *inserts each URL, if non-duplicated, into the storage machine on port 1. The crawler *gets the URLs from storage port 1, *requests the site, extracts the needed data and *stores it back on port 2. This causes connection bottlenecks for both the web site requests and the MySQL connections. Second case: instead of inserting across machines, the collector stores the URLs in my own mini file-system database. There is no *reading of a huge file (using an os command technique), just *writing (appending) and *removing the header of the file. This causes a web site request bottleneck and (maybe) an I/O (read, write) bottleneck. Both cases are also CPU bound because of collecting and crawling 100 sites. As I heard, for I/O bound work use multithreading, for CPU bound work use multiprocessing. How about both? Scrapy? Any idea or suggestion?
dd command PERMISSION DENIED Python
17,619,054
1
1
3,234
0
python,root,permission-denied,dd
I know I have to do some kind of root thing? Indeed you do! If you are using Linux, sudo is the idiomatic way to escalate your user's privileges. So instead invoke 'sudo dd if=/dev/sdb of=/dev/null' (for example). If your script must be noninteractive, consider adding something like admin ALL = NOPASSWD: ALL to your sudoers, or something similar.
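A minimal sketch of invoking that from Python with subprocess (the device paths are examples only; dd can destroy data, so double-check them):

import subprocess

# Runs dd with elevated privileges; assumes sudo is configured (e.g. NOPASSWD)
subprocess.check_call(['sudo', 'dd', 'if=/dev/sdb', 'of=/dev/null'])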
0
1
0
0
2013-07-12T15:20:00.000
2
1.2
true
17,618,361
0
0
0
1
I have used a command line to run a dd command in Python, however, whenever I try to actually run the command, I get: dd: opening '/dev/sdb': Permission denied I know I have to do some kind of root thing? And I only need a certain section of my code to run the dd command, so I don't need to 'root' the whole thing; but the whole 'root' concept confuses me... Help would be HIGHLY appreciated!!
Is there a way to "version" my python distribution?
17,623,026
5
3
98
0
python
You want to use virtualenv. It lets you create an application-specific directory for installed packages. You can also use pip freeze to generate a requirements.txt and pip install -r requirements.txt to recreate the same set of packages later.
0
1
0
1
2013-07-12T19:53:00.000
3
0.321513
false
17,622,992
1
0
0
2
I'm working by myself right now, but am looking at ways to scale my operation. I'd like to find an easy way to version my Python distribution, so that I can recreate it very easily. Is there a tool to do this? Or can I add /usr/local/lib/python2.7/site-packages/ (or whatever) to an svn repo? This doesn't solve the problems with PATHs, but I can always write a script to alter the path. Ideally, the solution would be to build my Python env in a VM, and then hand copies of the VM out. How have other people solved this?
Is there a way to "version" my python distribution?
42,163,489
0
3
98
0
python
For the same goal, i.e. having the exact same Python distribution as my colleagues, I tried to create a virtual environment on a network drive, so that everybody would be able to use it without anybody making a local copy. The idea was to share the same packages installed in a shared folder. Outcome: Python ran so unbearably slowly that it could not be used, and installing a package was also very sluggish. So it looks like there is no other way than using virtualenv and a requirements file. (Even if, unfortunately, it does not always work smoothly on Windows and requires manual installation of some packages and dependencies, at least at this time of writing.)
0
1
0
1
2013-07-12T19:53:00.000
3
0
false
17,622,992
1
0
0
2
I'm working by myself right now, but am looking at ways to scale my operation. I'd like to find an easy way to version my Python distribution, so that I can recreate it very easily. Is there a tool to do this? Or can I add /usr/local/lib/python2.7/site-packages/ (or whatever) to an svn repo? This doesn't solve the problems with PATHs, but I can always write a script to alter the path. Ideally, the solution would be to build my Python env in a VM, and then hand copies of the VM out. How have other people solved this?
Jython and Python Lib Dependencies
18,564,145
0
0
312
0
python,maven,dependencies,jython
ObsPy relies on ctypes which works only for CPython - so I'm afraid you won't get it running under Jython.
0
1
0
0
2013-07-13T21:36:00.000
1
0
false
17,634,435
1
0
1
1
I am contributing to an open source Java project, and I am trying to use the Python tool ObsPy via the Jython PythonInterpreter. My problem is that I am having trouble figuring out how to include the ObsPy library in the Jython build path. Is it possible to use Maven to include the ObsPy library in a manner that the Jython runtime will recognize? Thanks, and sorry I could not provide any existing code on this issue.
Python: errno2 No such file or directory
17,635,284
0
0
3,121
0
python
In general, Windows defaults to the user directory in the command prompt. Saying "python ex1.py" tries to find ex1.py in the C:\Users\Username directory. Try moving your Python script there, or moving to the Python projects folder using cd. Either way should fix the issue.
0
1
0
0
2013-07-13T23:57:00.000
2
0
false
17,635,269
1
0
0
1
I am learning Python from "Learn Python the Hard Way" and searched up quite a bit on it with no solutions as of yet. I configured the path for python to work on the command prompt. But whenever I type in "python ex1.py" it comes up with an error: Errno2 No such file or directory! The code is a simple print code, nothing much there. But I do not know why it's showing this! I have all these exercises in the python directory C:\python27\projects\ex1.py
Logging User Login in App Engine
17,655,687
0
0
35
0
google-app-engine,python-2.7
Log only if it's been some time since they entered the app. If you really want to do it at the login level you can, but you will need to set up SSO on the domain.
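A minimal sketch of the "log only once per window" idea using memcache.add, which succeeds only if the key is not already present (the key format and timeout are arbitrary choices):

import logging
from google.appengine.api import memcache, users

def log_login_once(window=3600):
    user = users.get_current_user()
    # memcache.add returns True only the first time within the window
    if user and memcache.add('seen:%s' % user.user_id(), 1, time=window):
        logging.info('User logged in: %s', user.email())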
0
1
0
0
2013-07-15T11:54:00.000
1
0
false
17,653,659
0
0
1
1
I have a Python application on AppEngine that requires users to log in. Is there any way to write a log entry on logging in? Users could hit the log in screen from any URL and will reload pages throughout their session so adding it to code would add numerous entries when all I want is one at the point of authentication.
Pyinstaller GLIBC_2.15 not found
17,654,364
6
3
6,782
0
python-2.7,pyinstaller
The PyInstaller FAQ says: Under Linux, I get runtime dynamic linker errors, related to libc. What should I do? The executable that PyInstaller builds is not fully static, in that it still depends on the system libc. Under Linux, the ABI of GLIBC is backward compatible, but not forward compatible. So if you link against a newer GLIBC, you can't run the resulting executable on an older system. The supplied binary bootloader should work with older GLIBC. However, the libpython.so and other dynamic libraries still depend on the newer GLIBC. The solution is to compile the Python interpreter with its modules (and also probably the bootloader) on the oldest system you have around, so that it gets linked with the oldest version of GLIBC. and How to get a recent Python environment working on an old Linux distribution? The issue is that Python and its modules have to be compiled against the older GLIBC. Another issue is that you probably want to use the latest Python features, while on old Linux distributions only a really old Python version is available (e.g. on CentOS 5 only Python 2.4 is available).
0
1
0
0
2013-07-15T12:33:00.000
1
1.2
true
17,654,363
1
0
0
1
Generated an executable on Linux 32-bit Ubuntu 11 and tested it on a 32-bit Ubuntu 10 and it failed with a "GLIBC_2.15" not found.
Delete entities from a datastore table without knowing ancestor
17,659,738
2
1
80
0
python,google-app-engine,app-engine-ndb
Provided you have indexed them to be queried by date, you can query the entities by date. The query will return the entities of interest. You can find out the ancestor of a given entity from its key - the ancestor's key is part of the entity's key.
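A minimal sketch with ndb, assuming a hypothetical model with an indexed 'created' DateTimeProperty; note that each returned key also carries its ancestor, reachable via key.parent():

from google.appengine.ext import ndb

def delete_older_than(cutoff):
    # Fetch only the keys; this works regardless of each entity's ancestor
    keys = MyModel.query(MyModel.created < cutoff).fetch(keys_only=True)
    ndb.delete_multi(keys)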
0
1
0
0
2013-07-15T16:49:00.000
1
1.2
true
17,659,500
0
0
1
1
Is it possible to delete entities from a datastore table without knowing their ancestor? I wish to delete all entities older than a specific date, but there are many different ancestors.
DSTAT as a Python API ?
17,770,525
2
2
468
0
python,linux,operating-system,profiling
The Dstat source-code includes a few sample programs using Dstat as a library.
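If Dstat's library usage proves awkward, psutil is an alternative library for collecting the same kinds of OS metrics; a minimal sketch:

import psutil

print(psutil.cpu_percent(interval=1))   # CPU utilization over one second
print(psutil.virtual_memory().percent)  # memory usage
print(psutil.disk_io_counters())        # cumulative disk I/O stats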
0
1
0
1
2013-07-16T04:11:00.000
1
0.379949
false
17,667,871
0
0
0
1
Is it possible, on a Linux box, to import dstat and use it as an API to collect OS metrics and then compute stats on them? I have downloaded the source and tried to collect some metrics, but the program seems to be optimized for command line usage. Any suggestions as to how to get my desired functionality, either using Dstat or another library?
Trying to send an EOF signal (Ctrl+D) signal using Python via Popen()
17,712,430
5
2
3,348
0
python-2.7,posix,popen,eof
EOF isn't really a signal that you can raise; it's a per-channel exceptional condition. (Pressing Ctrl+D to signal end of interactive input is actually a function of the terminal driver. When you press this key combination at the beginning of a new line, the terminal driver tells the OS kernel that there's no further input available on the input stream.) Generally, the correct way to signal EOF on a pipe is to close the write channel. Assuming that you created the Popen object with stdin=PIPE, you should be able to do this by closing its stdin.
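A minimal sketch of signaling EOF by closing the write channel (using cat as a stand-in child process):

from subprocess import Popen, PIPE

p = Popen(['cat'], stdin=PIPE, stdout=PIPE)
p.stdin.write(b'hello\n')
p.stdin.close()            # closing stdin signals EOF to the child, like Ctrl+D
print(p.stdout.read())
p.wait()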
0
1
1
1
2013-07-16T14:00:00.000
1
1.2
true
17,678,620
0
0
0
1
I'm trying to get Python to send the EOF signal (Ctrl+D) via Popen(). Unfortunately, I can't find any kind of reference for Popen() signals on *nix-like systems. Does anyone here know how to send an EOF signal like this? Also, is there any reference of acceptable signals to be sent?
How can I manage separate app.yaml / cron.yaml for my app engine source code?
17,686,245
0
1
169
0
google-app-engine,python-2.7,google-analytics
I've been using two branches where the app.yaml files are different. But it requires me to be careful when merging and to explicitly NOT merge the app.yaml file. It's still a pain.
0
1
0
0
2013-07-16T15:49:00.000
1
0
false
17,681,217
0
0
1
1
I have two versions of my app, prod and dev that I manage from one git repo. The way I've been managing them is to constantly switch and uncomment lines in both my app.yaml and my cron.yaml depending upon which version I want to upload. I was wondering if anyone had better experience managing two different versions within one git repo.
Where is libmysqlclient.16.dylib
46,163,996
0
0
378
0
python,mysql,django,macos
Try sudo find / -iname libmysqlclient.*
0
1
0
0
2013-07-16T17:01:00.000
1
0
false
17,682,705
0
0
0
1
I've been following advice from StackOverflow posts, and for MySQLdb I've been asked to verify that I have libmysqlclient.16.dylib on my computer. Where do I find this in OS X 10.8?
Python - Detect if remote computer is on
17,682,961
0
9
20,956
0
python,udp,heartbeat
Yes, that's the way to go; it's kind of like sending a heartbeat ping. Since it's UDP and just a header-sized message, you can reduce the frequency to, say, 10 seconds. This should not cause any measurable system performance degradation since it's just 2 systems we are talking about. Here I feel UDP might be better than TCP: it's lightweight, does not consume a lot of system resources and is theoretically faster. The downside would be possible packet loss. You can circumvent that by putting in some logic like: when 10 packets (spaced 10 seconds apart) are not received consecutively, declare the other system unreachable.
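A minimal sketch of the heartbeat sender described above (the host, port, and interval are placeholders; the receiving side would declare the host down after several missed beats):

import socket
import time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    sock.sendto(b'alive', ('monitor.example.com', 9999))  # placeholder address
    time.sleep(10)  # heartbeat interval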
0
1
1
0
2013-07-16T17:06:00.000
5
0
false
17,682,818
0
0
0
1
So, I have an application in Python, but I want to know if the computer (which is running the application) is on, from another remote computer. Is there any way to do this? I was thinking to use UDP packets, to send some sort of keep-alive, using a counter. Ex. every 5 mins the client sends an UDP 'keep-alive' packet to the server. Thanks in advance!
Bind engine to the notebook session
17,712,389
0
0
77
0
ipython
This is not possible with the notebook at this time. The notebook cannot use a kernel that it did not start. You could possibly write a new KernelManager that starts kernels remotely distributed across your machines and plug that into the notebook server, but you cannot attach an Engine or other existing kernel to the notebook server.
0
1
0
0
2013-07-17T08:04:00.000
1
0
false
17,694,389
0
0
0
1
Is it possible to run a notebook server with kernels scheduled as processes on a remote cluster (SSH or PBS), with a common directory on NFS? For example, I have three servers with GPUs and would like to run a notebook on one of them, but I do not want to start more than one notebook server. It would be ideal to have the notebook server on a 4th machine which would in some way schedule kernels automatically or manually. I did some trials with making a cluster with one engine. Using %%px in each cell is almost a solution, but one cannot use introspection, and the notebook code in fact becomes dependent on the cluster configuration, which is not very good.
Datastore vs Memcache for high request rate game
17,816,617
1
3
341
1
python,google-app-engine,memcached,google-cloud-datastore,app-engine-ndb
As a commenter on another answer noted, there are now two memcache offerings: shared and dedicated. Shared is the original service, and is still free. Dedicated is in preview, and presently costs $.12/GB hour. Dedicated memcache allows you to have a certain amount of space set aside. However, it's important to understand that you can still experience partial or complete flushes at any time with dedicated memcache, due to things like machine reboots. Because of this, it's not a suitable replacement for the datastore. However, it is true that you can greatly reduce your datastore usage with judicious use of memcache. Using it as a write-through cache, for example, can greatly reduce your datastore reads (albeit not the writes). Hope this helps.
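A minimal sketch of the write-through/read-through caching the answer mentions, with a hypothetical ndb Game model and a short lifetime matching the 10-minute use case:

from google.appengine.api import memcache

def get_game(game_id):
    key = 'game:%s' % game_id
    game = memcache.get(key)
    if game is None:
        game = Game.get_by_id(game_id)     # hypothetical ndb model; datastore read
        memcache.set(key, game, time=600)  # cache for 10 minutes
    return game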
0
1
0
0
2013-07-17T14:13:00.000
2
0.099668
false
17,702,165
0
0
1
1
I have been using the datastore with ndb for a multiplayer app. This appears to be using a lot of reads/writes and will undoubtedly go over quota and cost a substantial amount. I was thinking of changing all the game data to be stored only in memcache. I understand that data stored here can be lost at any time, but as the data will only be needed for, at most, 10 minutes and as it's just a game, that wouldn't be too bad. Am I right to move to solely use memcache, or is there a better method, and is memcache essentially 'free' short term data storage?
Can I limit write access of a program to a certain directory in osx? Also set maximum size of the directory and memory allocated
17,755,743
0
0
139
0
python,macos,memory,permissions
import os

TOOBIG = 10 * 1024 * 1024  # whatever size limit you choose, e.g. 10 MB
stats = os.stat('possibly_big_file.txt')
if stats.st_size > TOOBIG:
    print "Oh no....."
0
1
0
0
2013-07-19T21:15:00.000
2
0
false
17,755,621
1
0
0
1
I am writing code with python that might run wild and do unexpected things. These might include trying to save very large arrays to disk and trying to allocate huge amounts of memory for arrays (more than is physically available on the system). I want to run the code in a constrained environment in Mac OSX 10.7.5 with the following rules: The program can write files to one specific directory and no others (i.e. it cannot modify files outside this directory but it's ok to read files from outside) The directory has a maximum "capacity" so the program cannot save gigabytes worth of data Program can allocate only a finite amount of memory Does anyone have any ideas on how to set up such a controlled environment? Thanks.
How to properly deploy python webserver application with extension deps?
17,759,977
1
0
148
0
python,swig,web-deployment
If you need the package to be installed by some user, the easiest way will be to write a setup.py - but not just with a simple setup function like most installers. If you look at some packages, they have very complicated setup.py scripts which build many things, including C extensions, with installation scripts for many external dependencies. You can solve the LD_LIBRARY_PATH problem like this: if your application has an entry point, such as a script you save in Python's bin directory (or the system /usr/bin), you prepend to LD_LIBRARY_PATH with export LD_LIBRARY_PATH="/my/path:$LD_LIBRARY_PATH". If your package is a system service, like some server or daemon, you can build a system package, for example a Debian package or RPM. Debian has a lot of scripts and mechanisms for declaring dependencies on other packages. So, if you need some system libraries, you write them down in the package source and Debian will install them when your package is being installed. For example, if your package declares dependencies on SWIG and other dev packages, your C extension will be built properly.
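A minimal sketch of a setup.py that also builds the C++ extension, with hypothetical names for the package, the SWIG-generated source, and the C++ library:

from setuptools import setup, Extension

setup(
    name='mywebapp',   # hypothetical package name
    version='1.0',
    packages=['mywebapp'],
    ext_modules=[Extension('mywebapp._core',
                           sources=['src/core_wrap.cxx'],  # hypothetical SWIG output
                           libraries=['mycpp'])],          # hypothetical C++ library
)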
0
1
0
0
2013-07-20T07:22:00.000
1
1.2
true
17,759,832
0
0
0
1
I developed my first webserver app in Python. It's a bit unusual, because it does not only depend on Python modules (like tornado) but also on some proprietary C++ libs wrapped using SWIG. And now it's time to deliver it (to a Linux platform). Due to the dependency on the C++ lib, just sending sources with requirements.txt does not seem enough. The only workaround would be to have the exact same Linux installation to ensure binary compatibility of the lib. But in this case there will be problems with LD_LIBRARY_PATH etc. Another option is to write a setup.py to create an sdist and then deploy it with pip install. Unfortunately that would mean I have to kill all instances of the server before installing my package. The workaround would be to use a virtualenv for each instance, though. But maybe I'm missing something much simpler?
How to setup the Mac terminal for Programming with Python?
17,768,272
1
0
263
0
python,ide,development-environment
Assuming that Python is already on your computer: go to the /Applications folder, then open Utilities, and double-click Terminal to open it and get a command line. Type 'python' at the command prompt. You're all set!
0
1
0
0
2013-07-20T23:09:00.000
1
1.2
true
17,767,579
1
0
0
1
I would like to know from you guys how you have set up your Mac terminal for Python programming. I haven't done anything big with Python in the terminal so far (I have used IDEs until now), but I think you can do all kinds of fancy things (automatic completion, colors, ...). Any suggestions? Thank you guys!
Free PaaS without / with less port forwarding to bind to custom ports?
21,300,896
0
0
362
0
python-2.7,httprequest,openshift,paas
You can also bind to arbitrary ports as long as you either want to talk out on that port or want to consume it only internally. For example, some people write IRC bots using OpenShift, which talk out on port 6666 or 6667. If you want something listening for requests that come from outside OpenShift, then you can only bind to port 8080.
0
1
0
0
2013-07-23T03:00:00.000
2
0
false
17,800,796
0
0
1
2
I am currently engaged in a project which has the following requirements. The application is written in Python. The application has two threads running at any instance: one is the 'server' and the other is the 'app-logic'. The server listens on port 6000 (or any such custom port) and reads the incoming message (which is plain text commands), then passes that message to the app-logic, which then processes the input, creates an output, and then passes the outbound message to the server. The server then writes to port 7000 of the client (or any such fixed port; the client is specially made to read from the aforementioned port). So far I have tried Google App Engine and let it go because of issues regarding threading. I tried OpenShift and they did not support binding to a custom port. They only supported binding to port 8080 (which is fine), but to that they had forwarded traffic from somewhere else. So as it turns out, the 'server' in my application reads the inbound stream from a different port of the same machine that I have been allocated for the site, and since the messages are not in HTTP format, I have no way of writing back to the client. Is there any PaaS that supports an app of this nature? Update: I finished the project some time back using OpenShift. It was a piece of cake to work around this problem once we used a third-party messaging service such as PubNub or Pusher.
Free PaaS without / with less port forwarding to bind to custom ports?
18,064,738
0
0
362
0
python-2.7,httprequest,openshift,paas
I have found one way! That is to use the DIY (Do It Yourself) cartridge in OpenShift, install Python and run WebSockets. Of course this still means that the transmissions have to be HTTP. The other option is to move to IaaS (infrastructure as a service) rather than PaaS.
0
1
0
0
2013-07-23T03:00:00.000
2
0
false
17,800,796
0
0
1
2
I am currently engaged in a project which has the following requirements. The application is written in Python. The application has two threads running at any instance: one is the 'server' and the other is the 'app-logic'. The server listens on port 6000 (or any such custom port) and reads the incoming message (which is plain text commands), then passes that message to the app-logic, which then processes the input, creates an output, and then passes the outbound message to the server. The server then writes to port 7000 of the client (or any such fixed port; the client is specially made to read from the aforementioned port). So far I have tried Google App Engine and let it go because of issues regarding threading. I tried OpenShift and they did not support binding to a custom port. They only supported binding to port 8080 (which is fine), but to that they had forwarded traffic from somewhere else. So as it turns out, the 'server' in my application reads the inbound stream from a different port of the same machine that I have been allocated for the site, and since the messages are not in HTTP format, I have no way of writing back to the client. Is there any PaaS that supports an app of this nature? Update: I finished the project some time back using OpenShift. It was a piece of cake to work around this problem once we used a third-party messaging service such as PubNub or Pusher.
Websocket connection between two servers
17,835,810
0
0
1,390
0
python-2.7,websocket,tornado
WebSocket was designed for low-latency bidirectional browser<->service communication. It's placed on top of TCP/IP and brings along some overhead. It was designed to solve problems that you simply do not have in front-end<->back-end communication, because there we're talking about a defined environment which is under your control. Hence, I would recommend going back to basics and doing simple TCP/IP communication between your front-end and back-end.
0
1
1
0
2013-07-24T03:07:00.000
2
0
false
17,824,526
0
0
1
1
I'm using Python Tornado as a web server and I have a backend server and a frontend server. I want to create a browser-frontend-backend connection. Can anyone help me with how to do this? I know how to create a websocket connection between the frontend and the browser, but I have no idea how to connect my frontend server to the backend server to stream realtime data parsed by my backend server.
Google App Engine Remote API + OAuth
20,981,524
0
5
907
0
python,google-app-engine,oauth-2.0,google-oauth
You can't use OAuth2 to connect to your app with remote_api_stub/shell. This option is not provided.
0
1
1
0
2013-07-25T11:45:00.000
2
0
false
17,857,138
0
0
1
1
I'm using GAE remote api to access the data store of my app. The authentication to GAE is made using remote_api_stub.ConfigureRemoteApi with an authentication function that returns a user name and a password. Is there a way for authenticating using an access_token, for example OAuth or OAuth 2.0?
How to stop a running program on PyDev
17,868,662
1
3
2,429
0
python,eclipse,pydev
Ctrl+Alt+F9 should terminate all launches, which will do the job for you.
0
1
0
0
2013-07-25T19:50:00.000
1
0.197375
false
17,867,473
0
0
0
1
I am afraid that my question may be a duplicate, but I could not find good answers to it on Stack Overflow. I use PyDev under Eclipse, and I often run my programs by opening a Python Console (Ctrl+Alt+Enter in the editor) for quick-and-easy debugging. The thing is, I do not know how to stop the running program partway through. Ctrl+C, Ctrl+Z, and Ctrl+Break did not work. If I click the [terminate] icon, the whole Console disappears, which I do not want. Is there any way to stop the running program and go back to the command line? Thanks
Python Fabric config file (~/.fabricrc) is not used
17,929,568
0
0
675
0
python,fabric
Overrode the "env" variable via parameter in the function. Dumb mistake.
0
1
0
1
2013-07-26T18:15:00.000
1
0
false
17,888,244
0
0
0
1
I'm developing a task where I need a few pieces of information specific to the environment. I set up the ~/.fabricrc file, but when I run the task via the command line, the data is not in the env variable. I don't really want to add the -c config option, to keep the deployment simple. In the task, I'm calling env.cb_account, and I have cb_account=foobar in ~/.fabricrc, but it throws AttributeError. Has anybody else run into this problem? I found the information when I view env outside of my function/task. So now the question is how do I get that information into my task? I already have 6 parameters, so I don't think it would be wise to add more, especially when those parameters wouldn't change.
How to run a Python program without a window?
17,894,151
2
0
89
0
python
Run it with pythonw.exe instead of python.exe.
0
1
0
0
2013-07-27T04:07:00.000
2
0.197375
false
17,894,135
1
0
0
2
Is it possible to make a program in Python that, when run, does not actually open any window (including command prompt)? For example, opening the program would appear to do nothing, but in reality, the program is running in the background somewhere. Thanks!
How to run a Python program without a window?
17,894,155
3
0
89
0
python
Are you running the Python program by double-clicking a *.py file in Windows? Then rename the *.py file to *.pyw.
0
1
0
0
2013-07-27T04:07:00.000
2
1.2
true
17,894,135
1
0
0
2
Is it possible to make a program in Python that, when run, does not actually open any window (including command prompt)? For example, opening the program would appear to do nothing, but in reality, the program is running in the background somewhere. Thanks!
GAE Blobstore file-like API deprecation timeline (py 2.7 runtime)
17,959,762
1
5
677
0
google-app-engine,python-2.7
Official response from Chris Ramsdale, Product Manager, Google App Engine: while there's currently no defined date for decommissioning this API, we are committed to supporting it throughout the remainder of the year (2013). please don't hesitate to reach out to me directly [redacted], if you have further questions (this thread is fine as well).
0
1
0
0
2013-07-27T22:20:00.000
1
1.2
true
17,903,025
0
0
1
1
This is a question for the App Engine team. Last week we realized that the App Engine team had marked the file-like API for writing and reading to the blobstore as being deprecated and likely to be removed in the future. We have quite a bit of infrastructure relying on that API that now we need to port to the alternative they suggest (Google Cloud Storage) and this is not a trivial effort (especially considering our current backlog). So the question is: how soon will this file-like API be unavailable? It's fairly important for us to know as depending on the answer, we might shuffle our backlog to prioritize the porting of using the Blobstore to GCS. Thanks.
Run both php and python at the same time on google app engine
17,911,235
1
0
431
0
php,python,google-app-engine,runtime
Quite simply, no. You'll have to use separate modules, or pick one language and use it for both of the things you describe.
0
1
0
1
2013-07-28T15:15:00.000
3
0.066568
false
17,909,688
0
0
1
3
I heard that this was possible using the new modules feature of Google App Engine, but it requires two different modules, which is basically like two different apps. I would like to be able to run my Python and PHP in the same application. I'm getting some results via Python and I want to parse them using PHP to get an API that is able to communicate with my other web applications online. It will be like a proxy between my Python scripts and the web application. Is there any way to achieve this?
Run both php and python at the same time on google app engine
17,911,325
0
0
431
0
php,python,google-app-engine,runtime
Segregate your applications in different modules and communicate between the two using the GAE Data Store or Memcache. Your applications can signal each other using a GET request with the name of the Memcache key or the url safe data store key.
0
1
0
1
2013-07-28T15:15:00.000
3
0
false
17,909,688
0
0
1
3
I heard that this was possible using the new modules feature of Google App Engine, but it requires two different modules, which is basically like two different apps. I would like to be able to run my Python and PHP in the same application. I'm getting some results via Python and I want to parse them using PHP to get an API that is able to communicate with my other web applications online. It will be like a proxy between my Python scripts and the web application. Is there any way to achieve this?
Run both php and python at the same time on google app engine
17,914,860
0
0
431
0
php,python,google-app-engine,runtime
You can achieve the proxy pattern by simply making http requests from one module to the other, using the URLFetch service.
0
1
0
1
2013-07-28T15:15:00.000
3
0
false
17,909,688
0
0
1
3
I heard that this was possible using the new modules feature of Google App Engine, but it requires two different modules, which is basically like two different apps. I would like to be able to run my Python and PHP in the same application. I'm getting some results via Python and I want to parse them using PHP to get an API that is able to communicate with my other web applications online. It will be like a proxy between my Python scripts and the web application. Is there any way to achieve this?
Running GAE remote_api_shell.py in a iPython notebook web interface
17,931,783
0
0
259
0
google-app-engine,ipython,ipython-notebook
IPython Notebook has profile directories in ~/.ipython, which contain a startup directory for Python scripts; these can be used to do the customization of sys.path and login credentials the way remote_api_shell.py does.
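A minimal sketch of such a startup script (e.g. ~/.ipython/profile_default/startup/00-remote-api.py), with hypothetical SDK path, app id, server name, and credentials; depending on the SDK version, some extra sys.path setup may be needed:

import sys
sys.path.append('/usr/local/google_appengine')  # hypothetical SDK path

from google.appengine.ext.remote_api import remote_api_stub

def auth_func():
    return ('reader@example.com', 'app-specific-password')  # hypothetical

remote_api_stub.ConfigureRemoteApi('myapp', '/_ah/remote_api', auth_func,
                                   'myapp.appspot.com')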
0
1
0
0
2013-07-28T20:46:00.000
1
1.2
true
17,912,720
0
0
1
1
I'd like to run modified remote_api_shell.py in an iPython notebook web interface, so that non-technical users with a basic grasp of Python could have read-only access to our production database. Has anyone set something like this up, and what's the best way of going about it?
Running PyObjC application (built in Xcode) on previous version of Mac OS?
17,934,374
0
0
271
0
python,xcode,macos,pyobjc
Use py2app to create the application bundle, and do that using a separate install of Python (that is don't use /System/Library/Framework/Python.framework). The python install you use should be compiled with the MACOSX_DEPLOYMENT_TARGET set to the minimum OSX release you want to support. When you do this, it should be possible to deploy to older OSX releases. I regularly do this for building apps on a 10.8 machine that get deployed to a 10.5 machine. You do need to take some care when including other libraries, especially when those include a configure script: sometimes the configure script detects functionality that is available on the build machine, but not on the deployment machine. BTW. You need to link against the same version of Python as you use at runtime. CPython's ABI is not compatible between feature releases (that is, the 2.6 ABI is not necessarily compatible with the 2.7 ABI). For python 3.x there is a stable ABI that is compatible between feature releases, but AFAIK that's primarily targeting Python extensions and I don't know how useful that is for embedding Python in your application.
0
1
0
0
2013-07-29T07:41:00.000
2
0
false
17,918,480
1
0
0
1
I am using Xcode to build a PyObjC application. The app runs fine on the build machine (running 10.8) but crashes on startup on a machine running 10.6, because it fails to find the Python 2.7 installation. Fair enough -- the preinstalled Python on 10.6 is Python 2.5. But I don't really care which Python version my app uses, I just want it to use the latest version of Python it can find. How can I either: A) Tell my app to use the latest version of Python available on the host system, OR B) Bundle the entire Python source into my app? I have been very frustrated by this issue and any help would be greatly appreciated!
WebSocket messages get queued when client disconnected
18,775,244
1
0
1,868
0
python,websocket,tornado
Browsers may handle websocket client messages in a separate thread, which is not blocked by sleep. Even if a thread of your custom application is not active, when you force it to sleep (like sleep(100)), the TCP connection is not closed. The socket handle is still managed by the OS kernel, and the TCP server still sends messages until the TCP client's receive window overflows. Even after this, an application on the server side can still submit new messages successfully; they are buffered at the TCP level on the server side until the TCP outgoing buffer overflows. When the outgoing buffer is full, the application should get an error code on the send request, like "no more space". I have not tried it myself, but it should behave like this. Try closing the client (terminating the process) and you will see a totally different picture: the server will notice the disconnect. Both cases, disconnect and overflow, are difficult to handle on the server side in highly reliable scenarios. The disconnect case can be converted into the overflow case (the websocket server can buffer messages up to some limit in user space while the client reconnects). However, there is no easy way to reliably handle overflow of the transmit buffer limit. I see only one solution: propagate the overflow error back to the originator of the event that raised the message which was discarded due to overflow.
0
1
1
0
2013-07-29T21:22:00.000
1
0.197375
false
17,934,427
0
0
0
1
We have a server, written using tornado, which sends asynchronous messages to a client over websockets. In this case, a javascript app running in Chrome on a Mac. When the client is forcibly disconnected, in this case by putting the client to sleep, the server still thinks it is sending messages to the client. Additionally, when the client awakens from sleep, the messages are delivered in a burst. What is the mechanism by which these messages are queued/buffered? Who is responsible? Why are they still delivered? Who is reconnecting the socket? My intuition is that even though websockets are not request/response like HTTP, they should still require ACK packets since they are built on TCP. Is this being done on purpose to make the protocol more robust to temporary drops in the mobile age?
'python' is not recognized as an internal or external command
27,385,986
314
124
662,372
0
python,cmd
Try "py" instead of "python" from command line: C:\Users\Cpsa>py Python 3.4.1 (v3.4.1:c0e311e010fc, May 18 2014, 10:38:22) [MSC v.1600 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>>
0
1
0
0
2013-07-30T17:04:00.000
15
1
false
17,953,124
1
0
0
7
So I have recently installed Python Version 2.7.5 and I have made a little loop thing with it, but the problem is, when I go to cmd and type python testloop.py I get the error: 'python' is not recognized as an internal or external command I have tried setting the path, but to no avail. Here is my path: C:\Program Files\Python27 As you can see, this is where my Python is installed. I don't know what else to do. Can someone help?
'python' is not recognized as an internal or external command
57,309,414
0
124
662,372
0
python,cmd
If you uninstalled and then re-installed, and are running 'python' in the CLI, make sure to open a new CMD window after your installation for 'python' to be recognized. 'py' will probably be recognized in an old CLI because it's not tied to any particular version.
0
1
0
0
2013-07-30T17:04:00.000
15
0
false
17,953,124
1
0
0
7
So I have recently installed Python Version 2.7.5 and I have made a little loop thing with it, but the problem is, when I go to cmd and type python testloop.py I get the error: 'python' is not recognized as an internal or external command I have tried setting the path, but to no avail. Here is my path: C:\Program Files\Python27 As you can see, this is where my Python is installed. I don't know what else to do. Can someone help?
'python' is not recognized as an internal or external command
58,568,261
0
124
662,372
0
python,cmd
Option 1: select the option to add Python to the environment variables during installation. Option 2: go to C:\Users -> AppData (a hidden folder) -> Local\Programs\Python\Python38-32 (depends on the version installed) -> Scripts, copy the path, and add it to the PATH environment variable. For me this path worked: C:\Users\Username\AppData\Local\Programs\Python\Python38-32\Scripts
0
1
0
0
2013-07-30T17:04:00.000
15
0
false
17,953,124
1
0
0
7
So I have recently installed Python Version 2.7.5 and I have made a little loop thing with it, but the problem is, when I go to cmd and type python testloop.py I get the error: 'python' is not recognized as an internal or external command I have tried setting the path, but to no avail. Here is my path: C:\Program Files\Python27 As you can see, this is where my Python is installed. I don't know what else to do. Can someone help?
'python' is not recognized as an internal or external command
45,619,149
0
124
662,372
0
python,cmd
Another helpful but simple solution might be restarting your computer after the installation, if Python is already in the PATH variable. This has been a mistake I usually make when installing Python on a new machine.
0
1
0
0
2013-07-30T17:04:00.000
15
0
false
17,953,124
1
0
0
7
So I have recently installed Python Version 2.7.5 and I have made a little loop thing with it, but the problem is, when I go to cmd and type python testloop.py I get the error: 'python' is not recognized as an internal or external command I have tried setting the path, but to no avail. Here is my path: C:\Program Files\Python27 As you can see, this is where my Python is installed. I don't know what else to do. Can someone help?
'python' is not recognized as an internal or external command
48,145,947
11
124
662,372
0
python,cmd
Type py -v instead of python -v in command prompt
0
1
0
0
2013-07-30T17:04:00.000
15
1
false
17,953,124
1
0
0
7
So I have recently installed Python Version 2.7.5 and I have made a little loop thing with it, but the problem is, when I go to cmd and type python testloop.py I get the error: 'python' is not recognized as an internal or external command I have tried setting the path, but to no avail. Here is my path: C:\Program Files\Python27 As you can see, this is where my Python is installed. I don't know what else to do. Can someone help?