| Title (string, 15 to 150 chars) | A_Id (int64, 2.98k to 72.4M) | Users Score (int64, -17 to 470) | Q_Score (int64, 0 to 5.69k) | ViewCount (int64, 18 to 4.06M) | Database and SQL (int64, 0 to 1) | Tags (string, 6 to 105 chars) | Answer (string, 11 to 6.38k chars) | GUI and Desktop Applications (int64, 0 to 1) | System Administration and DevOps (int64, 1 to 1) | Networking and APIs (int64, 0 to 1) | Other (int64, 0 to 1) | CreationDate (string, 23 chars) | AnswerCount (int64, 1 to 64) | Score (float64, -1 to 1.2) | is_accepted (bool, 2 classes) | Q_Id (int64, 1.85k to 44.1M) | Python Basics and Environment (int64, 0 to 1) | Data Science and Machine Learning (int64, 0 to 1) | Web Development (int64, 0 to 1) | Available Count (int64, 1 to 17) | Question (string, 41 to 29k chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Python command not working in command prompt
| 71,605,377
| 4
| 116
| 713,166
| 0
|
python,windows,windows-8,command
|
Python 3.10 uses py and not python.
Try py --version if you are using this version.
| 0
| 1
| 0
| 0
|
2012-11-28T01:53:00.000
| 23
| 0.034769
| false
| 13,596,505
| 1
| 0
| 0
| 11
|
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do?
Note: I have Python 2.7 and Python 3.2 installed on my computer.
|
Python command not working in command prompt
| 61,078,712
| 1
| 116
| 713,166
| 0
|
python,windows,windows-8,command
|
I wanted to add a common problem that happens on installation: it is possible that the installation path is too long. To avoid this, change the default path so that it is shorter than 250 characters.
I realized this when I did a custom installation of the software on a Windows 10 operating system. In the custom install, it is possible to have the installer add Python to the PATH variable for you.
| 0
| 1
| 0
| 0
|
2012-11-28T01:53:00.000
| 23
| 0.008695
| false
| 13,596,505
| 1
| 0
| 0
| 11
|
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do?
Note: I have Python 2.7 and Python 3.2 installed on my computer.
|
Python command not working in command prompt
| 60,397,519
| 4
| 116
| 713,166
| 0
|
python,windows,windows-8,command
|
Even after following the instructions from the valuable answers above, calling python from the command line would open the Microsoft Store and redirect me to a page to download the software.
I discovered this was caused by a 0 KB python.exe file in AppData\Local\Microsoft\WindowsApps which was taking precedence over my Python executable in my PATH.
Removing this folder from my PATH solved it.
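On Python 3.3+, a quick way to check which executable wins on PATH, and to spot a zero-byte stub like this, is shutil.which (a diagnostic sketch, not part of the original answer):

```python
import os
import shutil

# shutil.which reports the first matching executable on PATH -- the one
# the shell would actually run when you type "python".
print(shutil.which("python"))

# List every candidate on PATH in search order, with its size, to spot
# a zero-byte WindowsApps stub shadowing the real interpreter.
exe = "python.exe" if os.name == "nt" else "python"
for d in os.environ.get("PATH", "").split(os.pathsep):
    candidate = os.path.join(d, exe)
    if os.path.isfile(candidate):
        print(candidate, os.path.getsize(candidate), "bytes")
```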
| 0
| 1
| 0
| 0
|
2012-11-28T01:53:00.000
| 23
| 0.034769
| false
| 13,596,505
| 1
| 0
| 0
| 11
|
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do?
Note: I have Python 2.7 and Python 3.2 installed on my computer.
|
Python command not working in command prompt
| 52,572,393
| 3
| 116
| 713,166
| 0
|
python,windows,windows-8,command
|
Here's one for office workers using a computer shared by others.
I added my user-level Python directory to the PATH variable and created a PYTHONPATH variable. These are listed under Environment Variables in Computer Properties -> Advanced Settings in Windows 7.
Example:
C:\Users\randuser\AppData\Local\Programs\Python\Python37
This made it so I could use the command prompt.
Hope this helped.
| 0
| 1
| 0
| 0
|
2012-11-28T01:53:00.000
| 23
| 0.026081
| false
| 13,596,505
| 1
| 0
| 0
| 11
|
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do?
Note: I have Python 2.7 and Python 3.2 installed on my computer.
|
Python command not working in command prompt
| 46,435,281
| 1
| 116
| 713,166
| 0
|
python,windows,windows-8,command
|
If you are working with the command prompt and you are still facing the issue even after adding the Python path to the PATH system variable:
Remember to restart the command prompt (cmd.exe); it only picks up PATH changes on startup.
| 0
| 1
| 0
| 0
|
2012-11-28T01:53:00.000
| 23
| 0.008695
| false
| 13,596,505
| 1
| 0
| 0
| 11
|
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do?
Note: I have Python 2.7 and Python 3.2 installed on my computer.
|
Python command not working in command prompt
| 45,970,626
| 3
| 116
| 713,166
| 0
|
python,windows,windows-8,command
|
Just go with the command py. I'm running Python 3.6.2 on Windows 7 and it works just fine.
I removed all the Python paths from the system directory, and they don't show up when I run the command echo %path% in cmd. Python still works fine.
I ran into this by accidentally pressing Enter while typing python...
EDIT: I didn't mention that I installed Python to a custom folder, C:\Python\
| 0
| 1
| 0
| 0
|
2012-11-28T01:53:00.000
| 23
| 0.026081
| false
| 13,596,505
| 1
| 0
| 0
| 11
|
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do?
Note: I have Python 2.7 and Python 3.2 installed on my computer.
|
Python command not working in command prompt
| 34,934,533
| 1
| 116
| 713,166
| 0
|
python,windows,windows-8,command
|
When you add the Python directory to the path (Computer > Properties > Advanced System Settings > Advanced > Environment Variables > System Variables > Path > Edit), remember to separate it from the previous entry with a semicolon, and make sure you are adding the precise directory where the file python.exe is stored (e.g. C:\Python\Python27, if that is where python.exe lives). Then restart the command prompt.
| 0
| 1
| 0
| 0
|
2012-11-28T01:53:00.000
| 23
| 0.008695
| false
| 13,596,505
| 1
| 0
| 0
| 11
|
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do?
Note: I have Python 2.7 and Python 3.2 installed on my computer.
|
Python command not working in command prompt
| 13,596,605
| 2
| 116
| 713,166
| 0
|
python,windows,windows-8,command
|
Add the Python bin directory to your computer's PATH variable. It's listed under Environment Variables in Computer Properties -> Advanced Settings in Windows 7. It should be the same for Windows 8.
| 0
| 1
| 0
| 0
|
2012-11-28T01:53:00.000
| 23
| 0.01739
| false
| 13,596,505
| 1
| 0
| 0
| 11
|
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do?
Note: I have Python 2.7 and Python 3.2 installed on my computer.
|
Python command not working in command prompt
| 13,596,981
| 99
| 116
| 713,166
| 0
|
python,windows,windows-8,command
|
It finally worked!
I needed to do two things to get it to work:
Add C:\Python27\ to the end of the PATH system variable
Add C:\Python27\ to the end of the PYTHONPATH system variable
I had to add it to both for it to work.
If I added any subdirectories, it did not work for some reason.
Thank you all for your responses.
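As a quick sanity check after editing the variables (assuming the C:\Python27\ location from this answer), a few lines of Python show which interpreter actually launched and what the process sees:

```python
import os
import sys

# Which interpreter did the shell actually launch?
print(sys.executable)

# What the current process sees for the two variables edited above.
# A freshly opened command prompt is needed for changes to show up here.
print(os.environ.get("PATH", ""))
print(os.environ.get("PYTHONPATH", "(not set)"))
```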
| 0
| 1
| 0
| 0
|
2012-11-28T01:53:00.000
| 23
| 1.2
| true
| 13,596,505
| 1
| 0
| 0
| 11
|
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do?
Note: I have Python 2.7 and Python 3.2 installed on my computer.
|
Python command not working in command prompt
| 38,766,602
| 47
| 116
| 713,166
| 0
|
python,windows,windows-8,command
|
The video was very useful.
Go to System Properties -> Advanced (or type "system env" in the Start menu).
Click Environment Variables.
Edit the PATH variable.
Add two new paths, C:\Python27 and C:\Python27\scripts.
Run cmd again and type python.
It worked for me.
| 0
| 1
| 0
| 0
|
2012-11-28T01:53:00.000
| 23
| 1
| false
| 13,596,505
| 1
| 0
| 0
| 11
|
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do?
Note: I have Python 2.7 and Python 3.2 installed on my computer.
|
Python not getting IP if cable connected after script has started
| 13,643,155
| 0
| 1
| 772
| 0
|
python,networking,httplib
|
After a lot more research, the glibc problem jedwards suggested seemed to be the cause. I did not find a real solution, but I made a workaround for my use case.
Since I only use one URL, I added my own resolver file.
A small daemon gets the IP address of the URL when the PHY reports the cable connected. This IP is saved to my own resolv.conf-style file, and the Python script retrieves the IP to use for its posts from that file.
Not really a good solution, but a solution.
| 0
| 1
| 1
| 0
|
2012-11-28T13:47:00.000
| 3
| 0
| false
| 13,606,584
| 0
| 0
| 0
| 1
|
I hope this doesn't cross into superuser territory.
I have an embedded Linux system where the set of system processes is naturally quite stripped down. I'm not sure which system process normally monitors the physical layer and starts a DHCP client when the network cable is plugged in, so I made one myself.
The problem is that if a Python script using HTTP connections is already running before I have an IP address, it never gets a connection. Even after I have a valid IP, the script still gets
"Temporary failure in name resolution"
So how can I get Python to notice the newly available connection without restarting the script?
Alternatively, am I missing some procedure Linux normally runs on network cable connect?
The DHCP client I am using is udhcpc and the Python version is 2.6, using httplib for connections.
|
using a Python Win32Com .py in Unix - QC OTA library
| 13,610,968
| 0
| 0
| 577
| 0
|
python,unix,python-2.7,hp-quality-center,comobject
|
OTA is a Win32 COM library; it is not intended to run on Linux.
You can try to use WINE on Linux, but you will need to run your Python application inside WINE as well.
| 0
| 1
| 0
| 0
|
2012-11-28T16:04:00.000
| 1
| 0
| false
| 13,609,322
| 0
| 0
| 0
| 1
|
Please forgive me if my question confuses you.
I have to use HP Quality Center's QC OTA Library (DLL) in my Python Script.
I was able to do this on my Windows System after Registering that DLL using Com Makepy Utility . The utility gave me a .py for that .dll inside the gen_py folder.
Here is my question,
Will i be able to use that same registered .py file on a Unix system as well? or Do i have any other alternatives to let my Python Script use that Quality Center Library file in Unix as Python compatible class?
|
how to run a python script, only when user is not actively working?
| 13,612,274
| 3
| 1
| 121
| 0
|
python,cpu,python-idle
|
If you run the script at low priority (nice -n 19 python foo.py), it will be running all the time, but won't have much of a noticeable impact on higher-priority processes (which will be all of them, because 19 is the lowest priority level).
| 0
| 1
| 0
| 0
|
2012-11-28T18:44:00.000
| 1
| 0.53705
| false
| 13,612,225
| 0
| 0
| 0
| 1
|
I want to run a very long-running Python script, and it's hard on the CPU.
Is there a way to find out whether the user is actively working (moving the mouse or typing)?
Edit: running on Windows only. Priority is not a good idea; the script would still take a lot of CPU.
|
How to setup WSGI server to run similarly to Apache?
| 13,619,836
| 3
| 1
| 300
| 0
|
python,tornado,wsgi,cherrypy
|
What you are after would possibly happen anyway with WSGI servers. Any Python exception only affects the current request; the framework or WSGI server catches the exception, logs it, and translates it into an HTTP 500 status page. The application stays in memory and continues to handle future requests.
What it comes down to is what exactly you mean by "crashes the Apache process".
It would be rare for your code to crash in the sense of making the whole process exit with a core dump. You may be conflating an application-level (language) error with a full process crash.
Even if you did find a way to crash a process, Apache/mod_wsgi handles that okay and the process will be replaced. The Gunicorn WSGI server will also do that. CherryPy will not, unless you have a process manager running which monitors it and restarts it. Tornado in its single-process mode has the same problem. Using Tornado as the worker in Gunicorn is one way around that; I believe Tornado itself may also have a process manager now for running multiple processes, which allows it to restart them if they die.
Do note that if the application bug which caused the Python exception is bad enough to corrupt state within the process, subsequent requests may have issues. This is the one difference from PHP. With PHP, after any request, successful or not, the application is effectively thrown away and doesn't persist, so buggy code cannot affect subsequent requests. In Python, because the process keeps the loaded code and retained state between requests, you could technically get into a state where you would have to restart the process to fix it. I don't know of any WSGI server, though, that has a mechanism to automatically restart a process if one request returned an error response.
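The behaviour described above, where an unhandled exception becomes a 500 response while the process lives on, can be sketched as a tiny WSGI middleware (the app, URL, and messages here are invented for illustration):

```python
def flaky_app(environ, start_response):
    # Simulates an application bug on one particular URL.
    if environ.get("PATH_INFO") == "/boom":
        raise RuntimeError("application bug")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]


def error_middleware(app):
    # What frameworks/WSGI servers do internally: catch the exception,
    # return a 500 page, and keep the process alive for later requests.
    def wrapper(environ, start_response):
        try:
            return app(environ, start_response)
        except Exception:
            start_response("500 Internal Server Error",
                           [("Content-Type", "text/plain")])
            return [b"internal server error"]
    return wrapper
```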
| 0
| 1
| 0
| 1
|
2012-11-29T04:49:00.000
| 2
| 0.291313
| false
| 13,619,021
| 0
| 0
| 1
| 1
|
I'm coming from the PHP/Apache world, where running an application is super easy. Whenever a PHP application crashes, the Apache process running that request will stop, but the server will still be running happily and responding to other clients. Is there a way to have a Python application work in a similar way? How would I set up a WSGI server like Tornado or CherryPy so it works similarly? Also, how would I run several applications with different domains from one server?
|
Virtualenv and python - how to work outside the terminal?
| 13,619,252
| 2
| 3
| 2,074
| 0
|
python,virtualenv
|
Tell Eclipse or Idle that the python interpreter is django_venv/bin/python instead of /usr/bin/python
| 0
| 1
| 0
| 0
|
2012-11-29T04:55:00.000
| 2
| 1.2
| true
| 13,619,088
| 1
| 0
| 1
| 1
|
When I enter my virtual environment (source django_venv/bin/activate), how do I make that environment transfer to apps run outside the terminal, such as Eclipse or even Idle? Even if I run Idle from the virtualenv terminal window command line (by typing idle), none of my pip installed frameworks are available within Idle, such as SQLAlchemy (which is found just fine when running a python script from within the virtual environment).
|
Transactions in Web2Py over Google App Engine
| 13,892,159
| 1
| 0
| 192
| 0
|
python,google-app-engine,web2py
|
Mutual exclusion is already built into the DBMS, so we just have to use that. Let's take an example.
First, the table in your model should be defined so that the room number is unique (use a UNIQUE constraint).
When User1 and User2 both query for a room, they both get a response saying the room is vacant. When both users send the "BOOK" request for that room at the same time, the booking function should directly insert both requests into the database. Only one will actually succeed (because of the UNIQUE constraint); the other will raise a DAL exception. Catch the exception and respond to the user whose "BOOK" request was unsuccessful, saying: you just missed this room by an instant :-)
Hope this helped.
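A minimal sketch of this pattern using sqlite3 (the table and column names are invented; web2py's DAL raises its own exception type, but the mechanism is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# UNIQUE on room: the database, not the application, enforces that a
# room can be booked only once.
conn.execute("CREATE TABLE booking (room TEXT UNIQUE, user TEXT)")


def book(room, user):
    # Insert directly and let the constraint arbitrate: of two
    # concurrent inserts for the same room, only the first succeeds.
    try:
        with conn:
            conn.execute(
                "INSERT INTO booking (room, user) VALUES (?, ?)",
                (room, user),
            )
        return True
    except sqlite3.IntegrityError:
        # UNIQUE constraint failed: someone booked it an instant earlier.
        return False


print(book("101", "user1"))  # True  - the room is booked
print(book("101", "user2"))  # False - rejected by the UNIQUE constraint
```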
| 0
| 1
| 0
| 0
|
2012-11-29T09:45:00.000
| 1
| 0.197375
| false
| 13,622,895
| 0
| 0
| 1
| 1
|
I'm making a room reservation system in Web2Py over Google App Engine.
When a user is booking a Room the system must be sure that this room is really available and no one else have reserved it just a moment before.
To be sure I make a query to see if the room is available, then I make the reservation. The problem is how can I do this transaction in a kind of "Mutual exclusion" to be sure that this room is really for this user?
Thank you!! :)
|
PyDev interpreter indication within USB drive
| 13,632,373
| 0
| 2
| 1,265
| 0
|
python,eclipse,eclipse-plugin
|
Have you considered giving the interpreter location as a relative path? For example:
..\..\..\python\python.exe. I am not sure what the working directory of PyDev is, but if you put enough .. in, Windows will stop at the drive root.
| 0
| 1
| 0
| 0
|
2012-11-29T10:33:00.000
| 4
| 0
| false
| 13,623,771
| 1
| 0
| 0
| 3
|
I'm using PortableApps application with portable eclipse and portable python installed. I've equipped my eclipse with PyDev plugin enabling me to run and debug my files on whatever windows-based platform I'd like. The problem is in order to use the interpreter inside my USB stick, I need to address the proper location of the python interpreter in PyDev settings. with USB drive connected to different computers, I get different drive letter for my USB stick which would make problem locating the path of the installed python inside my USB stick.
Is there any way to enforce eclipse's PyDev plugin to look for python interpreter which is installed inside my USB permanently?!
|
PyDev interpreter indication within USB drive
| 14,310,218
| 0
| 2
| 1,265
| 0
|
python,eclipse,eclipse-plugin
|
Have you tried using subst? You can map the drive to a fixed letter like Z: or X:, and on any computer where you plug in your pen drive you just run a .bat with the subst command and your environment is ready...
| 0
| 1
| 0
| 0
|
2012-11-29T10:33:00.000
| 4
| 0
| false
| 13,623,771
| 1
| 0
| 0
| 3
|
I'm using PortableApps application with portable eclipse and portable python installed. I've equipped my eclipse with PyDev plugin enabling me to run and debug my files on whatever windows-based platform I'd like. The problem is in order to use the interpreter inside my USB stick, I need to address the proper location of the python interpreter in PyDev settings. with USB drive connected to different computers, I get different drive letter for my USB stick which would make problem locating the path of the installed python inside my USB stick.
Is there any way to enforce eclipse's PyDev plugin to look for python interpreter which is installed inside my USB permanently?!
|
PyDev interpreter indication within USB drive
| 18,883,477
| 0
| 2
| 1,265
| 0
|
python,eclipse,eclipse-plugin
|
Use a relative Eclipse path variable, like:
{eclipse_home}..\..\..\PortablePython\App\python.exe
| 0
| 1
| 0
| 0
|
2012-11-29T10:33:00.000
| 4
| 0
| false
| 13,623,771
| 1
| 0
| 0
| 3
|
I'm using PortableApps application with portable eclipse and portable python installed. I've equipped my eclipse with PyDev plugin enabling me to run and debug my files on whatever windows-based platform I'd like. The problem is in order to use the interpreter inside my USB stick, I need to address the proper location of the python interpreter in PyDev settings. with USB drive connected to different computers, I get different drive letter for my USB stick which would make problem locating the path of the installed python inside my USB stick.
Is there any way to enforce eclipse's PyDev plugin to look for python interpreter which is installed inside my USB permanently?!
|
Consistent backups in python
| 13,625,736
| 0
| 8
| 1,558
| 0
|
python,winapi,backup,volume-shadow-service
|
I would look into IronPython on your Windows client side, simply because it will give you access to COM+ DLLs and other WinAPI objects. It's .NET, but it would still be Python. I've not used it enough to say with 100% certainty that it will work with VSS, but it should.
| 0
| 1
| 0
| 0
|
2012-11-29T10:55:00.000
| 3
| 0
| false
| 13,624,198
| 0
| 0
| 0
| 1
|
I'm working on a remote backup solution in python. The server part will run on Unix/Linux because it will use hard links for efficient incremental backups.
The client part, however, will have to run on Windows too, and file locking can be a problem.
From what I've researched, Volume Shadow Copy Service (VSS) is the thing I need. Similar to a LVM snapshot, and isn't affected by file locking.
The VSS API, however, doesn't seem to be implemented in pywin32.
My current idea is to use some wrapper that will create a temporary VSS snapshot, run the client, and delete the snapshot afterwards.
I'm wondering if anyone has experience in this scenario.
|
Adding PyMongo to Python IDE
| 32,381,638
| 0
| 0
| 666
| 0
|
python,ide,pymongo
|
If you are on the Windows platform, just run the PyMongo installer executable and it will install into the Python directory. Then you will be able to access it in any IDE, such as PyCharm, by typing:
import pymongo
| 0
| 1
| 0
| 0
|
2012-12-02T08:33:00.000
| 2
| 0
| false
| 13,667,690
| 1
| 0
| 0
| 1
|
I am a Python newbie. I want to use the pymongo library to access MongoDB from some convenient IDE, and after looking around the web I decided to use Wing.
Can someone point out how to add the pymongo library to the Wing IDE (or to any other IDE, for that matter)? I want to get auto-completion for commands.
Thanks
|
python file in dos and unix
| 13,669,170
| 3
| 0
| 1,281
| 0
|
python,linux
|
Windows line endings are CRLF, or \r\n.
Unix uses simply \n.
When the OS reads your shebang line, it sees #!/usr/bin/python\r. It can't run this command.
A simple way to see this behavior from a unix shell would be $(echo -e 'python\r') (which tries to run python\r as a command). This output will also be similar to : command not found.
Many advanced code editors under Windows support natively saving with unix line endings.
| 0
| 1
| 0
| 1
|
2012-12-02T12:01:00.000
| 2
| 0.291313
| false
| 13,669,092
| 0
| 0
| 0
| 1
|
I have some Python files on Windows, and I transferred them to my Gentoo machine via Samba.
I checked that their mode is executable, and I used ./xxx.py to run one, but I get an error:
: No such file or directory
What puzzles me is that it does not say which file is missing.
When I use python xxx.py, it runs correctly.
I then checked the line-ending format with set ff in vim, found it was dos, and used set ff=unix to change it; now it runs with ./xxx.py.
But why does python xxx.py work even when ff=dos?
|
Are parameters passed to subprocess safe in any way from code injection?
| 13,669,702
| 4
| 1
| 643
| 0
|
python,subprocess
|
As long as you use a list of parameters and leave shell set to False, yes, the parameters are safe from code injection. They will not be parsed by a shell and thus not subject to any code execution opportunities.
Note that on Windows, the chances of using code injection are already mitigated by the fact that a CreateProcess call is used.
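A small demonstration of the point, using the Python interpreter itself as the child process (the tag value is invented): with a list of arguments and the default shell=False, a metacharacter-laden argument arrives in the child verbatim, and nothing is executed.

```python
import subprocess
import sys

# A hostile-looking tag value full of shell metacharacters.
tag = "Actor; del C:\\Windows & rm -rf /"

# With an argument list and shell=False (the default), no shell ever
# parses this string; it is handed to the child as a single argv entry.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", tag],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # the tag, echoed back as one literal string
```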
| 0
| 1
| 0
| 0
|
2012-12-02T13:25:00.000
| 1
| 1.2
| true
| 13,669,690
| 0
| 0
| 0
| 1
|
Are parameters passed to subprocess safe in any way from code injection?
I am building a small python program to do some movie file tagging. For ease, I am passing the tag info to AtomicParsley (on Windows) using subprocess.call(). The tag information is an online source, retrieved automatically. If some individual were to place code in the tags (i.e. replaced the actors with some sort of rd term), would subprocess be safe from that execution? This is more of a conceptual question than a question about the specifics of the language.
The subprocess.call is executed with ['AtomicParsley',filename,'--tag1',tag1_info,(...)]. Since the first part of the command is guaranteed to be the name of the AP executable and the second is guaranteed to be a valid filename, I should think any malicious code inside the metainfo database would just be written as a string to the appropriate tag (i.e. the Actor's name would be del C:\Windows). Do those seem like reasonable assumptions?
|
End Python Script when running it as boot script?
| 13,706,652
| 1
| 2
| 575
| 0
|
python,debian,boot
|
Seems like it was just a dumb mistake on my part.
I realized the whole point of this was to allow the Python script to run as a background process during boot, so I added " &" to the end of the script call, as you would when running it from the shell, and voilà: I can get to my password prompt by pressing Enter.
I wanted to put this answer here just in case this would be something horribly wrong to do, but it accomplishes what I was looking for.
| 0
| 1
| 0
| 1
|
2012-12-03T00:47:00.000
| 2
| 1.2
| true
| 13,675,689
| 0
| 0
| 0
| 1
|
I am using Debian and I have a python script that I would like to run during rc.local so that it will run on boot. I already have it working with a test file that is meant to run and terminate.
The problem is that this file should eventually run indefinitely using Scheduler. Its job is to do serial reads, a small amount of processing on those reads, and inserts into a MySQL database. However, I am nervous about not being able to cancel the script to get to my login prompt if changes need to be made, since I was unable to terminate the test script early using Ctrl+C (^C).
My hope is that there is some command that I am just missing that will accomplish this. Is there another key command that I'm missing that will terminate the python script and end rc.local?
Thanks.
EDIT: Another possible solution that would help me here is if there is a way to start a python script in the background during boot. So it would start the script and then allow login while continuing to run the script in the background.
I'm starting to think this isn't something that's possible to accomplish so other suggestions to accomplish something similar to what I'm trying to do would be helpful as well.
Thanks again.
|
Debugging Python bottle apps with WingIDE
| 13,687,662
| 3
| 4
| 537
| 0
|
python,debugging,breakpoints,bottle,wing-ide
|
Are you debugging under WSGI using wingdbstub.py, or launching bottle from the IDE? I'm not that familiar with bottle, but a common problem is a web framework's reload mechanism running code in a sub-process that is not debugged. I'm not certain bottle would do that under WSGI; printing the process id at the time of importing wingdbstub (or at startup, if launching from the IDE) and again at the line where the breakpoint is missed would rule this in or out. The "reloader" arg for Bottle.__init__ may be relevant here: if it is set to True, try setting it to False when debugging under Wing.
Another thing to try is to raise an exception on purpose where the breakpoint is (like assert 0, 'test exception') and see whether this exception is reported in Wing's Exceptions tool, and if so, whether Wing also manages to open the source code. If bottle is running code in a way that doesn't make it possible to find the source code, the debugger would still stop on the assertion (Wing's debugger stops on all assertions by default, even if the host code handles the exception) but it would fail to show the debug file, and would put up a message in the status area (at the bottom of the IDE screen and in the Messages tool) that indicates the file name the debug process specified. Depending on this, it may be possible to fix the problem (though it would require modifying Bottle if the file name is something like "").
BTW, to insert code that is only run under Wing's debugger, use something like this:
import os
if 'WINGDB_ACTIVE' in os.environ:
    # code here
If this doesn't help, please email support at wingware dot com.
| 0
| 1
| 0
| 0
|
2012-12-03T15:25:00.000
| 2
| 0.291313
| false
| 13,686,325
| 0
| 0
| 0
| 1
|
I'm writing an Python Bottle application (Python 2.7.2 and Bottle 0.10.9) and developing it in the WingIDE (3.2.8-1) Professional for Linux. This all works well, except when I want to debug the Bottle application. I have it running in standalone mode within WingIDE, but it won't stop at any of my break points in the code, even if I set Bottle.debug(False). Does anyone have any suggestions/ideas about how I can setup Bottle so it will stop on breakpoints within WingIDE?
|
Can I store a blob with a key_name with Google Appengine ndb?
| 13,714,228
| 1
| 1
| 337
| 0
|
python,google-app-engine,blob,blobstore
|
When you upload data to the blobstore you receive a blob_key and a file_name. The blob_key is unique. The file_name is NOT unique. When you do another upload with the same file_name a new version is stored in the blobstore with the same file_name and a new unique blob_key. The first blob is NOT deleted. You have to do it yourself.
To administer these uploaded blobs, you create a datastore entity with your own key_name. You can use the file_name for this purpose. And you can use a BlobKeyProperty (NDB) or blobstore.BlobReferenceProperty (datastore) in this entity to reference your blob (to save your blob_key reference). In this way your key_name / file_name uniquely identifies your blob.
| 0
| 1
| 0
| 0
|
2012-12-04T16:54:00.000
| 1
| 1.2
| true
| 13,707,922
| 0
| 0
| 1
| 1
|
I am building a service where you can upload images. On the blob creation I would like to supply a key_name, which will be used by the relevant entity to retrieve it later.
|
changing python windows command line
| 13,711,840
| 0
| 1
| 538
| 0
|
python
|
Install both Pythons and adjust the PATH in Windows. By default they install to directories like C:\Python27\ and C:\Python32\. Since Windows stops as soon as it finds the first python on the PATH, keep them in those separate directories and order the PATH entries so the version you want as the default comes first; this way you can run both of them.
| 0
| 1
| 0
| 0
|
2012-12-04T20:56:00.000
| 2
| 0
| false
| 13,711,765
| 1
| 0
| 0
| 1
|
I saved a Python script as a shortcut that I want to run. It opens but then closes right away.
I know why it is doing this: the Windows command line is opening it with Python 3.2, but the script is written for Python 2.7.
I need both versions on my PC; my question is how do I change the cmd default?
I have tried "open with" on the shortcut icon, and it just continues to default to 3.2.
Help, please
|
Daemon with python 3
| 13,722,236
| 1
| 2
| 3,305
| 0
|
python,python-3.x,daemon,launch-daemon
|
Suppose the Python script's name is monitor. Use the following steps:
Copy the monitor script into /usr/local/bin/ (not strictly necessary).
Also add a copy in /etc/init.d/.
Then execute the following command to make it executable:
sudo chmod a+x /etc/init.d/monitor
Finally, run the update-rc.d command:
sudo update-rc.d monitor defaults 98
This will start monitor automatically at boot.
| 0
| 1
| 0
| 1
|
2012-12-05T11:03:00.000
| 2
| 1.2
| true
| 13,721,808
| 0
| 0
| 0
| 1
|
I am writing a script in Python 3 for Ubuntu that should be executed every X minutes and should start automatically after login. Therefore I want to create a daemon (is that the right solution for this?), but I haven't found any modules/examples for Python 3, just for Python 2.X. Do you know something I can work with?
Thank you,
|
Can I use Z3Py withouth doing a system-wide install?
| 13,730,652
| 1
| 2
| 540
| 0
|
python,z3
|
Yes, you can do it by including the build directory in your LD_LIBRARY_PATH and PYTHONPATH environment variables.
| 0
| 1
| 0
| 1
|
2012-12-05T16:54:00.000
| 2
| 1.2
| true
| 13,728,325
| 0
| 0
| 0
| 1
|
I'm trying to use Z3 from its Python interface, but I would prefer not to do a system-wide install (i.e. sudo make install). I tried doing a local install with a --prefix, but the Makefile is hard-coded to install into the system's Python directory.
Best case, I would like to run Z3Py directly from the build directory, in the same way I use the z3 binary (build/z3). Does anyone know how to, or have a script to, run Z3Py directly from the build directory, without doing an install?
|
Understanding an element of the main loop
| 13,736,441
| 1
| 0
| 70
| 0
|
python,loops
|
The simplest solution is to add a variable outside of the loop which stores the last time the data size was checked. Each time through the loop, compare the current time to that stored time and check whether more than X time has elapsed.
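That pattern can be sketched as a small helper (the names are ours); the clock is injectable so the once-a-day behaviour can be tested without waiting a day:

```python
import time

ONE_DAY = 24 * 60 * 60  # seconds


def make_daily_gate(interval=ONE_DAY, clock=time.monotonic):
    # Returns a function that is True at most once per `interval`,
    # measured on a clock that only moves forward.
    last = {"t": None}

    def due():
        now = clock()
        if last["t"] is None or now - last["t"] >= interval:
            last["t"] = now
            return True
        return False

    return due


# In the main loop:
#   check_file_size_due = make_daily_gate()
#   while True:
#       handle_user_input()
#       if check_file_size_due():
#           trim_old_data()
```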
| 0
| 1
| 0
| 0
|
2012-12-06T03:07:00.000
| 1
| 0.197375
| false
| 13,736,310
| 1
| 0
| 0
| 1
|
I'm just starting out with Python. And I need help understanding how to do the main loop of my program.
I have a source file with two columns of data, temperature & time. This file gets updated every 60 seconds by a bash script.
I successfully wrote these four separate programs;
1. A program that can read the last 1440 lines of the source data and plot out a day graph.
2. A program that can read the last 10080 lines of the source data and plot out a week graph.
3. A program that can read the source data and just display the last recorded temperature.
4. Check the size of the source file and delete data over X days old.
I want to put it all together so that a user can toggle between the 3 different display types. I understand that this would work in a main loop, with just have the input checked in the loop, and call a function based on what is returned.
But I don't know how to handle the file size check. I don't want it checked every time the loops cycles. I would like it to be run once a day.
thanks in advance!
|
python daemon + interprocess communication + web server
| 16,685,053
| 0
| 1
| 544
| 0
|
python,arduino,interprocess,python-multithreading
|
Use a WAMP server. It is the easiest and quickest way. The web server will support PHP, Python, HTTP, etc.
If you are using Linux, the easiest tool for serial communication is PHP.
But on Windows PHP cannot read data from a serial connection, hence use Python / Perl etc.
Thanks
| 0
| 1
| 1
| 1
|
2012-12-06T19:43:00.000
| 2
| 0
| false
| 13,751,271
| 0
| 0
| 0
| 2
|
The situation:
I have a python script to connect/send signals to serial connected arduino's. I wanted to know the best way to implement a web server, so that i can query the status of the arduinos. I want that both the "web server" part and serial connection runs on the same script. Is it possible, or do i have to break it into a daemon and a server part?
Thanks, any comments are the most welcomed.
|
python daemon + interprocess communication + web server
| 16,689,821
| 0
| 1
| 544
| 0
|
python,arduino,interprocess,python-multithreading
|
For those wondering what I have opted for; I have decoupled the two part:
The Arduino daemon
I am using Python with a micro web framework called Bottle, which handles the API calls, and I have used PySerial to communicate with the Arduinos.
The web server
The canonical Apache and PHP are used to make API calls to the Arduino daemon.
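To illustrate the shape of such an API daemon without pulling in Bottle, here is a standard-library-only sketch of a tiny status endpoint the web server could poll (the status dict and route are placeholders; in the real setup the serial-reading code would update the status):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Placeholder state; the PySerial code would update this dict.
STATUS = {"arduino-1": "idle"}


class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = json.dumps(STATUS).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep request logging quiet


def start_server(port=0):
    """Start the API server on a background thread; return (server, bound port)."""
    server = ThreadingHTTPServer(("127.0.0.1", port), StatusHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

Apache/PHP (or Node, curl, etc.) can then simply GET /status on that port.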
| 0
| 1
| 1
| 1
|
2012-12-06T19:43:00.000
| 2
| 1.2
| true
| 13,751,271
| 0
| 0
| 0
| 2
|
The situation:
I have a python script to connect/send signals to serial connected arduino's. I wanted to know the best way to implement a web server, so that i can query the status of the arduinos. I want that both the "web server" part and serial connection runs on the same script. Is it possible, or do i have to break it into a daemon and a server part?
Thanks, any comments are the most welcomed.
|
local GAE datastore does not keep data after computer shuts down
| 13,779,121
| 1
| 4
| 2,643
| 0
|
google-app-engine,python-2.7,google-cloud-datastore
|
The datastore typically saves to disk when you shut down. If you turned off your computer without shutting down the server, I could see this happening.
| 0
| 1
| 0
| 0
|
2012-12-08T10:27:00.000
| 3
| 0.066568
| false
| 13,776,610
| 0
| 0
| 1
| 1
|
On my local machine (i.e. http://localhost:8080/), I have entered data into my GAE datastore for some entity called Article. After turning off my computer and then restarting next day, I find the datastore empty: no entity. Is there a way to prevent this in the future?
How do I make a copy of the data in my local datastore? Also, will I be able to upload said data later into both localhost and production?
My model is ndb.
I am using Mac OS X and Python 2.7, if these matter.
|
Multiple processes reading&deleting files in the same directory
| 13,777,096
| 1
| 1
| 1,245
| 0
|
python,bash,shell,race-condition
|
The only sure way that no two scripts will act on the same file at the same time is to employ some kind of file locking mechanism. A simple way to do this could be to rename the file before beginning work, by appending some known string to the file name. The work is then done and the file deleted. Each script tests the file name before doing anything, and moves on if it is 'special'.
A more complex approach would be to maintain a temporary file containing the names of files that are 'in process'. This file would obviously need to be removed once everything is finished.
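The rename approach can be sketched in Python; os.rename is atomic on POSIX within a single filesystem, so only one worker can claim a given file (the suffix is an arbitrary choice):

```python
import os


def try_claim(path, suffix=".inprogress"):
    """Atomically claim `path` by renaming it; return the new path, or None if lost the race."""
    claimed = path + suffix
    try:
        os.rename(path, claimed)  # atomic on POSIX within one filesystem
        return claimed
    except OSError:
        return None  # another worker claimed (or deleted) it first
```

Each worker skips files ending in the suffix, calls try_claim on a candidate, and only processes (then deletes) the file if the claim succeeded.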
| 0
| 1
| 0
| 0
|
2012-12-08T11:20:00.000
| 3
| 0.066568
| false
| 13,776,973
| 1
| 0
| 0
| 1
|
I have a directory with thousands of files and each of them has to be processed (by a python script) and subsequently deleted.
I would like to write a bash script that reads a file in the folder, processes it, deletes it and moves onto another file - the order is not important. There will be n running instances of this bash script (e.g. 10), all operating on the same directory. They quit when there are no more files left in the directory.
I think this creates a race condition. Could you give me some advice (or a code snippet) on how to make sure that no two bash scripts operate on the same file?
Or do you think I should rather implement multithreading in Python (instead of running n different bash scripts)?
|
Making a python script executable in python 2.7
| 38,528,239
| 3
| 5
| 16,418
| 0
|
python,executable
|
Assuming you have pip installed, which you should have after installing Python (it lives inside the Scripts folder):
Install PyInstaller using pip by typing the following in the command prompt.
pip install pyinstaller
After you install PyInstaller locate where your pyinstaller files are (they should be where your pip files are inside the Scripts folder) and go to the command prompt and type the following.
c:\python27\Scripts>pyinstaller --onefile c:\yourscript.py
The above command will create a folder called “dist” inside the Scripts folder, this will contain your single executable file “yourscript.exe”.
| 0
| 1
| 0
| 0
|
2012-12-09T00:53:00.000
| 4
| 0.148885
| false
| 13,783,586
| 1
| 0
| 0
| 1
|
We are trying to make our python script execute itself as a .exe file, without having python installed. Like if we give our program to someone else, they wouldn't need to install python to open it.
It is a text-based game like zork, so we need a gui, like cmd, to run it.
We have tried using py2exe, and pyinstaller, but none of them made any sense, and don't work with 2.7.3 for some reason.
Any help?
|
Programmatically setting a process to execute at startup (runlevel 2)?
| 13,876,262
| 0
| 0
| 302
| 0
|
python,linux,startup,runlevel
|
I may as well answer my own question with my findings.
On Debian,Ubuntu,CentOS systems there is a file named /etc/rc.local. If you use pythons' FileIO to edit that file, you can put a command that will be run at the end of all the multi-user boot levels. This facility is still present on systems that use upstart.
On BSD I have no idea. If you know how to make something go on startup please comment to improve this answer.
Archlinux and Fedora use systemd to start daemons - see the Arch wiki page for systemd. Basically you need to create a systemd service and symlink it. (Thanks Emil Ivanov)
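Since the question asks about doing this with Python file I/O, here is a sketch of appending a command to /etc/rc.local, keeping a trailing "exit 0" last when present (the path and command are placeholders; this needs root privileges on a real system):

```python
def add_rc_local_command(command, rc_path="/etc/rc.local"):
    """Insert `command` into rc.local, before a trailing 'exit 0' if there is one."""
    with open(rc_path) as f:
        lines = f.read().splitlines()
    if command in lines:
        return  # already installed, nothing to do
    # rc.local conventionally ends with 'exit 0'; keep that line last
    if lines and lines[-1].strip() == "exit 0":
        lines.insert(len(lines) - 1, command)
    else:
        lines.append(command)
    with open(rc_path, "w") as f:
        f.write("\n".join(lines) + "\n")
```

For example, add_rc_local_command("/usr/local/bin/mydaemon &") would start a hypothetical daemon at the end of the multi-user boot sequence.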
| 0
| 1
| 0
| 1
|
2012-12-09T03:52:00.000
| 1
| 1.2
| true
| 13,784,459
| 1
| 0
| 0
| 1
|
I would like to find out how to write Python code which sets up a process to run on startup, in this case level two.
I have done some reading, yet it has left me unclear as to which method is most reliable on different systems. I originally thought I would just edit /etc/inittab with Python's file I/O, but then I found out that my computer's inittab was empty.
What should I do? Which method of setting something to startup on boot is most reliable? Does anyone have any code snippets lying around?
|
Secure web requests via Tornado with .htaccess
| 13,793,602
| 2
| 1
| 860
| 0
|
python,.htaccess,authorization,tornado
|
If you based your application on the Tornado "Hello World" example then you probably haven't, but you really should consider writing your application as a WSGI application. Tornado has no problem with that, and the advantage is that your application now will run under a multitude of other environments (Apache + mod_wsgi to name but one).
But how does that solve your original problem? Well, just Google "WSGI authentication middleware", it'll yield plenty of hits. Basically, what that entails is transparently 'wrapping' your WSGI-application in another, one allowing you to completely decouple that aspect of your application. If you're lucky, and one of the hits turns out to be a perfect fit, you might get away with not writing any extra code at all.
Since you mentioned .htaccess: it is possible to have Apache do the authentication in an Apache/mod_wsgi configuration.
| 0
| 1
| 0
| 0
|
2012-12-09T17:58:00.000
| 2
| 1.2
| true
| 13,790,066
| 0
| 0
| 0
| 1
|
I have implemented a very small application with Tornado, where HTTP GET Requests are used to perform actions. Now I would like to secure these requests. What would be a preferable way? Using .htaccess? How can I realize that?
It doesn't have to be for certain requests, it should be for all requests running on a certain port.
|
Embedding Pig into Python
| 13,796,893
| 1
| 0
| 1,676
| 0
|
python,hadoop,apache-pig,embedding
|
Ok. Have found the solution. If you are also seeing this error then I hope this helps.
1) Downloaded the Jython installer jar.
2) ran it with java -jar
3) Specify a location for the installation
4) Added the Jython executable shell script to my PATH environment variable.
5) Copied the jython jar from installation folder to HADOOP_HOME/lib folder. ie. lib folder under hadoop.
Mostly, step 5 is the deal maker, but these are the steps I followed. Copying/setting the Jython jar for Pig alone does not seem to help. I am running Hadoop in pseudo-cluster mode with Pig on top of it, and Pig seems to take the Hadoop-based jars rather than its own lib!
After this it runs like a charm.
| 0
| 1
| 0
| 0
|
2012-12-10T06:23:00.000
| 1
| 0.197375
| false
| 13,795,993
| 0
| 0
| 0
| 1
|
I am trying to embed a pig script in Python and am encountering an exception and can't seem to find what the problem is. I have a Python script with pig script embedded in it and have Apache PIG 0.10 installed. I can run pig scripts from the shell and it works ok. when I run the python script with pig embedded from shell using command
pig -x mapreduce pythonscript.py it gives me the error
Error before Pig is launched
---------------------------- ERROR 2998: Unhandled internal error. org/python/util/PythonInterpreter
java.lang.NoClassDefFoundError: org/python/util/PythonInterpreter at
org.apache.pig.scripting.jython.JythonScriptEngine.main(JythonScriptEngine.java:338)
I have tried adding Jython jar to the $PIG_CLASSPATH environment variable at shell before running pig command. It does not help.
I see that others are also encountering this problem but, has anyone found a solution? Any pointers?
|
uPnP pushing video to Smart TV/Samsung TV on OSX/Mac
| 13,798,803
| 3
| 2
| 3,338
| 0
|
python,bash,upnp,server-push,samsung-smart-tv
|
You will still need a DLNA server to host your videos on. Via UPnP you only hand the URL to the TV, not the video directly. Once you have it hosted on a DLNA server, you can find out the URL to a video by playing it in Windows Media Player (which has DLNA-support) or by using UPnP Inspector (which I recommend anyways, if you are going to be working with UPnP). You can then push this URL to the TV, which will download and play the video, if its format is supported.
I do not know my way around python, but you since UPnP is HTTP based, you will need to send an HTTP request with appropriate UPnP-headers (see wikipedia or test it yourself with UPnP Inspector) and the proper XML-formatted body for the function you are trying to use.
The UPnP-function I worked with to push a link to the TV is "SetAVTransportURI", but it might differ from your TV. Use UPnP Inspector to find the correct one, including its parameters.
In summary: Get a DLNA server to host your videos on. Find out the links to those videos using UPnP Inspector or other DLNA clients. Find out the UPnP function that sends a URL to your TV (again, I recommend UPnP Inspector; you can explore and call all functions with it). Implement a call to that function in your script.
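The SetAVTransportURI call is a plain SOAP-over-HTTP POST. A sketch of building that request follows; the control URL and media URL are assumptions you would discover with UPnP Inspector, and some renderers also require non-empty DIDL-Lite metadata:

```python
SOAP_BODY_TEMPLATE = """<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:SetAVTransportURI xmlns:u="urn:schemas-upnp-org:service:AVTransport:1">
      <InstanceID>0</InstanceID>
      <CurrentURI>{uri}</CurrentURI>
      <CurrentURIMetaData></CurrentURIMetaData>
    </u:SetAVTransportURI>
  </s:Body>
</s:Envelope>"""


def build_set_uri_request(media_uri):
    """Return (headers, body) for a SetAVTransportURI SOAP call."""
    body = SOAP_BODY_TEMPLATE.format(uri=media_uri)
    headers = {
        "Content-Type": 'text/xml; charset="utf-8"',
        # SOAPACTION names the service type and the action being invoked
        "SOAPACTION": '"urn:schemas-upnp-org:service:AVTransport:1#SetAVTransportURI"',
    }
    return headers, body
```

You would POST this body, with those headers, to the renderer's AVTransport control URL (e.g. via urllib), then invoke the Play action the same way.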
| 0
| 1
| 0
| 0
|
2012-12-10T09:53:00.000
| 1
| 1.2
| true
| 13,798,520
| 0
| 0
| 0
| 1
|
I would like to make a simple script to push a movie to a Smart TV.
I have already installed miniupnp or ushare, but I don't want to browse a folder via the TV's Smart Apps; I want to push the movie to the TV, to save time, and in the future why not do the same directly from a NAS.
Does anyone have an idea how to do this? The application SofaPlay does this well, but only from my Mac.
Thank you
|
C compiler to build python from sources on various unix flavors
| 13,817,769
| 0
| 2
| 287
| 0
|
python,linux,unix,gcc,cross-compiling
|
I would assume that it's better to build on the OS itself, rather than "cross compile". Although since this is all Unix, cross-compiling might very well work as well, with a bit of effort. But it's probably easier to just build the binaries on the OS in question. I guess that also depends on whether you link statically or not.
Python's build process will itself select the best compiler, and it will prefer gcc to cc, at least in most cases.
| 0
| 1
| 0
| 0
|
2012-12-11T09:55:00.000
| 1
| 0
| false
| 13,817,662
| 1
| 0
| 0
| 1
|
Am looking at building python (2.7 version) from sources for various UNIX like OSes including SUSE (Desktop, Server), RHEL (Desktop, Server), Ubuntu, AIX, Solaris (SPARC) OSes.
Also, some of these OSes might have to build both 32 bit and 64 bit versions. I also want to minimize dependencies on (shared) libraries.
That said, is it better to use the native C compiler (cc) wherever available as against gcc? Is it better to cross compile?
Thanks.
|
What is different between makedirs and mkdir of os?
| 71,559,746
| 1
| 86
| 63,432
| 0
|
python,linux,python-2.7
|
makedirs : Recursive directory creation function. Like mkdir(), but makes all intermediate-level directories needed to contain the leaf directory. Raises an error exception if the leaf directory already exists or cannot be created.
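The difference in one example: os.mkdir fails when an intermediate directory is missing, while os.makedirs creates the whole chain:

```python
import os
import tempfile


def demo(base):
    """Show that mkdir needs existing parents while makedirs creates them."""
    nested = os.path.join(base, "a", "b", "c")
    try:
        os.mkdir(nested)            # fails: parent a/b does not exist yet
        single_level_worked = True
    except OSError:
        single_level_worked = False
    os.makedirs(nested)             # creates a, a/b and a/b/c in one call
    return single_level_worked, os.path.isdir(nested)
```

Calling demo(tempfile.mkdtemp()) returns (False, True): the single-level mkdir raised, the recursive makedirs succeeded.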
| 0
| 1
| 0
| 0
|
2012-12-11T11:33:00.000
| 3
| 0.066568
| false
| 13,819,496
| 0
| 0
| 0
| 1
|
I am confused about when to use these two os methods to create a new directory.
Please give me some example in Python.
|
GAE development server keep full text search indexes after restart?
| 20,389,173
| -2
| 8
| 1,909
| 0
|
python,google-app-engine,google-cloud-datastore,google-search
|
Look like this is not an issue anymore. according to documentation (and my tests):
"The development web server simulates the App Engine datastore using a
file on your computer. This file persists between invocations of the
web server, so data you store will still be available the next time
you run the web server."
Please let me know if it is otherwise and I will follow up on that.
| 0
| 1
| 0
| 0
|
2012-12-12T16:12:00.000
| 3
| -0.132549
| false
| 13,843,907
| 0
| 0
| 1
| 2
|
Is there any way of forcing the GAE dev server to keep full text search indexes after a restart? I am finding that the index is lost whenever the dev server is restarted.
I am already using a static datastore path when I launch the dev server (the --datastore_path option).
|
GAE development server keep full text search indexes after restart?
| 13,849,805
| 2
| 8
| 1,909
| 0
|
python,google-app-engine,google-cloud-datastore,google-search
|
This functionality was added a few releases ago (in either 1.7.1 or 1.7.2, I think). If you're using an SDK from the last few months it should be working. You can try explicitly setting the --search_indexes_path flag on dev_appserver.py; it's possible that the default location (/tmp/) isn't writable. Could you post the first few lines of the logs from when you start dev_appserver?
| 0
| 1
| 0
| 0
|
2012-12-12T16:12:00.000
| 3
| 0.132549
| false
| 13,843,907
| 0
| 0
| 1
| 2
|
Is there any way of forcing the GAE dev server to keep full text search indexes after a restart? I am finding that the index is lost whenever the dev server is restarted.
I am already using a static datastore path when I launch the dev server (the --datastore_path option).
|
Strange co_filename for file from .egg during tracing in Python 2.7
| 13,846,221
| 1
| 0
| 84
| 0
|
python,egg,python-internals
|
No, that is not a bug. Eggs, when being created, have their bytecode compiled in a build/bdist.<platform>/egg/ path, and you see that reflected in the co_filename variable. The bdist stands for binary distribution.
| 0
| 1
| 0
| 1
|
2012-12-12T18:22:00.000
| 1
| 1.2
| true
| 13,846,155
| 1
| 0
| 0
| 1
|
When tracing (using sys.settrace) python .egg execution with the Python 2.7 interpreter, frame.f_code.co_filename equals something like build/bdist.linux-x86_64/egg/<path-inside-egg> instead of <path-to-egg>/<path-inside-egg>.
Is it a bug? And how to reveal real path to egg?
In Python 2.6 and Python 3 everything works as expected.
|
Python interacting with OS X - is it possible?
| 13,850,697
| 0
| 2
| 466
| 0
|
python,macos
|
The 2nd one you can't do for sure, since those events are grabbed by other processes.
You should look for an OS X-specific library for doing that and then write a Python wrapper around it.
| 0
| 1
| 0
| 0
|
2012-12-12T23:32:00.000
| 1
| 0
| false
| 13,850,513
| 0
| 0
| 0
| 1
|
I'll explain my question: is it possible to write a Python script which interacts with OS X architecture in a high-level way?
For example, can I gain control on Mac OS X windows resizing from a Python script? Are there modules for that? I'm not finding any.
To push things even further, would I be able to control keyboard shortcuts too? I mean, with Python, could I write a script that opens a terminal window everytime I type cmd + Enter from wherever I am in that moment, as if it was a system shortcut (Awesome WM style, if you know what I'm talking about)?
Hope I've been clear.
|
Google protocol buffers not found when trying to freeze python app
| 13,862,767
| 7
| 4
| 1,539
| 0
|
python,py2exe,pyinstaller,cx-freeze
|
I've already had a solution when writing the question - I'm putting it here because it's probable that other people will find it here easily.
The solution: Create empty __init__.py in Lib/site-packages/google of your python installation directory, and compile it somehow (import google in interactive python session for example).
When there is __init__.pyc in the package directory, the freezing tools start to work.
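The same fix can be scripted: create the empty __init__.py and byte-compile it so the freezer sees a google/__init__.pyc (the site-packages path is an assumption; adjust it for your installation):

```python
import os
import py_compile


def fix_google_namespace(site_packages):
    """Create and byte-compile google/__init__.py so freezing tools can find the package."""
    pkg_dir = os.path.join(site_packages, "google")
    if not os.path.isdir(pkg_dir):
        raise RuntimeError("no google package under %s" % site_packages)
    init_py = os.path.join(pkg_dir, "__init__.py")
    if not os.path.exists(init_py):
        open(init_py, "w").close()          # an empty module is enough
    pyc = init_py + "c"
    py_compile.compile(init_py, cfile=pyc)  # write __init__.pyc next to it
    return pyc
```

For example, fix_google_namespace("Lib/site-packages") under your Python installation directory, then re-run the freezing tool.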
| 0
| 1
| 0
| 0
|
2012-12-13T15:01:00.000
| 1
| 1.2
| true
| 13,862,562
| 0
| 0
| 0
| 1
|
When trying to freeze python (2.7) application with any of cx_freeze, bbfreeze, pyinstaller or py2exe, the frozen application cannot find google.protobuf.
In logs of the freezing process there is usually something like 'cannot find google'. So the google package is not found and not packaged, although it's in python's site-packages and the non-frozen version works just fine.
|
#!/usr/bin/python and #!/usr/bin/env python, which support?
| 13,879,633
| 1
| 10
| 9,499
| 0
|
python,shebang
|
As you note, they probably both work on linux. However, if someone has installed a newer version of python for their own use, or some requirement makes people keep a particular version in /usr/bin, the env allows the caller to set up their environment so that a different version will be called through env.
Imagine someone trying to see if python 3 works with the scripts. They'll add the python3 interpreter first in their path, but want to keep the default on the system running on 2.x. With a hardcoded path that's not possible.
| 0
| 1
| 0
| 0
|
2012-12-14T13:26:00.000
| 4
| 0.049958
| false
| 13,879,569
| 1
| 0
| 0
| 2
|
What should the shebang for a Python script look like?
Some people support #!/usr/bin/env python because it can find the Python interpreter intelligently. Others support #!/usr/bin/python, because now in most GNU/Linux distributions python is the default program.
What are the benefits of the two variants?
|
#!/usr/bin/python and #!/usr/bin/env python, which support?
| 13,879,608
| 4
| 10
| 9,499
| 0
|
python,shebang
|
I use #!/usr/bin/env python as the default install location on OS-X is NOT /usr/bin. This also applies to users who like to customize their environment -- /usr/local/bin is another common place where you might find a python distribution.
That said, it really doesn't matter too much. You can always test the script with whatever python version you want: /usr/bin/strange/path/python myscript.py. Also, when you install a script via setuptools, the shebang seems to get replaced by the sys.executable which installed that script -- I don't know about pip, but I would assume it behaves similarly.
| 0
| 1
| 0
| 0
|
2012-12-14T13:26:00.000
| 4
| 0.197375
| false
| 13,879,569
| 1
| 0
| 0
| 2
|
What should the shebang for a Python script look like?
Some people support #!/usr/bin/env python because it can find the Python interpreter intelligently. Others support #!/usr/bin/python, because now in most GNU/Linux distributions python is the default program.
What are the benefits of the two variants?
|
Installing python modules for specific version on linux (pySide)
| 13,895,878
| 1
| 0
| 1,564
| 0
|
python,linux,ubuntu,python-3.x,pyside
|
I think you should install PySide from its source distribution (the one that has a setup.py) and then run python3.3 setup.py build and sudo python3.3 setup.py install, because if you install via apt, for example, it will use the default interpreter, which is 3.2 as you mentioned.
| 0
| 1
| 0
| 0
|
2012-12-15T20:11:00.000
| 2
| 0.099668
| false
| 13,895,763
| 1
| 0
| 0
| 1
|
So, to keep it simple. Ubuntu 12.10 has python 3.2 pre installed and it is linked to "python3". I downloaded python 3.3 and it's command is "python3.3". However, I downloaded pySide for python3 from synaptic. Using "from PySide.QtCore import *" fails on python3.3. BUT, when I ran just "python3" (aka 3.2) everything works fine. Synaptic just installed lib for python3.2 which is default for python3 in ubuntu. How can I force synaptic to install modules for python3.3?
Thanks
|
Supervisor as non-root user
| 13,905,927
| 11
| 3
| 7,256
| 0
|
python,debian,supervisord
|
To be able to run any subprocess as a different user from what supervisord is running as, you must run supervisord as root.
When you run supervisord as a user other than root, it cannot run subprocesses under another user. This is a UNIX process security restriction.
| 0
| 1
| 0
| 0
|
2012-12-16T21:58:00.000
| 1
| 1.2
| true
| 13,905,861
| 0
| 0
| 0
| 1
|
I have been trying to get supervisor running as a non root user but came against problems time after time. The more I have read into it the more it looks like supervisor is meant to be run as root.
I even read somewhere that it is only possible to run subprocesses as their own users under supervisor if supervisor is running as root.
My question is: is it possible to get supervisor to run as non-root and still start subprocesses as non-root users too? Secondly, other than creating the user and setting the user in supervisor.conf, is there anything else I have to do?
|
Installing MySQL-python without mysql-server on CentOS
| 13,932,070
| 21
| 13
| 27,898
| 1
|
centos,mysql-python
|
So it transpires that mysql_config is part of mysql-devel. mysql-devel is for compiling the mysql client, not the server. Installing mysql-devel allows the installation of MySQL-python.
| 0
| 1
| 0
| 0
|
2012-12-17T22:01:00.000
| 3
| 1
| false
| 13,922,955
| 0
| 0
| 0
| 1
|
I'm attempting to install MySQL-python on a machine running CentOS 5.5 and python 2.7. This machine isn't running a mysql server, the mysql instance this box will be using is hosted on a separate server. I do have a working mysql client. On attempting sudo pip install MySQL-python, I get an error of EnvironmentError: mysql_config not found, which as far as I can tell is a command that just references /etc/my.cnf, which also isn't present. Before I go on some wild goose chase creating spurious my.cnf files, is there an easy way to get MySQL-python installed?
|
How to execute python file in linux
| 13,933,228
| 8
| 32
| 290,768
| 0
|
python,linux
|
Yes, there is. Add
#!/usr/bin/env python
to the beginning of the file and do
chmod u+rx <file>
assuming your user owns the file, otherwise maybe adjust the group or world permissions.
.py files under Windows are associated with Python as the program to run when opening them, just like MS Word is run when opening a .docx, for example.
| 0
| 1
| 0
| 1
|
2012-12-18T12:36:00.000
| 7
| 1
| false
| 13,933,169
| 0
| 0
| 0
| 1
|
I am using Linux Mint, and to run a Python file I have to type in the terminal: python [file path]. Is there a way to make the file executable, and make it run the python command automatically when I double-click it?
And since I stopped dealing with Windows ages ago, I wonder if the .py files there are automatically executable or if I need some extra steps.
Thanks
|
Full text search in appengine, make some fields scores more important than others?
| 13,942,535
| 2
| 0
| 152
| 0
|
python,google-app-engine,full-text-search
|
Unfortunately yes; we don't yet have a way for you to weight different fields more or less than others. Sorry!
| 0
| 1
| 0
| 0
|
2012-12-18T19:06:00.000
| 1
| 0.379949
| false
| 13,939,773
| 0
| 0
| 1
| 1
|
I have a python appengine app that uses full text search. The document model has something like
title: title
abstract: short abstract
full text: lots and lots of text
If someone searches for a string, I want it ordered such that the score for matches in title >> abstract >> full text. There doesn't seem to be a way to do this with the existing scoring options, am I out of luck?
|
how to Install Fipy on Python 3.3
| 13,946,326
| 0
| 3
| 2,625
| 0
|
python-3.x,installation,package,fipy
|
No, FiPy does not support Python 3. You need to either use Python 2 or help update FiPy to support Python 3. Contact the authors about that.
| 0
| 1
| 0
| 0
|
2012-12-19T04:09:00.000
| 2
| 0
| false
| 13,945,351
| 1
| 0
| 0
| 1
|
I would like to install FiPy on Python 3.3 (Windows 7 or Linux/Ubuntu-Mint); is that possible? I need to use FiPy to improve my Python 3.3 coding. Do I have to translate it to Python 2.7?
Please does anyone know? any suggestion?
|
Does a GAE app keep a log of the emails it sends?
| 13,957,307
| 6
| 4
| 88
| 0
|
python,google-app-engine
|
You can try one of the following ways:
Log
Write a log entry to the datastore each time you call send_mail.
Write logs with the logging module and check them in the dashboard.
Mail
While sending the email, add a debug email address in the email's "bcc" field.
You can also check the "Sent Mail" of the email account used as the sender.
| 0
| 1
| 0
| 1
|
2012-12-19T16:27:00.000
| 1
| 1.2
| true
| 13,956,774
| 0
| 0
| 1
| 1
|
Wondering if it is possible to see a history of emails that a GAE app has sent? Need to look into the history for debugging purposes.
Note that logging when I send the email or bcc'ing a user are not options for this particular question as the period I'm curious about was in the past (since then we are bcc'ing).
|
How to use packages installed via easy_install in zsh?
| 13,974,516
| 0
| 1
| 1,499
| 0
|
python,macos,pip,zsh,easy-install
|
Thanks for the hint @Evert I had to add /usr/local/share/python to my $PATH and now everything works fine.
| 0
| 1
| 0
| 0
|
2012-12-20T12:13:00.000
| 1
| 1.2
| true
| 13,971,997
| 1
| 0
| 0
| 1
|
I use OS X (Mountain Lion) and ZSH. I can use easy_install to install some python packages but if I want to use the command in my ZSH afterwards I just get something like this:
zsh: command not found: virtualenv
Have I forgotten to include anything to my $PATH or so? Hope you can help me out :)
|
zookeeper lock stayed locked
| 13,997,307
| 0
| 8
| 3,997
| 0
|
python,locking,celery,apache-zookeeper,kazoo
|
Killing a process with a kill signal will do nothing to clear "software locks" such as ZooKeeper locks.
The only kind of locks killed by a KILL signal are OS-level locks, since all file descriptors are closed, and file-descriptor locks are therefore released as well. But as far as ZooKeeper is concerned, those are not OS-level locks (if only because the ZooKeeper process, even on the same machine, is not your Python process).
It is therefore not a bug in ZooKeeper, and an expected behavior of your kill -9.
| 0
| 1
| 0
| 0
|
2012-12-21T21:14:00.000
| 2
| 1.2
| true
| 13,997,263
| 0
| 0
| 0
| 1
|
I am using celery and ZooKeeper (kazoo lock) to lock my workers. I have a problem: when I kill (-9) one of the workers before it releases the lock, that lock stays locked forever.
So my question is: Does killing the process release locks in that process or is this some bug in zookeeper?
|
Is there a function in python similar to calling `initctl list`?
| 13,998,816
| 1
| 0
| 214
| 0
|
python,linux,unix,process
|
You can go through /proc/<pid>/cmdline to get the running process names. You need to list the entries in /proc and filter the numerical ones to get the list of processes running on your system.
However I wouldn't call this accessing "all possible running processes" because that would include kernel threads as well.
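A sketch of reading process names straight from /proc (Linux-only; the function returns an empty list on systems without a procfs):

```python
import os


def running_processes(proc="/proc"):
    """Return (pid, command line) pairs for processes visible under /proc."""
    results = []
    if not os.path.isdir(proc):
        return results  # not a Linux-style procfs
    for entry in os.listdir(proc):
        if not entry.isdigit():
            continue  # skip non-process entries like /proc/meminfo
        try:
            with open(os.path.join(proc, entry, "cmdline"), "rb") as f:
                # arguments are NUL-separated inside cmdline
                raw = f.read()
        except IOError:
            continue  # process exited, or permission denied
        cmd = raw.replace(b"\x00", b" ").strip().decode("utf-8", "replace")
        results.append((int(entry), cmd))
    return results
```

Note that kernel threads show up with an empty cmdline, so you may want to filter those out as well.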
| 0
| 1
| 0
| 0
|
2012-12-22T00:04:00.000
| 2
| 1.2
| true
| 13,998,774
| 1
| 0
| 0
| 1
|
I need to get a list of all possible running processes(whether they are stopped currently or not) from the system, without keeping a record myself.
I was wondering if there is a better way to get a list of these processes in python without having to do the dreaded subprocess output parsing of an initctl list call.
|
Producer/Consumer + Worker arch with Node.js/python
| 14,001,337
| 1
| 1
| 669
| 0
|
python,node.js,express,ipc
|
I would use a message passing service such as RabbitMQ or even ZeroMQ to notify or have the Node.JS process poll for this notification.
So, the Python process would do its processing, then it would send a message out; from there the Node.js process would read this message and know that it can do its job and process the data in MongoDB.
| 0
| 1
| 0
| 0
|
2012-12-22T08:20:00.000
| 1
| 1.2
| true
| 14,001,216
| 0
| 0
| 0
| 1
|
We are having 2 components 1 Producer/Consumer, 2 Process
The Producer/Consumer is I/O-intensive, and does nothing but take web requests and make entries in MongoDB based on input params.
Process is separate process (in python) which process data from mongodb and group(make pair) them.
This pairing can take a little time, and once pairing is done, we want to notify Node that, for a given connection, "Processing is done", so Node can send data back to the client.
I am not sure on "How to notify Node's connection that process is done, and this is the output."
|
How do I know to which core my Python process has been bound?
| 14,004,255
| 10
| 2
| 2,272
| 0
|
python,process
|
Processes and native OS threads are only bound to specific processors if somebody specifically requests for that to happen. By default, processes and threads can (and will) be scheduled on any available processor.
Modern operating systems use pre-emptive multi-threading and can interrupt a thread's execution at any moment. When that thread is next scheduled to run, it can be executed on a different processor. This is known as a context switch. The thread's entire execution context is stored away by the operating system and then when the thread is re-scheduled, the execution context is restored.
Because of all this, it makes no real sense to ask what processor your thread is executing on since the answer can change at any moment. Even during the execution of the function that queried which the current thread's processor.
Again, by default, there's no relationship between the processors that two separate processes execute on. The two processes could execute on the same processor, or different processors. It all depends on how the OS decides to schedule the different threads.
In the comments you state:
The Python process will execute on only one core due to the GIL lock.
That statement is simply incorrect. For example, a section of Python code could claim the GIL, get context switched across all the available processors, and then release the GIL.
Right at the start of the answer I alluded to the possibility of binding a process or thread to a particular processor. For example, on Windows you can use SetProcessAffinityMask and SetThreadAffinityMask to do this. However, it is unusual to do this. I can only recall ever doing this once, and that was to ensure that an execution of CPUID ran on a specific processor. In the normal run of things, processes and threads have affinity with all processors.
In another comment you say:
I am creating the child processes to use multi cores of the CPU.
In which case you have nothing to worry about. Typically you would create as many processes as there are logical processors. The OS scheduler is sensible and will schedule each different process to run on a different processor. And thus make the optimal use of the available hardware resources.
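As a side note (not part of the original answer), on Linux with Python 3.3+ you can actually inspect the set of processors the current process is allowed to run on; a minimal sketch:

```python
import os

# Set of CPU indices this process may be scheduled on (Linux only).
# By default this is every logical processor in the machine, which is
# exactly the "affinity with all processors" situation described above.
allowed = os.sched_getaffinity(0)  # 0 means "the calling process"
print(sorted(allowed))
```

Note this returns the set of *allowed* processors, not the one currently executing the thread; as explained above, the latter can change at any moment.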
| 0
| 1
| 0
| 0
|
2012-12-22T15:31:00.000
| 1
| 1.2
| true
| 14,004,036
| 1
| 0
| 0
| 1
|
How do I know to which core my Python process has been bound?
Along these same lines, are child processes going to execute on the same core (i.e. CPU) that the parent is currently executing on?
|
nodelay() causes python curses program to exit
| 72,080,593
| 1
| 7
| 5,827
| 0
|
python,ncurses,curses
|
While I didn't use curses in Python, I am currently working with it in C99, compiled using clang on Mac OS Catalina. It seems that nodelay() does not work unless you slow down the program step to at least 1/10 of a second, e.g. usleep(100000). I suppose that buffering/buffer reading is not fast enough, and getch() or wgetch(win*) simply doesn't manage to get the keyboard input, which somehow causes it to fail (no message whatsoever, not even a "Segmentation fault").
For this reason, it's better to use halfdelay(1), which equals nodelay(win*, true) combined with usleep(100000).
I know this is a very old thread (2012), but the problem is still present in 2022, so I decided to reply.
| 1
| 1
| 0
| 0
|
2012-12-22T17:13:00.000
| 3
| 0.066568
| false
| 14,004,835
| 0
| 0
| 0
| 2
|
I've written a curses program in python. It runs fine. However, when I use nodelay(), the program exits straight away after starting in the terminal, with nothing shown at all (just a new prompt).
EDIT
This code will reproduce the bug:
sc = curses.initscr()
sc.nodelay(1) # But removing this line allows the program to run properly
for angry in range(20):
sc.addstr(angry, 1, "hi")
Here's my full code
import curses, time, sys, random
def paint(x, y, i):
#...
def string(s, y):
#...
def feed():
#...
sc = curses.initscr()
curses.start_color()
curses.curs_set(0)
sc.nodelay(1) #########################################
# vars + colors inited
for angry in range(20):
try:
dir = chr(sc.getch())
sc.clear()
feed()
#lots of ifs
body.append([x, y])
body.pop(0)
for point in body:
paint(*point, i=2)
sc.move(height-1, 1)
sc.refresh()
time.sleep(wait)
except Exception as e:
print sys.exc_info()[0], e
sc.getch()
curses.beep()
curses.endwin()
Why is this happening, and how can I use nodelay() safely?
|
nodelay() causes python curses program to exit
| 14,006,585
| 0
| 7
| 5,827
| 0
|
python,ncurses,curses
|
I see no difference when running your small test program with or without the sc.nodelay() line.
Neither case prints anything on the screen...
| 1
| 1
| 0
| 0
|
2012-12-22T17:13:00.000
| 3
| 0
| false
| 14,004,835
| 0
| 0
| 0
| 2
|
I've written a curses program in python. It runs fine. However, when I use nodelay(), the program exits straight away after starting in the terminal, with nothing shown at all (just a new prompt).
EDIT
This code will reproduce the bug:
sc = curses.initscr()
sc.nodelay(1) # But removing this line allows the program to run properly
for angry in range(20):
sc.addstr(angry, 1, "hi")
Here's my full code
import curses, time, sys, random
def paint(x, y, i):
#...
def string(s, y):
#...
def feed():
#...
sc = curses.initscr()
curses.start_color()
curses.curs_set(0)
sc.nodelay(1) #########################################
# vars + colors inited
for angry in range(20):
try:
dir = chr(sc.getch())
sc.clear()
feed()
#lots of ifs
body.append([x, y])
body.pop(0)
for point in body:
paint(*point, i=2)
sc.move(height-1, 1)
sc.refresh()
time.sleep(wait)
except Exception as e:
print sys.exc_info()[0], e
sc.getch()
curses.beep()
curses.endwin()
Why is this happening, and how can I use nodelay() safely?
|
Unix `at` scheduling with python script: Permission denied
| 14,033,835
| 0
| 0
| 1,059
| 0
|
python,linux,shell,unix
|
Could you try: echo 'python mypythonscript.py' | at ...
| 0
| 1
| 0
| 1
|
2012-12-23T00:37:00.000
| 4
| 0
| false
| 14,007,784
| 0
| 0
| 0
| 2
|
I'm trying to create a scheduled task using the Unix at command. I wanted to run a Python script, but quickly realized that at is configured to run whatever file I give it with sh. In an attempt to circumvent this, I created a file that contained the command python mypythonscript.py and passed that to at instead.
I have set the permissions on the python file to executable by everyone (chmod a+x), but when the at job runs, I am told python: can't open file 'mypythonscript.py': [Errno 13] Permission denied.
If I run source myshwrapperscript.sh, the shell script invokes the python script fine. Is there some obvious reason why I'm having permissions problems with at?
Edit: I got frustrated with the Python script, so I went ahead and made an sh script version of the thing I wanted to run. I am now finding that the sh script returns to me saying rm: cannot remove <filename>: Permission denied (this was a temporary file I was creating to store intermediate data). Is there any way I can authorize these operations with my own credentials, despite not having sudo access? All of this works perfectly when I run it myself, but everything seems to go to shit when I have at do it.
|
Unix `at` scheduling with python script: Permission denied
| 14,007,903
| 0
| 0
| 1,059
| 0
|
python,linux,shell,unix
|
Start the script using python, not the actual script name, e.g. python path/to/script.py.
at tries to run everything as an sh script.
| 0
| 1
| 0
| 1
|
2012-12-23T00:37:00.000
| 4
| 0
| false
| 14,007,784
| 0
| 0
| 0
| 2
|
I'm trying to create a scheduled task using the Unix at command. I wanted to run a Python script, but quickly realized that at is configured to run whatever file I give it with sh. In an attempt to circumvent this, I created a file that contained the command python mypythonscript.py and passed that to at instead.
I have set the permissions on the python file to executable by everyone (chmod a+x), but when the at job runs, I am told python: can't open file 'mypythonscript.py': [Errno 13] Permission denied.
If I run source myshwrapperscript.sh, the shell script invokes the python script fine. Is there some obvious reason why I'm having permissions problems with at?
Edit: I got frustrated with the Python script, so I went ahead and made an sh script version of the thing I wanted to run. I am now finding that the sh script returns to me saying rm: cannot remove <filename>: Permission denied (this was a temporary file I was creating to store intermediate data). Is there any way I can authorize these operations with my own credentials, despite not having sudo access? All of this works perfectly when I run it myself, but everything seems to go to shit when I have at do it.
|
Python: Redirect output to several consoles?
| 14,024,002
| 0
| 0
| 288
| 0
|
python,subprocess
|
I'm a bit confused. Using subprocess.Popen(...) should spawn a new command prompt automatically for each call. What is aria2c? Is it a program you have written in Python as well? Is it a third-party exe that writes to the command prompt window?
I can help you redirect all the sub-processes' output to the main command prompt, so it can be displayed inline.
Also, maybe you can give a little more detail on what is going on first, so I can understand your trouble a bit better.
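For the inline-redirect idea, a minimal sketch (assuming a POSIX system here; on Windows you would instead pass creationflags=subprocess.CREATE_NEW_CONSOLE to Popen to get a separate window per child):

```python
import subprocess

# Run a child process and capture its output, then echo it in the
# parent's console so all sub-process output appears inline.
out = subprocess.check_output(["echo", "download progress: 42%"])
print(out.decode().strip())
```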
| 0
| 1
| 0
| 0
|
2012-12-24T15:17:00.000
| 1
| 0
| false
| 14,023,009
| 0
| 0
| 0
| 1
|
I have a main program, in which a user can call a sub-process (to download files) several times. Each time, I call aria2c using subprocess, and it will print the progress to stdout. Of course, it is desirable that the user can see the progress of each download separately.
So the question is: how can I redirect the output of each process to a separate console window?
|
Does uwsgi server read the paths in the environment variable PYTHONPATH?
| 14,039,533
| 0
| 1
| 1,207
| 0
|
python,uwsgi,pythonpath
|
you can specify multiple --pythonpath options, but PYTHONPATH should be honoured (just be sure it is correctly set by your init script, you can try setting it from the command line and running uwsgi in the same shell session)
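For reference, the same thing in an ini-style uWSGI config (the paths here are hypothetical examples); each pythonpath line appends one directory:

```ini
[uwsgi]
; each pythonpath entry appends one directory to sys.path
pythonpath = /srv/myapp
pythonpath = /srv/myapp/lib
```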
| 0
| 1
| 0
| 1
|
2012-12-26T06:00:00.000
| 1
| 0
| false
| 14,036,549
| 0
| 0
| 0
| 1
|
It's weird because when I run a normal Python script on the server, it runs, but when I run it via uWSGI, it can't import certain modules.
There is a bash script that starts uwsgi and passes a path via the --pythonpath option.
Is this an additional path, or do all the paths have to be given here?
If so, how do I separate multiple paths given by this option?
|
Downloading a Blob by Filename in Google App Engine (Python)
| 14,043,190
| 1
| 0
| 259
| 0
|
python,google-app-engine,google-cloud-datastore
|
Every blob you upload creates a new version of that blob (with that filename) in the blobstore. Of course you can delete the old version(s) of the blob if you uploaded a new version. But to make sure you have the latest version of a blob (of a filename) you have to store the filename in the datastore and make a reference to the latest version. This reference holds the blob_key.
| 0
| 1
| 0
| 0
|
2012-12-26T16:05:00.000
| 2
| 0.099668
| false
| 14,043,045
| 0
| 0
| 0
| 1
|
I know that I can grab a blob by BlobKey, but how do I get the blobkey associated with a given filename?
In short, I want to implement "get file by filename"
I can't seem to find any built-in functionality for this.
|
Is there anyway to view memcache data in google app engine?
| 14,051,678
| 0
| 1
| 444
| 0
|
python,google-app-engine,memcached
|
What's wrong with the memcache viewer in the admin console?
| 0
| 1
| 0
| 0
|
2012-12-27T07:02:00.000
| 2
| 0
| false
| 14,050,745
| 0
| 0
| 1
| 1
|
Basically what I want to do is see the raw data of memcache so that I can see how my data are being stored.
|
how to disable pypy assert statement?
| 69,422,294
| 0
| 5
| 342
| 0
|
python,pypy
|
For anyone coming here in the future, Oct 3 2021 pypy3 does accept the -O flag and turn off assertion statements
| 0
| 1
| 0
| 0
|
2012-12-27T07:57:00.000
| 2
| 0
| false
| 14,051,324
| 0
| 0
| 0
| 2
|
$ ./pypy -O
Python 2.7.2 (a3e1b12d1d01, Dec 04 2012, 13:33:26)
[PyPy 1.9.1-dev0 with GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
And now for something completely different: `` amd64 and ppc are only
available in enterprise version''
>>>> assert 1==2
Traceback (most recent call last):
File "", line 1, in
AssertionError
>>>>
But when i execute
$ python -O
Python 2.7.3 (default, Aug 1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> assert 1==2
>>>
|
how to disable pypy assert statement?
| 14,051,708
| 5
| 5
| 342
| 0
|
python,pypy
|
PyPy does silently ignore -O. The reasoning behind it is that we believe -O that changes semantics is seriously broken, but well, I guess it's illegal. Feel free to post a bug (that's also where such reports belong, on bugs.pypy.org)
| 0
| 1
| 0
| 0
|
2012-12-27T07:57:00.000
| 2
| 1.2
| true
| 14,051,324
| 0
| 0
| 0
| 2
|
$ ./pypy -O
Python 2.7.2 (a3e1b12d1d01, Dec 04 2012, 13:33:26)
[PyPy 1.9.1-dev0 with GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
And now for something completely different: `` amd64 and ppc are only
available in enterprise version''
>>>> assert 1==2
Traceback (most recent call last):
File "", line 1, in
AssertionError
>>>>
But when i execute
$ python -O
Python 2.7.3 (default, Aug 1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> assert 1==2
>>>
|
How to check if pdf printing is finished on linux command line
| 14,052,830
| 1
| 4
| 2,252
| 0
|
python,linux,command-line
|
You can check the state of the printer using the lpstat command (man lpstat). To wait for a process to finish, get the PID of the process and pass it to the wait command as an argument.
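A minimal shell sketch of the wait-for-PID part (the lpstat polling side depends on your print queue setup, and wait only works for children of the current shell):

```shell
#!/bin/sh
# Start a long-running job in the background, record its PID,
# then block until it finishes before running the follow-up task.
sleep 1 &
pid=$!
wait "$pid"
echo "job $pid finished"
```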
| 0
| 1
| 0
| 0
|
2012-12-27T08:42:00.000
| 2
| 0.099668
| false
| 14,051,766
| 0
| 0
| 0
| 1
|
I have a bunch of files that I need to print via PDF printer and after it is printed I need to perform additional tasks, but only when it is finally completed.
So to do this from my Python script I call the command "lpr path/to/file.doc -P PDF".
But this command immediately returns 0 and I have no way to track when printing process is finished, was it successful or not etc...
There is an option to send email when printing is done, but to wait for email after I start printing looks very hacky to me.
Do you have some ideas how to get this done?
Edit 1
There are plenty of ways to check if the printer is printing something at the current moment. Therefore, after I start printing something, I run the lpq command every 0.5 seconds to find out if it is still printing. But this looks to me like not the best way to do it. I want to be able to get alerted when the actual printing process is finished, whether it was successful or not, etc...
|
redis celeryd and apache
| 14,061,333
| 2
| 0
| 130
| 0
|
python,django,redis,celery
|
Provided you're running Daemon processes of Redis and Celery you do not need to restart them when you restart Apache.
Generally, you will need to restart them when you make configuration changes to either Redis or Celery, as the applications are dependent on each other.
| 0
| 1
| 0
| 0
|
2012-12-27T21:05:00.000
| 1
| 1.2
| true
| 14,061,297
| 0
| 0
| 1
| 1
|
I'm a bit new to redis and celery. Do I need to restart celeryd and redis every time I restart apache? I'm using celery and redis with a django project hosted on webfaction.
Thanks for the info in advance.
|
What does ConnectionRefused do?
| 14,077,687
| 1
| 5
| 102
| 0
|
python,networking,twisted
|
If the datagram socket is connected, it can receive ICMP Port Unreachable messages via the Sockets API, which presumably maps into calling this method. Note that I am not speaking of the TCP connect operation here, but the Sockets connect() method, which can be called on a UDP socket, and which presumably maps into some method in the API you are using.
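A minimal sketch of the same mechanism outside Twisted, using plain sockets (Linux; the port number is an arbitrary one assumed to have no listener, and delivery of the ICMP error is not guaranteed):

```python
import socket
import time

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("127.0.0.1", 49999))  # connect() on UDP just fixes the peer address
s.send(b"ping")                  # first send usually succeeds
time.sleep(0.2)                  # give the kernel time to receive the ICMP reply
try:
    s.send(b"ping")              # the pending ICMP error is reported here
    print("no error reported")
except ConnectionRefusedError:
    print("port unreachable")
s.close()
```

This is the same event that Twisted surfaces as connectionRefused on a connected DatagramProtocol; without the connect() call there is no error reporting at all.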
| 0
| 1
| 0
| 0
|
2012-12-28T21:22:00.000
| 1
| 1.2
| true
| 14,075,972
| 0
| 0
| 0
| 1
|
This is a method of DatagramProtocol class in Twisted. As I understand UDP protocol doesn't guarantee that someone is listening on the given port even using ConnectedDatagramProtocol.
Can someone explain to me, when this method is called and how I suppose to check if there is someone listening to my transmission using UDP?
|
Google App Engine - Upgrading from App Engine Helper
| 14,079,702
| 1
| 0
| 91
| 0
|
python,django,google-app-engine
|
You can use Django 1.4 with CloudSQL.
If you're using the HRD, you'd want to use django-nonrel (the successor to App Engine Helper).
While django-nonrel works, the documentation is a bit lacking at the moment.
| 0
| 1
| 0
| 0
|
2012-12-29T04:15:00.000
| 1
| 1.2
| true
| 14,078,640
| 0
| 0
| 1
| 1
|
I would like to remove dependencies on the old-style App Engine Helper for Django in my Python-based App Engine application. At the same time, I would like to upgrade to Python 2.7 and Django 1.4. I have a few questions about the upgrade process:
1) The new App Engine SDK (Version 1.7.4) states that Django is fully supported. Does this mean that neither the App Engine Helper nor django-nonrel will be required in order for Django to function on the App Engine?
2) Assuming that the answer to my previous question is that no external patches/helpers are required, I am having trouble finding an example App Engine/Django application based on the new SDK. Do you know where I could find a Django/AppEngine example that does not rely on external patches/helpers? (this will give me a known good starting point, which I can then port my existing code into).
3) Currently my database models inherit from BaseModel which was provided in the App Engine Helper. In order to not break my website, what should these models inherit from given the BaseModel will no longer exist?
|
How to run a Python file not in directory from another Python file?
| 14,094,933
| 2
| 7
| 6,750
| 0
|
python,python-3.x,exec
|
Import sys, then change sys.path by appending the path at run time; after that, import the module. That will help.
| 0
| 1
| 0
| 0
|
2012-12-30T20:10:00.000
| 3
| 0.132549
| false
| 14,094,224
| 1
| 0
| 0
| 1
|
Let's say I have a file foo.py, and within the file I want to execute a file bar.py. But, bar.py isn't in the same directory as foo.py, it's in a subdirectory call baz. Will execfile work? What about os.system?
|
Performance of Python HTTPServer and TCPServer
| 14,111,484
| 4
| 4
| 2,663
| 0
|
python,performance,tcpserver,httpserver
|
Neither of those built-in libraries was meant for serious production use. Get real implementations, for example, from Twisted, or Tornado, or gunicorn, etc, etc, there are lots of them. There's no need to stick with the standard library modules.
The performance, and probably the robustness, of the built-in libraries is poor.
| 0
| 1
| 1
| 0
|
2013-01-01T14:58:00.000
| 2
| 0.379949
| false
| 14,111,460
| 0
| 0
| 0
| 1
|
I've spent a few days on and off trying to get some hard statistics on what kind of performance you can expect from using the HTTPServer and/or TCPServer built-in libraries in Python.
I was wondering if anyone can give me any ideas as to how either one would handle serving HTTP requests, whether they would be able to hold up in production environments or in situations with high traffic, and if anyone has any tips or clues that would improve performance in these situations. (Assuming that there is no access to external libraries like Twisted etc.)
Thanks.
|
How to execute a python command line utility from the terminal in any directory
| 14,113,933
| 0
| 0
| 170
| 0
|
python,macos,command-line
|
Add the directory it is stored in to your PATH variable? From your prompt, I'm guessing you're using an sh-like shell and from your tags, I'm further assuming OS X. Go into your .bashrc and make the necessary changes.
| 0
| 1
| 0
| 0
|
2013-01-01T20:23:00.000
| 3
| 0
| false
| 14,113,906
| 0
| 0
| 0
| 2
|
I have just written a Python script to do some batch file operations. I was wondering how I could keep it in some common path like the rest of the command-line utilities such as cd, ls, grep etc.
What i expect is something like this to be done from any directory -
$ script.py arg1 arg2
|
How to execute a python command line utility from the terminal in any directory
| 14,113,928
| 1
| 0
| 170
| 0
|
python,macos,command-line
|
Just put the script directory into the PATH environment variable, or alternatively put the script in a location that is already in the PATH. On Unix systems, you usually use /home/<nick>/bin for your own scripts and add that to the PATH.
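A sketch of the whole flow (file names are hypothetical; the shebang line is what lets the script run without typing python first):

```shell
#!/bin/sh
# Put the script in ~/bin, make it executable, and add ~/bin to PATH.
mkdir -p "$HOME/bin"
printf '#!/usr/bin/env python3\nprint("hello from script")\n' > "$HOME/bin/script.py"
chmod +x "$HOME/bin/script.py"
export PATH="$HOME/bin:$PATH"
script.py   # now runs from any directory
```

To make the PATH change permanent, add the export line to ~/.bashrc or the equivalent for your shell.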
| 0
| 1
| 0
| 0
|
2013-01-01T20:23:00.000
| 3
| 1.2
| true
| 14,113,906
| 0
| 0
| 0
| 2
|
I have just written a Python script to do some batch file operations. I was wondering how I could keep it in some common path like the rest of the command-line utilities such as cd, ls, grep etc.
What i expect is something like this to be done from any directory -
$ script.py arg1 arg2
|
Python-controllable command line audio player for Linux
| 14,120,125
| 3
| 3
| 2,048
| 0
|
python,linux,audio
|
mpd should be perfect for you. It is a daemon and can be controlled by various clients, ranging from GUI-less command-line clients like mpc to GUI command-line clients like ncmpc and ncmpcpp up to several full-featured desktop clients.
mpd + mpc should do the job for you as mpc can be easily controlled via the command line and is also able to provide various status information about the currently played song and other things.
It seems like there is already a python client library available for mpd - python-mpd.
| 0
| 1
| 0
| 0
|
2013-01-02T10:02:00.000
| 2
| 1.2
| true
| 14,120,045
| 0
| 0
| 0
| 1
|
I want to use my Raspberry Pi as a media station. It should be able to play songs via commands sent over the network. These commands should be handled by a server written in Python. Therefore, I need a way to control audio playback via Python.
I decided to use a command line music player for linux since those should offer the most flexibility for audio file formats. Also, Python libraries like PyAudio and PyMedia don't seem to work for me.
I don't really have great expectations about the music player. It must be possible to play and pause sound files in as many codecs as possible and turn the volume up and down. Also it has to be a headless player since I am not running any desktop environment. There are a lot of players like that out there, it seems. mpg123, for example, works well for all I need.
The problem I have now is that all of these players seem to have a user interface written in ncurses and I have no idea how to access this with the Python subprocess module. So, I either need a music player which comes with Python bindings or one which can be controlled with the command line via the subprocess module. At least these are the solutions I thought about by now.
Does anyone know about a command line audio player for linux that would solve my problem? Or is there any other way?
Thanks in advance
|
Cross-platform deployment and easy installation
| 14,139,446
| 0
| 5
| 1,886
| 0
|
python,batch-file,cross-platform,py2exe,scientific-computing
|
I would recommend using py2exe for the windows side, and then BuildApplet for the mac side. This will allow you to make a simple app you double click for your less savvy users.
| 0
| 1
| 0
| 0
|
2013-01-03T12:54:00.000
| 3
| 0
| false
| 14,139,377
| 0
| 0
| 0
| 2
|
EDIT
One option I contemplated but don't know enough about is to e.g. for windows write a batch script to:
Search for a Python installation, download one and install if not present
Then install the bundled package using distutils to also handle dependencies.
It seems like this could be a relatively elegant and simple solution, but I'm not sure how to proceed - any ideas?
Original Question
In brief
What approach would you recommend for the following scenario?
Linux development environment for creation of technical applications
Deployment now also to be on Windows and Mac
Existing code-base in Python
wine won't install windows version of Python
No windows install CDs available to create virtual windows/mac machines
Porting to java incurs large overhead because of existing code-base
Clients are not technical users, i.e. providing standard Python packages is not sufficient; it really requires installable self-contained products
Background
I am writing technical and scientific apps under Linux but will need some of them to be deployable on Windows/MacOs machines too.
In the past I have used Python a lot, but I am finding that for non-technical users who aren't happy installing python packages, creating a simple executable (by using e.g. py2exe) is difficult as I can't get the windows version of Python to install using wine.
While java would seem a good choice, if possible I wanted to avoid having to port my existing code from Python, especially as Python also allows writing portable code.
I realize I'm trying to cover a lot of bases here, so any suggestions regarding the most appropriate solutions (even if not perfect) will be appreciated.
|
Cross-platform deployment and easy installation
| 14,139,409
| 2
| 5
| 1,886
| 0
|
python,batch-file,cross-platform,py2exe,scientific-computing
|
py2exe works pretty well, I guess you just have to setup a Windows box (or VM) to be able to build packages with it.
| 0
| 1
| 0
| 0
|
2013-01-03T12:54:00.000
| 3
| 0.132549
| false
| 14,139,377
| 0
| 0
| 0
| 2
|
EDIT
One option I contemplated but don't know enough about is to e.g. for windows write a batch script to:
Search for a Python installation, download one and install if not present
Then install the bundled package using distutils to also handle dependencies.
It seems like this could be a relatively elegant and simple solution, but I'm not sure how to proceed - any ideas?
Original Question
In brief
What approach would you recommend for the following scenario?
Linux development environment for creation of technical applications
Deployment now also to be on Windows and Mac
Existing code-base in Python
wine won't install windows version of Python
No windows install CDs available to create virtual windows/mac machines
Porting to java incurs large overhead because of existing code-base
Clients are not technical users, i.e. providing standard Python packages is not sufficient; it really requires installable self-contained products
Background
I am writing technical and scientific apps under Linux but will need some of them to be deployable on Windows/MacOs machines too.
In the past I have used Python a lot, but I am finding that for non-technical users who aren't happy installing python packages, creating a simple executable (by using e.g. py2exe) is difficult as I can't get the windows version of Python to install using wine.
While java would seem a good choice, if possible I wanted to avoid having to port my existing code from Python, especially as Python also allows writing portable code.
I realize I'm trying to cover a lot of bases here, so any suggestions regarding the most appropriate solutions (even if not perfect) will be appreciated.
|
HA deploy for Python wsgi application
| 14,148,902
| 0
| 2
| 2,227
| 0
|
python,nginx,wsgi,haproxy
|
Of the three options, only option number 1 has any chance of working with websockets. Nginx, and most standard webservers will not play nicely with them.
| 0
| 1
| 0
| 0
|
2013-01-03T20:34:00.000
| 2
| 0
| false
| 14,146,814
| 0
| 0
| 1
| 1
|
I consider this scenarios for deploying High Available Python web apps:
load balancer -* wsgi servers
load balancer -* production HTTP sever - wsgi server
production HTTP sever (with load balancing features, like Nginx) -* wsgi servers
For load balancer I consider HAProxy
For production HTTP sever I consider Nginx
For wsgi servers I mean servers which directly handle wsgi app (gevent, waitress, uwsgi...)
-* means one to many connection
- means one to one connection
There is no static content to serve, so I wonder if a production HTTP server is needed at all.
What are the pros and cons of each solution?
For each scenario (1-3), in place of wsgi server is there any advantage of using wsgi container server (uWSGI, gunicorn) rather then raw wsgi server (gevent, tornado..)?
I'm also wondering which solution is the best for websockets or long polling request?
|
can one python script run both with python 2.x and python 3.x
| 64,151,445
| 0
| 7
| 5,143
| 0
|
python,python-3.x
|
In the general case, no; many Python 2 scripts will not run on Python 3, and vice versa. They are two different languages.
Having said that, if you are careful, you can write a script which will run correctly under both. Some authors take extra care to make sure their scripts will be compatible across both versions, commonly using additional tools like the six library (the name is a pun; you can get to "six" by multiplying "two by three" or "three by two").
However, it is now 2020, and Python 2 is officially dead. Many maintainers who previously strove to maintain Python 2 compatibility while it was still supported will now be relieved and often outright happy to pull the plug on it going forward.
| 0
| 1
| 0
| 1
|
2013-01-04T06:55:00.000
| 3
| 0
| false
| 14,152,548
| 0
| 0
| 0
| 2
|
I have thousands of servers (Linux); some only have Python 2.x and some only have Python 3.x. I want to write one script, check.py, that can run on all servers just as $ ./check.py, without using $ python check.py or $ python3 check.py. Is there any way to do this?
My question is how the script check.py finds the interpreter, no matter whether the interpreter is Python 2.x or Python 3.x.
|
can one python script run both with python 2.x and python 3.x
| 14,152,613
| 0
| 7
| 5,143
| 0
|
python,python-3.x
|
Considering that Python 3.x is not entirely backwards compatible with Python 2.x, you would have to ensure that the script was compatible with both versions. This can be done with some help from the 2to3 tool, but may ultimately mean running two distinct Python scripts.
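For what it's worth, a minimal sketch of a check.py that runs unchanged under both interpreters; the key pieces are the env-based shebang (so whichever python is on PATH is used) and __future__ imports:

```python
#!/usr/bin/env python
# Runs under both Python 2.6+ and Python 3.x without modification.
from __future__ import print_function
import sys

print("running under Python %d.%d" % sys.version_info[:2])
```

Restricting yourself to this common subset (or using the six library) is what makes a single script viable; anything version-specific still needs the 2to3 / two-scripts route described above.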
| 0
| 1
| 0
| 1
|
2013-01-04T06:55:00.000
| 3
| 0
| false
| 14,152,548
| 0
| 0
| 0
| 2
|
I have thousands of servers (Linux); some only have Python 2.x and some only have Python 3.x. I want to write one script, check.py, that can run on all servers just as $ ./check.py, without using $ python check.py or $ python3 check.py. Is there any way to do this?
My question is how the script check.py finds the interpreter, no matter whether the interpreter is Python 2.x or Python 3.x.
|
What is a main reason that Django webserver is blocking?
| 14,158,864
| 5
| 3
| 176
| 0
|
python,django,webserver,tornado,blocking
|
I believe it is single-threaded and blocking so that it is easy to debug: if you drop into a debugger, it will completely halt the server.
| 0
| 1
| 0
| 0
|
2013-01-04T14:19:00.000
| 2
| 1.2
| true
| 14,158,844
| 0
| 0
| 1
| 2
|
Why is the Django webserver blocking, not non-blocking like Tornado? Was there a reason to design the webserver in this way?
|
What is a main reason that Django webserver is blocking?
| 14,159,011
| 5
| 3
| 176
| 0
|
python,django,webserver,tornado,blocking
|
If you really need another reason on top of that suggested by dm03514, it's probably simply because it was easier to write. Since the dev server is for development only, little effort was spent in making it more complex or able to serve multiple requests. In fact, this is an explicit goal: making it any better would encourage people to use it in a production setting, for which it is not tested.
| 0
| 1
| 0
| 0
|
2013-01-04T14:19:00.000
| 2
| 0.462117
| false
| 14,158,844
| 0
| 0
| 1
| 2
|
Why is the Django webserver blocking, not non-blocking like Tornado? Was there a reason to design the webserver in this way?
|
Give python "platform" library fake platform information?
| 14,160,208
| 1
| 3
| 823
| 0
|
python,cross-platform
|
you could create your initialization functions to take those variables as parameters so it is easy to spoof them in testing
| 0
| 1
| 0
| 0
|
2013-01-04T15:40:00.000
| 4
| 0.049958
| false
| 14,160,178
| 0
| 0
| 0
| 1
|
At the beginning of the script, I use platform.system and platform.release to determine which OS and version the script is running on (so it knows its data is in Application Support on Mac, home on unix-like and non-Mac unix, appdata on Windows <= XP, and appdata/roaming on Windows >= Vista). I'd like to test my series of ifs, elifs, and elses that determine the OS and release, but I only have access to Mac 10.6.7, some unknown release of Linux, and Windows 7. Is there a way to feed platform fake system and release information so I can be sure XP, Solaris, etc. would handle the script properly without having an installation?
|
python.exe is not a valid win32 application error coming suddenly
| 14,168,530
| 2
| 1
| 5,945
| 0
|
python,windows
|
Download (or open, if you already have it) Dependency Walker, then open python.exe in it. See if you are missing a DLL or have a corrupted one. You may need to re-install Python; some referenced DLL or exe files could be corrupted/overwritten/modified/deleted.
| 0
| 1
| 0
| 0
|
2013-01-05T04:16:00.000
| 1
| 0.379949
| false
| 14,168,522
| 1
| 0
| 0
| 1
|
My Python environment was working alright before. After running a C++ program multiple times from Python subprocesses, I started the computer and saw a message saying that python.exe is not a valid win32 application, and I can no longer access Python. What changed, I am not sure. Will I need to reinstall Python? When I open the IDE, this message comes up: Python 2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64 bit (AMD64)] on win32
|
Gunicorn logging from multiple workers
| 14,305,123
| 1
| 8
| 3,257
| 0
|
python,flask,gunicorn
|
We ended up changing our application to send logs to stdout and now rely on supervisord to aggregate the logs and write them to a file. We also considered sending logs directly to rsyslog but for now this is working well for us.
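For illustration, a minimal stdout-logging setup in the app might look like this (the format string and level are arbitrary choices, not our exact config):

```python
import logging
import sys

def configure_logging():
    # Each gunicorn worker writes to its own stdout; supervisord
    # captures and aggregates the streams, so no two processes
    # ever write to the same log file concurrently.
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s pid=%(process)d %(levelname)s %(name)s %(message)s"))
    root = logging.getLogger()
    root.setLevel(logging.INFO)
    root.addHandler(handler)

configure_logging()
logging.getLogger("app").info("worker started")
```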
| 0
| 1
| 0
| 0
|
2013-01-05T14:00:00.000
| 2
| 0.099668
| false
| 14,172,470
| 0
| 0
| 1
| 1
|
I have a flask app that runs in multiple gunicorn sync processes on a server and uses TimedRotatingFileHandler to log to a file from within the flask application in each worker. In retrospect this seems unsafe. Is there a standard way to accomplish this in python (at high volume) without writing my own socket based logging server or similar? How do other people accomplish this? We do use syslog to aggregate across servers to a logging server already but I'd ideally like to persist the log on the app node first.
Thanks for your insights
|
How to install python packages without root privileges?
| 21,084,055
| 1
| 32
| 40,097
| 0
|
python,numpy,installation,scipy
|
You can import a module from an arbitrary path by calling:
sys.path.append()
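For example (the directory below is purely illustrative; point it at wherever you unpacked or built the packages in your home directory):

```python
import os
import sys

# Hypothetical per-user install location on the cluster.
user_site = os.path.expanduser("~/my-packages/lib/python/site-packages")

# Appending it to sys.path makes `import numpy` (etc.) search
# that directory too, with no root privileges required.
sys.path.append(user_site)
print(user_site in sys.path)  # -> True
```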
| 0
| 1
| 0
| 0
|
2013-01-06T06:33:00.000
| 4
| 0.049958
| false
| 14,179,941
| 1
| 0
| 0
| 1
|
I am using numpy / scipy / pynest to do some research computing on Mac OS X. For performance, we rent a 400-node cluster (with Linux) from our university so that the tasks could be done parallel. The problem is that we are NOT allowed to install any extra packages on the cluster (no sudo or any installation tool), they only provide the raw python itself.
How can I run my scripts on the cluster then? Is there any way to integrate the modules (numpy and scipy also have some compiled binaries I think) so that it could be interpreted and executed without installing packages?
|
What are the other alternatives to using django-kombu?
| 14,181,856
| 2
| 3
| 590
| 0
|
python,django,celery
|
The stable version of kombu is production ready, same for celery.
kombu takes care of the whole messaging between consumers, producers and the message broker, which in order are the celery workers, the web workers (or, more generally, scripts that put tasks in the queue), and the message broker you are using.
You need kombu to run celery (it is actually in the requirements if you look at its setup).
With kombu you can use different message brokers (rabbitmq, redis, ...), so the choice is not between kombu and rabbitmq, as they do different things, but between kombu-with-redis, kombu-with-rabbitmq, etc.
If you are ok with redis as the message broker, you just have to install the
celery-with-redis and django-celery packages.
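As an illustration only, a minimal Django settings fragment for such a Redis-backed setup might look like this (the URLs are assumptions, and the option names reflect celery 3.x-era projects):

```python
# settings.py fragment -- illustrative values, not a drop-in config
import djcelery                                     # from django-celery
djcelery.setup_loader()

BROKER_URL = "redis://localhost:6379/0"             # redis as the broker
CELERY_RESULT_BACKEND = "redis://localhost:6379/1"  # results in redis too
CELERY_TASK_SERIALIZER = "json"
```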
| 0
| 1
| 0
| 0
|
2013-01-06T09:44:00.000
| 1
| 0.379949
| false
| 14,180,944
| 0
| 0
| 1
| 1
|
I am using django-kombu with Celery but have read in quite a few places that it isn't production ready.
Basically, I want to create a multiple-master/multiple-slave architecture using Celery, passing messages between them and back to the main program that made the call.
I am not able to understand where Kombu fits in. Why not RabbitMQ? The tutorials are all very messy, with one person suggesting one thing and another something else.
Can someone give me a clearer picture of what a production stack looks like when dealing with Celery + Django?
Also, do I have to use dj-celery?
|
python binary version doesnt match rpm version
| 14,183,758
| 0
| 0
| 64
| 0
|
python,linux
|
Create a symlink in /usr/bin/ called python2.7, point it to where you have installed the new Python, and use that.
Do not attempt to upgrade or force the default python on a Red Hat box, because a lot of other tools will stop working.
| 0
| 1
| 0
| 0
|
2013-01-06T15:02:00.000
| 2
| 0
| false
| 14,183,362
| 1
| 0
| 0
| 2
|
I have installed Python (2.7.3) manually. How do I update the rpm version?
/usr/bin/python -V:
Python 2.7.3
rpm -qf /usr/bin/python:
python-2.6.5-3.el6.x86_64
any suggestions?
linux version: RH6.3
|
python binary version doesnt match rpm version
| 14,185,931
| 1
| 0
| 64
| 0
|
python,linux
|
You installed it incorrectly. Instead of make install you should run make altinstall. This will install the new version of Python parallel to existing versions, and create a new executable in $PREFIX/bin with the name of python followed by the minor version of Python installed, e.g. python2.7.
| 0
| 1
| 0
| 0
|
2013-01-06T15:02:00.000
| 2
| 0.099668
| false
| 14,183,362
| 1
| 0
| 0
| 2
|
I have installed Python (2.7.3) manually. How do I update the rpm version?
/usr/bin/python -V:
Python 2.7.3
rpm -qf /usr/bin/python:
python-2.6.5-3.el6.x86_64
any suggestions?
linux version: RH6.3
|
how to set the interpreter of wxpython for eclipse once for all
| 14,198,642
| 0
| 0
| 245
| 0
|
eclipse,macos,python-2.7,wxpython
|
I always go to Preferences / PyDev / Interpreter - Python. Then add a new interpreter, and just click Add and Apply. Wait until everything is parsed, this takes a while. Then click OK.
Change the interpreter from "Default" to your newly set-up interpreter.
Check if correct interpreter is set for your project. Right-click the project / Properties / PyDev - Interpreter/Grammar. New projects should get this by default.
| 0
| 1
| 0
| 0
|
2013-01-06T17:11:00.000
| 1
| 1.2
| true
| 14,184,589
| 0
| 0
| 0
| 1
|
I have Eclipse, Python 2.7, wxPython 2.8, and OS X 10.5.8.
I would like wxPython to be recognized correctly in my Eclipse environment, so that the wxPython commands are not all underlined as errors.
I've added the correct path of the wx library to the PYTHONPATH via preferences. Once I import it manually in Eclipse and save the settings, it works.
But if I close Eclipse and open it again, even though the interpreter has the wxPython path, it seems not to be recognized, and I have no autocomplete and no documentation. I need to remove and add the same path again to make everything work. This still happens after months. I guess it may be a problem with Eclipse on Mac OS X.
Do you know why?
Do you agree?
thank you in advance
|
Need more than 32 USB sound cards on my system
| 14,203,491
| 3
| 6
| 1,112
| 0
|
python,linux,alsa,udev,soundcard
|
The sound card limit is defined as the symbol SNDRV_CARDS in include/sound/core.h.
When I increased this seven years ago, I did not go beyond 32 because the card index is used as a bit index for the variable snd_cards_lock in sound/core/init.c, and I did not want to change more than necessary.
If you make snd_cards_lock a 64-bit variable, change all accesses to use a 64-bit type, and adjust any other side effect that I might have forgotten about, you should be able to get the kernel to have more ALSA cards.
This limit also exists in the alsa-lib package; you will have to change at least the check in snd_ctl_hw_open in src/control/control_hw.c.
| 0
| 1
| 0
| 0
|
2013-01-07T18:08:00.000
| 2
| 0.291313
| false
| 14,201,551
| 0
| 0
| 0
| 1
|
I'm working on an educative multiseat project where we need to connect 36 keyboards and 36 USB sound cards to a single computer. We're running Ubuntu Linux 12.04 with the 3.6.3-030603-generic kernel.
So far we've managed to get the input from the 36 keyboards, and recognized the 36 sound cards without getting a kernel panic (which happened before updating the kernel). We know the 36 sound cards have been recognized because $ lsusb | grep "Audio" -c outputs 36.
However, $ aplay -l lists 32 playback devices in total (including the "internal" sound card). Also, $ alsamixer -c 32 says "invalid card index: 32" (works just from 0 through 31 ; 32 in total too).
So my question is, how can I access the other sound cards if they're not even listed with these commands? I'm writing an application in python and there are some libraries to choose from, but I'm afraid they'll also be limited to 32 devices in total because of this. Any guidance will be useful.
Thanks.
|
Running process in parallel for data collection
| 26,588,050
| 0
| 1
| 186
| 0
|
python,multithreading,parallel-processing,serial-port
|
Instead of using threads, you could also implement your data sources as generators and just loop over them to consume the incoming data and do something with it. Perhaps using two different generators and zipping them together would work; that would actually be a nice experiment, though I'm not entirely sure it can be done...
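A toy sketch of the idea, with lists standing in for the two serial streams (all values made up):

```python
def scale_readings():
    # Stand-in for the scale's serial stream.
    for weight in [10.1, 10.3, 10.2, 10.4]:
        yield weight

def probe_readings():
    # Stand-in for the slower conductivity-probe stream.
    for cond in [1.5, 1.6]:
        yield cond

# zip pulls one item from each generator per iteration and stops
# at the shorter stream; real streams would need rate handling.
pairs = list(zip(scale_readings(), probe_readings()))
print(pairs)  # -> [(10.1, 1.5), (10.3, 1.6)]
```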
| 0
| 1
| 0
| 0
|
2013-01-08T07:59:00.000
| 1
| 0
| false
| 14,210,568
| 0
| 0
| 0
| 1
|
I am collecting data from two pieces of equipment using serial ports (a scale and a conductivity probe). I need to continuously collect data from the scale, which I average between collection points of the conductivity probe (roughly a minute apart).
Thus I need to run two processes at the same time: one that collects data from the scale, and another that waits for data from the conductivity probe. Once it gets the data, it would send a command to the other process to get the collected scale data, which is then time-stamped and saved into a .csv file.
I looked into subprocess but I can't figure out how to reset a running script. Any suggestions on what to look into?
|
Can I use open cv with python on Google app engine?
| 43,981,159
| 3
| 4
| 2,599
| 0
|
python,google-app-engine,opencv,python-2.7
|
Now it is possible. The app should be deployed using a custom runtime in the GAE flexible environment. OpenCV library can be installed by adding the instruction RUN apt-get update && apt-get install -y python-opencv in the Dockerfile.
| 0
| 1
| 0
| 0
|
2013-01-08T15:03:00.000
| 3
| 0.197375
| false
| 14,217,858
| 0
| 0
| 1
| 1
|
Hi, actually I was working on a project which I intended to deploy on Google App Engine.
I found that Google App Engine supports Python. Can I run OpenCV with Python scripts on Google App Engine?
|
Celery 3.0.12 countdown not working
| 14,246,789
| 1
| 0
| 497
| 0
|
python,celery
|
This is a bug in celery 3.0.12, reverting to celery 3.0.11 did the job.
Hope this helps someone
| 0
| 1
| 0
| 0
|
2013-01-08T23:30:00.000
| 1
| 0.197375
| false
| 14,225,865
| 0
| 0
| 1
| 1
|
When I run my task with my_task.apply_async([283], countdown=5), it runs immediately, when it should run 5 seconds later as the ETA says:
[2013-01-08 15:15:21,600: INFO/MainProcess] Got task from broker: web.my_task[4635f997-6232-4722-9a99-d1b42ccd5ab6] eta:[2013-01-08 15:20:51.580994]
[2013-01-08 15:15:22,095: INFO/MainProcess] Task web.my_task[4635f997-6232-4722-9a99-d1b42ccd5ab6] succeeded in 0.494245052338s: None
here is my installation:
software -> celery:3.0.12 (Chiastic Slide) kombu:2.5.4 py:2.7.3
billiard:2.7.3.19 py-amqp: N/A
platform -> system:Darwin arch:64bit imp:CPython
loader -> djcelery.loaders.DjangoLoader
settings -> transport:amqp results:mongodb
Is this a celery bug, or am I missing something?
|
How to know which application run a python script
| 14,236,492
| 0
| 0
| 99
| 0
|
python
|
If you're happy to stay specific to Unix, you can get the parent PID of the process with os.getppid(). If you want to translate it back to a program name, you can run a subprocess to use the relevant OS-specific PID-to-useful-data tool (most likely ps).
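A minimal Unix-only sketch of that (the ps flags shown are one common spelling; check your platform's man page):

```python
import os
import subprocess

ppid = os.getppid()  # PID of whatever process launched this script
# Ask ps for just the parent's command name (Unix-specific).
name = subprocess.check_output(
    ["ps", "-o", "comm=", "-p", str(ppid)]).strip()
print(ppid, name)
```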
| 0
| 1
| 0
| 1
|
2013-01-09T13:20:00.000
| 2
| 0
| false
| 14,236,130
| 0
| 0
| 0
| 1
|
Is there a way to know which application my Python script was run from?
I can run Python from multiple sources, like TextMate, Sublime Text 2, or Terminal (I'm on Mac OS X). How can I know exactly which tool launched the current Python app?
I've tried looking into the os and inspect modules, but couldn't find the solution.
|
Pass Python scripts for mapreduce to HBase
| 14,675,698
| -2
| 3
| 3,776
| 0
|
python,hadoop,mapreduce,hbase
|
You can very easily do map-reduce programming with Python that interacts with the Thrift server. The HBase client in Python would be a Thrift client.
| 0
| 1
| 0
| 0
|
2013-01-09T16:27:00.000
| 2
| -0.197375
| false
| 14,241,729
| 0
| 0
| 1
| 1
|
We have a HBase implementation over Hadoop. As of now all our Map-Reduce jobs are written as Java classes. I am wondering if there is a good way to use Python scripts to pass to HBase for Map-Reduce.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.