Columns: Title (string) | A_Id (int64) | Users Score (int64) | Q_Score (int64) | ViewCount (int64) | Database and SQL (int64, 0/1) | Tags (string) | Answer (string) | GUI and Desktop Applications (int64, 0/1) | System Administration and DevOps (int64, 0/1) | Networking and APIs (int64, 0/1) | Other (int64, 0/1) | CreationDate (string) | AnswerCount (int64) | Score (float64) | is_accepted (bool) | Q_Id (int64) | Python Basics and Environment (int64, 0/1) | Data Science and Machine Learning (int64, 0/1) | Web Development (int64, 0/1) | Available Count (int64) | Question (string)
Google Cloud Storage Client Library 400 error on devserver as of update 1.8.8
| 20,105,749
| 1
| 1
| 164
| 0
|
python,google-app-engine,google-cloud-storage,client-library
|
Have you updated the GCS client to the 1.8.8 version from the downloads list, or to SVN head? Thanks.
| 0
| 1
| 0
| 0
|
2013-11-20T19:29:00.000
| 1
| 1.2
| true
| 20,105,156
| 0
| 0
| 1
| 1
|
I haven't changed any code, but when I try to upload an image file using the GCS Client library on app engine's dev server, I am now getting this fatal error:
Expect status [201] from Google Storage. But got status 400.
This was working until I made the update from Google to 1.8.8 as of 11/19/13.
Anybody else seeing this? It doesn't give any other indication as to why the 400 error occurred.
|
twisted web for production server service
| 20,114,092
| 0
| 2
| 805
| 0
|
python,web,twisted
|
I have found several good references for launching daemon processes with Python; see daemoncmd on PyPI.
I'm still coming up a little short on monitoring/alert solutions (in Python).
| 0
| 1
| 0
| 0
|
2013-11-21T02:21:00.000
| 1
| 0
| false
| 20,111,254
| 0
| 0
| 1
| 1
|
I would like to deploy several WSGI web applications with Twisted on a debian server, and need some direction for a solid production setup. These applications will be running 24/7.
I need to run several configurations, each binding to different ports/interfaces/privileges.
I want to do as much of this in python as possible.
I do not want to package my applications with a program like 'tap2deb'.
What is the best way to implement each application as a system service? Do I need some /etc/init.d shell scripts, or can I manage this with python? (I don't want anything quite as heavy as Daemontools)
If I use twistd to manage most of the configuration/process management, what kind of wrappers/supervisors do I need to put in place?
I would like centralized management, but restricting control to the parent user account is not a problem.
The main problem I want to avoid is having to SSH into my server once a day to restart a blocked/crashed application.
|
Calling exe in Windows from Linux
| 20,140,905
| 0
| 0
| 252
| 0
|
python,linux,django,windows,apache
|
I can think of some ways to do this:
Use web services with a real REST protocol and cross-site scripting protection.
Use WINE (as OneOfOnes suggested in his comment), but this is very risky for real production and might not work at all (or stop working once the load becomes heavier).
Write some code on the Windows machine and call that code using something like ZeroMQ (ZMQ) or a similar product.
Depending on the way you are using this library, one solution may fit better than the others. For most cases, I would suggest going with ZMQ. This way you can use much more complex models of communication (publish-subscribe, request-response, and more). Also, using ZMQ lets you scale in a very easy way if the need arises (you will be able to put a few Windows machines to work processing the requests).
Edit:
To support file transfer between the machines, you have a few options as well:
Use ZMQ. A file can be just a stream of data, and there is no problem supporting such a stream with ZMQ.
Use a file server with some enqueue procedure. Enqueueing can be done via a ZMQ message to inform the other side that the file is ready. You could use a folder share instead of a file server, but sharing files on the Windows machine will not be a scalable solution.
Have the Windows program send the file via FTP or SSH to the Linux server. Once again, signaling (file ready, file name, ...) can be done with ZMQ.
| 0
| 1
| 0
| 0
|
2013-11-22T08:09:00.000
| 1
| 0
| false
| 20,139,979
| 0
| 0
| 1
| 1
|
Background knowledge: Website in Django run under apache.
Briefly speaking, I need to call an .exe program in a windows machine from a Linux machine.
The reason for this is that our website runs on Linux, but one module relies on a Windows DLL. So we plan to put it on a separate Windows server, and use some method to call the exe program and get the result back.
My idea is: set up a web service on that Windows machine, post some data to it, let it deal with the exe program, and return some data as the response. Note that the request data and response data will both contain files.
I wonder if there is any neater way for this?
EDIT: Thanks to @fhs, I found I didn't state my main problem clearly enough. Yes, the web service could work. But the main disadvantage of this is: basically, I need to post several files to Windows; Windows receives the files, saves them, calls the program using these files as parameters, and then packages the result files into a zip and returns it. On Linux, I receive the file and unpack it to the local file system. It's kind of troublesome.
So, is there any way to let both machines access the other one's files as easily as in local file system?
|
mykey.get() method doesn't work in google app engine project
| 20,156,917
| 3
| 0
| 99
| 0
|
google-app-engine,python-2.7,app-engine-ndb
|
A few things are wrong here.
Firstly, Subcategory.get_by_id(subcategoryId) probably won't work, as your example key has an ancestor defined. You need to include the ancestor(s) in get_by_id.
Given that you are using mySubcategorykey.get() and you don't retrieve an entity, it means the key is incorrect. A get by key won't experience eventual consistency, so either the key is just wrong, or you didn't put() the original entity.
I suggest you examine the key after you put() the entity and see if it actually matches what you are using.
Also, there are problems with your example key, ndb.Key(Category, 'Foods', Subcategory, subcategoryId): Category and Subcategory need to be strings, or variables with the string values "Category" and "Subcategory", which would be a bit odd to write this way.
Also, you don't create query objects from keys; query is a method of ndb.Model, or you instantiate a query object from ndb.Query.
So you are mixing up some terminology and/or concepts.
| 0
| 1
| 0
| 0
|
2013-11-23T00:00:00.000
| 1
| 1.2
| true
| 20,156,694
| 0
| 0
| 1
| 1
|
I am porting one of my projects to Google App Engine, just for the sake of learning. However, I have some problems with the ndb datastore. My root entity would be Categories, and these have Subcategories as child entities. So let's say I have Category Foods, which has Subcategory Main Dishes. The key for this entity would be ndb.Key(Category, 'Foods', Subcategory, subcategoryId). When I create a query object from this key I can fetch the correct subcategory, but per the documentation I would like to use two other methods as well, which are not working, I don't know for what reason.
mySubcategorykey.get() => it returns None using the aforementioned key.
Subcategory.get_by_id(subcategoryId) => Also returns None.
Also, when I generate a urlsafe string from the key, I cannot return the object with ndb.Key(urlsafe=myUrlSafeString).get(); however, printing the ndb.Key(urlsafe=...) gives me the correct key, as stated in the Datastore Viewer.
Can anyone help me by telling me what I am doing wrong? Thank you.
|
AppEngine Push to Deploy and Modules
| 20,196,135
| 4
| 3
| 347
| 0
|
python,git,google-app-engine
|
Lack of support for App Engine modules is a known issue for the push-to-deploy feature, and is something we're actively working on addressing at this time.
| 0
| 1
| 0
| 0
|
2013-11-23T06:08:00.000
| 1
| 1.2
| true
| 20,159,232
| 0
| 0
| 1
| 1
|
Can I use App Engine's "Push to Deploy" (deploying by pushing a Git repository) to update a multiple-module Python application?
Where do I get the repo url for the non default modules?
|
Using Soundcloud Python library in Google App Engine - what files do I need to move?
| 20,177,193
| 0
| 0
| 241
| 0
|
python,google-app-engine,pip,soundcloud
|
OK, I think I figured it out. What I needed to do was copy the soundcloud folder, along with the fudge, simplejson, and requests folders, into the root folder of my webapp. Thank you VooDooNOFX; although you didn't directly answer the precise question, you sparked the thinking that helped me figure it out.
| 0
| 1
| 0
| 0
|
2013-11-23T19:19:00.000
| 2
| 0
| false
| 20,166,720
| 0
| 0
| 1
| 1
|
I want to use the soundcloud python library in a web app I am developing in Google App Engine. However, I can't find any file called "soundcloud.py" in the soundcloud library files I downloaded. When using pip install it works fine on my local computer.
What files exactly do I need to move, or what exact steps do I need to take, in order to be able to "import soundcloud" within Google App Engine?
I already tried moving all the *.py files into my main app directory, but still got this error:
import soundcloud
ImportError: No module named soundcloud
|
Best way to implement long running subprocess in Django?
| 20,231,324
| 5
| 4
| 1,377
| 0
|
python,django,python-2.7,subprocess,celery
|
If you don't want something as complex as Celery, then you can use subprocess + nohup to start long-running tasks, dump the PID to a file (check the subprocess documentation for how to do that), and then check whether the PID contained in the file is still running (using ps). And if you wanted, you could write a very small 'wrapper' script which would run the task you tell it to and, if it crashes, write a 'crashed.txt' file.
One thing to note is that you should probably pass close_fds=True to the call (so check_call(['/usr/bin/nohup', '/tasks/do_long_job.sh'], close_fds=True)). Why? By default, all subprocesses are given access to the parent's open file descriptors, INCLUDING ports. This means that if you need to restart your web server process while the long process is running, the running process will keep the port open and you won't be able to load the server up again. You can guess how I found this out. :-)
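A concrete sketch of the PID-file approach above. The spawned command here is a placeholder (a short sleep via the Python interpreter) standing in for the real nohup-wrapped job, and the PID file path is illustrative:

```python
import os
import subprocess
import sys
import tempfile

# Launch a stand-in long-running task and record its PID in a file.
pid_file = os.path.join(tempfile.gettempdir(), 'long_task.pid')
proc = subprocess.Popen(
    [sys.executable, '-c', 'import time; time.sleep(30)'],
    close_fds=True,  # don't hand the web server's open sockets to the child
)
with open(pid_file, 'w') as f:
    f.write(str(proc.pid))

def is_running(pid):
    """Rough equivalent of checking with ps: signal 0 only tests existence."""
    try:
        os.kill(pid, 0)
        return True
    except OSError:
        return False
```

A monitoring view (or cron job) would read the PID back from the file and call is_running() to decide whether the task crashed and needs re-running.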
| 0
| 1
| 0
| 0
|
2013-11-23T21:37:00.000
| 2
| 0.462117
| false
| 20,168,198
| 1
| 0
| 1
| 1
|
I know there are many questions similar to this one, but as far as my research has taken me, none of them answers my specific question. I hope you will take your time to help me out, as I have been struggling with this for days without finding a proper answer.
I am trying to find the best way to implement a subprocess into a Django application. To be more specific:
The process will be run from one view (asynchronously) and handled from another.
The process can run up to several hours.
Multiple instances of the same process/program should be able to run at the same time.
Other than knowing when the process is completed (or if it crashed, so it can be re-run), no communication with it is needed.
Does anyone know which way would be the best to implement this? Would any of the Python modules (such as subprocess, threads, multiprocessing, spawn) be able to achieve this or would I have to implement an external task queue such as Celery?
|
Problems running python script by windows task scheduler that does pscp
| 70,087,563
| 0
| 17
| 39,296
| 0
|
python,scheduled-tasks,scp
|
I had this issue before. I was able to run the task manually in Windows Task Scheduler, but not automatically. I remembered that there was a change to the time made by another user; maybe this change caused the Task Scheduler to error out, I am not sure. Therefore, I created another task with a different name for the same script, and the script worked automatically. Try creating a test task running the same script. Hopefully that works!
| 0
| 1
| 0
| 0
|
2013-11-25T14:51:00.000
| 8
| 0
| false
| 20,196,049
| 0
| 0
| 0
| 5
|
Not sure if anyone has run into this, but I'll take suggestions for troubleshooting and/or alternative methods.
I have a Windows 2008 server on which I am running several scheduled tasks. One of those tasks is a python script that uses pscp to log into a linux box, checks for new files and if there is anything new, copies them down to a local directory on the C: drive. I've put some logging into the script at key points as well and I'm using logging.basicConfig(level=DEBUG).
I built the command using a variable, command = 'pscp -pw xxxx name@ip:/ c:\local_dir' and then I use subprocess.call(command) to execute the command.
Now here's the weird part. If I run the script manually from the command line, it works fine. New files are downloaded and processed. However, if the Task Scheduler runs the script, no new files are downloaded. The script is running under the same user, but yet yields different results.
According to the log files created by the script and on the linux box, the script successfully logs into the linux box. However, no files are downloaded despite there being new files. Again, when I run it via the command line, files are downloaded.
Any ideas? suggestions, alternative methods?
Thanks.
|
Problems running python script by windows task scheduler that does pscp
| 70,039,404
| 0
| 17
| 39,296
| 0
|
python,scheduled-tasks,scp
|
Just leaving this for posterity: a similar issue I faced was resolved by using the UNC path (\\10.x.xx.xx\Folder\xxx) everywhere in my .bat and .py scripts instead of the letter assigned to the drive (K:\Folder\xxx).
| 0
| 1
| 0
| 0
|
2013-11-25T14:51:00.000
| 8
| 0
| false
| 20,196,049
| 0
| 0
| 0
| 5
|
Not sure if anyone has run into this, but I'll take suggestions for troubleshooting and/or alternative methods.
I have a Windows 2008 server on which I am running several scheduled tasks. One of those tasks is a python script that uses pscp to log into a linux box, checks for new files and if there is anything new, copies them down to a local directory on the C: drive. I've put some logging into the script at key points as well and I'm using logging.basicConfig(level=DEBUG).
I built the command using a variable, command = 'pscp -pw xxxx name@ip:/ c:\local_dir' and then I use subprocess.call(command) to execute the command.
Now here's the weird part. If I run the script manually from the command line, it works fine. New files are downloaded and processed. However, if the Task Scheduler runs the script, no new files are downloaded. The script is running under the same user, but yet yields different results.
According to the log files created by the script and on the linux box, the script successfully logs into the linux box. However, no files are downloaded despite there being new files. Again, when I run it via the command line, files are downloaded.
Any ideas? suggestions, alternative methods?
Thanks.
|
Problems running python script by windows task scheduler that does pscp
| 52,639,496
| 0
| 17
| 39,296
| 0
|
python,scheduled-tasks,scp
|
Create a batch file, add your Python script to it, and then schedule that batch file; it will work.
Example: suppose your Python script is c:\abhishek\script\merun.py
First you have to go to the directory with the cd command, so your batch file would be like:
cd c:\abhishek\script
python merun.py
It worked for me.
| 0
| 1
| 0
| 0
|
2013-11-25T14:51:00.000
| 8
| 0
| false
| 20,196,049
| 0
| 0
| 0
| 5
|
Not sure if anyone has run into this, but I'll take suggestions for troubleshooting and/or alternative methods.
I have a Windows 2008 server on which I am running several scheduled tasks. One of those tasks is a python script that uses pscp to log into a linux box, checks for new files and if there is anything new, copies them down to a local directory on the C: drive. I've put some logging into the script at key points as well and I'm using logging.basicConfig(level=DEBUG).
I built the command using a variable, command = 'pscp -pw xxxx name@ip:/ c:\local_dir' and then I use subprocess.call(command) to execute the command.
Now here's the weird part. If I run the script manually from the command line, it works fine. New files are downloaded and processed. However, if the Task Scheduler runs the script, no new files are downloaded. The script is running under the same user, but yet yields different results.
According to the log files created by the script and on the linux box, the script successfully logs into the linux box. However, no files are downloaded despite there being new files. Again, when I run it via the command line, files are downloaded.
Any ideas? suggestions, alternative methods?
Thanks.
|
Problems running python script by windows task scheduler that does pscp
| 21,116,923
| 19
| 17
| 39,296
| 0
|
python,scheduled-tasks,scp
|
I had the same issue when trying to open an MS Access database on a Linux VM. Running the script at the Windows 7 command prompt worked but running it in Task Scheduler didn't. With Task Scheduler it would find the database and verify it existed but wouldn't return the tables within it.
The solution was to have Task Scheduler run cmd as the Program/Script with the arguments /c python C:\path\to\script.py (under Add arguments (optional)).
I can't tell you why this works but it solved my problem.
| 0
| 1
| 0
| 0
|
2013-11-25T14:51:00.000
| 8
| 1.2
| true
| 20,196,049
| 0
| 0
| 0
| 5
|
Not sure if anyone has run into this, but I'll take suggestions for troubleshooting and/or alternative methods.
I have a Windows 2008 server on which I am running several scheduled tasks. One of those tasks is a python script that uses pscp to log into a linux box, checks for new files and if there is anything new, copies them down to a local directory on the C: drive. I've put some logging into the script at key points as well and I'm using logging.basicConfig(level=DEBUG).
I built the command using a variable, command = 'pscp -pw xxxx name@ip:/ c:\local_dir' and then I use subprocess.call(command) to execute the command.
Now here's the weird part. If I run the script manually from the command line, it works fine. New files are downloaded and processed. However, if the Task Scheduler runs the script, no new files are downloaded. The script is running under the same user, but yet yields different results.
According to the log files created by the script and on the linux box, the script successfully logs into the linux box. However, no files are downloaded despite there being new files. Again, when I run it via the command line, files are downloaded.
Any ideas? suggestions, alternative methods?
Thanks.
|
Problems running python script by windows task scheduler that does pscp
| 21,117,293
| 2
| 17
| 39,296
| 0
|
python,scheduled-tasks,scp
|
Brad's answer is right. Subprocess needs the shell context to work, and Task Scheduler can launch Python without that. Another way to do it is to make a batch file, launched by the Task Scheduler, that calls python c:\path\to\script.py etc. The only difference is that if you run a script that makes a call to os.getcwd(), you will always get the root where the script is, but you get something else when you make the call to cmd from Task Scheduler.
| 0
| 1
| 0
| 0
|
2013-11-25T14:51:00.000
| 8
| 0.049958
| false
| 20,196,049
| 0
| 0
| 0
| 5
|
Not sure if anyone has run into this, but I'll take suggestions for troubleshooting and/or alternative methods.
I have a Windows 2008 server on which I am running several scheduled tasks. One of those tasks is a python script that uses pscp to log into a linux box, checks for new files and if there is anything new, copies them down to a local directory on the C: drive. I've put some logging into the script at key points as well and I'm using logging.basicConfig(level=DEBUG).
I built the command using a variable, command = 'pscp -pw xxxx name@ip:/ c:\local_dir' and then I use subprocess.call(command) to execute the command.
Now here's the weird part. If I run the script manually from the command line, it works fine. New files are downloaded and processed. However, if the Task Scheduler runs the script, no new files are downloaded. The script is running under the same user, but yet yields different results.
According to the log files created by the script and on the linux box, the script successfully logs into the linux box. However, no files are downloaded despite there being new files. Again, when I run it via the command line, files are downloaded.
Any ideas? suggestions, alternative methods?
Thanks.
|
What is uWSGI master mode?
| 20,199,153
| 5
| 11
| 4,022
| 0
|
python,uwsgi
|
upstart is only a process manager, while the uWSGI master has access to many memory areas of the workers (well, it is the opposite indeed), so it can truly monitor the behaviour of the workers. In addition to this, it allows graceful reloading, exports statistics, and does dozens of other things. Running without it is not a good idea from various points of view.
| 0
| 1
| 0
| 1
|
2013-11-25T15:49:00.000
| 1
| 1.2
| true
| 20,197,259
| 0
| 0
| 1
| 1
|
What are the benefits of running uWSGI in master mode if I'm only running a single app? Does master mode offer process management benefits that make it more reliable than, say, running via Upstart?
|
Google Compute Engine OpenERP
| 23,682,731
| 0
| 2
| 513
| 0
|
python,debian,openerp,google-compute-engine
|
You could limit access to your OpenERP instance by specifying with --allowed_ip_sources="x.x.x.x" the IP or the CIDR range from which you expect the application to be accessed.
Additionally, limit access to port 8069 to your OpenERP instance only, by tagging the instance as, say, ERP and applying --target_tags="ERP" to limit traffic from your source IP range to hit only the specific ERP instance.
| 0
| 1
| 0
| 0
|
2013-11-26T02:36:00.000
| 2
| 0
| false
| 20,207,499
| 0
| 0
| 1
| 1
|
I have already installed OpenERP and PostgreSQL on Google Compute Engine, using Debian 7. When I check with ifconfig as the root user, I only get two IP addresses: 127.0.0.1 and my internal IP address. My external (public) IP can't be detected by Debian 7.
I use an ephemeral IP address for my external IP.
I have already tried running the OpenERP service using 127.0.0.1:8069 and my internal IP 10.240.226.xxx.
I can't access it from my external IP 8.34.xxx.xx:8069.
Please give me advice on how to fix this problem. Also, where can I contact or find Google "Help & Support" or submit a support ticket, besides using Stack Overflow and Google Groups?
|
Installing bigfloat, GMP and MPFR in windows for python
| 20,229,129
| 1
| 1
| 4,016
| 0
|
python,installation,gmp,mpfr,bigfloat
|
There are two versions of gmpy - version 1 (aka gmpy) and version 2 (aka gmpy2). gmpy2 includes MPFR. If you install gmpy2 then you probably don't need bigfloat since the functionality of MPFR can be directly accessed from gmpy2.
Disclaimer: I maintain gmpy and gmpy2.
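For experimenting before the C libraries are installed: the stdlib decimal module is only a stand-in (it uses neither GMP nor MPFR), but it illustrates the configurable-precision arithmetic that bigfloat/gmpy2 provide, with no compilation step:

```python
from decimal import Decimal, getcontext

# Stand-in illustration: set the working precision on the context, then
# arithmetic is carried out to that many significant digits.
getcontext().prec = 50          # 50 significant digits
one_seventh = Decimal(1) / Decimal(7)
```

With gmpy2 installed, the analogous knob is the precision (in bits) of the MPFR context rather than decimal digits.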
| 0
| 1
| 0
| 0
|
2013-11-26T06:19:00.000
| 3
| 0.066568
| false
| 20,209,874
| 1
| 0
| 0
| 1
|
I am trying to install bigfloat in Python 3.2 on a Windows 7 machine. The documentation says that I first need to install GMP and MPFR. I have downloaded both of these to my desktop (as well as the bigfloat package). However as they are C packages I am not sure how to install them in python (I have tried to find a clear explanation for the last several hours and failed). Can any one either tell me what I need to do or point me to a tutorial? Thanks a lot, any help is greatly appreciated.
|
python 3.3.3 hangs if opened in shell within emacs
| 20,214,424
| 1
| 2
| 232
| 0
|
python-3.3,emacs24
|
Sounds like a bug. Try a workaround: load python-mode first, then open the shell interactively. This will provide some setup, which might cure it.
With the shipped python.el: M-x run-python RET
With python-mode.el: M-x python[VERSION] RET
VERSION is optional; it provides non-default shells without re-customizing the variable holding the command name, i.e. py-shell-name.
| 0
| 1
| 0
| 0
|
2013-11-26T09:38:00.000
| 1
| 1.2
| true
| 20,213,284
| 0
| 0
| 0
| 1
|
I am trying to start python 3.3.3 within a shell buffer in emacs (GNU emacs 24.2). OS is Win7. If I start python from the regular command line, the program works well. If I open a shell buffer in emacs (M-x shell) and type "python" into the command line (the program is in the path), it prints "python" on a new line and stops there.
Any ideas what I am doing wrong?
|
virtualenv on Windows, activate/deactivate events/hooks
| 20,238,425
| 1
| 1
| 1,635
| 0
|
python,virtualenv
|
Many problems are lessened or solved with virtualenvwrapper-win, a well-written framework with simple entry points. I spent a lot of time fighting with Windows, trying to get a functional Python work environment; this is one of those programs I really wish I had known about a long time ago.
It does not handle multiple Python installations (or switching between them) especially well, but the project owner also developed another supporting product, pywin, meant to address that particular shortcoming.
The whole point is that it makes Windows command-line development quite a bit smoother, even if it's not all the automation I dream about.
| 0
| 1
| 0
| 0
|
2013-11-26T14:45:00.000
| 1
| 1.2
| true
| 20,220,200
| 1
| 0
| 0
| 1
|
I am working on Windows (sadface) with Python and virtualenv.
I would like to have setup and teardown scripts that go along with the virtualenv activation/deactivation. But I am not sure if these hooks have already been designated, and if so, where?
I guess I could hack the activate.bat, but then what if I use activate.py instead (does activate.py call activate.bat, or must I hack both files)? I can almost get away with environment variable PYTHONSTARTUP, but this needs to be redefined in each virtualenv. So unless virtualenv allows arbitrary assignment of env-vars, I am back to an activation/deactivation hook to set PYTHONSTARTUP (which really defeats the purpose, but now you see my catch-22).
EDIT: I plan to use my virtualenv to host interactive development sessions. I will be calling 'venv/bin/activate.bat' manually from the terminal. I do not want loose Batch/Powershell scripts laying around that I have to remember to call once when I activate, and once again when I deactive. I want to hook the execution in such a way, so that after I add my custom scripting hooks, 6 months later I don't have to remember how it works. I just execute activate.bat, and I am off to the races.
|
Output file contains nothing before script finishing
| 20,234,550
| 0
| 1
| 641
| 0
|
python,bash,pbs
|
I like to write data to sys.stderr sometimes for this sort of thing; it obviates the need to flush so much. But if you're generating output for piping, you remain better off with sys.stdout.
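A small sketch of both options: route progress messages to stderr (typically unbuffered, so they appear immediately even under redirection), and flush explicitly so the same helper is safe for stdout as well. The function name is illustrative:

```python
import sys

def report_progress(msg, stream=sys.stderr):
    # stderr is typically unbuffered, so messages show up in the output file
    # right away even when the job's output is redirected.
    stream.write(msg + '\n')
    stream.flush()  # explicit flush makes this safe for sys.stdout too

for step in range(3):
    report_progress('finished step %d' % step)
```

Alternatively, running the whole script with python -u disables output buffering entirely, which is often the simplest fix for batch-queued jobs.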
| 0
| 1
| 0
| 0
|
2013-11-27T04:01:00.000
| 2
| 0
| false
| 20,233,650
| 1
| 0
| 0
| 1
|
I wrote a Python script that contains several print statements. The printed information helps me monitor the progress of the script. But when I qsub the bash script, which contains python my_script &> output, onto computing nodes, the output file contains nothing even while the script is running and printing; the output file only contains the output once the script is done. So how can I get the output in real time through the output file while the script is running?
|
How to send email alert to admin after delete a python setup file using linux?
| 20,235,689
| 2
| 0
| 100
| 0
|
python,linux,python-3.x
|
The easiest way is to not allow the user to delete the script. Put the script in one of the non-core bin directories, e.g. into /usr/local/bin as root, and a regular user will not be able to remove it.
| 0
| 1
| 0
| 1
|
2013-11-27T06:34:00.000
| 1
| 0.379949
| false
| 20,235,524
| 0
| 0
| 0
| 1
|
I want to send an email alert using Linux when a user deletes a Python setup file. I made a screenshot program using Python; unfortunately, a user may uninstall the Python setup file, and in that case I want to send an email to the admin. If you know the processing steps, kindly share them with me, or please give any suggestions.
|
Number of map tasks and split size
| 20,252,470
| 1
| 3
| 1,433
| 0
|
python,hadoop
|
The number of mappers is actually governed by the InputFormat you are using. Having said that, based on the type of data you are processing, the InputFormat may vary. Normally, for data stored as files in HDFS, FileInputFormat (or a subclass) is used, which works on the principle MR split = HDFS block. However, this is not always true. Say you are processing a flat binary file. In such a case there is no delimiter (\n or something else) to represent the split boundary. What would you do in such a case? So, the above principle doesn't always work.
Consider another scenario wherein you are processing data stored in a DB, and not in HDFS. What will happen in such a case, as there is no concept of a 64MB block size when we talk about DBs?
The framework tries its best to carry out the computation as efficiently as possible, which might involve the creation of fewer or more mappers than specified/expected by you. So, in order to see how exactly mappers are getting created, you need to look into the InputFormat you are using in your job; the getSplits() method, to be precise.
If I want to use only 1 map task, do I have to set the input splits size to 1GB??
You can override the isSplitable(FileSystem, Path) method of your InputFormat to ensure that the input files are not split up and are processed as a whole by a single mapper.
Let's say I successfully specify that I want to use only 2 map tasks, does it use 2 cores? And each core has 1 map task??
It depends on availability. Mappers can run on multiple cores simultaneously, and a single core can run multiple mappers sequentially.
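For the specific numbers in this question, the default FileInputFormat arithmetic is simple to check, assuming MR split = HDFS block and a splittable file (this is a back-of-the-envelope sketch, not the actual getSplits() logic):

```python
def num_splits(file_size_bytes, split_size_bytes):
    # One mapper per split; ceiling division covers a partial final block.
    return -(-file_size_bytes // split_size_bytes)

GB = 1024 ** 3
MB = 1024 ** 2
mappers = num_splits(1 * GB, 64 * MB)  # 16 mappers for 1 GB at 64 MB splits
```

So a 1 GB file at a 64 MB split size yields 16 default map tasks; getting 38 suggests the input is actually many smaller files (one split per file minimum) or a different split size is in effect.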
| 0
| 1
| 0
| 0
|
2013-11-27T16:57:00.000
| 2
| 1.2
| true
| 20,248,521
| 0
| 0
| 0
| 1
|
What I'm trying to do
I'm new to hadoop and I'm trying to perform MapReduce several times with a different number of mappers and reducers, and compare the execution time. The file size is about 1GB, and I'm not specifying the split size so it should be 64MB. I'm using a machine with 4 cores.
What I've done
The mapper and reducer are written in python. So, I'm using hadoop streaming. I specified the number of map tasks and reduce tasks by using '-D mapred.map.tasks=1 -D mapred.reduce.tasks=1'
Problem
Because I specified that I want 1 map task and 1 reduce task, I expected to see just one attempt, but I actually have 38 map attempts and 1 reduce task. I read tutorials and SO questions similar to this problem, and some said that the default number of map tasks is 2, but I'm getting 38 map tasks. I also read that mapred.map.tasks only suggests the number, and that the number of map tasks equals the number of splits. However, 1GB divided by 64MB is 16, so I still don't understand why 38 map tasks were created.
1) If I want to use only 1 map task, do I have to set the input splits size to 1GB??
2) Let's say I successfully specify that I want to use only 2 map tasks, does it use 2 cores? And each core has 1 map task??
|
Mac OS deploy Django + Apache on Amazon EC2
| 20,256,055
| 1
| 0
| 304
| 0
|
python,django,apache,deployment,amazon-ec2
|
You use Terminal to SSH into your AWS EC2 environment. All commands past that point are 100% platform-based (Ubuntu, Amazon Linux, Red Hat, etc.).
You wouldn't use any Mac OS commands besides creating the SSH connection. There's a tutorial on how to do that through the EC2 console.
| 0
| 1
| 0
| 0
|
2013-11-27T20:35:00.000
| 1
| 1.2
| true
| 20,252,595
| 0
| 0
| 1
| 1
|
I am trying to deploy Django and Apache to an Amazon EC2 server. Currently, I already have an AWS account and have launched an instance on the server. But the problem is that I cannot find a tutorial about how to deploy Django and Apache to Amazon EC2 with Mac OS; all I can find are deployment tutorials for Linux systems. Where can I find a deployment tutorial for Mac OS?
|
Python eve not responding after running for a couple of days
| 20,278,344
| 0
| 1
| 433
| 0
|
python,eve
|
I had the exact same problem.
You are running something like this:
>python yourPeve.py
You need to run:
>python yourPeve.py &
The & symbol puts the process in the background, so the process won't be killed when you close the terminal.
| 0
| 1
| 0
| 1
|
2013-11-28T08:06:00.000
| 2
| 0
| false
| 20,260,726
| 0
| 0
| 0
| 1
|
I've set up a very simple Python Eve service on a Linux machine. Somehow, it always stops responding after running for a while. I don't have much experience with Python programming, and Eve doesn't seem to have a very useful log file.
Can someone please help me to look into the root cause?
Thanks,
Chunan
|
Change Property of db.Model to custom Type
| 20,269,288
| 0
| 0
| 49
| 0
|
google-app-engine,python-2.7
|
It's not SQL. You don't clone or delete "tables"; there is no such thing in the datastore.
To do the migration you would use task queues to run through a query. You probably need to stop your frontend while doing so. Task queues have a longer limit than the 60 seconds you mention, and each task can enqueue another one until you finish processing all items in your query.
You also complain that it's harder than other environments, but it isn't so. The problem may be that you chose to use the datastore instead of Cloud SQL, which you could also have used. Each has its pros and cons.
| 0
| 1
| 0
| 0
|
2013-11-28T12:58:00.000
| 1
| 0
| false
| 20,266,882
| 0
| 0
| 1
| 1
|
I'm trying to change the property type of several fields on my GAE App Engine app to a custom type (encrypted content).
Most of them are currently String or Text properties. Since we have multiple millions of entries in our DB, migration is not an easy task. I'm looking for a best practice. Here is what I think will work best, but it might be very challenging with respect to execution time limits, plus I'm a little bit frightened about the costs of this task.
clone table to tmp_table
delete table
create table with new attributes
insert values from tmp_table into table
What sounds like a short hiking trip on most environments feels a little bit more complex on GAE ;)
My Questions to you:
- Are there any known best practices you are aware of / did you already take on this challenge, and how?
- Any idea how to trigger the process (I would estimate it takes several minutes so the 60 second limit
|
Mercurial hook: isn't recompiled after change?
| 20,836,549
| 0
| 0
| 44
| 0
|
python,mercurial,mercurial-hook
|
I don't know exactly what happened, but it seems like it was not using the script because an exception somehow prevented it from compiling to a .pyc, and Mercurial somehow fetched the old version of that .pyc file. Not too sure, but that's my best guess (as somehow no one else seems to have an idea, and the Mercurial guys made it really clear they only answer questions on their mailing list instead of SO... how nice).
| 0
| 1
| 0
| 1
|
2013-11-28T17:32:00.000
| 1
| 1.2
| true
| 20,271,881
| 0
| 0
| 0
| 1
|
Ok, this is really weird. I have an old Mercurial 2.02. with python 2.6 on an old Ubuntu-something (I think 10.4).
We are a Windows shop, and push regularly, so I wanted kind of a review service. It absolutely worked on Windows: pretxnchangegroup referencing the Python file on the drive worked.
But I made the mistake of creating the Mercurial hook on a new Mercurial 2.7, and then recognized that the internal API had changed, so I went back and fixed it, or tried to. I'm using Windows, but need to deploy the hook to Linux, so I use WinSCP to copy the .py file to my home directory, and then sudo cp it to the Python 2.6 distro folder where the other hook files lie.
I invoke the hook via the module pattern on the linux box:
pretxnchangegroup.pushtest = python:mycompanyname.testcommit.exportpatches
In the folder "mycompanyname" is the file testcommit.py and the function is named exportpatches. It works locally without a problem.
The strange thing: it worked once, and kind of unstably: sometimes it just says that the function "mycompanyname.testcommit.exportpatches" is not defined. And sometimes it just uses an old version of the hook (I can tell because it gives an old exception message instead of the newer one). And I don't know how to get exception messages in Python, so I'm lost there.
Second strange thing: these hook files also have a .pyc version, probably compiled, but my hook doesn't get such treatment. Is that autocompilation?
If I try the directory approach to point to the file, I get a straight 500 internal error on push.
I'm really lost and desperate by now, because the stuff has to work pretty soon, and I'm banging my head against the wall right now..
|
how to start external system application in python specifying startup directory?
| 20,293,883
| -1
| 0
| 46
| 0
|
python,windows
|
You can use os.chdir(target_directory) to change your program's working directory before starting the external application.
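A minimal sketch of that idea (the temp directory is just a stand-in for the real startup directory): change the working directory, launch the child, then restore the original directory.

```python
import os
import subprocess
import sys
import tempfile

# Stand-in for the directory the external program should start in.
target = tempfile.mkdtemp()

old_cwd = os.getcwd()
os.chdir(target)
# os.system("cmd") would now open the console in `target`; to demonstrate,
# launch a child process and have it report the working directory it inherits.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.getcwd())"],
    capture_output=True, text=True,
)
os.chdir(old_cwd)  # restore, so the rest of the script is unaffected

child_cwd = result.stdout.strip()
```

An alternative that avoids touching your own process's directory at all is to pass the target as the `cwd` argument: `subprocess.Popen("cmd", cwd=target)`.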
| 0
| 1
| 0
| 0
|
2013-11-29T21:54:00.000
| 2
| -0.099668
| false
| 20,293,848
| 1
| 0
| 0
| 1
|
For example, I know this method: os.system("cmd"), but it starts the console in the directory of the script or in the directory of the interpreter. Is there a way to control this?
|
Keeping JVM running
| 20,315,993
| 1
| 0
| 469
| 0
|
java,python,jvm
|
Short of fixing the Python side so it doesn't do this, you can start a Java service which calls your code and have Python talk to it via TCP, e.g. using protobuf. This way the service (and its JVM) can be running all the time.
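A dependency-free sketch of that shape, with a small echo server standing in for the long-running Java service (in practice the Java process would keep the warm JVM alive, and the Python side would connect the same way):

```python
import socket
import threading

# The "service" below is a stand-in for the long-running Java process: it
# stays up, accepts a request over TCP, and returns a result without any
# per-call JVM startup cost.
def serve_once(server):
    conn, _ = server.accept()
    request = conn.recv(1024)
    conn.sendall(b"result:" + request)  # pretend the JVM computed something
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# The Python/Django side opens a connection per call (or keeps one open).
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"work-item")
reply = client.recv(1024)
client.close()
server.close()
```

A real deployment would add a framed wire format (e.g. protobuf messages with a length prefix) instead of raw byte strings.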
| 0
| 1
| 0
| 0
|
2013-12-01T19:03:00.000
| 1
| 0.197375
| false
| 20,315,970
| 1
| 0
| 1
| 1
|
We've been working on a web application using Django. One library we needed was written in Java, so I made a single jar file containing all the Java code we need to use. The Python script simply calls the Java program using the subprocess module and resumes its execution.
Every time the Java program is called, it initializes the JVM, does a little work, and then shuts down again. This introduces some overhead which might not be that significant in the end, but nevertheless having to go through this construct/destroy cycle every time we need something from the Java library bothers me.
Is there an elegant way of doing this without the overhead I just described above?
|
Python3.3 header preferred over Python2.7 header by gcc
| 20,325,445
| 1
| 3
| 806
| 0
|
python,python-2.7,gcc,python-3.x,include-path
|
GCC probably prefers the 3.3 version because it's installed as the default that runs when you call 'python' without a version. You could always point that binary at 2.7 to make it the default on your system.
Looking at the m4 source, seems like you might be able to do the following on one line:
PYTHON=/path/to/python2.7 PYTHON_INCLUDES="-I/usr/include/python2.7" ./configure --prefix /bla/bla
| 0
| 1
| 0
| 1
|
2013-12-02T07:38:00.000
| 1
| 1.2
| true
| 20,323,084
| 0
| 0
| 0
| 1
|
I am trying to compile a code which uses the Python.h header. In fact it is the lcm library.
Now, I have Python 2.7 and Python 3.3 installed on my system. The respective header files are found in /usr/include/python2.7/ and /usr/include/python3.3m/.
The problem is that the code needs the 2.7 version, but gcc always prefers the 3.3 version.
I tried setting ./configure --prefix /bla/bla CPPFLAGS=-I/usr/include/python2.7/ and export C_INCLUDE_PATH=/usr/include/python2.7, none of which worked.
An intermediate workaround is to change the code to #include <python2.7/Python.h> but that makes it unportable, so it will not serve as a fix for the lcm people...
There must be a way!!!
|
I need to overwrite an existing Python installation in ubuntu 12.04.3
| 20,324,883
| 0
| 0
| 533
| 0
|
python,django,ubuntu
|
Sounds like an issue with your path: Python isn't finding Django because it doesn't know where to look for it. Look up issues regarding the path and see if those help.
| 0
| 1
| 0
| 1
|
2013-12-02T09:29:00.000
| 2
| 1.2
| true
| 20,324,828
| 0
| 0
| 1
| 1
|
and thanks ahead of time.
I am relatively new to Linux and am using Ubuntu 12.04.3. Basically, I've been messing around with some files trying to get Django to work. Well, I thought I should do another install of Python 2.7 for some reason. Stupidly, I installed it manually. Now when I open the Python shell and do 'import django', it can't be found.
I just want to go back to using the Python that was on Ubuntu by default, or overwrite the one I installed manually with one using apt-get. However, I am unable to figure out how to do this nor have I found a question that could help me.
Any help is much appreciated. I've been working on this for 6 hours now...
--EDIT--
Ok well I'm just trying to go ahead and have the PYTHONPATH look in the right place. I've seen in other posts that you should do this in the ~/.profile file. I went into that file and added this line
export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python2.7/dist-packages
"import django" is still coming up with "no module found"
I tried doing "import os" and then "os.environ["PYTHONPATH"], which gave me:
Traceback (most recent call last):
File "", line 1, in
File "/usr/local/lib/python2.7/UserDict.py", line 23, in getitem
raise KeyError(key)
KeyError: 'PYTHONPATH'
As far as I can tell, this means that I do not have a PYTHONPATH variable set, but I am unsure as to what I am doing wrong.
--ANOTHER EDIT--
As I am not a very reputable member, I am not allowed to answer my own question before 8 hours from my original question, so I am putting it as an update.
Hey guys, thank you all for the quick responses and helpful tips. What I did was open a python shell and type:
sys.path.append('/usr/local/lib/python2.7/dist-packages')
and it worked!
I should have done this from the beginning instead of trying to overwrite my manual Python installation.
Once again, thank you all for the help.
I feel so relieved now :)
|
Gtk-WARNING **: Locale not supported by C library. while using several Python modules (mayavi, spectral)
| 20,513,075
| 2
| 4
| 5,430
| 0
|
python,gtk,mayavi,spectral
|
While using OS X Mavericks one has to use: ipython --pylab=wx instead of ipython --pylab=osx to avoid crashing the X11 window. I don't know why this works.
| 1
| 1
| 0
| 0
|
2013-12-04T03:57:00.000
| 1
| 1.2
| true
| 20,366,533
| 0
| 0
| 0
| 1
|
I updated my MacBook to Mavericks, reinstalled Macports and all Python 2.7 modules I usually use. While running Python I get the following messages:
when importing mlab:
from mayavi import mlab
(process:1146): Gtk-WARNING **: Locale not supported by C library.
Using the fallback 'C' locale.
when running a mlab command such as mlab.mesh(), the display window opens, shows no content and freezes.
I don't get this message while importing spectral, but I get it when running view_cube(): the display window showing the image cube freezes, but does show the data cube. It seems there is something wrong with Xterm, but I can't figure it out. How can I keep the display window from freezing and get rid of the Gtk-WARNING?
I checked locale and locale -a, but couldn't see anything unusual:
locale:
locale
LANG=
LC_COLLATE="C"
LC_CTYPE="C"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL=
|
How to check if Tornado already listen url?
| 20,512,056
| 2
| 1
| 566
| 0
|
python,tornado
|
There is no public interface to find out whether a path is currently mapped in a Tornado Application. In general, you shouldn't be calling add_handlers after startup anyway - instead, add a wildcard rule (like (r'/game/(.*)', GameHandler)) and then in GameHandler you can check whether the requested game exists or not (and if not, raise HTTPError(404)).
| 0
| 1
| 0
| 0
|
2013-12-05T10:40:00.000
| 3
| 0.132549
| false
| 20,397,744
| 0
| 0
| 0
| 2
|
How do I check whether an application in Tornado listens on some URL?
I need to listen on a lot of URLs; for each new game I create and programmatically add a handler for its URL, but first I need to check. How do I check whether Tornado already listens on a URL?
|
How to check if Tornado already listen url?
| 20,493,472
| 1
| 1
| 566
| 0
|
python,tornado
|
I believe you'll need fine-grained access to the active games, so better to keep them in your domain model.
Still, you can examine tornado.web.Application.handlers of your app.
| 0
| 1
| 0
| 0
|
2013-12-05T10:40:00.000
| 3
| 0.066568
| false
| 20,397,744
| 0
| 0
| 0
| 2
|
How do I check whether an application in Tornado listens on some URL?
I need to listen on a lot of URLs; for each new game I create and programmatically add a handler for its URL, but first I need to check. How do I check whether Tornado already listens on a URL?
|
How to "merge" two Python twisted applications?
| 20,429,005
| 0
| 2
| 247
| 0
|
python,asynchronous,twisted,tornado,reactor
|
So far I have found that if you merge two twisted applications, you should remove reactor.run() from one of them, leaving only one reactor.run() at the end. And be sure that the twisted reactor implementation is the same for both applications.
More comments welcome.
| 0
| 1
| 0
| 0
|
2013-12-05T15:28:00.000
| 2
| 0
| false
| 20,403,921
| 0
| 0
| 1
| 1
|
I have two applications written on the Twisted framework: for example, one using twisted.web, and the other using twisted.protocols.*, not web. How can I "merge" them into one, effectively sharing one reactor for both apps?
What are the best practices for that tasks? Actually I need to connect SIPSimpleApp and TornadoWeb. They both can use twisted reactor.
|
Can human-readable source code be recovered from a py2exe executable?
| 20,421,890
| 3
| 1
| 598
| 0
|
python,py2exe
|
Yes, trivially. Py2exe just creates a zip of the .pyc files with an executable wrapper, and those files are easily decompilable with e.g. uncompyle.
The way to sell commercial software in Python is not to worry about whether people can see the code, but to license it appropriately.
| 0
| 1
| 0
| 0
|
2013-12-06T10:38:00.000
| 1
| 1.2
| true
| 20,421,825
| 1
| 0
| 0
| 1
|
I would like to distribute a .exe program that I made with Python and py2exe.
My program may be commercial (not sure). If so, I don't want people to easily recover the source code. (Of course reverse engineering or complex decompiling can always be done, but...)
So: is it possible to recover the original source code from an executable produced by py2exe?
|
How do I store files in googleappengine datastore
| 20,424,484
| 0
| 0
| 71
| 1
|
python,google-app-engine,blob,google-cloud-datastore
|
Datastore has a limit on the size of objects stored there; that's why all examples and documentation say to use the Blobstore or Cloud Storage. Do that.
| 0
| 1
| 0
| 0
|
2013-12-06T10:47:00.000
| 1
| 0
| false
| 20,421,965
| 0
| 0
| 1
| 1
|
Just wondering how to store files in the Google App Engine datastore.
There are lots of examples on the internet, but they use the blobstore.
I have tried importing db.BlobProperty, but when I put() the data
it shows up as a <Blob>, I think. It appears as if there is no data,
similar to None for a string.
Are there any examples of using the Datastore to store files,
or can anyone point me in the right direction?
I am new to programming, so nothing too complex, but I have a good
hang of Python; I'm just not an expert yet.
Thanks for any help
|
Using vagrant as part of development environment
| 21,195,137
| 0
| 5
| 1,504
| 0
|
python,development-environment,vagrant
|
In most IDEs you can add a "library" path outside the project so that code completion etc. works. About the traceback: I'm unfamiliar with Python, but this sounds like an issue that is resolved by "mapping" paths between the server and the dev machine. This is generally the reason why #2 is often the way to go (except when you have a team willing to do #1).
| 0
| 1
| 0
| 0
|
2013-12-06T21:16:00.000
| 1
| 0
| false
| 20,433,712
| 0
| 0
| 0
| 1
|
I'm investigating ways to add vagrant to my development environment. I do most of my web development in python, and I'm interested in python-related specifics, however the question is more general.
I like the idea of having all development-related stuff isolated in virtual machine, but I haven't yet discovered an effective way to work with it. Basically, I see 3 ways to set it up:
Have all services (such as database server, MQ, etc) as well as an application under development to run in VM. Developer would ssh to VM and edit sources there, run app, tests, etc, all in an ssh terminal.
Same as 1), but edit sources on host machine in mapped directory with normal GUI editor. Run application and tests on vagrant via ssh. This seems to be most popular way to use vagrant.
Host only external services in VM. Install app dependencies into virtualenv on host machine and run app and tests from there.
All of these approaches have their own flaws:
Developing in text console is just too inconvenient, and this is the show-stopper for me. While I'm experienced ViM user and could live with it, I can't recommend this approach to anyone used to work in any graphical IDE.
You can develop with your familiar tools, but you cannot use autocompletion, since all python libs are installed in VM. Your tracebacks will point to non-local files. You will not be able to open library sources in your editor, ctags will not work.
Losing most of the "isolation" feature: you have to install all compilers and *-dev libraries yourself to install Python dependencies and run the app. That is pretty easy on Linux, but it might be much harder to set them all up on OS X, and on Windows it is next to impossible, I guess.
So, the question is: is there any remedy for problems of 2nd and 3rd approaches? More specifically, how is it possible to create an isolated and easily replicatable environment, and yet enjoy all the comfort of development on host machine?
|
On what operating systems are the different functions of the select module in Python, like select(), poll(), epoll() available?
| 20,457,117
| 0
| 0
| 57
| 0
|
python,sockets,networking,io,network-programming
|
From the python select documentation:
This module provides access to the select() and poll() functions
available in most operating systems, epoll() available on Linux 2.5+
and kqueue() available on most BSD. Note that on Windows, it only
works for sockets; on other operating systems, it also works for other
file types (in particular, on Unix, it works on pipes). It cannot be
used on regular files to determine whether a file has grown since it
was last read.
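A quick portable check that uses only select.select(), since that is the one call available on every platform (on Windows it is restricted to sockets):

```python
import select
import socket

# Create a connected socket pair, write on one end, and ask select() which
# descriptors are ready on the other end.
a, b = socket.socketpair()
b.sendall(b"ping")

# select() returns the subsets of the watched lists that are ready;
# `a` should show up as both readable (data pending) and writable.
readable, writable, _ = select.select([a], [a], [], 1.0)
data = a.recv(4) if readable else b""

a.close()
b.close()
```

Code that wants poll()/epoll()/kqueue() for scalability typically falls back to select() when they are absent, e.g. via `hasattr(select, "epoll")`.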
| 0
| 1
| 0
| 0
|
2013-12-08T18:00:00.000
| 1
| 1.2
| true
| 20,456,893
| 0
| 0
| 0
| 1
|
I was trying to use the poll() function on Windows when I realized that only the select() function is supported on windows, and I believe poll() is supported on Linux.
Could anyone help me out as to what functions of the select module are supported on what operating systems?
Thanks
|
running python script for multiple files from windows cmd
| 20,471,771
| 0
| 2
| 2,906
| 0
|
python,windows,batch-file,cmd
|
Those wildcards are expanded at the shell level (i.e. by bash) before your Python script runs.
So the problem doesn't reside in Python, but in the shell that you are using on Windows: cmd.exe does not expand wildcards for you.
You could probably try PowerShell for Windows or bash via Cygwin.
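Another workaround sidesteps the shell entirely: expand the pattern inside the script with the standard glob module, since cmd.exe passes the wildcard through literally in sys.argv (helper name is illustrative):

```python
import glob

def expand_args(args):
    """Expand shell-style wildcards that cmd.exe passes through literally."""
    files = []
    for arg in args:
        matches = sorted(glob.glob(arg))
        files.extend(matches if matches else [arg])  # keep non-patterns as-is
    return files

# Typical use at the top of the script:
#   import sys
#   filenames = expand_args(sys.argv[1:])
```

With this in place, `python myscript.py filename??.txt` behaves the same on Windows and Linux.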
| 0
| 1
| 0
| 0
|
2013-12-09T13:21:00.000
| 5
| 0
| false
| 20,471,702
| 0
| 0
| 0
| 1
|
I am trying to run python script from windows cmd. When I run it under linux I put
python myscript.py filename??.txt
it goes through files with numbers from filename01.txt to filename18.txt and it works.
I tried to run it from cmd like
python myscript.py filename*.txt
or
python myscript.py filename**.txt
but it didn't work. If I run the script on one single file in the Windows cmd, it works.
Do you have any clue where the problem could be?
Thanks!
|
Unable to get celery worker to work
| 20,506,294
| 0
| 0
| 450
| 0
|
python,celery,celerybeat
|
I figured out the problem, and it turns out to be a really simple one; it rather shows that I was following a bad practice.
I was using gevent for measuring the performance difference on the server, and the way I was spawning Celery workers was not the right way to handle gevent code. It's not that I didn't know Celery needs a command-line flag for gevent; it's that, since the same code always worked on my local machine without it, it never struck my mind that I was using gevent and that it was causing the issue.
All resolved after almost 20 hours of debugging, googling and chatting on IRC.
| 0
| 1
| 0
| 0
|
2013-12-10T12:03:00.000
| 1
| 0
| false
| 20,494,017
| 0
| 0
| 0
| 1
|
I have been working on a project which uses Celery beat for scheduling tasks. Locally, I have been using RabbitMQ as the broker and everything was working fine.
When I pushed my project to the remote server, I changed the broker to Redis.
The celery beat process seems to work fine, as I can see in the console that it is scheduling the task. But the worker is unable to pick up the task. When I call the task asynchronously from a shell by using delay() on the task, even then the task does not get picked up by the worker.
I assumed that there could be something weird with Redis. However, that doesn't seem to be the case. I made my project work with Redis locally. On the server, when I changed the broker to RabbitMQ, even then I was getting the same issue.
My local machine runs Mac OS and the server runs Debian 6.
What could be the issue? How can I debug this situation and just get the worker to consume tasks and do the work? I am using Python 2.7.
|
MPI distributed, unordered work
| 20,500,867
| 2
| 1
| 182
| 0
|
c++,python,c,mpi,distributed-computing
|
You can use MPI_ANY_SOURCE and MPI_ANY_TAG for receiving from anywhere. After receiving, you can read the information (source and tag) out of the MPI_Status structure that is passed to the MPI_Recv call.
If you use this, you do not necessarily need any asynchronous communication, since the master 'listens' to everybody, handing out new jobs and collecting results; each slave does its task, sends the result to the master, asks for new work, and waits for the answer from the master.
You should not have to work with scatter/gather at all, since those are meant for use on an array of data and your problem seems to consist of more or less independent tasks.
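As a dependency-free illustration of that any-worker shape (threads and a queue stand in for MPI ranks; with MPI proper, the master's results.get() becomes MPI_Recv with MPI_ANY_SOURCE/MPI_ANY_TAG, and the status tells you which worker answered so you can send it the next job):

```python
import queue
import threading

def worker(tasks, results):
    """A worker loops: take a job, compute, report back; None means stop."""
    while True:
        job = tasks.get()
        if job is None:                     # poison pill: no more work
            break
        results.put((job, job * job))       # stand-in for the real computation

def run_master(jobs, n_workers=4):
    tasks, results = queue.Queue(), queue.Queue()
    threads = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for j in jobs:
        tasks.put(j)
    collected = {}
    for _ in jobs:                          # results arrive in completion order
        job, res = results.get()
        collected[job] = res                # here the master could submit a
                                            # follow-up job based on `res`
    for _ in threads:
        tasks.put(None)
    for t in threads:
        t.join()
    return collected
```

The key property matching the question: the master consumes results in whatever order workers finish, not in submission order.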
| 0
| 1
| 0
| 0
|
2013-12-10T17:03:00.000
| 1
| 0.379949
| false
| 20,500,675
| 0
| 1
| 0
| 1
|
I would like to write a MPI program where the master thread continuously submits new job to the workers (i.e. not just at the start, like in the MapReduce pattern).
Initially, lets say, I submit 100 jobs onto 100 workers.
Then, I would like to be notified when a worker is finished with a job. I would send the next job off, whose parameters depend on all the results received so far. The order of results does not have to be preserved, I would just need them as they finish.
I can work with C/C++/Python.
From the documentation, it seems like I can broadcast N jobs, and gather the results. But this is not what I need, as I don't have all of them available, and gather would block. I am looking for a asynchronous, any-worker recv call, essentially.
|
Specify which version of Python runs in Automator?
| 33,398,907
| 2
| 2
| 952
| 0
|
python,macos,updates,automator
|
I couldn't specify explicitly which Python it should use.
So, I ran it in bash environment with following command:
$ your/python/path /path/to/your/python/script.py
And make sure first line of your python program contains the path to the python environment you wish to use.
Eg:
#! /usr/local/bin/python
| 0
| 1
| 0
| 0
|
2013-12-10T20:59:00.000
| 1
| 1.2
| true
| 20,505,085
| 1
| 0
| 0
| 1
|
In my terminal and in CodeRunner my Python is updated to 2.7.6 but when I ran a shell script in the OSX Automator I found that it is running 2.7.2
How can I update the Automator Python to 2.7.6 like the rest of my compilers ?
|
how to Start python shell program on windows 7 startup?
| 20,508,067
| 4
| 0
| 6,240
| 0
|
python,windows,shell,startup
|
Create a batch file with the line: start C:\Python27\python.exe D:\your_program_location\your_program.py
Drag the batch file from desktop to "Start - All Programs - Startup". That should do the trick.
| 0
| 1
| 0
| 0
|
2013-12-11T00:00:00.000
| 2
| 1.2
| true
| 20,507,907
| 1
| 0
| 0
| 2
|
I have made a program that takes infrared values serially, transmits them to another program (the one I'm having trouble with), and uses the win32 Python API to react to a matched value. It all works, but I need this program to run on the startup of my computer.
It uses the IDLE Python shell to run, and I need to open/run the file directly from that program. Is there any way to do this? I can't just put a shortcut into the startup directory because it's an unrecognized file, and it needs to be run, not just opened. Any help would be great, thanks!
|
how to Start python shell program on windows 7 startup?
| 20,508,698
| 0
| 0
| 6,240
| 0
|
python,windows,shell,startup
|
It sounds like your actual problem is just that you didn't put the right extension on the file.
Just rename it to, e.g., script.py, or script.pyw, and, unless you've changed the settings from the default, that should open the file with the command-line or windowed Python launcher, which will just run it.
If you've changed your settings so .py files open in IDLE instead of in the Python launcher… is there a reason you want those weird settings? If not, just undo it.
| 0
| 1
| 0
| 0
|
2013-12-11T00:00:00.000
| 2
| 0
| false
| 20,507,907
| 1
| 0
| 0
| 2
|
I have made a program that takes infrared values serially, transmits them to another program (the one I'm having trouble with), and uses the win32 Python API to react to a matched value. It all works, but I need this program to run on the startup of my computer.
It uses the IDLE Python shell to run, and I need to open/run the file directly from that program. Is there any way to do this? I can't just put a shortcut into the startup directory because it's an unrecognized file, and it needs to be run, not just opened. Any help would be great, thanks!
|
Streaming of a log text file with constant updates
| 21,938,560
| 0
| 0
| 189
| 0
|
python,logging,streaming
|
I'm doing something like that.
I have a server running on my Raspberry Pi plus a client that parses the output of the server and sends it to another server on the web.
What I'm doing is that the local server program writes its data in chunks.
Every time it writes data (by the way, also on tmpfs) it writes to a different file, so I don't get errors from trying to parse a file while something else is writing to it.
After it writes the file, it starts the client program in order to parse and send the data (using subprocess with the name of the file as a parameter).
Works great for me.
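If instead you want to watch a single growing log file, a simple polling follower in the spirit of `tail -f` can be sketched like this (function name and polling interval are illustrative):

```python
import time

def follow(path, poll_interval=0.5):
    """Yield lines appended to `path` as they arrive, like `tail -f`."""
    with open(path) as f:
        f.seek(0, 2)                       # jump to the current end of the file
        while True:
            line = f.readline()
            if line:
                yield line                  # a complete new line (timestamp + JSON)
            else:
                time.sleep(poll_interval)  # nothing new yet; poll again
```

Each yielded line could then be pushed to clients over a socket or HTTP instead of having clients poll a shared folder.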
| 0
| 1
| 0
| 1
|
2013-12-11T10:22:00.000
| 2
| 0
| false
| 20,516,490
| 0
| 0
| 0
| 1
|
I made a program which is saving sensor data in a log file (server site).
This file is stored in a temporary directory (Ram-Disk).
Each line contains a timestamp and a JSON string.
The update rate is dependent on the sensor data, but the fastest is every 0.5s.
What I want to do is, to stream every update in this file to a client application.
I have several approaches in mind:
maybe a shared folder on server site (samba) with a script (client site), just checking the file every 0.5s
maybe a another server program running on the server, checking for updates (but this I don't want to do, because Raspberry Pi is slow)
Has anyone done something like this before and can share some ideas? Is there maybe a Python module for this already (one that opens a file like a stream and emits whatever changed)? Is it smart to check a file constantly for updates?
|
Python scripts stopped running on double-click in Windows
| 36,706,017
| 1
| 5
| 7,440
| 0
|
python
|
I was having the same issue. The code works in IDLE but not on double click. I ran the script through the command prompt and it gave me an error that IDLE hadn't shown: Windows didn't like the characters I was printing. I removed them and the script started to work on double click again.
| 0
| 1
| 0
| 0
|
2013-12-11T14:14:00.000
| 4
| 0.049958
| false
| 20,521,456
| 0
| 0
| 0
| 1
|
I always ran my scripts on Windows by double-clicking them. However, after I reinstalled my Python versions this is not happening. My Python installations are in C:\Python27 and C:\Python33. PATH has C:\Python27\ in it. If I try to run a script from cmd, it works fine. But when I double-click any .py file, nothing happens.
I am completely clueless as I don't use windows often for scripting.
What can be the reason for that?
|
How do I unlock the app engine database when localhost runs?
| 20,578,822
| 9
| 7
| 3,369
| 0
|
python,google-app-engine,localhost,google-cloud-datastore
|
This can happen if you're running multiple instances of dev_appserver without giving them distinct datastore files/directories. If you need to be running multiple instances, see dev_appserver.py --help and look at the options for specifying paths/files.
| 0
| 1
| 0
| 0
|
2013-12-14T01:42:00.000
| 4
| 1
| false
| 20,578,757
| 0
| 0
| 1
| 2
|
Right now I get a blank page when localhost runs, but the deployed app is fine. The logs show the "database is locked". How do I "unlock" the database for localhost?
|
How do I unlock the app engine database when localhost runs?
| 59,824,595
| 0
| 7
| 3,369
| 0
|
python,google-app-engine,localhost,google-cloud-datastore
|
So with your command to start the server which should be
start_in_shell.sh -f -p 8xxx -a 8xxx
do include a -s flag after the -f as below:
start_in_shell.sh -f -s -p 8xxx -a 8xxx
Sometimes some unanticipated error somewhere causes this issue. Remember to keep only one of the instances with this flag (-s); the others should be started as you usually do.
This should make it work.
| 0
| 1
| 0
| 0
|
2013-12-14T01:42:00.000
| 4
| 0
| false
| 20,578,757
| 0
| 0
| 1
| 2
|
Right now I get a blank page when localhost runs, but the deployed app is fine. The logs show the "database is locked". How do I "unlock" the database for localhost?
|
Use python to reproduce bash command 'ls -a' output
| 20,590,779
| 10
| 9
| 17,199
| 0
|
python,linux,bash
|
Just add them manually to the os.listdir() result: result = [os.curdir, os.pardir] + os.listdir(path).
Most modern filesystems no longer create the actual hard links, but all APIs include the names explicitly anyway.
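Putting that suggestion together as a small helper (the sorting is optional; `ls` sorts by default):

```python
import os

def ls_a(path="."):
    """Mimic `ls -a`: include the '.' and '..' entries that os.listdir() omits."""
    return [os.curdir, os.pardir] + sorted(os.listdir(path))
```

`os.curdir` and `os.pardir` are just the portable spellings of `"."` and `".."`.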
| 0
| 1
| 0
| 0
|
2013-12-15T02:53:00.000
| 1
| 1.2
| true
| 20,590,704
| 1
| 0
| 0
| 1
|
I am new to python and am working on writing bash ls command in python, I am stuck on ls -a option which (according to the manpage):
Include directory entries whose names begin with a dot (`.')
I am aware of os.listdir() but it does not list special entries '.' and '..'
From the docs: os.listdir(path):
Return a list containing the names of the entries in the directory given by path. The list is in arbitrary order. It does not include the special entries '.' and '..' even if they are present in the directory.
I need help in listing these special entries through python, I would appreciate if someone can help me out here a little.
Thanks all for your patience.
|
What are the cons if I set the file descriptor limit to a huge number?
| 20,604,080
| 1
| 2
| 1,001
| 0
|
python,linux,macos,tornado
|
It's safe to set it quite high. The default on desktop OSes is usually quite low. The main disadvantage is the extra memory that gets allocated.
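On Unix you can inspect the limit (and raise the soft limit up to the hard limit) from Python itself via the standard resource module; a minimal sketch:

```python
import resource

# Read the current soft/hard file-descriptor limits for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# An unprivileged process may raise its soft limit up to the hard limit;
# only root can raise the hard limit itself. If the hard limit is unlimited,
# just keep the current soft value.
target = hard if hard != resource.RLIM_INFINITY else soft
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))

new_soft, new_hard = resource.getrlimit(resource.RLIMIT_NOFILE)
```

This is often more convenient than editing `ulimit`/`limits.conf` when you only need the higher limit for one Tornado process.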
| 0
| 1
| 0
| 0
|
2013-12-16T05:22:00.000
| 2
| 0.099668
| false
| 20,603,994
| 1
| 0
| 0
| 1
|
I use Tornado (a Python framework) to develop websites, but it often raises "OSError: too many open files" during high-concurrency tests.
One way to solve this is to set the FD limit to a higher number.
What are the cons, or disadvantages, of setting a high FD limit? Can I set it arbitrarily high, like 99999999?
|
danger to crash desktop by using ipython-nb slides in ubuntu 12.04
| 20,605,837
| 0
| 0
| 42
| 0
|
ubuntu,desktop,ipython-notebook
|
There is no reason for a website (here in particular an IPython slideshow) to be able to have any effect on the OS. If this happens, it is probably due to an error somewhere else (wild guess: an error in the graphics driver that both crashed the slideshow and GNOME). So more a hidden common denominator than cause and effect.
Also, you shouldn't need to reinstall Linux entirely if something like that happens again. The GNOME (?) desktop is just a package among others that you can reinstall. Of course you need to know a little about using the terminal and apt-get.
| 0
| 1
| 0
| 0
|
2013-12-16T07:05:00.000
| 1
| 0
| false
| 20,605,189
| 1
| 0
| 0
| 1
|
I had an ugly experience when I tried to launch the ipython-nb presentation in ubuntu 10.04.
I managed to see the presentation under Chrome, but with errors (slides were rendered one over the other). But the worst thing was that once I restarted my PC, the GNOME(?) desktop was gone. I had to reinstall Linux entirely.
I would like to know if someone has experienced a similar crash under Ubuntu 12.04 LTS.
|
Using Canopy as a Python distribution, where is "canopy_cli.exe" located?
| 20,655,950
| 1
| 0
| 147
| 0
|
python,python-2.7,canopy,python-venv
|
Well, I found canopy_cli.exe at exactly the specified location, so the directory is correct.
So... maybe try reinstalling the software?
By the way, what version are you using? How about updating at the same time?
| 0
| 1
| 0
| 0
|
2013-12-18T09:55:00.000
| 1
| 1.2
| true
| 20,654,812
| 1
| 0
| 0
| 1
|
I am working with Canopy for Python, which is a software bundle of Python 2.7 and the most important data analysis packages. It also includes IPython and makes working in a live console very easy.
Instead of using virtualenv, Canopy makes use of a Python-2.7-backported version of "venv". To initially setup a new environment, they want me to use canopy_cli myProjectFolder.
Unfortunately, I do not find a canopy_cli.exe in my C:\Users\Me\AppData\Local\Enthought\Canopy\App on Windows.
Did I do anything wrong or is the file located elsewhere?
Thanks!
|
Can errors result from using .py files with pythonw?
| 20,669,260
| 1
| 0
| 121
| 0
|
python
|
The only problems I would expect would be in:
print
stdin
stdout
raw_input
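As a hedged precaution for the streams listed above: under pythonw there is no console, so the standard streams may be missing or unusable (the exact behavior differs between Python versions; on Python 3, sys.stdout can be None). A defensive sketch that gives them a safe sink before any printing:

```python
import os
import sys

# under pythonw.exe there is no console attached, so printing can misbehave;
# redirect the missing streams to os.devnull (or a log file) before any I/O
if sys.stdout is None:
    sys.stdout = open(os.devnull, "w")
if sys.stderr is None:
    sys.stderr = open(os.devnull, "w")

print("safe to print even without a console")
```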
| 0
| 1
| 0
| 0
|
2013-12-18T21:16:00.000
| 1
| 1.2
| true
| 20,668,574
| 0
| 0
| 0
| 1
|
Easy question here. I have a GUI which I run using a batch file. I want it to be displayed without the terminal in the background, so I use the pythonw executable. However, I am not using the corresponding .pyw file, but a regular .py file instead.
Are there any differences between python and pythonw that might cause strange behavior? The program gives strange behavior when I use the batch file, but not when run within cmd, so I suspect the culprit is some internal difference between python and pythonw. Could this be the case? Thanks in advance.
|
Installing a Python package with an Upstart service using setuputils
| 20,672,084
| 1
| 0
| 411
| 0
|
python,ubuntu,service,setuptools,upstart
|
Should setuputils even be responsible for creating the service on the machine, or rather should this be handled by an external package manager like dpkg/apt/rpm
Almost certainly the latter.
distutils/setuptools is not designed to handle things like this.
There's some configuration information that's sufficient for installing site-packages, shared data, executables, and maybe a few other things in ways that make sense on your platforms. But there's nowhere near enough to handle things like installing init scripts.
These tools are designed to handle not just slightly different early-2010s-era Ubuntu-like linux distros, but a wide variety of different platforms. On non-Ubuntu-like distros (and pre-lucid Ubuntu) there is no Upstart, but there is SysV-style init. On some other *nixes, there isn't even SysV-style init, but there is BSD-style init. On OS X, while SysV-style init does exist, it's heavily deprecated and launchd is used instead. On Windows, there isn't anything even remotely similar, but there are completely different ways to set up "services" and "run-at-startup" programs and related concepts.
On top of that, on many platforms, the package manager wants to be able to own all startup scripts, and you don't want to violate that expectation on a user/sysadmin's behalf without him specifically asking for it.
So, you need a platform-specific package for each platform. If you just create a PyPI package and a .deb for Ubuntu Precise or whatever you use, if some Fedora or Mac or Ubuntu Natty user gets jealous, they'll either do it themselves, or ask you.
| 0
| 1
| 0
| 0
|
2013-12-19T01:07:00.000
| 1
| 1.2
| true
| 20,671,719
| 1
| 0
| 0
| 1
|
I'm attempting to create a python package that when installed also creates an upstart service. Currently, my options are symlinking the service from the package directory to /etc/init, or copying the file to /etc/init. Either one works fine so long as I can unlink/delete the file upon uninstallation of the package. I saw another related question where a commenter expressed that this should not be the job of setuputils in the first place. So my question is as follows:
Should setuputils even be responsible for creating the service on the machine, or rather should this be handled by an external package manager like dpkg/apt/rpm; if it is prudent, is there a way to somehow run a script upon uninstallation of a package or have setuputils remove the file from /etc/init without modifying SOURCES.txt in the egg after running sdist?
Thanks!
|
Is it possible in python to kill process that is listening on specific port, for example 8080?
| 20,691,487
| 1
| 13
| 19,468
| 0
|
python,python-2.7
|
First of all, processes don't run on ports - processes can bind to specific ports. A specific port/IP combination can only be bound to by a single process at a given point in time.
As Toote says, psutil gives you the netstat functionality. You can also use os.kill to send the kill signal (or do it Toote's way).
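As a sketch of the os.kill half (finding the PID bound to the port is left to psutil or netstat, as suggested above; a throwaway child process stands in for it here):

```python
import os
import signal
import subprocess
import sys

def kill_process(pid, sig=signal.SIGTERM):
    """Send a signal to a process by PID (POSIX; signal.SIGKILL ~ `kill -9`)."""
    os.kill(pid, sig)

# demo: spawn a dummy long-running child in place of the PID found via psutil
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
kill_process(child.pid)
child.wait()
```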
| 0
| 1
| 0
| 1
|
2013-12-19T20:40:00.000
| 7
| 0.028564
| false
| 20,691,258
| 0
| 0
| 0
| 1
|
Is it possible in python to kill a process that is listening on a specific port, say for example 8080?
I can do netstat -ltnp | grep 8080 and kill -9 <pid> OR execute a shell command from python but I wonder if there is already some module that contains API to kill process by port or name?
|
Raspbian Run 4 commands from terminal window after desktop loads Python script?
| 20,727,288
| 0
| 1
| 1,022
| 0
|
python,linux,raspberry-pi,raspbian
|
Don't try to open a terminal window from Python. Just use os.system() (or subprocess) to run the commands you show, if you insist on using Python. Even easier would be a bash script into which you can write the commands just as you have written them above.
Even better, and to get rid of the need to type the sudo password somewhere, add the modprobe commands without sudo to /etc/rc.local just before the exit 0.
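A minimal sketch of the os.system/subprocess route (commands copied from the question; modprobe needs root, so nothing runs unless you flip the flag on the Pi):

```python
import os
import subprocess

def run(cmd, env=None):
    """Run a shell command and raise if it exits non-zero."""
    subprocess.run(cmd, shell=True, check=True, env=env)

ON_THE_PI = False  # set True when running as root on the Raspberry Pi
if ON_THE_PI:
    run("modprobe spi-bcm2708")
    run("modprobe fbtft_device name=adafruitts rotate=90")
    # 'export FRAMEBUFFER=/dev/fb1' becomes an environment override for startx
    run("startx", env=dict(os.environ, FRAMEBUFFER="/dev/fb1"))
```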
| 0
| 1
| 0
| 1
|
2013-12-22T07:21:00.000
| 1
| 0
| false
| 20,727,189
| 0
| 0
| 0
| 1
|
The following commands (to get a small screen working) execute just fine if I type them in from the LXTerminal window while running Raspian on a raspberry Pi once my desktop is loaded:
sudo modprobe spi-bcm2708
sudo modprobe fbtft_device name=adafruitts rotate=90
export FRAMEBUFFER=/dev/fb1
startx
I'm new to Pi and Python, and after piecing together several forum posts, the best way I thought to do this would be to run a Python script from the /etc/xdg/lxsession/LXDE/autostart config file - I just don't know what the Python script should say to automatically open an LXTerminal window and type in the commands.
Any help would be much appreciated, thanks!
|
Running google app engine locally behind a proxy
| 22,313,654
| 0
| 1
| 426
| 0
|
google-app-engine,python-2.7,ubuntu
|
Reset your proxy environment variables (http_proxy and https_proxy) while running the app server locally. You only need them when you are deploying your app to the actual Google servers.
| 0
| 1
| 0
| 0
|
2013-12-22T10:32:00.000
| 1
| 0
| false
| 20,728,436
| 0
| 0
| 1
| 1
|
I have been trying to run a small app using google app engine (python) on 8080. I am behind my college proxy which requires a username and password to login
here is what i get
INFO 2013-12-22 10:16:19,516 sdk_update_checker.py:245] Checking for updates to the SDK.
INFO 2013-12-22 10:16:19,518 init.py:94] Connecting through tunnel to: appengine.google.com:443
INFO 2013-12-22 10:16:19,525 sdk_update_checker.py:261] Update check failed:
WARNING 2013-12-22 10:16:19,527 api_server.py:331] Could not initialize images API; you are likely missing the Python "PIL" module.
INFO 2013-12-22 10:16:19,529 api_server.py:138] Starting API server at: >localhost:35152
INFO 2013-12-22 10:16:19,545 dispatcher.py:171] Starting module "default" running at: >localhost:8080
INFO 2013-12-22 10:16:19,552 admin_server.py:117] Starting admin server at: >localhost:8000
but when i go to my browser to go to 8080...i get:
HTTPError()
HTTPError()
Traceback (most recent call last):
File "/home/yash/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 1302, in communicate
req.respond()
File "/home/yash/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 831, in respond
self.server.gateway(self).respond()
File "/home/yash/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 2115, in respond
response = self.req.server.wsgi_app(self.env, self.start_response)
File "/home/yash/google_appengine/google/appengine/tools/devappserver2/wsgi_server.py", line 269, in call
return app(environ, start_response)
File "/home/yash/google_appengine/google/appengine/tools/devappserver2/request_rewriter.py", line 311, in _rewriter_middleware
response_body = iter(application(environ, wrapped_start_response))
INFO 2013-12-22 10:22:05,095 module.py:617] default: "GET / HTTP/1.1" 500 -
File "/home/yash/google_appengine/google/appengine/tools/devappserver2/python/request_handler.py", line 148, in call
self._flush_logs(response.get('logs', []))
File "/home/yash/google_appengine/google/appengine/tools/devappserver2/python/request_handler.py", line 284, in _flush_logs
apiproxy_stub_map.MakeSyncCall('logservice', 'Flush', request, response)
File "/home/yash/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 94, in MakeSyncCall
return stubmap.MakeSyncCall(service, call, request, response)
File "/home/yash/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 328, in MakeSyncCall
rpc.CheckSuccess()
File "/home/yash/google_appengine/google/appengine/api/apiproxy_rpc.py", line 156, in _WaitImpl
self.request, self.response)
File "/home/yash/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 200, in MakeSyncCall
self._MakeRealSyncCall(service, call, request, response)
File "/home/yash/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 226, in _MakeRealSyncCall
encoded_response = self._server.Send(self._path, encoded_request)
File "/home/yash/google_appengine/google/appengine/tools/appengine_rpc.py", line 409, in Send
f = self.opener.open(req)
File "/usr/local/lib/python2.7/urllib2.py", line 410, in open
response = meth(req, response)
File "/usr/local/lib/python2.7/urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/local/lib/python2.7/urllib2.py", line 448, in error
return self._call_chain(*args)
File "/usr/local/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/usr/local/lib/python2.7/urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 403: Forbidden
INFO 2013-12-22 10:22:05,141 module.py:617] default: "GET /favicon.ico HTTP/1.1" 500 -
I have set my proxy connections (with username and password) as environment variables in apt.conf files and my terminal works fine with it...
i use ubuntu 12.04
|
Concurrent file access in Python
| 20,744,641
| 0
| 3
| 686
| 0
|
python,windows,file,concurrency
|
You can open the file in 'r+b' mode. You would then have a single file object which could be accessed by the two different processes.
Doing so requires some communication between the processes (or careful handling of them) about the current state of the file.
Overall, this seems a better approach than overriding OS/file-system locking to create duplicate file objects, which seems like the sort of thing that can't possibly end well.
You could also simply have the writer process open/close the file every time it accesses it, then do the same in the reader process, assuming this is feasible for your program.
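A sketch of that last open-per-access option, which avoids any sharing-mode issues entirely (file path and helper names are mine):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "shared.log")

def append_line(line):
    """Writer side: open, append, close on every write."""
    with open(path, "ab") as f:
        f.write(line.encode() + b"\n")

def read_all():
    """Reader side: open a fresh handle each time the data is needed."""
    with open(path, "rb") as f:
        return f.read()

append_line("first record")
append_line("second record")
```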
| 0
| 1
| 0
| 0
|
2013-12-23T13:01:00.000
| 1
| 0
| false
| 20,744,160
| 1
| 0
| 0
| 1
|
I have a Python script, which appends content to a large file a few times a second. I also need a second process, which occasionally opens that large file, and reads from it.
How do I do that in Windows? In C++ I could simply open a file with _SH_DENYNO, but what is the equivalent in Python?
|
what's the limit of logfile's size in python
| 20,756,290
| 1
| 0
| 1,003
| 0
|
python,logging
|
The maximum size of a file is filesystem-dependent. There's no limit imposed by Python itself.
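For completeness, a sketch of capping log size yourself with the stdlib RotatingFileHandler the question mentions (path and numbers are arbitrary choices of mine):

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

logfile = os.path.join(tempfile.mkdtemp(), "app.log")

# roll over at 10 MB, keeping up to 5 old files (app.log.1 ... app.log.5)
handler = RotatingFileHandler(logfile, maxBytes=10 * 1024 * 1024, backupCount=5)
logger = logging.getLogger("rotation-demo")
logger.addHandler(handler)
logger.warning("one log record")
```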
| 0
| 1
| 0
| 0
|
2013-12-24T06:43:00.000
| 2
| 1.2
| true
| 20,756,170
| 1
| 0
| 0
| 1
|
In Python, what's the limit on a log file's size by default? 10M, 2G, or some other limit?
Where can I find this information in the API? I know about the rollover of RotatingFileHandler, which can be configured. Thanks.
|
Execute shell command when receiving properly constructed email
| 20,764,497
| 0
| 0
| 817
| 0
|
python,bash,shell,email
|
procmail does this kind of thing trivially - checking the format of an incoming email and running a shell script when it matches. There's no need to reinvent the wheel.
I'm not entirely clear from your description how Python fits into what you want to do.
Hope this helps!
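A minimal .procmailrc sketch of that idea (the subject keyword and script path are hypothetical placeholders, not from the question):

```
:0
* ^Subject:.*SEND-JPG
| /home/pi/bin/send_latest_jpg.sh
```

The recipe matches any message whose Subject contains the keyword and pipes it to the script, which can ignore its stdin and just do its work.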
| 0
| 1
| 0
| 1
|
2013-12-24T13:06:00.000
| 2
| 0
| false
| 20,761,523
| 0
| 0
| 0
| 1
|
Having failed to find an answer to this elsewhere, I am opening this question more widely.
I need to execute a bash shell command when a properly constructed email is received (I'm using GMail) using Python. I have previously used Python to send emails, but the only solution I have yet found is to use feedparser and Google Atom, which I don't like. I would suggest that a keyword could exist in either the subject or body of the email; security is not an issue (I don't think) as the consequence is benign.
The bash command to execute will actually call another script to send the latest jpg from my Python motion-detection routine, which runs independently.
|
Apache restart when developing python wsgi apps
| 20,779,815
| 0
| 0
| 217
| 0
|
python,windows,apache,wsgi
|
According to the links in the comments above, restarts after source changes are always necessary on Windows. On Linux you still need to touch the wsgi file after source changes. Is it only me who finds this a major drawback compared to PHP?
| 0
| 1
| 0
| 0
|
2013-12-24T16:14:00.000
| 1
| 0
| false
| 20,763,775
| 0
| 0
| 1
| 1
|
I am evaluating python for web development (mod_wsgi) and have noticed that on windows I have to restart Apache after changing my python source code. On Ubuntu the problem doesn't exists, probably because linux supports wsgi daemon mode.
Are there any way to have hot deployment during web development on windows, like configuring apache, replacing web server, some IDE, etc?
|
Following a program's execution
| 20,764,514
| 0
| 1
| 73
| 0
|
python,c,debugging,gdb
|
ubiQ,
I've always used IDLE for debugging. Do a google search for it. Hope this helps, if not, let me know!
| 0
| 1
| 0
| 0
|
2013-12-24T17:10:00.000
| 2
| 0
| false
| 20,764,462
| 1
| 0
| 0
| 1
|
I'm looking to 'watch' a program as it executes. I want to, for example, keep track of a program's stack pointer as it changes through execution. I've been looking at scripting GDB with python but the solutions to this are very buggy - I've been unsuccessful thus far at installing PythonGDB. If anyone has any solutions / recommendations as to how to approach this problem I would be very grateful.
EDIT: I should have mentioned, I'm looking to record these values - ideally automatically - to be able to review them afterwards. I understand GDB allows me to step through the program and view the each state but I want to automate this process to be able to 'watch' how a particular values (such as the SP) change over time.
|
How to run Python 3 in Sublime 2 REPL Mac
| 65,398,436
| 0
| 5
| 6,952
| 0
|
python,macos,sublimetext,read-eval-print-loop,sublimerepl
|
I would suggest to change the directory to
/Library/Frameworks/Python.framework/Versions/Current/bin/python3
This way whenever you update python, SublimeREPL will always get the updated version.
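As an illustration, a sketch of the relevant entry in SublimeREPL's Packages/SublimeREPL/config/Python/Main.sublime-menu; the field names follow the plugin's bundled menu file, but treat the exact keys as an assumption and adjust to your install:

```json
{
    "command": "repl_open",
    "args": {
        "type": "subprocess",
        "encoding": "utf8",
        "cmd": ["/Library/Frameworks/Python.framework/Versions/Current/bin/python3", "-i", "-u"],
        "external_id": "python",
        "syntax": "Packages/Python/Python.tmLanguage"
    }
}
```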
| 0
| 1
| 0
| 0
|
2013-12-25T22:33:00.000
| 4
| 0
| false
| 20,777,307
| 0
| 0
| 0
| 1
|
My question is the following: I have Sublime Text 2 and the SublimeREPL plugin installed, all working fine. The only thing I need is to change the version of Python running in the SublimeREPL built-in console. I have Python 2.7.5 (pre-installed with Mavericks) running fine in Sublime (via SublimeREPL), and I have installed Python 3.3.3 from the python.org installer, which I need to use. I want to run this version of Python in the SublimeREPL console, but I don't know how.
I know that there are alternatives to Sublime, but none of them are as beautiful as Sublime is.
Can someone help me with this? I've searched all over the internet and couldn't find anything helpful.
Btw, if I use the terminal, Python 3.3.3 works fine (Terminal > 'python3'). I know this is possible because a friend of mine got it working and I installed mine like he did, but mine is not working.
|
How to run different python versions in cmd
| 34,838,251
| 47
| 32
| 170,536
| 0
|
python,windows,python-2.7,python-3.x,cmd
|
I also met the case of using both python2 and python3 on my Windows machine. Here's how I resolved it:
download python2x and python3x, installed them.
add C:\Python35;C:\Python35\Scripts;C:\Python27;C:\Python27\Scripts to environment variable PATH.
Go to C:\Python35 to rename python.exe to python3.exe, also to C:\Python27, rename python.exe to python2.exe.
restart your command window.
type python2 scriptname.py, or python3 scriptname.py in command line to switch the version you like.
| 0
| 1
| 0
| 0
|
2013-12-26T14:27:00.000
| 3
| 1
| false
| 20,786,478
| 1
| 0
| 0
| 1
|
How can I configure windows command dialog to run different python versions in it? For example when I type python2 it runs python 2.7 and when I type python3 it runs python 3.3? I know how to configure environment variables for one version but two? I mean something like Linux terminal.
|
ipython notebook installation on mac os x
| 21,111,278
| 1
| 1
| 3,347
| 0
|
macos,ipython,ipython-notebook
|
Thanks for the info, but I found out the problem a few weeks ago and forgot to post it here...
It is just that the Mac installation is pretty tricky: while installing, instead of typing in easy_install ipython, users have to specify the Python version.
Thus, I needed to type in: easy_install ipython2.7
After which all is fine and working great!
| 0
| 1
| 0
| 0
|
2013-12-27T15:47:00.000
| 2
| 0.099668
| false
| 20,803,514
| 1
| 0
| 0
| 1
|
I am wondering if anyone has installed ipython notebook on mac OSX?
Currently I am able to run it in the terminal note but as soon as I type in the notebook version, there are problems encountered in running it.
Below is the error I have gotten:
Traceback (most recent call last): File
"/Users/tayyangki/anaconda/bin/ipython", line 9, in
load_entry_point('ipython==2.0.0-dev', 'console_scripts', 'ipython')() File
"/Users/tayyangki/anaconda/lib/python2.7/site-packages/ipython-2.0.0_dev-py2.7.egg/IPython/init.py",
line 118, in start_ipython
return launch_new_instance(argv=argv, **kwargs) File "/Users/tayyangki/anaconda/lib/python2.7/site-packages/ipython-2.0.0_dev-py2.7.egg/IPython/config/application.py",
line 565, in launch_instance
app.initialize(argv) File "", line 2, in initialize File
"/Users/tayyangki/anaconda/lib/python2.7/site-packages/ipython-2.0.0_dev-py2.7.egg/IPython/config/application.py",
line 92, in catch_config_error
return method(app, *args, **kwargs) File "/Users/tayyangki/anaconda/lib/python2.7/site-packages/ipython-2.0.0_dev-py2.7.egg/IPython/terminal/ipapp.py",
line 314, in initialize
super(TerminalIPythonApp, self).initialize(argv) File "", line 2, in initialize File
"/Users/tayyangki/anaconda/lib/python2.7/site-packages/ipython-2.0.0_dev-py2.7.egg/IPython/config/application.py",
line 92, in catch_config_error
return method(app, *args, **kwargs) File "/Users/tayyangki/anaconda/lib/python2.7/site-packages/ipython-2.0.0_dev-py2.7.egg/IPython/core/application.py",
line 371, in initialize
self.parse_command_line(argv) File "/Users/tayyangki/anaconda/lib/python2.7/site-packages/ipython-2.0.0_dev-py2.7.egg/IPython/terminal/ipapp.py",
line 309, in parse_command_line
return super(TerminalIPythonApp, self).parse_command_line(argv) File "", line 2, in parse_command_line File
"/Users/tayyangki/anaconda/lib/python2.7/site-packages/ipython-2.0.0_dev-py2.7.egg/IPython/config/application.py",
line 92, in catch_config_error
return method(app, *args, **kwargs) File "/Users/tayyangki/anaconda/lib/python2.7/site-packages/ipython-2.0.0_dev-py2.7.egg/IPython/config/application.py",
line 474, in parse_command_line
return self.initialize_subcommand(subc, subargv) File "", line 2, in initialize_subcommand File
"/Users/tayyangki/anaconda/lib/python2.7/site-packages/ipython-2.0.0_dev-py2.7.egg/IPython/config/application.py",
line 92, in catch_config_error
return method(app, *args, **kwargs) File "/Users/tayyangki/anaconda/lib/python2.7/site-packages/ipython-2.0.0_dev-py2.7.egg/IPython/config/application.py",
line 405, in initialize_subcommand
subapp = import_item(subapp) File "/Users/tayyangki/anaconda/lib/python2.7/site-packages/ipython-2.0.0_dev-py2.7.egg/IPython/utils/importstring.py",
line 42, in import_item
module = import(package, fromlist=[obj]) File "/Users/tayyangki/anaconda/lib/python2.7/site-packages/ipython-2.0.0_dev-py2.7.egg/IPython/html/notebookapp.py",
line 75, in
from IPython.consoleapp import IPythonConsoleApp File "/Users/tayyangki/anaconda/lib/python2.7/site-packages/ipython-2.0.0_dev-py2.7.egg/IPython/consoleapp.py",
line 43, in
from IPython.kernel.zmq.kernelapp import ( File "/Users/tayyangki/anaconda/lib/python2.7/site-packages/ipython-2.0.0_dev-py2.7.egg/IPython/kernel/zmq/kernelapp.py",
line 54, in
from .ipkernel import Kernel File "/Users/tayyangki/anaconda/lib/python2.7/site-packages/ipython-2.0.0_dev-py2.7.egg/IPython/kernel/zmq/ipkernel.py", line 40, in
from .zmqshell import ZMQInteractiveShell File "/Users/tayyangki/anaconda/lib/python2.7/site-packages/ipython-2.0.0_dev-py2.7.egg/IPython/kernel/zmq/zmqshell.py", line 36, in
from IPython.core.payloadpage import install_payload_page File "/Users/tayyangki/anaconda/lib/python2.7/site-packages/ipython-2.0.0_dev-py2.7.egg/IPython/core/payloadpage.py",
line 24, in
from docutils.core import publish_string File "/Users/tayyangki/anaconda/lib/python2.7/site-packages/docutils/core.py",
line 20, in
from docutils import frontend, io, utils, readers, writers File "/Users/tayyangki/anaconda/lib/python2.7/site-packages/docutils/frontend.py",
line 41, in
import docutils.utils File "/Users/tayyangki/anaconda/lib/python2.7/site-packages/docutils/utils/init.py",
line 20, in
import docutils.io File "/Users/tayyangki/anaconda/lib/python2.7/site-packages/docutils/io.py",
line 18, in
from docutils.utils.error_reporting import locale_encoding, ErrorString, ErrorOutput File
"/Users/tayyangki/anaconda/lib/python2.7/site-packages/docutils/utils/error_reporting.py",
line 47, in
locale_encoding = locale.getlocale()[1] or locale.getdefaultlocale()[1] File
"/Users/tayyangki/anaconda/lib/python2.7/locale.py", line 511, in
getdefaultlocale
return _parse_localename(localename) File "/Users/tayyangki/anaconda/lib/python2.7/locale.py", line 443, in
_parse_localename
raise ValueError, 'unknown locale: %s' % localename ValueError: unknown locale: UTF-8
Greatly appreciated if someone could help me?
|
Apache: python directory restriction
| 20,809,616
| 0
| 0
| 50
| 0
|
python,apache,mod-wsgi
|
There are two options. Use mod_wsgi daemon mode and run the daemon process as a distinct user, then lock down all your file permissions appropriately to deny access from that user. The second is that mod_wsgi daemon mode also supports a chroot option; using a chroot is obviously a lot more complicated to set up, however.
| 0
| 1
| 0
| 1
|
2013-12-27T22:52:00.000
| 1
| 0
| false
| 20,808,909
| 0
| 0
| 0
| 1
|
Let's say I have a directory with subdirectories where projects are stored.
How can I lock a Python script inside its subdirectory, so that it cannot scan parent directories, read files, import, etc.? Is this possible with mod_wsgi?
And how can I disable particular Python functions?
Thanks
|
Hadoop : single node vs cluster performance
| 20,821,705
| 0
| 0
| 1,247
| 0
|
python-2.7,hadoop,hadoop-streaming
|
MapReduce isn't really meant to handle that small an input dataset. The MapReduce framework has to determine which nodes will run tasks and then spin up a JVM for each individual Map and Reduce task (the number of tasks depends on the size of your dataset). That usually has a latency on the order of tens of seconds. Shipping non-local data between nodes is also expensive, as it involves sending data over the wire. For such a small dataset, the overhead of setting up a MapReduce job in a distributed cluster is likely higher than the runtime of the job itself. On a single node you only see the overhead of starting up tasks on the local machine and don't have to do any data copying over the network; that's why the job finishes faster on a single machine. If you had multi-gigabyte files, you would see better performance across several machines.
| 0
| 1
| 0
| 0
|
2013-12-28T10:34:00.000
| 1
| 1.2
| true
| 20,813,532
| 0
| 0
| 0
| 1
|
I am running three MapReduce jobs in sequence (output of one is the input to another) on a Hadoop cluster with 3 nodes (1 master and 2 slaves).
Apparently, the total time taken by individual jobs to finish on a single node cluster is less than the above by quite a margin.
What could be the possible reasons? Is it the network latency? It's running on 100Mbps Ethernet network. Will it help if I increase the number of nodes?
I am using Hadoop Streaming and my code is in python2.7.
|
Architecture for a self-running Python script on Mac OSX
| 20,816,671
| 0
| 0
| 217
| 0
|
python,macos,automation
|
Store the target time in a variable and check it in a while loop.
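A minimal sketch of that approach (the target time and the polling interval here are placeholders; in practice the target would come from the web site the script scrapes each morning):

```python
import datetime
import time

def wait_until(target, poll_seconds=30):
    """Block until the target datetime has passed, polling periodically."""
    while datetime.datetime.now() < target:
        time.sleep(poll_seconds)

# Example: compute a target a moment in the future, then wait for it.
target = datetime.datetime.now() + datetime.timedelta(seconds=0.5)
wait_until(target, poll_seconds=0.1)
```

After `wait_until` returns, the script can run its second pass and exit.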
| 0
| 1
| 0
| 0
|
2013-12-28T15:38:00.000
| 2
| 0
| false
| 20,816,257
| 1
| 0
| 0
| 1
|
I have the following set-up
A Python script
A Mac OSX Automator application with said script
An iCal alert that runs the Automator (and thus the Python script) on a regular schedule
All of the above works just fine. But I need to make a change. I need the script to check a web site for a time in the future (that same day) and then come back prior to that time and run itself again. I know how to do the first part (get the time) but I have no clue how to do the second part. How do you get a Python script to (1) run itself at a regular time and then (2) run again at some point in the future? The point in the future will change on a regular basis. Sometimes it would be as early as 10AM, other times it may be 7PM.
Any thoughts on this and pseudo-code are welcome. Thanks!
|
Call a Python Script from parallel shell scripts at the same time
| 20,825,952
| 1
| 0
| 88
| 0
|
python,linux,shell
|
You won't have any problem with what you're trying to do. You can call the same script in parallel a lot of times, with different input arguments. (sys.argv entries). For each run, a new memory space will be allocated.
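A small sketch showing why parallel runs don't interfere: each invocation is a separate OS process with its own interpreter, its own `sys.argv`, and its own copy of module-level state (the script name and counter below are illustrative only):

```python
import os
import sys

# Every run of this script is a separate process: module-level state like
# this counter is never shared between parallel invocations.
counter = 0

def main(argv):
    global counter
    counter += 1
    return "pid=%d args=%r counter=%d" % (os.getpid(), argv[1:], counter)

if __name__ == "__main__":
    print(main(sys.argv))
```

Running it 100 times in parallel with different arguments simply produces 100 independent processes, each reporting its own pid.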
| 0
| 1
| 0
| 0
|
2013-12-29T13:25:00.000
| 1
| 1.2
| true
| 20,825,590
| 1
| 0
| 0
| 1
|
I have a question about the Python interpreter. How does it treat the same script running 100 times, for example with different sys.argv entries? Does it create a separate memory space for each script, or something different?
The system is Linux, CentOS 6.5. Is there any operational limit that can be observed and tuned?
|
Eclipse - Installed PyDev and nothing changed
| 20,826,849
| 1
| 0
| 762
| 0
|
python,eclipse,pydev
|
I bet the problem is the 3.x version of PyDev: it requires Java 7.
2 solutions are possible:
Install Java 7, then re-run Eclipse; PyDev should function well now. OR
Install the latest 2.x version of PyDev.
To do that:
1) Remove PyDev: in the Eclipse About window click the Installation Details
button below. You will see controls for removing plug-ins.
2) Install a 2.x version of PyDev:
Eclipse Help -> Install New Software.
UNCHECK the "Show only the latest version" checkbox located at the bottom of the dialog.
Choose the PyDev update site from the list, and in the list of PyDev versions that appears choose the latest in the 2.x branch.
| 0
| 1
| 0
| 0
|
2013-12-29T15:17:00.000
| 2
| 0.099668
| false
| 20,826,619
| 1
| 0
| 0
| 1
|
This isn't my first time using Eclipse or installing PyDev but this is the first time it both succeeded and failed.
It succeeded because it installed, it shows up as being installed and installation went on fine without a problem.
It failed because nothing has shown up, there is no Python perspective, no PyDev views in the view list, no new projects under PyDev, no PyDev preferences. It is as if it is not actually installed at all.
The only thing I did differently is extract the latest eclipse to a folder called ~/eclipse and create a short cut to run it there (the latest Eclipse), usually I use apt-get to install eclipse, realise it's an old version (C++11 stuff missing) then upgrade and do this. Somehow PyDev is usually carried forward.
I'm not sure how it can list it as being installed but have this error, I'd appreciate any help you guys can offer.
|
gae-boilerplate and gcs client
| 20,847,501
| 0
| 0
| 156
| 0
|
python,google-app-engine,google-cloud-storage
|
Stupid question now that I've found the solution.
It was because I was running old_dev_appserver.py in my server startup script.
GCS is only supported from SDK 1.8.1 and greater.
| 0
| 1
| 0
| 0
|
2013-12-29T19:23:00.000
| 1
| 1.2
| true
| 20,829,163
| 0
| 0
| 1
| 1
|
I'm trying to work with gae-boilerplate on google app engine and I want to
communicate with the cloud on local development server (for now).
I took the test app example and it runs perfectly but when trying to integrate with
gae-boilerplate it falls apart.
If I extend my class with webapp2.RequestHandler it works, but I can't call it from routes.py;
when I extend it with the boilerplate BaseHandler, I can call it, but I get a DeadlineExceeded exception:
TimeoutError: ('Request to Google Cloud Storage timed out.', DeadlineExceededError('Deadline exceeded while waiting for HTTP response from URL: http://localhost:8080/_ah/gcs/yey-cloud-storage-trial/demo-testfile',))
|
How to get pypi to recognise the OS correctly
| 20,890,406
| 3
| 2
| 157
| 0
|
python,pip,pypi
|
pip does not support installing packages distributed as '"dumb" binaries'. Only source distributions, eggs and wheels are supported.
There are various other drawbacks to using dumb binaries, not least in that the Python version for which they were compiled is not listed, and they contain the full path to the files making these distributions next to useless to most end-users. Such binaries should really only be used in internal distributions where the target machines have the exact dependencies already present. They don't really have any place on PyPI.
Use setuptools and distribute eggs for Windows only. For all other platforms, distribute just the source. If you plan to provide wheel distributions too, do so in addition to the source distribution.
Eggs with compiled C extensions have some drawbacks (specifically when having to support Unicode strings; Python has both wide and narrow Unicode builds and eggs don't record what version they were compiled for), so sticking to source distributions for most platforms is best.
| 0
| 1
| 0
| 0
|
2013-12-29T22:16:00.000
| 1
| 1.2
| true
| 20,830,698
| 1
| 0
| 0
| 1
|
I have a package that I have registered on Pypi. However when I do sudo pip install mypackage from ubuntu it gives me the windows package rather than the linux one. How do you configure your package to give the right version for the right OS?
|
PyDev won't start in Aptana Studio3
| 21,073,033
| 0
| 0
| 666
| 0
|
python,aptana,pydev
|
Aptana 3.5.0 with PyDev 3.0 does not work under Mac OS X 10.9 Mavericks yet. PyDev reports that built-in symbols such as None cannot be recognized.
I rolled back to 3.4.2 as well.
| 0
| 1
| 0
| 0
|
2013-12-30T21:11:00.000
| 3
| 0
| false
| 20,847,649
| 0
| 0
| 1
| 2
|
Aptana Studio is my primary Python IDE and I have been using it for years with much joy and success! Recently, when I start Aptana Studio it fails to recognize any PyDev projects that I have previously created. I noticed that this was happening after installing a recent update of the IDE. I tried uninstalling Aptana and reinstalling the latest version from the website. Nada... I updated Java, thinking there might be a misalignment between Java versions or something like that. Nada... The latest version of Eclipse works fine, and Aptana seems to be functioning correctly for everything except PyDev (Python).
I am running a current version of Windows 8. Does anyone know how to fix this, or how to troubleshoot the problem? PyDev worked perfectly in Aptana Studio until I installed the update. Has anyone come across this and found a fix?
|
PyDev won't start in Aptana Studio3
| 20,851,621
| 0
| 0
| 666
| 0
|
python,aptana,pydev
|
I went back to the Aptana website, and this time around it gave me Aptana Studio 3, build 3.4.2.201308081805, which works fine. 3.5.0 still does not appear to work for Python development at the moment.
| 0
| 1
| 0
| 0
|
2013-12-30T21:11:00.000
| 3
| 0
| false
| 20,847,649
| 0
| 0
| 1
| 2
|
Aptana Studio is my primary Python IDE and I have been using it for years with much joy and success! Recently, when I start Aptana Studio it fails to recognize any PyDev projects that I have previously created. I noticed that this was happening after installing a recent update of the IDE. I tried uninstalling Aptana and reinstalling the latest version from the website. Nada... I updated Java, thinking there might be a misalignment between Java versions or something like that. Nada... The latest version of Eclipse works fine, and Aptana seems to be functioning correctly for everything except PyDev (Python).
I am running a current version of Windows 8. Does anyone know how to fix this, or how to troubleshoot the problem? PyDev worked perfectly in Aptana Studio until I installed the update. Has anyone come across this and found a fix?
|
Change a script post-execution in a for loop
| 20,867,582
| 1
| 1
| 54
| 0
|
python,linux,bash
|
Yes, it will change for the next execution of the loop.
The shell re-reads and executes ./file.py for each iteration.
| 0
| 1
| 0
| 0
|
2014-01-01T10:01:00.000
| 2
| 1.2
| true
| 20,867,464
| 1
| 0
| 0
| 1
|
This is a very dumb question. I have a Python script that I am running on multiple files using a for loop:
for i in *; do ./file.py -i $i -o $i"_out"; done
Now, if I alter the script during this operation, will the change take effect for the next execution in the loop?
|
Where does subprocess.Popen look for the argument process? (Python)
| 20,870,883
| 1
| 0
| 58
| 0
|
python,subprocess
|
It will look in the directories in the PATH environment variable. But you can always specify an absolute or relative path, so if you know where your custom process is located, you can just give the full path to it.
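A quick illustration of the absolute-path case using the standard library (the child command here just prints a string; `sys.executable` is an absolute path we know exists, so no PATH search is involved):

```python
import subprocess
import sys

# A bare name like 'cmd' is resolved against the directories in the PATH
# environment variable; an absolute path like sys.executable skips that
# search entirely.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('hello from child')"],
    stdout=subprocess.PIPE,
)
out, _ = proc.communicate()
output = out.decode().strip()
```

If you want your own custom program found by bare name, put it in one of the PATH directories; otherwise pass its full path to `Popen`.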
| 0
| 1
| 0
| 0
|
2014-01-01T16:43:00.000
| 2
| 0.099668
| false
| 20,870,798
| 1
| 0
| 0
| 1
|
For example, when I type:
child = Popen('cmd'), how does the interpreter know where to look for cmd? And if I want to use my custom process, where do I put it so that it gets recognized?
|
Establishing connection between client and Google App Engine server
| 20,874,529
| 3
| 0
| 98
| 0
|
python,google-app-engine,http,rest,client-server
|
What you suggest is the right way. Steps 1 and 2 are a single POST (the response to your POST tells you what the server needs). Then you POST again to the server.
| 0
| 1
| 1
| 0
|
2014-01-01T20:16:00.000
| 1
| 1.2
| true
| 20,872,804
| 0
| 0
| 1
| 1
|
I have a need for my client(s) to send data to my app engine application that should go something like this:
Client --> Server (This is the data that I have)
Server --> Client (Based on what you've just given me, this is what I'm going to need)
Client --> Server (Here's the data that you need)
I don't have much experience working with REST interfaces, but it seems that GET and POST are not entirely appropriate here. I'm assuming that the client needs to establish some kind of persistent connection with the server so they can both have a proper "conversation". My understanding is that sockets are reserved for paid apps, and I'd like to keep this on the free tier. However, I'm not sure of how to go about this. Is it the Channel API I should be using? I'm a bit confused by the documentation.
The app engine app is Python, as is the client. The solution that I'm leaning towards right now is that the client does a POST to the server (here's what I have), and subsequently does a GET (tell me what you need) and lastly does a POST (here's the data you wanted). But it seems messy.
Can anyone point me in the right direction please?
EDIT:
I didn't realize that you could read the POST response with Python's urllib using the 'read' method of the object returned by urlopen. That makes things a lot nicer, but if anyone has any other suggestions I'd be glad to hear them.
|
How to configure Jenkins to run Nosetests as build action in Windows
| 20,882,158
| 1
| 0
| 2,371
| 0
|
python,jenkins,nosetests
|
It's hard to help you when you don't provide more information. What's the error message you get? Check "Console Output" and add it to your question.
It sounds like you're using the build step "Execute shell". On windows you should use "Execute Windows batch command" instead.
| 0
| 1
| 0
| 0
|
2014-01-02T11:35:00.000
| 1
| 1.2
| true
| 20,882,048
| 0
| 0
| 1
| 1
|
I have installed Python 3.3 on Windows. When I run nosetests from the command prompt, giving the absolute path to the Python test scripts folder, it runs. However, when I configure the build shell in Jenkins as 'nosetests path/to/tests --with-xunit', the build fails. I am trying to install Build Monitor to see reasons for the build failure. The build shell has nosetests C:\seltests\RHIS_Tests\ --with-xunit. I did not set the post-build action to nosetests.xml since it rejects that entry.
Thanks.
Sorry, I am adding the console output here
Building in workspace C:\Program Files\Jenkins\jobs\P1\workspace
Updating svn://godwin:3691/SVNRepo at revision '2014-01-02T18:28:06.781 +0530'
U Allows SQL in text fields.html
At revision 3
[workspace] $ sh -xe C:\WINDOWS\TEMP\hudson796644116462335904.sh
The system cannot find the file specified
FATAL: command execution failed
java.io.IOException: Cannot run program "sh" (in directory "C:\Program Files\Jenkins\jobs\P1\workspace"): CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessBuilder.start(Unknown Source)
at hudson.Proc$LocalProc.<init>(Proc.java:244)
at hudson.Proc$LocalProc.<init>(Proc.java:216)
at hudson.Launcher$LocalLauncher.launch(Launcher.java:773)
at hudson.Launcher$ProcStarter.start(Launcher.java:353)
at hudson.Launcher$ProcStarter.join(Launcher.java:360)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:94)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:63)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:785)
at hudson.model.Build$BuildExecution.build(Build.java:199)
at hudson.model.Build$BuildExecution.doRun(Build.java:160)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:566)
at hudson.model.Run.execute(Run.java:1678)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:231)
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessImpl.create(Native Method)
|
Using Java GAE Datastore on Python locally
| 21,276,528
| 0
| 0
| 78
| 0
|
java,python,google-app-engine,google-cloud-datastore
|
--datastore_path=Location/datastore.db worked for me
| 0
| 1
| 0
| 0
|
2014-01-02T11:42:00.000
| 1
| 0
| false
| 20,882,174
| 0
| 0
| 1
| 1
|
How can I use my Java app's Datastore from the Python version locally? Since the Python environment has a built-in Interactive Console (for custom queries), I want to use my application's Datastore, which is currently running on GAE Java 1.8.2, from a GAE Python version of the app.
|
Writing a program to output instructions to Windows command line
| 20,884,754
| 3
| 0
| 724
| 0
|
c++,python
|
Hate to be the one to post it, but a nasty solution is the system function - dare I speak its name. Call it with code that you want executed in the command prompt and it will run. If you want to start task manager like this, call it like this:
system("C:\\Windows\\System32\\taskmgr.exe");
Fair warning that nobody really likes to see system in live code.
| 0
| 1
| 0
| 1
|
2014-01-02T13:53:00.000
| 1
| 0.53705
| false
| 20,884,520
| 0
| 0
| 0
| 1
|
I'm interested in writing a program in C++ to automate the instructions necessary to run a certain python script I've been experimenting with. I was hoping someone could tell me where to look to find information on sending instructions to the command line from a C++ application, as I don't know what to google to find info on that.
|
Deciphering large program flow in Python
| 20,915,021
| 0
| 2
| 147
| 0
|
python,program-flow
|
You could look for a cross reference program. There is an old program called pyxr that does this. The aim of cross reference is to let you know how classes refer to each other. Some of the IDE's also do this sort of thing.
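For the "set -x"-style wish in the question, the standard library can already do something close. A minimal sketch with `sys.settrace` (the traced `demo` function is just a stand-in for real program code):

```python
import sys

events = []

def tracer(frame, event, arg):
    # Record which file and line each executed statement touches,
    # roughly analogous to bash's set -x.
    if event == "line":
        events.append((frame.f_code.co_filename, frame.f_lineno))
    return tracer

def demo():
    x = 1
    y = x + 1
    return y

sys.settrace(tracer)
result = demo()
sys.settrace(None)
```

The stdlib also ships this ready-made: `python -m trace --trace yourscript.py` prints each line as it executes.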
| 0
| 1
| 0
| 0
|
2014-01-03T23:57:00.000
| 4
| 0
| false
| 20,914,919
| 1
| 0
| 0
| 1
|
I'm in the process of learning how a large (356-file), convoluted Python program is set up. Besides manually reading through and parsing the code, are there any good methods for following program flow?
There are two methods which I think would be useful:
Something similar to Bash's "set -x"
Something that displays which file outputs each line of output
Are there any methods to do the above, or any other ways that you have found useful?
|
Get user's installation path from distutils
| 20,959,516
| 0
| 0
| 91
| 0
|
python,python-2.6,distutils
|
Can’t you use a relative path in the .pth file? Or avoid using a .pth file at all? (They’re used for module collections that pre-date packages in Python, or horrible import hacks.)
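To illustrate what a .pth file actually does, here is a small sketch using temporary directories; `site.addsitedir` processes .pth files the same way site-packages is processed at startup (the file name `myproject.pth` is arbitrary):

```python
import os
import site
import sys
import tempfile

# A .pth file is just a text file whose lines are directories to add
# to sys.path when the containing site directory is scanned.
site_dir = tempfile.mkdtemp()
extra_dir = tempfile.mkdtemp()

with open(os.path.join(site_dir, "myproject.pth"), "w") as fh:
    fh.write(extra_dir + "\n")

site.addsitedir(site_dir)  # reads *.pth files found in site_dir
```

This is why a post-install script only needs to drop a one-line .pth into the site directory pointing at the user's chosen install path.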
| 0
| 1
| 0
| 1
|
2014-01-05T15:13:00.000
| 1
| 0
| false
| 20,935,204
| 0
| 0
| 0
| 1
|
I want to use distutils to make a .msi for my Python library. Before installation, the user can choose the destination path of the installation. Depending on this path, I want to generate a .pth file that will contain the chosen path. For this to be possible I need to run a post-installation script that will place the .pth in the correct place.
My question is: is there a way of getting the installation path selected by the user, at run time?
|
Programming a linux-based Raspberry Pi operating system with python
| 20,941,275
| 2
| 0
| 548
| 0
|
python,operating-system,raspberry-pi,bare-metal
|
Operating systems generally use "low level" languages like c/c++/d in order to have proper access to system resources. The problems with writing one in python are first, you need something to run an interpreter below it (defeating the purpose of having the OS be written in python) and second, there aren't good ways to manage resources in python. Furthermore, you said you want it to be linux based, however, linux is written in c (for the reasons listed above and a few more) and therefore writing something in python will not be very productive. If you want to stick with python, maybe you could write a window manager for linux instead? It would be much easier than an OS and python would be a fine language for such a project.
| 0
| 1
| 0
| 1
|
2014-01-06T00:46:00.000
| 2
| 0.197375
| false
| 20,941,211
| 0
| 0
| 0
| 1
|
I don't know much about writing operating systems, but I though this would be a good way to learn. There are tutorials for raspberry pi operating systems, but they're not linux-based or made with python. I'm just looking for a general tutorial here.
|
To play .wav files using Python from within Cygwin
| 24,809,797
| 3
| 2
| 1,371
| 0
|
python,cygwin
|
try os.system("cat /path/foo.wav > /dev/dsp")
You need to install the audio package for Cygwin first.
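Playback itself is platform-dependent, but the standard library's wave module at least lets a script open and inspect .wav data before handing it to whatever plays it. A small sketch that writes a short silent clip and reads it back (the file name is arbitrary):

```python
import os
import tempfile
import wave

path = os.path.join(tempfile.mkdtemp(), "beep.wav")

# Write a tiny mono, 16-bit, 8 kHz file filled with one second of silence.
with wave.open(path, "wb") as wav_out:
    wav_out.setnchannels(1)
    wav_out.setsampwidth(2)
    wav_out.setframerate(8000)
    wav_out.writeframes(b"\x00\x00" * 8000)

# Read the header back to confirm the format.
with wave.open(path, "rb") as wav_in:
    channels = wav_in.getnchannels()
    rate = wav_in.getframerate()
    frames = wav_in.getnframes()
```

From there the raw frames can be piped to /dev/dsp (as above) or to an external player.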
| 0
| 1
| 0
| 0
|
2014-01-06T11:05:00.000
| 1
| 0.53705
| false
| 20,948,650
| 1
| 0
| 0
| 1
|
I need to play .wav files stored on my PC using Python script from Cygwin.
Please advice if this is possible? If so please provide pointers etc, to Python script code which can be used from Cygwin. I am working on a 64-bit Windows 7 machine.
This is what I have done so far.
Downloaded and installed setup-x86_64.exe from cygwin website.
Installed packages as part of Cygwin: make,gcc,g++,git,ssh,sox,python ver >= 2.7, curl,wget.
Please advice on how to play .wav files using Python (version >= 2.7) from Cygwin.
|
manage.py help has different python path in virtualenv
| 21,575,289
| 0
| 0
| 1,129
| 0
|
python,django,virtualenv,pythonpath,manage.py
|
The problem was solved by adding a Python path: add2virtualenv '/home/robert/Vadain/vadain.webservice.curtainconfig/'
| 0
| 1
| 0
| 0
|
2014-01-06T13:02:00.000
| 2
| 1.2
| true
| 20,950,640
| 1
| 0
| 1
| 1
|
I have a problem in virtualenv where the wrong Python path is imported.
The reason is that when running the command:
manage.py help --pythonpath=/home/robert/Vadain/vadain.webservice.curtainconfig/
the result is right, but when I run manage.py help I am missing some imports.
I searched the internet, but nothing helped. The last change I made was at the end of the file virtualenvs/{account}/bin/activate, adding the following line:
export PYTHONPATH=/home/robert/Vadain/vadain.webservice.curtainconfig
But this does not solve the problem. Does anybody have another suggestion to fix it?
|
Error opening python executable in Windows after using Pyinstaller
| 46,979,196
| 0
| 0
| 4,038
| 0
|
python,pyinstaller
|
I had this same problem after I turned my .py file to an .exe file using pyinstaller (I'm using Python 3.6).
It would run fine on my computer, but when sending it to others to run, firstly the computer would try to stop it from running (understandable, but you can tell Windows that you trust it when the pop-up appears). It would then be saved to their computer. I tried to run the file and got the same pop-up you did. I figured it was their anti-virus stopping it from running, so opened the anti-virus software and added an exception for my file. After that it worked fine.
Granted, it's an inconvenient way to do it, but until I learn further it works for now.
| 0
| 1
| 0
| 0
|
2014-01-06T20:01:00.000
| 4
| 0
| false
| 20,958,234
| 1
| 0
| 0
| 2
|
I am trying to use Pyinstaller to create a python 2.7 executable in windows 7. I followed all the suggestions in the manual (using pip-win and Pywin32) but once the file has been created I cannot open the application and I get the error message:
"Windows cannot access the specified the specified device, path, or file. You may not have the appropriate permissions to access the item."
Does anyone have any idea why this might be happening and what I can do to prevent it? Sorry if this question is a bit vague, I will try and provide more details if I can.
Thanks in advance
|
Error opening python executable in Windows after using Pyinstaller
| 56,158,805
| 0
| 0
| 4,038
| 0
|
python,pyinstaller
|
I have had the same problem since today (it was working fine in the last few days).
I figured out that the problem occurs when I create the .exe file with --icon; if you create the file without --icon it should work fine.
| 0
| 1
| 0
| 0
|
2014-01-06T20:01:00.000
| 4
| 0
| false
| 20,958,234
| 1
| 0
| 0
| 2
|
I am trying to use Pyinstaller to create a python 2.7 executable in windows 7. I followed all the suggestions in the manual (using pip-win and Pywin32) but once the file has been created I cannot open the application and I get the error message:
"Windows cannot access the specified the specified device, path, or file. You may not have the appropriate permissions to access the item."
Does anyone have any idea why this might be happening and what I can do to prevent it? Sorry if this question is a bit vague, I will try and provide more details if I can.
Thanks in advance
|
trigger python script everytime a file is downloaded into a specific folder
| 20,960,749
| 2
| 2
| 1,034
| 0
|
python,directory,zip
|
Yes, you can use inotify (e.g. using pyinotify) to get a callback whenever a new file is created. It is not available on Windows though. There might be a similar api available, but I don't know if there are python bindings for that API.
| 0
| 1
| 0
| 0
|
2014-01-06T22:42:00.000
| 3
| 0.132549
| false
| 20,960,689
| 0
| 0
| 0
| 2
|
I have a specific folder in which I download certain .zip files. I am writing a python script to automate the unzip, upload, and deletion of files from this folder. Is there a way to automatically trigger my python script each time a zip file is downloaded to this folder?
[EDIT] : i am on osx mavericks, sorry for not mentioning this from the start
|
trigger python script everytime a file is downloaded into a specific folder
| 20,960,761
| 1
| 2
| 1,034
| 0
|
python,directory,zip
|
The easiest way I can think of:
Make a cron job, let's say every minute, that launches a script to check the directory in question for any new zip files.
If any are found, it will trigger the unzipping, upload, and deletion.
If you don't want to create a cron job you can always think about creating a daemon (but why bother).
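The polling idea is easy to sketch in Python itself (the watched directory and the handler steps are placeholders; here a temp directory stands in for the download folder):

```python
import os
import tempfile

def new_zip_files(directory, seen):
    """Return zip files that appeared since the last snapshot in `seen`."""
    current = {f for f in os.listdir(directory) if f.endswith(".zip")}
    fresh = current - seen
    seen |= current
    return sorted(fresh)

# Demo: simulate a download landing in a watched folder. Each cron run
# (or loop iteration) compares against the previous snapshot and hands
# any new archives to the unzip/upload/delete steps.
watched = tempfile.mkdtemp()
snapshot = set()
first = new_zip_files(watched, snapshot)   # nothing there yet
open(os.path.join(watched, "report.zip"), "w").close()
second = new_zip_files(watched, snapshot)  # the new archive shows up
third = new_zip_files(watched, snapshot)   # already seen, so empty
```

In a real cron setup the snapshot would be persisted between runs (e.g. to a small state file) instead of kept in memory.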
| 0
| 1
| 0
| 0
|
2014-01-06T22:42:00.000
| 3
| 0.066568
| false
| 20,960,689
| 0
| 0
| 0
| 2
|
I have a specific folder in which I download certain .zip files. I am writing a python script to automate the unzip, upload, and deletion of files from this folder. Is there a way to automatically trigger my python script each time a zip file is downloaded to this folder?
[EDIT]: I am on OS X Mavericks, sorry for not mentioning this from the start.
|
Python kivy storage module
| 20,985,021
| 0
| 0
| 528
| 0
|
python,linux,module,kivy
|
Ultimately, I resolved this by removing kivy using apt-get, then removing the stable repository from apt-get. I then added the daily repository again just to be sure (sudo add-apt-repository ppa:kivy-team/kivy-daily, note the -daily), updated, and then installed kivy. This gave me the 1.8.0 version, which has the storage module as expected. There are some minor differences between the two versions, however, this was sufficient for me. It appears that the stable 1.7.2 version simply doesn't have the storage module in the setup.py and thus does not compile with it.
| 0
| 1
| 0
| 0
|
2014-01-07T05:57:00.000
| 1
| 1.2
| true
| 20,964,906
| 1
| 0
| 0
| 1
|
I am attempting to set up kivy on linux, specifically Mint 13. I have followed the instructions on the kivy site, specifically, I added the daily repository to apt, and then used apt-get to install python-kivy.
I wish to use the storage module, however, upon trying to from kivy.storage.jsonstore import JsonStore, it throws an ImportError: No module named storage.jsonstore.
I have checked dist-packages/kivy, and indeed, the storage directory, with the files, is there as expected. (It should be noted that this is the reason I used the daily repository; the stable version does not have the storage module for some reason.)
I have previously managed to get the storage module to work on my Windows machine simply by adding the module to my kivy directory, however, it fails here, on Linux Mint. How should I proceed?
|
Serving executable file on App Engine changes file permissions
| 20,971,741
| 4
| 2
| 109
| 0
|
python,google-app-engine,pyinstaller
|
HTTP doesn't support file permissions, i.e. there is no way to make a downloaded file executable by default.
If your concern is to avoid making users mess with chmod, you can serve a .tar.gz archive, which records whether each file is executable or not.
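A quick sketch of building such an archive with the standard library's tarfile module, which stores each member's mode bits (the file and archive names below are placeholders; a temp directory stands in for the build output):

```python
import os
import stat
import tarfile
import tempfile

workdir = tempfile.mkdtemp()
exe_path = os.path.join(workdir, "my_file")
open(exe_path, "w").close()
os.chmod(exe_path, 0o755)                 # mark the binary executable

archive = os.path.join(workdir, "my_file.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    tar.add(exe_path, arcname="my_file")  # mode 0o755 is recorded

# The stored mode survives the round trip through the archive:
with tarfile.open(archive, "r:gz") as tar:
    member = tar.getmember("my_file")
    mode = stat.S_IMODE(member.mode)
```

When the user extracts the .tar.gz (e.g. with `tar xzf`), the executable bit comes back, so no manual chmod is needed.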
| 0
| 1
| 0
| 0
|
2014-01-07T12:14:00.000
| 1
| 1.2
| true
| 20,971,366
| 0
| 0
| 1
| 1
|
I generated a Unix executable with PyInstaller. I then changed the permissions of the file using chmod +x+x+x my_file
-rwxr-xr-x my_file
When I serve that file from mysite.appspot.com/static/filename, I successfully download my app but the file permissions change and it can't be run as an executable anymore.
-rw-r--r my_file_after_being_downloaded
How can I serve my file while keeping its permissions unchanged?
(note that I can confirm that manually chmod-ing this downloaded file does turn it back into a Unix executable, and hence opens with double-click.)
|
Python, Keyring, and Cron
| 22,439,701
| 3
| 11
| 2,578
| 0
|
python,ubuntu,cron
|
I'm sorry to say I don't have the answer, but I think I know a bit of what's going on based on an issue I'm dealing with. I'm trying to get a web application and cron script to use some code that stashes an oauth token for Google's API into a keyring using python-keyring.
No matter what I do, something about the environment the web app and cron job runs in requires manual intervention to unlock the keyring. That's quite impossible when your code is running in a non-interactive session. The problem persists when trying some tricks suggested in my research, like giving the process owner a login password that matches the keyring password and setting the keyring password to an empty string.
I will almost guarantee that your error stems from Gnome-Keyring trying to fire up an interactive (graphical) prompt and bombing because you can't do that from cron.
| 0
| 1
| 0
| 1
|
2014-01-07T18:23:00.000
| 2
| 0.291313
| false
| 20,978,982
| 0
| 0
| 0
| 1
|
I'm hooking a python script up to run with cron (on Ubuntu 12.04) -- easy enough. Except for authentication.
The cron script accesses a couple services, and has to provide credentials. Storing those credentials with keyring is easy as can be -- except that when the cron job actually runs, the credentials can't be retrieved. The script fails out every time.
As nearly as I can tell, this has something to do with the environment cron runs in. I tracked down a set of posts which suggest that the key is having the script export DBUS_SESSION_BUS_ADDRESS. All well and good -- I can get that address, export it, and source it from Python fairly easily -- but it simply generates a new error: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11. Setting DISPLAY=:0 has no effect.
So. Has anybody figured out how to unlock gnome-keyring from Python running on a cron job on Ubuntu 12.04?
|
OSX Permanently Set PYTHONPATH
| 20,986,449
| 1
| 3
| 3,533
| 0
|
macos,bash,terminal,osx-mountain-lion,pythonpath
|
Put the path setting in /etc/profile; it will affect all users.
Or put the path in your home directory's ~/.profile, ~/.bashrc, or ~/.kshrc (depending on your shell).
| 0
| 1
| 0
| 0
|
2014-01-08T02:20:00.000
| 2
| 0.099668
| false
| 20,985,823
| 1
| 0
| 0
| 1
|
I have a problem where my PYTHONPATH variable always has a blank value. I can fix it temporarily like this:
export PYTHONPATH=$(python -c 'import sys;print ":".join(sys.path)')
but is there a more permanent way to do this?
|
pynfs: error: gssapi/gssapi.h: No such file or directory
| 55,601,574
| 22
| 18
| 15,906
| 0
|
python,linux,nfs
|
For those on Ubuntu the package you need to install is libkrb5-dev
| 0
| 1
| 0
| 0
|
2014-01-08T09:46:00.000
| 3
| 1
| false
| 20,992,032
| 1
| 0
| 0
| 1
|
I am trying to install pynfs on a RHEL 6.4 based VM.
The command executed is python setup.py build, but I am getting this error:
error: gssapi/gssapi.h: No such file or directory
The issue is seen when setup.py build is executed for the nfs4.0 directory:
Moving to nfs4.0
running build
running build_py
running build_ext
building 'rpc.rpcsec._gssapi' extension
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/kerberos/include -I/usr/include/python2.6 -c lib/rpc/rpcsec/gssapi_wrap.c -o build/temp.linux-x86_64-2.6/lib/rpc/rpcsec/gssapi_wrap.o -Wall
lib/rpc/rpcsec/gssapi_wrap.c:2521:27: error: gssapi/gssapi.h: No such file or directory
lib/rpc/rpcsec/gssapi_wrap.c:2528: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘attribute’ before ‘krb5oid’
lib/rpc/rpcsec/gssapi_wrap.c:2575: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘attribute’ before ‘krb5oid_ptr’
lib/rpc/rpcsec/gssapi_wrap.c:2588: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘attribute’ before ‘reordered_init_sec_context’
lib/rpc/rpcsec/gssapi_wrap.c:2759: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘attribute’ before ‘reordered_gss_accept_sec_context’
lib/rpc/rpcsec/gssapi_wrap.c:2777: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘attribute’ before ‘reordered_gss_get_mic’
lib/rpc/rpcsec/gssapi_wrap.c:2788: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘attribute’ before ‘reordered_gss_wrap’
Can somebody help me resolve this issue? Also, on Fedora the same installation method works.
|
Would I need to cross compile Python to ARM?
| 20,994,609
| 5
| 0
| 947
| 0
|
python,raspberry-pi,cross-compiling,ctype
|
Python is an interpreted bytecode language, so the actual Python code does not need to be cross-compiled in any way.
Your shared libraries, the files ending in .so, are not Python, however. You will need to obtain versions of those compiled for the correct architecture. It may well be that those are ordinary C extensions for Python, which can be built via setuptools or other means; that works equally well on ARM as it does on i386.
| 0
| 1
| 0
| 1
|
2014-01-08T11:25:00.000
| 1
| 1.2
| true
| 20,994,285
| 1
| 0
| 0
| 1
|
I have Python code that works on a 32bit intel machine running Ubuntu, and I need to run this code on Raspberry Pi. Would I need some sort of cross compiling? I have 32bit .so files included in python.
|
Python: File operation problems in python execution
| 21,091,640
| 0
| 1
| 73
| 0
|
python
|
The file was always being written to the home folder.
I used the absolute path instead and it worked perfectly for me.
Thanks also to Paulo's hint about file permissions.
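A common version of that fix in the script itself (a sketch; the helper name is illustrative) is to resolve paths relative to the script's own location rather than the current working directory:

```python
import os

# Directory containing this script, regardless of where the
# interpreter was launched from ("python /folder1/a.py" or "python a.py").
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))

def output_path(name):
    """Build an absolute path next to the script for reads/writes."""
    return os.path.join(SCRIPT_DIR, name)
```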
| 0
| 1
| 0
| 0
|
2014-01-08T14:37:00.000
| 2
| 0
| false
| 20,998,586
| 1
| 0
| 0
| 1
|
I am trying to execute a Python file located in a folder named folder1.
For this I am using:
~$ python /folder1/a.py (doesn't work)
When I go to that folder and then execute it, everything works fine:
~/folder1$ python a.py (works)
I think I am using a file write operation in the code in a.py, which is why the first way of executing is not working.
Please give some suggestions to fix this.
|
Python or Java as Backend Language in Google App engine?
| 20,999,802
| 3
| 1
| 4,232
| 0
|
java,android,python,google-app-engine
|
You can really go with either, to be honest, and use whatever suits your style.
When I started using App Engine, I was Java all the way. I recently switched to Python and love it too!
If you have a lot of existing java dependencies, such as libraries etc. that you want to continue using, then stick with it. Otherwise, it's worth dipping your toe in the Python waters.
| 0
| 1
| 0
| 0
|
2014-01-08T15:14:00.000
| 2
| 1.2
| true
| 20,999,456
| 0
| 0
| 1
| 2
|
I am developing an application in Android using Google App Engine and Google Compute Engine as the backend.
I have followed Google's demo code in Python as the base for my application.
Now I have a question: since I am more familiar with Java than Python, and also considering that Google supports Python more than Java in most of its demo code, should I change my GAE backend language to Java?
Or should I stick with Python and hope that I come around to Python eventually?
Any suggestions are appreciated. Thanks
|
Python or Java as Backend Language in Google App engine?
| 20,999,591
| 4
| 1
| 4,232
| 0
|
java,android,python,google-app-engine
|
Here are some points to consider:
Both Python and Java are capable languages and App Engine Services are available to a large extent in both the environments.
You should use the environment that you are most comfortable with. This will help when debugging issues on the Server side. I would go with the language that I am most familiar with in case the application is critical, is on a tight deadline, etc. If you are learning the environment and have the time, it is great to look at a new language.
Since you are writing an Android application that is interacting with your Server side application in App Engine, one assumes that you would be exposing this functionality over Web Services. Both Python and Java environments are capable of hosting Web Services. In fact, with Google Cloud Endpoints, you should be able to even generate client side bindings (client libraries) for Android that integrate easily.
| 0
| 1
| 0
| 0
|
2014-01-08T15:14:00.000
| 2
| 0.379949
| false
| 20,999,456
| 0
| 0
| 1
| 2
|
I am developing an application in Android using Google App Engine and Google Compute Engine as the backend.
I have followed Google's demo code in Python as the base for my application.
Now I have a question: since I am more familiar with Java than Python, and also considering that Google supports Python more than Java in most of its demo code, should I change my GAE backend language to Java?
Or should I stick with Python and hope that I come around to Python eventually?
Any suggestions are appreciated. Thanks
|
PyDev: Can I make the runtime stack shown on crashes look pretty?
| 21,000,138
| 0
| 0
| 36
| 0
|
eclipse,python-3.x,pydev
|
This is unrelated to Eclipse and PyDev. Somewhere in your code, you catch all exceptions and turn them into such ugly lists.
Stop doing that or convert the output into a single multi-line string and the output will look useful again.
Alternatively, you can try to format the list line by line when you log the error.
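For example, if the handler ends up with a list of traceback lines (as in the log output above), joining them before logging restores the usual layout. A minimal sketch (the helper name is illustrative):

```python
import traceback

def format_tb_list(tb_lines):
    """Turn a list of '\\n'-terminated traceback strings (the ugly
    repr in the log) back into one readable multi-line block."""
    return "".join(tb_lines).rstrip("\n")

try:
    raise ValueError("demo failure")
except ValueError:
    # keepends=True preserves the '\n' endings, mimicking the logged list
    tb_lines = traceback.format_exc().splitlines(True)
    print(format_tb_list(tb_lines))
```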
| 0
| 1
| 0
| 0
|
2014-01-08T15:32:00.000
| 1
| 0
| false
| 20,999,881
| 0
| 0
| 0
| 1
|
I'm writing python with PyDev and Eclipse. It's great, but when my code crashes, it prints my runtime stack to the console in the ugliest of ways. It just prints out a big list and it's really hard to read. There's gotta be a way to pretty this up, to make it way easier to read, right? Can PyDev do it? Thanks!
For example:
2014-01-08 10:28:04,173 [error] Traceable Error raised during rendering process... - R:\qa\examples\testcases\testcase1.xml
2014-01-08 10:28:04,175 [error] [Exception] Failed to complete request:
[' File "C:\Users\me\workspace\re\src\CntlrCmdLine.py", line 1001, in run\n mainFun(self, modelXbrl, coutputFolder)\n', ' File "C:\Users\me\workspace\re\src\Filing.py", line 27, in mainFun\n filing.mainFunDriver(cube)\n', ' File "C:\Users\me\workspace\re\src\Filing.py", line 115, in mainFunDriver\n embedding.parseCommandText()\n', ' File "C:\Users\me\workspace\re\src\Embedding.py", line 70, in parseCommandText\n raise Exception\n'] - Report.py
2014-01-08 10:28:04,175 [warning] Cannot process input file. - R:\qa\reExamples\gd001cabbage\cabbage-20090501.xml
|
Git pre-pushed object on remote server? git ls-tree
| 21,030,409
| 0
| 0
| 146
| 0
|
python,git,bash
|
java code formatter as a pre-receive hook
Don't do it. You're trying to run the equivalent of git filter-branch behind your developer's back. Don't do it.
Is there any other way of doing this?
If you want inbound code formatted in a particular way, validate the inbound files. If any aren't done right, list them and reject the push.
How to get that object on a remote server?
You can't fetch arbitrary objects, you can only fetch by ref (branch or tag) name. The pre-receive hook runs before any refs have been updated, so no ref names the inbound commits.
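A server-side validation sketch along those lines (the helper names are hypothetical; a real hook would also run something like `git diff --name-only old new` per update to list the changed files, then exit non-zero if any fail):

```python
def parse_updates(stdin_lines):
    """Pre-receive stdin is one 'old-sha new-sha refname' per line."""
    return [tuple(line.split()) for line in stdin_lines if line.strip()]

def files_failing_check(changed_files, is_formatted):
    """Return the inbound files that fail the formatting predicate;
    a non-empty result means the hook should reject the push."""
    return [f for f in changed_files if not is_formatted(f)]
```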
| 0
| 1
| 0
| 0
|
2014-01-09T18:59:00.000
| 1
| 0
| false
| 21,028,845
| 0
| 0
| 1
| 1
|
I have an Atlassian Stash server for git.
I am looking to write a script that will run a Java code formatter as a pre-receive hook (before the changes are accepted into the repository).
So, what I am looking to do is NOT to do the work on the Stash server itself, but rather to perform the work on another server and send the status back (0 or 1) to the Stash server.
I have written the script in Python; it calls a CGI (Python) script on the remote server with "ref oldrev newrev" via an HTTP GET. Once I have the STDIN values (ref oldrev newrev) on the remote server, I created a dir, ran git init, git remote add origin URL, and git fetch (I even tried git pull) to get the latest contents/objects of the repository, hoping to get the object that has not been pushed to the repository but is in a pre-push staging state.
The hash or SHA key or "newrev" key of the object that is in the pre-pushed stage: 36ac63fe7b15049c132c310e1ee153e044b236b7
Now, when I run 'git ls-tree 36ac63fe7b15049c132c310e1ee153e044b236b7 Test.java' inside the directory I created above, it gives me an error:
'fatal: not a tree object'
Now, My questions are:
How to get that object on a remote server?
What might be the git command that I run that will give me that object in that stage?
Is there any other way of doing this?
Does it make any sense of what I've asked above. Let me know if I am not clear and I will try to clear things up more.
Thanks very much in advance for any and all help!
|
Can't install psycopg2 on Maverick
| 21,414,139
| 0
| 2
| 916
| 1
|
python,macos,postgresql,psycopg2
|
I had the same problem when I tried to install psycopg2 via Pycharm and using Postgres93.app. The installer (when running in Pycharm) insisted it could not find the pg_config file despite the fact that pg_config is on my path and I could run pg_config and psql successfully in Terminal. For me the solution was to install a clean version of python with homebrew. Navigate to the homebrew installation of Python and run pip in the terminal (rather than with Pycharm). It seems pip running in Pycharm did not see the postgres installation on my PATH, but running pip directly in a terminal resolved the problem.
| 0
| 1
| 0
| 0
|
2014-01-09T23:15:00.000
| 1
| 0
| false
| 21,033,198
| 0
| 0
| 0
| 1
|
I am trying to install psycopg2 on Mac OS X Mavericks, but the build can't find the pg_config file.
Postgres was installed via Postgres.app.
I found pg_config in /Applications/Postgres.app/Contents/MacOS/bin/ and put that path in setup.cfg, but I still can't install psycopg2.
What might be wrong?
|
App Engine dev server: bad runtime process port [''] No module named google.appengine.dist27.threading
| 22,256,340
| 1
| 4
| 1,955
| 0
|
python,google-app-engine
|
A recent upgrade of the development SDK started causing this problem for me. After much turmoil, I found that the problem was that the SDK was in a sub-directory of my project code. When I ran the SDK from a different (parent) directory the error went away.
| 0
| 1
| 0
| 0
|
2014-01-10T19:10:00.000
| 2
| 0.099668
| false
| 21,052,461
| 0
| 0
| 1
| 1
|
When I try to run any of my App Engine projects with the GoogleAppEngineLauncher, I get the error log below.
Does anyone have any idea of what's going on?
I tried removing the SDK and reinstalling it. Nothing changed; I still get the same error.
Everything was working fine, and I don't think I made any changes before this happened.
The only thing I can think of is that I installed the bigquery command line tool just before this happened, but I don't think that should be the reason.
bad runtime process port ['']
Traceback (most recent call last):
File
"/Users/txzhang/Documents/App/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/_python_runtime.py",
line 197, in
_run_file(file, globals()) File "/Users/txzhang/Documents/App/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/_python_runtime.py",
line 193, in _run_file
execfile(script_path, globals_) File "/Users/txzhang/Documents/App/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/python/runtime.py",
line 175, in
main() File "/Users/txzhang/Documents/App/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/python/runtime.py",
line 153, in main
sandbox.enable_sandbox(config) File "/Users/txzhang/Documents/App/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/python/sandbox.py",
line 159, in enable_sandbox
__import__('%s.threading' % dist27.__name__) File "/Users/txzhang/Documents/App/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/python/sandbox.py",
line 903, in load_module
raise ImportError('No module named %s' % fullname) ImportError: No module named google.appengine.dist27.threading
|