Dataset columns:
Title: string (length 15 to 150)
A_Id: int64 (2.98k to 72.4M)
Users Score: int64 (-17 to 470)
Q_Score: int64 (0 to 5.69k)
ViewCount: int64 (18 to 4.06M)
Database and SQL: int64 (0 to 1)
Tags: string (length 6 to 105)
Answer: string (length 11 to 6.38k)
GUI and Desktop Applications: int64 (0 to 1)
System Administration and DevOps: int64 (1 to 1)
Networking and APIs: int64 (0 to 1)
Other: int64 (0 to 1)
CreationDate: string (length 23 to 23)
AnswerCount: int64 (1 to 64)
Score: float64 (-1 to 1.2)
is_accepted: bool (2 classes)
Q_Id: int64 (1.85k to 44.1M)
Python Basics and Environment: int64 (0 to 1)
Data Science and Machine Learning: int64 (0 to 1)
Web Development: int64 (0 to 1)
Available Count: int64 (1 to 17)
Question: string (length 41 to 29k)
Will adding Python to a machine with LibreOffice interfere with LibreOffice Python macro execution?
21,052,890
0
1
543
0
python,macros,libreoffice
Unless LibreOffice was programmed sloppily, it should not. That said, using bundled software for anything other than what it was bundled for would not be smart.
0
1
0
0
2014-01-10T19:15:00.000
3
0
false
21,052,563
1
0
0
2
I have LibreOffice installed on a Windows machine. LibreOffice comes with a bundled python.exe (version 3.3) to allow you to write LibreOffice macros in Python. This works fine. But as far as I can see, the bundled Python doesn't come with the IDLE Python IDE. 1) If I download and install Python on my machine, will that interfere with the execution of LibreOffice Python macros (by changing Python environment variables, registry settings, etc.)? Or 2) is there a way to download IDLE or another free Python IDE and have it work with the Python bundled into LibreOffice?
Will adding Python to a machine with LibreOffice interfere with LibreOffice Python macro execution?
28,912,818
0
1
543
0
python,macros,libreoffice
LibreOffice comes bundled with its own copy of Python (3.3, I think), so the answer to your question is no, it will not. I have found that a simple way of debugging Python macros in LibreOffice is to run LibreOffice from the command line and put print commands in the macros. This at least allows you to trace where you are and what key values are, as the print commands echo to the terminal screen.
0
1
0
0
2014-01-10T19:15:00.000
3
0
false
21,052,563
1
0
0
2
I have LibreOffice installed on a Windows machine. LibreOffice comes with a bundled python.exe (version 3.3) to allow you to write LibreOffice macros in Python. This works fine. But as far as I can see, the bundled Python doesn't come with the IDLE Python IDE. 1) If I download and install Python on my machine, will that interfere with the execution of LibreOffice Python macros (by changing Python environment variables, registry settings, etc.)? Or 2) is there a way to download IDLE or another free Python IDE and have it work with the Python bundled into LibreOffice?
DatastoreInputReader using entity kind with ancestor
21,069,451
0
0
57
0
python,google-app-engine
You cannot specify an ancestor for the DatastoreInputReader -- except for a namespace -- so the pipeline will always go through all your Domain entities in a given namespace.
0
1
0
0
2014-01-11T21:48:00.000
1
1.2
true
21,068,311
0
0
1
1
Is there a way to use the standard DatastoreInputReader from AppEngine's mapreduce with entity kind requiring ancestors ? Let's say I have an entity kind Domain with ancestor kind SuperDomain (useful for transactions), where do I specify in mapreduce_pipeline.MapreducePipeline how to use a specific SuperDomain entity as ancestor to all queries?
python version doesn't update on OS X
21,069,629
0
1
57
0
macos,python-3.x,upgrade
As per Martijn Pieters's comment, I used python3 and now it works as expected.
0
1
0
0
2014-01-12T00:07:00.000
1
1.2
true
21,069,586
1
0
0
1
I've just installed python 3.3.3 on my OS X 10.9.1, however when I run python from the terminal the version that is indicated is 2.7.5. What have I done wrong and how can I make it right?
How should I schedule my task in django
21,097,588
0
0
114
0
python,django,scheduled-tasks
Basically, you can use Celery's periodic tasks with the expires option, which ensures your tasks will not be executed twice. You could also run your own script with an infinite loop that performs the calculation. If the calculation takes more than a minute, you can spawn your tasks using eventlet or gevent. Another option is to create Celery tasks from this script and make sure your tasks execute every N seconds, as you prefer.
0
1
0
0
2014-01-13T11:39:00.000
1
0
false
21,090,365
0
0
1
1
In my Django project, I need to collect data from about 50 remote servers into the local database every minute or every 30 seconds. Though this works with crontab on the remote servers, I want to do it within the project. At first I considered django-celery; however, it is geared toward asynchronous processing, and the data-collection task cannot be delayed, so I think it may not fit. What if I use a Python timer for this, and what do I need to pay attention to? Excuse my ignorance of Python and Django. I'd appreciate any advice or ideas. Many thanks.
How does a cascading HTTP proxy server work?
21,121,694
1
0
1,271
0
python,networking,proxy,network-programming,squid
A cascading proxy is just a proxy connecting to an upstream proxy. It sends the same HTTP proxy requests to the upstream proxy as a browser does, e.g. using full URLs (method://host[:port]/path...) in the requests instead of just /path, and using CONNECT for HTTPS tunneling instead of connecting directly with SSL.
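A minimal sketch of that difference (host names are hypothetical): the request line sent to an upstream proxy carries the absolute URL, where a request to the origin server would carry only the path.

```python
def build_proxy_request(method, url, host):
    # To an upstream proxy the request line carries the full URL;
    # to an origin server it would be just "GET /path HTTP/1.1".
    return (f"{method} {url} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n\r\n")

req = build_proxy_request("GET", "http://example.com/path", "example.com")
```

Sending these bytes to the upstream proxy's port is all the "cascading" amounts to for plain HTTP; HTTPS additionally needs the CONNECT handshake first.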
0
1
1
0
2014-01-14T04:17:00.000
1
1.2
true
21,106,004
0
0
0
1
Software like CCProxy on Windows allows you to set up a cascading proxy. In Squid we can do the same by specifying a cache_peer directive. How does this work at the application and TCP/IP layers? Does it form a socket connection to the upstream proxy server? Any details or RFCs relating to this? PS: I want to implement it in Python for some testing purposes.
How to use CherryPy as the web server and Bottle as the application to support multiple virtual hosts?
21,117,703
0
2
882
0
python,virtualhost,cherrypy,bottle
Perhaps you can simply put nginx in front as a reverse proxy and configure it to send the traffic for the two domains to the right upstream (the CherryPy web server).
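The answer suggests nginx; purely as an illustration of the same routing idea done in Python itself, a host-based WSGI dispatcher could sit in front of the two Bottle apps served by CherryPy (a sketch; app names and domains are hypothetical):

```python
def host_dispatcher(hosts, default_app):
    """Route each request to a WSGI app based on the Host header."""
    def app(environ, start_response):
        host = environ.get("HTTP_HOST", "").split(":")[0].lower()
        return hosts.get(host, default_app)(environ, start_response)
    return app

def site_a(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"site A"]

def site_b(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"site B"]

dispatch = host_dispatcher(
    {"a.example.com": site_a, "b.example.com": site_b},
    default_app=site_a,
)
```

Since both Bottle apps and the dispatcher are plain WSGI callables, the dispatcher can be mounted on the CherryPy server where a single app would normally go.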
0
1
0
0
2014-01-14T15:13:00.000
4
0
false
21,117,002
0
0
1
1
I have a website (running on an Amazon EC2 instance) that uses a Python Bottle application with CherryPy as its front-end web server. Now I need to add another website with a different, already registered domain name. To reduce cost, I want to utilize the existing host. Obviously, virtual hosts are the solution. I know Apache mod_wsgi could do the trick, but I don't want to replace CherryPy. I've googled a lot; there are some articles showing how to set up virtual hosts on CherryPy, but they all assume CherryPy as both web server and web application, not CherryPy as the web server and Bottle as the application. How do I use CherryPy as the web server and Bottle as the application to support multiple virtual hosts?
SVN User authentication using Command Line or Python
21,130,791
0
1
3,064
0
python,svn,tortoisesvn,pysvn
Take a look at the documentation for pysvn.Client.callback_* and you will see that the methods you have to provide handle prompting for passwords and errors if they don't match.
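For reference, the login callback has a simple shape; a sketch (credential values are placeholders, and in real use you would assign it to a pysvn.Client instance):

```python
def get_login(realm, username, may_save):
    # pysvn.Client.callback_get_login must return a 4-tuple:
    # (retcode, username, password, save). Returning a False retcode
    # cancels the attempt, which surfaces as an authentication failure.
    return True, "myuser", "secret", False

# In real use (requires pysvn):
# client = pysvn.Client()
# client.callback_get_login = get_login
```

If the returned credentials are wrong, pysvn raises a ClientError, which is the signal you can use to decide that the user is not valid.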
0
1
0
0
2014-01-15T06:38:00.000
3
0
false
21,130,642
0
0
0
1
I need to verify whether a user is a valid SVN user in my application. I want to achieve this using the SVN command-line tools, the TortoiseSVN command line, or Python on Windows. I looked at PySVN, but it seems there is no way to authenticate the current user with that library. Please suggest a way of doing this. Thank you.
How to turn on the Auto-Complete in Python Wing IDE?
21,143,719
0
1
2,346
0
python,autocomplete,ide,wing-ide
You are probably using Wing 101, which is a very scaled back version that does not have auto-completion. It was designed for teaching beginning programmers and the professors we worked with in creating it felt auto-completion should be left off. Wing IDE Personal and Wing IDE Pro both have auto-completion and much more; Wing 101 really is severely scaled back to make it simple for beginners.
0
1
0
0
2014-01-15T16:39:00.000
1
0
false
21,143,142
1
0
0
1
I am using the Python IDE Wing and just cannot seem to find the auto-complete option. Is there even such option in this program?
Can llvm execute code from managed languages?
21,189,967
0
4
214
0
python,c++,llvm,llvm-ir
After some reading and some conversations, I believe the answer is that the ExecutionEngine essentially executes code as if it were native C code. This means that if you wanted to execute Lua/Python/JavaScript code on top of LLVM, you would need to actually send the bitcode for that runtime. The runtime could then parse and execute the script as usual. As far as I know, none of these runtimes have the ability to compile their scripts directly into LLVM bitcode (yet).
0
1
0
1
2014-01-16T17:15:00.000
1
1.2
true
21,168,440
0
0
0
1
I'm making an application and I would like to load and execute llvm bitcode using the ExecutionEngine. I have managed to do this with really simple C code compiled via clang so far. My thought is, if I use llvm for this project then it could be more language agnostic than say, specifically picking lua/python/javascript. But I'm confused about how this might work for managed or scripting languages since they are often times tied to a platform with resources such as a GC. So I'm not sure how it would actually work through the ExecutionEngine. So as an example scenario, suppose a user wanted to write some python code that runs in my application. I then want them to deliver to me bitcode representing that python code, which I will then run in my C++ application using llvm's ExecutionEngine. Is this possible? Can python be simply compiled into bitcode and then run later using the ExecutionEngine? If not, what do I need to know to understand why not?
running python file in windows services
21,183,630
1
1
97
0
python,windows-services
A service is nothing but a process/program that runs on a regular interval, checks, and acts accordingly. If you already have the script written, then write another script, service_script, which does the following: check whether the program needs to run at all (a sync is required only if the two sides are not in the same state); decide the interval at which to check - say your DB is updated every 10 minutes, then code your script to sync with that; if there is a job, do it, else go back to sleep. If possible, make sure your script is optimised and follows the usual standards and basics. As for a GUI, store the success/failure details in a log file; then a small PHP interface or a simple Python HTTP server will let you set up a front end. I have some experience with monitoring scripts and dashboards, though not quite similar to your work. Godspeed.
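The check-then-sleep loop described above can be sketched as follows (the interval and the bounded iteration count are illustrative; a real service wrapper would loop forever):

```python
import time

def poll(task, interval, iterations):
    """Run task, then sleep, a fixed number of times.

    A real service would use an unbounded `while True` loop instead
    of `iterations`; the bound here just makes the sketch testable.
    """
    for _ in range(iterations):
        task()          # e.g. check DB state and sync if needed
        time.sleep(interval)

runs = []
poll(lambda: runs.append("synced"), interval=0, iterations=3)
```

On Windows, such a loop is typically wrapped as a service with pywin32's win32serviceutil or simply scheduled with the Task Scheduler.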
0
1
0
0
2014-01-17T03:11:00.000
2
1.2
true
21,177,140
1
0
0
1
I have a Python file that synchronizes my MySQL database from my own server to the local server. I want to install it as a Windows service that runs every time my local server boots up. Can you help me? I also want to ask: can I make a GUI for that service, like Apache's, that will display beside the taskbar clock? Thank you so much in advance.
Learning python for security, having trouble with su
21,179,425
-1
4
230
0
python,linux,security
If you just want to do this for learning, you can easily build a fake environment with your own fake passwd file. You can use one of Python's built-in hashing methods to generate the passwords. This has the advantage of giving you proper test cases: you know what you are looking for and where you should succeed or fail.
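A sketch of building such a fake passwd-style file with hashed passwords (the entry format and the hash choice here are illustrative, not the real crypt(3) scheme):

```python
import hashlib

def make_entry(user, password, salt):
    """Produce a fake 'user:salt$hash' line for a test passwd file."""
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    return f"{user}:{salt}${digest}"

def check(entry, password):
    """Re-hash a candidate password and compare against the entry."""
    user, rest = entry.split(":", 1)
    salt, digest = rest.split("$", 1)
    return hashlib.sha256((salt + password).encode()).hexdigest() == digest

entry = make_entry("alice", "hunter2", "ab12")
```

Your enumeration script can then be pointed at a file of such entries, so every success and failure is known in advance.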
0
1
0
1
2014-01-17T06:29:00.000
2
-0.099668
false
21,179,274
0
0
0
1
Preface: I am fully aware that this could be illegal if not done on a test machine. I am doing this as a learning exercise for Python in security and penetration testing. This will ONLY be done on a Linux machine that I own and have full control over. I am learning Python as my first scripting language, hopefully for use down the line in a security position. Upon asking for ideas of scripts to help teach myself, someone suggested that I create one for user enumeration. The idea is simple: cat out the user names from /etc/passwd from an account that does NOT have sudo privileges and try to 'su' into those accounts using the one password that I have. A reverse brute force of sorts: instead of a single user with a list of passwords, I'm using a single password with a list of users. My issue is that no matter how I have approached this, the script hangs or stops at the "Password: " prompt. I have tried multiple methods, from using os.system and echoing the password in, to passing it as a variable, to using the pexpect module. Nothing seems to be working. When I Google it, all of the recommendations point to using sudo, which in this scenario isn't a valid option, as the user I have access to doesn't have sudo privileges. I am beyond desperate on this, just to finish the challenge. I have asked on reddit, in IRC, and all of my programming wizard friends, and beyond echo "password" | sudo -S su, which can't work because the user is not in the sudoers file, I am coming up short. When I try the same thing with just echo "password" | su, I get su: must be run from a terminal. This is at both a # and a $ prompt. Is this even possible?
running a script from a directory in ipython
21,197,017
0
0
1,630
0
python,scripting,path,ipython
sys.path only affects imports, not IPython's %run. The run magic is like calling python script.py - you have to cd into the directory where scripts are, or pass the full path to those scripts.
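The distinction can be seen with runpy, which is roughly what %run does under the hood: the script is located by filesystem path, never via sys.path (the temporary demo script below is hypothetical):

```python
import os
import runpy
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "demo.py")
    with open(path, "w") as f:
        f.write("result = 2 + 2\n")
    # Like IPython's %run: executes the file found at this exact path,
    # regardless of what is on sys.path.
    ns = runpy.run_path(path)
```

Appending a directory to sys.path only helps `import scriptname`; to %run the script you still need to cd there or give the full path.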
0
1
0
0
2014-01-17T21:39:00.000
3
0
false
21,196,399
1
0
0
2
I'm pretty new to all of this, so please try to bear with me. I've got a directory set up where I dump all the scripts I'm working on, and I'm trying to make it so that I can run the scripts in that directory directly from IPython. So far, I've added an init.py to the aforementioned directory and have tried appending the path to sys.path; however, even after I successfully append the path, trying to use the run command for any script in the directory results in a not-found error. Another problem I have is that after every kernel reset, sys.path seems to reset to its previous values, without the new path settings I entered. Grateful for any help, ron
running a script from a directory in ipython
50,347,295
-1
0
1,630
0
python,scripting,path,ipython
in Ipython notebook type : %run script_name.py
0
1
0
0
2014-01-17T21:39:00.000
3
-0.066568
false
21,196,399
1
0
0
2
I'm pretty new to all of this, so please try to bear with me. I've got a directory set up where I dump all the scripts I'm working on, and I'm trying to make it so that I can run the scripts in that directory directly from IPython. So far, I've added an init.py to the aforementioned directory and have tried appending the path to sys.path; however, even after I successfully append the path, trying to use the run command for any script in the directory results in a not-found error. Another problem I have is that after every kernel reset, sys.path seems to reset to its previous values, without the new path settings I entered. Grateful for any help, ron
Python starting subprocess as admin without calling process being admin
21,368,197
1
0
1,811
0
python,windows,admin,cx-freeze
After much hunting I have found a solution. I tried using: 1) os.popen, 2) os.startfile, 3) subprocess.call, 4) subprocess.Popen, and finally os.system. As os.system is essentially the same as typing on the command line, or putting the arguments into a batch file and then executing it, it asks for the executable's default permissions. The only downside is that I get a shell window when the UAC prompt comes up, which remains until the program it opened is closed. The problems with the other options are: 1 - passes only the permissions of the calling application, regardless of what the called application requires; 2 - asks for a higher level of permissions but has no mechanism to pass arguments; 3 - same as 1; 4 - same as 1. If anyone can recommend a mechanism to prevent the shell window, it would be appreciated. James
0
1
0
0
2014-01-18T14:16:00.000
1
0.197375
false
21,205,278
0
0
0
1
I am trying to write a program in Python that consists of several parts: a config utility, a hardware monitor, and a background process. The idea is that once installed (using cx_freeze), the hardware monitor is constantly running in the background. When a piece of compatible hardware (using the d2xx driver for FTDI devices) is connected, it checks the registry to see if it has been previously configured; if it has, it starts the background process with the serial number as an argument, and if not, it starts the config utility. The hardware monitor needs to run from start-up, and as it only reads from the registry it doesn't need full admin privileges; the background process also only reads, so it does not need admin privileges either; but the config utility needs to be able to write to the registry and hence needs admin. My question is this: how can I call another program from within Python as admin and with arguments? I considered using os.startfile, as I have set the frozen program as needing admin, but then I can't pass arguments to it. I also considered subprocess.Popen, but I can't work out how, or even whether, you can elevate this to admin level; so while it will open the program and pass it the arguments, it can't write to the registry. Any help would be appreciated. For further information, my set-up is: Windows 7 64-bit (but I also plan to target XP 32-bit), Python 2.7.6 (again 64-bit, with 32-bit planned), PyUSB-1.6, psutil-1.2.1, cx_freeze-4.3.2. Thanks, James
Run python program from Erlang
21,226,686
0
2
1,127
0
python,while-loop,erlang,request
Ports communicate with the Erlang VM via standard input/output. Does your Python program use stdin/stdout for other purposes? If yes, that may be the reason for the problem.
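One quick way to avoid the clash is to keep the Python side's own output off stdout entirely; a sketch (the function and values are illustrative):

```python
import sys

def read_port_data():
    # erlport drives the Erlang port protocol over stdin/stdout, so any
    # debug output from the Python side must go to stderr, never
    # print()/stdout, or it will corrupt the protocol stream.
    print("polling device...", file=sys.stderr)
    return {"value": 42}

data = read_port_data()
```

With stdout kept clean, erlang can safely `call` functions like this one and get the returned term back.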
0
1
0
1
2014-01-18T14:37:00.000
2
0
false
21,205,508
0
0
0
1
I want to read some data from a port in Python inside a while True loop. Then I want to grab the data from Python in Erlang on a function call. So technically, in this while True loop some global variables are going to be set, and on a request from Erlang those variables will be returned. I am using erlport for this communication, but what I found was that I can make calls and casts to the Python code but not run a function in Python (in this case the main) and let it keep running: when I tried to run it with the call function, Erlang hangs, obviously waiting for a response. How can I do this? Any alternative approaches are also welcome if you think this is not the correct way to do it.
Google App Engine: Using Ajax
21,220,947
2
1
281
0
python,ajax,google-app-engine
AJAX has nothing to do with PHP: it's a fancy name for a technique whose goal is to provide a way for the browser to communicate asynchronously with an HTTP server. It is independent of whatever is powering that server (be it PHP, Python or anything). I fear that you might not be able to understand this yet, so I recommend you to Google about it and experiment a lot before starting your project.
0
1
0
0
2014-01-19T18:10:00.000
2
0.197375
false
21,220,592
0
0
1
1
I was planning to develop an ecommerce site using Google App Engine in Python. Now, I want to use Ajax for some added dynamic features. However, I read somewhere that I need to know PHP in order to use Ajax on my website. So, is there no way I can use Ajax with Python on Google App Engine? I would be using the webapp2 framework for my application. Also, if it's possible to use Ajax on Google App Engine with Python, can anyone suggest some good tutorials for learning Ajax for the same?
How do I run a Django 1.6 project with multiple instances running off the same server, using the same db backend?
21,233,816
2
1
222
0
python,django,deployment,paas
If I were doing it (and I did a similar thing with a PHP application I inherited), I'd have a Fabric command that allows me to provision a new instance. This could be broken up into the requisite steps (check out code, create database, run syncdb/migrate, create DNS entry, start web server). I'd probably do something sane like use the DNS entry as the database name, or at least derive one from the other with a reversible function. You could then string these steps together to easily create a new instance. You will also need a way to tell the newly created instance which database and domain name it needs to use. You could have the provisioning script write some data to a file in the checked-out repository that is then read by Django in its initialisation phase.
0
1
0
0
2014-01-20T02:31:00.000
1
0.379949
false
21,225,368
0
0
1
1
I have a Django 1.6 project (stored in a Bitbucket Git repo) that I wish to host on a VPS. The idea is that when someone purchases a copy of the software I have written, I can type in a few simple commands that will take a designated copy of the code from Git, create a new instance of the project with its own subdomain (e.g. <customer_name>.example.com), and create a new Postgres database (on the same server). I should hopefully be able to create and remove these 'instances' easily. What's the best way of doing this? I've looked into writing scripts using some sort of combination of Supervisor/Gnunicorn/Nginx/Fabric etc. Other options could be something more serious like using Docker or Vagrant. I've also looked into various PaaS options too. Thanks in advance. (EDIT: I have looked at the following services/things: Dokku (can't use Heroku due to data constraints), Vagrant (inc Puppet), Docker, Fabfile, Deis, Cherokee, Flynn (under dev))
IPTABLE rules to get all network packets in promisc mode
21,226,492
1
0
607
0
python,rules,sniffer
In a modern switched network, your system is in general only going to see two kinds of traffic: unicast traffic explicitly directed to your system, and broadcast traffic that is visible to all systems. Nothing you can do in your code will make other traffic on the network visible to you, so enabling promiscuous mode on your interfaces in this situation is going to net you very little additional traffic. This is less true in a network with a shared bus, such as WiFi, or back in the old days when we used hubs instead of switches. Netfilter, the Linux firewall you manipulate with the iptables command, really only operates at the layer 3 (IP) level, and isn't going to affect what traffic is visible to your interface.
0
1
0
0
2014-01-20T04:44:00.000
1
1.2
true
21,226,387
0
0
0
1
I am running Ubuntu on my machine and want to write some sniffer scripts, but I am getting packets related to my NIC only, even when I run my interface in promisc mode. Are there any iptables rules I need to add so that I can capture all the packets on the network? Please help. I am using Python for everything I am doing, if that helps.
Shared Library files .so x86 would work on ARM?
21,242,449
2
0
346
0
python,shared-libraries,raspberry-pi,ctype
You'll need to recompile them from source. x86 and ARM are completely different microprocessor architectures, and programs/libraries compiled for one will not work on the other.
0
1
0
1
2014-01-20T19:24:00.000
1
0.379949
false
21,242,436
0
0
0
1
I have .so files that work well on my 32-bit Ubuntu; would I need different versions of them to work on my Raspberry Pi? I am loading them using Python. If they won't work, what should I do?
Will XLWT work in linux platform?
21,259,963
2
2
379
0
python,linux,excel
Yes, xlrd/xlwt work fine on Linux. Most python code and libraries run the same on any platform.
0
1
0
1
2014-01-21T13:00:00.000
1
1.2
true
21,258,884
0
0
0
1
I am working on a project which is based on the Windows version of Python. Now the customer wants the project to be extended to the Linux platform as well. My project uses the packages xlwt and xlrd for writing the results to an Excel sheet. Will these packages be compatible with the Linux platform? Can I use them on Linux, or is there an equivalent Linux package for writing results to a spreadsheet? Also, since my code is very large, is there any tool to convert the whole code base from the Windows platform to the Linux platform?
Shared Persistent Storage in Python
21,263,740
1
1
240
0
python,multiprocess
Using lock files may be an option. For example, each process checks for a file like "/target_dir/lock" before writing. If the file exists, the process will not write anything. You then run a separate monitor process, which checks the directory size and creates or deletes the lock file.
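The writer's side of that scheme can be sketched like this (paths are illustrative; the monitor process that creates and removes the lock file is assumed to exist separately):

```python
import os

LOCK = "/target_dir/lock"  # created/removed by the monitor process

def try_write(path, payload, lock=LOCK):
    """Write the file only if the monitor's lock file is absent."""
    if os.path.exists(lock):
        return False  # directory over the size limit: discard this file
    with open(path, "w") as f:
        f.write(payload)
    return True
```

Note there is still a small race between the existence check and the write, but since the goal is an approximate size cap rather than strict mutual exclusion, that is usually acceptable.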
0
1
0
0
2014-01-21T16:21:00.000
1
1.2
true
21,263,589
1
0
0
1
Several processes are each writing a file to a directory. The goal is to control the size of the directory such that whenever it reaches a size (S), all processes stop writing to the directory and discard the file they are about to write. If the size then becomes lower than S because some of those files were removed, the processes will resume writing files. It seems that I need inter-process locking to achieve this design. However, I thought maybe there's an easier way, since inter process locking is not readily available in python and obviously there's contention between processes. Python 2.7 Platforms (Win, Mac, Linux)
cronjob on CentOS running a python script
21,266,639
0
0
3,668
0
python,cron
Each line that contains a job must end in a newline character.
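For example, a complete crontab entry using the paths from the question (the leading cd is one common way to make relative output paths, like the generated png, resolve next to the script instead of in the home directory), followed by a newline:

```
*/5 * * * * cd /home/myusername/Dev/cron && /usr/bin/python python_sql_image.py
```

Without that trailing newline on the last line, cron silently ignores the job.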
0
1
0
1
2014-01-21T18:35:00.000
2
0
false
21,266,405
0
0
0
1
I wrote a Python script which I need to run every 5 minutes. My server is running CentOS 6.4 Final. Here's what I did in detail. After logging into the server with an account that has root access, I did cd /var/spool/cron/; I can see a couple of files with different usernames on them. I edited my file (the one with my username on it) with nano myusername and added this line at the end of the file: */5 * * * * /usr/bin/python /home/myusername/Dev/cron/python_sql_image.py. I waited a bit and the cron job works now. But a new question: this Python code generates a png file when executed. When I run it manually, the png file is created in the same folder as the py script, but when cron runs it, the png file is created in /home/myusername. Is there any way I can change the location?
Safely removing program from usr/local/bin on Mac OSX 10.6.8?
21,274,416
5
2
10,163
0
python,macos,command-line,scrapy,bin
First, next time you get a Permission Denied from pip uninstall foo, try sudo pip uninstall foo rather than trying to do it manually. But it's too late to do that now, you've already erased the files that pip needs to do the uninstall. Also: Up until this point, I've resisted the urge to just delete it. But I know that those folders are hidden by Apple for a reason... Yes, they're hidden so that people who don't use command-line programs, write their own scripts, etc. will never have to see them. That isn't you. You're a power-user, and sometimes you will need to see stuff that Apple hides from novices. You already looked into /Library, so why not /usr/local? The one thing to keep in mind is learning to distinguish stuff installed by OS X itself from stuff installed by third-party programs. Basically, anything in /System/Library or /usr is part of OS X, so you should generally not touch it or you might break the OS; anything installed in /Library or /usr/local is not part of OS X, so the worst you could do is break some program that you installed. Also, remember that you can always back things up. Instead of deleting a file, move it to some quarantine location under your home directory. Then, it it turns out you made a mistake, just move it back. Anyway, yes, it's safe to delete /usr/local/bin/scrapy. Of course it will break scrapy, but that's the whole point of what you're trying to do, right? On the other hand, it's also safe to leave it there, except for the fact that if you accidentally type scrapy at a shell prompt, you'll get an error about scrapy not being able to find its modules, instead of an error about no such program existing. Well, that, and it may get in the way of you trying to re-install scrapy. Anyway, what I'd suggest is this: pip install scrapy again. When it complains about files that it doesn't want to overwrite, those files must be from the previous installation, so delete them, and try again. Repeat until it succeeds. 
If at some point it complains that you already have scrapy (which I don't think it will, given what you posted), do pip install --upgrade scrapy instead. If at some point it fails because it wants to update some Apple pre-installed library in /System/Library/…/lib/python, don't delete that; instead, switch to pip install --no-deps scrapy. (Combine this with the --upgrade flag if necessary.) Normally, the --no-deps option isn't very useful; all it does is get you a non-working installation. But if you're only installing to uninstall, that's not a problem. Anyway, once you get it installed, pip uninstall scrapy, and now you should be done, all traces gone.
0
1
0
0
2014-01-22T04:43:00.000
1
1.2
true
21,274,359
0
0
1
1
So I've been having a lot of trouble lately with a messy install of Scrapy. While I was learning the command line, I ended up installing with pip and then easy_install at the same time. Idk what kinda mess that made. I tried the command pip uninstall scrapy, and it gave me the following error: OSError: [Errno 13] Permission denied: '/Library/Python/2.6/site-packages/Scrapy-0.22.0-py2.6.egg/EGG-INFO/dependency_links.txt' so, I followed the path and deleted the text file... along with anything else that said "Scrapy" within that path. There were two versions in the /site-packages/ directory. When I tried again with the pip uninstall scrapy command, I was given the following error: Cannot uninstall requirement scrapy, not installed That felt too easy, so I went exploring through my directory hierarchy and I found the following in the usr/local/bin directory: -rwxr-xr-x 1 greyelerson staff 173 Jan 21 06:57 scrapy* Up until this point, I've resisted the urge to just delete it. But I know that those folders are hidden by Apple for a reason... 1.) Is it safe to just delete it? 2.) Will that completely remove Scrapy, or are there more files that I need to remove as well? (I haven't found any robust documentation on how to remove Scrapy once it's installed)
Python package management in Mac OS X
23,864,710
0
1
867
0
python,macos,shell,installation
pip and easy_install are for Python libraries. apt-get, brew, fink, port, etc. are 'distro-style' package management tools. They have one area of overlap, in terms of 'why do I need one of each?', and that is library dependencies. pip is the tool endorsed by most Python developers and the Python packaging SIG going forward, so TL;DR: use pip, not easy_install. These tools also work with virtualenvs, and virtualenvs are great; use them :) You will, however, run into occasions where you need other libraries that Python doesn't quite know what to do with when you try to build a Python package with pip. It is in these moments that it becomes necessary to have one of the other tools.
0
1
0
0
2014-01-22T17:37:00.000
1
0
false
21,289,974
1
0
0
1
Every time I tried to install a new package for Python on Mac OS X, I ran into the issue that packages have different setup routes with different package management tools. Especially on the new Mac OS X 10.9 Mavericks, some of the installers are buggy, so I needed to switch between them. I'm asking for a short description of, and comparison between, these main command-line installers: easy_install, pip, port, apt-get, brew, fink, etc. Of course, sometimes there is no way other than installing from source code (make install, python setup.py) or .pkg installer files, but I guess that's not the case when you need to install more complicated packages with lots of dependencies. What I'm asking has two sides: Is it safe to use them side by side, or are there any known conflicts between these command-line tools? (At least brew throws warnings about port's presence.) And are there any known pros and cons arising from the nature of these package managers, in cases where we have a choice between them?
downloading or displaying BlobProperties from Google App Engine
21,323,348
0
0
97
0
python,google-app-engine
It entirely depends on what you are currently storing in the BlobProperty. Since it is typically used to store data with an upper size limit of 1 MB, I am assuming that you are storing images or other files that are under that limit. In all probability, you will want to either provide a link via your web application so the user can download the document, or, if it is an image, render it yourself in the web application (e.g. a user's avatar).
0
1
0
0
2014-01-24T01:28:00.000
2
0
false
21,322,741
0
0
1
1
How would I go about selecting and downloading or displaying individual entries from the Datastore. Specifically if those entries contain a BlobProperty.
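If the blob is a smallish image, the "render it yourself" suggestion from the answer can be done with a data: URI built from the stored bytes. A plain-Python sketch; the image/png MIME type and the entity.avatar usage are illustrative assumptions, not App Engine API:

```python
import base64

def as_data_uri(blob_bytes, mime_type="image/png"):
    """Wrap raw blob bytes in a data: URI that an <img> tag can display inline."""
    encoded = base64.b64encode(blob_bytes).decode("ascii")
    return "data:%s;base64,%s" % (mime_type, encoded)

# in a handler you might then write something like (hypothetical property name):
#   self.response.write('<img src="%s">' % as_data_uri(entity.avatar))
```

For downloading rather than displaying, the other route is to set the Content-Type (plus a Content-Disposition: attachment header) and write the raw bytes directly to the response.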
Building Python Interpreter on Windows with UCS4 support
21,390,894
1
1
636
0
python
Python uses configure tools to configure the build. If you can't run the configure script, you may want to install Cygwin in order to run it. You'll want to pass the flag --enable-unicode=ucs4 to get UCS4, and you'll likely need a number of other flags to get it to work with the Microsoft compiler. Of course, if you can tolerate a Cygwin-based version, and install all the necessary Cygwin bits to build, you'll be able to build under Cygwin without supplying a lot of flags, because configure will auto-detect the necessary settings.
0
1
0
0
2014-01-24T03:53:00.000
1
1.2
true
21,324,095
1
0
0
1
How can I configure the Python Interpreter build project to build with UCS4 support on Windows? For example, I want to create Python 2.6.9, 64 bits + UCS4 support for Windows. We want to produce pre-compiled python files (.pyc files) for multiple platforms. Due to our existing build setup we wish to build all the .pyc files on Windows, with those files in turn being distributed for use on other, Unix-like platforms. So, we need to have various versions of the Python interpreter on Windows - versions that do not exist as an installer package. I have built Python 2.6.9 32 bits with UCS2 support using Visual Studio and the unmodified Python source tree and Visual Studio project files. I see in pyconfig.h there is #define Py_UNICODE_SIZE 2. However, changing this 2 to a 4 has no effect on the resulting Python Interpreter.
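Since Py_UNICODE_SIZE in pyconfig.h only reflects the build configuration, a quicker check of what was actually built is sys.maxunicode. A narrow/UCS2 build reports 65535 and a wide/UCS4 build 1114111; note that Python 3.3+ always reports 1114111 because of PEP 393, so this only distinguishes builds on 2.x and 3.2 or earlier:

```python
import sys

# 65535   (0xFFFF)   -> narrow / UCS2 build
# 1114111 (0x10FFFF) -> wide / UCS4 build
print(sys.maxunicode)
```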
Keeping the .dll and .pyd files in other directory
23,767,893
0
0
853
0
python,dll,python-3.x,cx-freeze
Python for Windows really requires that the main pythonXX.dll (in this case, python33.dll) exists in C:\windows\system32\. In all of our various combinations of installing Python to different locations, network drives, etc., we've always had to use a little batch file to copy pythonXX.dll into the system32 dir. I don't think PATH manipulation will work for you; just try copying the DLL to system32 and see if your issues go away. Then again, if you installed another version of Python to, say, C:\Python33, then that Windows-based installer will do this for you, and you'll be able to run your other Python location.
0
1
0
0
2014-01-24T20:46:00.000
1
0
false
21,342,188
0
0
0
1
I am using cx_Freeze to convert Python scripts to a Windows executable. I am using the cxfreeze script present in the Scripts directory. I want the executable generated by cxfreeze to be in one directory and the .dll's and .pyd's in a different one. When I tried to put the two of them in separate directories the .exe did not work; I got 'The application has failed to start because python33.dll was not found. Try reinstalling to fix this problem'. I know this happens because the executable and (dll's & .pyd's) are located in different directories. Is there a way to keep them in different directories? I am using the following command to generate the executable: C:\Python33\Scripts>cxfreeze C:\Users\me\Desktop\setup.py --target-name=setup.exe --target-dir=C:\Users\me\Desktop\new_dir
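The copy step the answer describes ("a little batch file") can equally be a few lines of Python run after cxfreeze. A sketch with made-up directory names; note that writing into C:\Windows\System32 requires admin rights:

```python
import os
import shutil

def copy_dll(dll_name, src_dir, dest_dir):
    """Copy e.g. python33.dll to where the frozen .exe's loader expects it."""
    src = os.path.join(src_dir, dll_name)
    dest = os.path.join(dest_dir, dll_name)
    if not os.path.exists(dest):
        shutil.copy2(src, dest)  # copy2 also preserves timestamps
    return dest

# hypothetical post-build step:
#   copy_dll("python33.dll", r"C:\Python33", r"C:\Windows\System32")
```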
Installing Python on Mac OS X
21,346,715
1
0
262
0
python,macos,python-3.x
You don't want to actually update the system version of python. But also, python3 is the executable name.
0
1
0
0
2014-01-25T04:33:00.000
3
0.066568
false
21,346,669
1
0
0
1
This should be a really simple question, but I'm a little new to all this. I am trying to update my version of Python on Mac OS X 10.6.8. So, I downloaded Python 3.3 from www.python.org, and ran the .dmg file. This then created a "Python 3.3" icon in my Applications folder. However, when I type "python -V" into the Terminal, it prints "Python 2.7.6", so clearly my version of Python has not been updated. What am I doing wrong? Thanks!
Installing Python 2.7 on Windows 8
28,439,616
2
25
140,370
0
python-2.7,path,window,installation
Make sure you don't put a space between the semi-colon and the new folder location that you are adding to the path. For example it should look like... {last path entry};C:\Python27;C:\Python27\Scripts; ...not... {last path entry}; C:\Python27; C:\Python27\Scripts;
0
1
0
0
2014-01-27T03:38:00.000
9
0.044415
false
21,372,637
1
0
0
3
So I'm trying python 2.7 on my Windows. It is running Windows 8. I cannot add it to my path. I've done the usual: using the advanced system settings, environment variables, adding C:\Python27 in system variables. However, when I type Python in command prompt it says 'python is not recognized ..'
Installing Python 2.7 on Windows 8
21,372,892
5
25
140,370
0
python-2.7,path,window,installation
System variables usually require a restart to become effective. Does it still not work after a restart?
0
1
0
0
2014-01-27T03:38:00.000
9
0.110656
false
21,372,637
1
0
0
3
So I'm trying python 2.7 on my Windows. It is running Windows 8. I cannot add it to my path. I've done the usual: using the advanced system settings, environment variables, adding C:\Python27 in system variables. However, when I type Python in command prompt it says 'python is not recognized ..'
Installing Python 2.7 on Windows 8
21,373,239
1
25
140,370
0
python-2.7,path,window,installation
I'm using Python 2.7 on Windows 8 too, with no problem. Maybe you need to restart your computer like wclear said, or you can run the Python command-line program included in the Python installation folder (I think it's listed below IDLE). Hope it helps.
0
1
0
0
2014-01-27T03:38:00.000
9
0.022219
false
21,372,637
1
0
0
3
So I'm trying python 2.7 on my Windows. It is running Windows 8. I cannot add it to my path. I've done the usual: using the advanced system settings, environment variables, adding C:\Python27 in system variables. However, when I type Python in command prompt it says 'python is not recognized ..'
How to deal with processing interdependent files in a pipeline
21,394,049
0
2
126
0
python,linux,functional-programming,puppet,pipeline
I think you are asking how you can transform multiple files when there are dependencies between the files, and possibly parallelise. The problem of resolving the dependencies is called a topological sort. Fortunately, the make utility will handle all of this for you, and you can use the -j flag to parallelise, which is easier than doing this yourself. By default it will only regenerate files if the input files change, but this is easy enough to get around by ensuring all outputs and intermediate files of each batch are removed / not present prior to invocation.
0
1
0
0
2014-01-27T21:45:00.000
1
0
false
21,392,346
0
0
0
1
I am trying to determine the best way to build a sort of pipeline system with many interdependent files that will be put through it, and I am wondering if anyone has specific recommendations regarding tools or approaches. We work mostly in Python and Linux. We get files of experimental data that are delivered to "inbox" directories on an HPC cluster, and these must be processed in several linear, consecutive steps. The issue is that sometimes there are multiple samples that must be processed at some stages of the pipeline as a group, so e.g. samples can independently be put through steps A and B, but all samples in the group must have completed this process to proceed through step C (which requires all of the samples together). It strikes me as a kind of functional problem, in that each step is kind of a modular piece and I will mostly only be checking for the existence of the output: if I have Sample 1 Step B output, I need Sample 2 Step B output so that I can then get Sample 1+2 C output. I don't know a great deal about Puppet but I wonder if this kind of tool might be something I could use for this -- something that handles dependencies and deals with monitoring states? Any ideas? Thanks, Mario
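The grouped-dependency structure described above maps directly onto make rules. A hedged sketch with invented file names and commands (GNU make pattern rules; recipes must be indented with a real tab):

```make
# step C needs the step-B output of *both* samples, so `make -j2 group_C.out`
# runs the two independent A->B chains in parallel and waits before running C
group_C.out: sample1_B.out sample2_B.out
	run_step_C sample1_B.out sample2_B.out > group_C.out

sample%_B.out: sample%_A.out
	run_step_B $< > $@

sample%_A.out: sample%.raw
	run_step_A $< > $@
```

Invoking make -j2 group_C.out then rebuilds only what changed, in topological order.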
How can I install mysql-python in a virtualenv without compiling anything?
27,750,040
0
2
398
0
python,virtualenv
virtualenv has the option --system-site-packages, which will "Give access to the global site-packages modules to the virtual environment." If the parent host already has the mysql-python module installed, the virtualenv will use that.
0
1
0
0
2014-01-28T15:11:00.000
1
0
false
21,409,370
1
0
0
1
I don't have access to gcc on my shared hosting provider (Hostgator), so when I try to install mysql-python from within a virtualenv using pip install MySQL-python, I get unable to execute gcc: Permission denied. Is there another way to install the MySQL-python library in my virtualenv?
Python not recognizing the greater than sign?
21,412,006
0
0
274
0
python,batch-file,unicode
If you are using shell constructs such as redirection, you need the parameter shell=True.
0
1
0
0
2014-01-28T16:56:00.000
2
0
false
21,411,914
0
0
0
1
I am using Python to create windows commands using subprocess.call(command) where command is a string I've generated for the Windows command. I need the results of my command to output to a .txt file so I use 2>> C:\Users\me\out.txt as part of command except Python does not seem to recognize the greater than character, >. I've tried using the Unicode value, u'\u003E' too. [EDIT] If I copy command and paste it into my command prompt, then it will execute the command properly. Otherwise it won't work from my Python script.
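Both approaches, sketched with sys.executable as a stand-in for the real command: let the shell interpret 2>> (which needs shell=True and a single command string), or avoid the shell entirely and hand subprocess an open file:

```python
import subprocess
import sys

# option 1: shell=True makes the shell handle the 2>> redirection; the command
# must then be a single string (POSIX quoting shown; cmd.exe quoting differs,
# which is one more reason to prefer option 2)
cmd = sys.executable + ' -c "import sys; sys.stderr.write(\'boom1\\n\')" 2>> out.txt'
subprocess.call(cmd, shell=True)

# option 2: skip the shell and pass an open file object as stderr;
# no quoting headaches and it behaves the same on every platform
with open("out.txt", "ab") as log:
    subprocess.call(
        [sys.executable, "-c", "import sys; sys.stderr.write('boom2\\n')"],
        stderr=log,
    )
```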
How to list all Python versions installed in the system?
21,434,929
4
8
4,374
0
python
I'm writing a Python IDE and I want to let the user choose the interpreter for executing the program. Just do it like other IDEs then and simply supply a dialog where users can add the interpreters they want to be able to run the code with. Eclipse does this, for example, for Java Runtimes, and it's perfectly fine to have it like that. Especially for languages like Python, where virtual environments are an important thing and each has its own executable. You certainly can come up with a one-time detection that checks some common locations. For Windows, this would obviously be the registry, as the py.exe launcher requires the interpreters to be registered there, at least the system-wide ones. On Unix machines, you could check the common bin/ folders, most prominently /usr/local/bin/, which is the standard location where Python installs itself. You could also check the PATH for Python executables. But all those things should be considered carefully and only offer an initial setup. There are always edge cases where a user didn't do the "standard thing" and your detection will fail. For example, I don't have my Python interpreters on my path, and on a Linux server I access I have installed Python in a non-standard folder in my home directory. And finally, just because it looks like Python doesn't mean it is Python. Yes, you can do some guesswork to come up with an initial set of interpreters, but really don't spend too much time on it. In the end, you won't be able to detect everything perfectly anyway. And you will miss virtual environments, which may be crucial to the project the user is working on in your IDE. So instead of wasting time on bad detection, spend more time on creating a manual dialog to register interpreters. You will need that anyway, and a good interface can make it very easy, even for beginners, to use.
0
1
0
0
2014-01-29T14:51:00.000
2
0.379949
false
21,434,533
1
0
0
1
I need to present the user a list of Python installations to choose from for executing something. I suppose in Windows I could get this information from registry. Don't know about Linux and Mac. Any hints? Or maybe you even know a place where I could find Python code for this? EDIT: it is not important that I find really all interpreters. Finding interpreters from standard locations would be actually fine. Agreed, it's not something too difficult, but I was just hoping that maybe someone has code for this lying around or that I've overlooked a function for that in stdlib.
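The "check the PATH for Python executables" part of the answer might look roughly like this. It is only a sketch for the initial guesswork, and as the answer stresses, it will miss interpreters outside PATH and virtualenvs:

```python
import os

def pythons_on_path():
    """Collect files that look like Python interpreters from the PATH directories."""
    found = []
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        if not os.path.isdir(directory):
            continue
        for name in os.listdir(directory):
            # crude name filter: python, python3, python2.7, python.exe, ...
            if name.lower().startswith("python"):
                found.append(os.path.join(directory, name))
    return found

print(pythons_on_path())
```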
Creating a web service for Qualtrics written in Python on Google App Engine
21,449,385
0
0
909
0
python,web-services,google-app-engine,rest,qualtrics
I am familiar with Qualtrics, but I will answer (b) first. You can write a Python web service in a variety of ways, depending on your choice: you could write a simple GET handler, use Google Cloud Endpoints, or use one of several Python web service libraries. Having said that, a quick glance at Qualtrics indicated that it required an RSS feed as the result format (I could be wrong). So what you will need to take care of while doing (b) is to ensure that the response is in a format that Qualtrics understands and parses. For example, if you have to return RSS, you could write your Python web service to return that data. Optionally, it can also take one or more parameters to fine-tune the results.
0
1
0
0
2014-01-30T00:55:00.000
1
1.2
true
21,445,897
0
0
1
1
Has anyone out there created a a.) web service for Qualtrics or b.) a Python web service on Google App Engine? I need to build in some functionality to a Qualtrics survey that seems only a web service (in the Qualtrics Survey Flow) could do, like passing parameters to a web service then getting a response back. I've looked at GAE Protocol RPC, but I'm not quite sure if that's the right path. Qualtrics gave me a PHP code example but I don't know how to begin translating it to python and/or GAE.
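If Qualtrics does expect an RSS-shaped response (a hedged guess in the answer above), the handler's job reduces to building that XML. A minimal sketch with xml.etree; the field names are made up:

```python
import xml.etree.ElementTree as ET

def build_rss(feed_title, items):
    """Build a minimal RSS 2.0 document; `items` is a list of (title, text) pairs."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = feed_title
    for item_title, text in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = item_title
        ET.SubElement(item, "description").text = text
    return ET.tostring(rss)

# a GAE handler would then write this out, e.g. (hypothetical handler code):
#   self.response.headers['Content-Type'] = 'application/rss+xml'
#   self.response.out.write(build_rss("survey-feed", [("score", "42")]))
```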
Python IDLE GUI not starting
26,771,569
0
1
2,234
0
windows-8.1,python-idle
Find the file HOME/.idlerc/config-keys.cfg, where on Win7 HOME would be 'C:/Users/yourloginname', and delete the key binding or, if there is nothing else in the file or nothing you want to keep, the whole file. If you were to run Idle from a console with python -m idlelib, you would probably see an error message. (Yes, you were probably running with pythonw, as when using the start menu or icon. This works better in 3.4.2, and I am working on more improvements.) I do not know the specific reason for your crash. I set Zoom-height to --space, restarted, and it works, no problem.
0
1
0
0
2014-01-30T22:45:00.000
1
0
false
21,469,046
0
0
0
1
I feel like I have been combing the internet for days with absolutely no result. I have taken some web programming classes, and would like to learn some Python, just because programming is wicked interesting altogether, and have run into a fairly large hurdle given my experience. The problem is this: Python.exe (or is it more properly pythonw.exe?) v3.3.3, running on Windows 8.1, used to launch fine. Typed up a simple program to roll various-sided dice, worked out well. Then I changed the key binding for 'Run Module' from 'ctrl+f5' to 'ctrl+alt+spacebar'. As soon as I did this IDLE crashed and so did the shell. Now the process will not run AT ALL. I cannot access it through the desktop icon to go back and revert the settings. I also attempted to look at the .def files and change it from there, but could not find the 'run module' command. It looked like all the key bindings in the .def files were for the shell. When I double click, nothing; when I run as admin, nothing; run from the start menu, nothing. I uninstalled and re-installed, rebooted, everything low-tech I can think of. Now I'm out of my element and could use one of you brilliant social programmers!! I've found information about checking with some tool called 'Windows Process Manager', and some stuff about what to do with the CMD prompt (something about a path problem... it intuitively sounds like I very well could have created a 'path problem' but I'm not 100% sure I know what that is exactly). I'm sorry for the lack of links; the pages were farther back in my browsing history than I expected. Hopefully I'm not asking an instant many-downvote question here; most of the resources online are for either an older version of Windows, Linux, or an older version of Python (which is actually where the path problem hint came from). Thanks any and all greatly for any time spent reading/answering. Immensely appreciated.
How to run a system command from a django web application?
21,489,049
1
0
148
0
python,linux,django,celery
Here is one approach: in your Django web application, write a message to a queue (e.g., RabbitMQ) containing the information that you need. In a separate system, read the message from the queue and perform any file actions. You can indeed use Celery for setting up this system.
0
1
0
0
2014-01-31T17:29:00.000
1
1.2
true
21,486,362
0
0
1
1
Does anyone know of a proven and simple way of running a system command from a Django application? Maybe using Celery? ... From my research, it's a problematic task, since it involves permissions and insecure approaches to the problem. Am I right? EDIT: Use case: delete some files on a remote machine. Thanks
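An in-process sketch of the queue pattern from the answer: the stdlib queue.Queue stands in for RabbitMQ, and the command payload is a harmless stand-in. With Celery, the worker would live in a separate process or on a separate machine:

```python
import queue
import subprocess
import sys
import threading

tasks = queue.Queue()

def worker():
    """Consume messages from the queue and run the requested system command."""
    while True:
        message = tasks.get()
        if message is None:  # sentinel value: shut the worker down
            break
        # the web process never runs the command itself; it only enqueued a message
        subprocess.call(message["command"])

worker_thread = threading.Thread(target=worker)
worker_thread.start()

# what the Django view would do (here a harmless stand-in command):
tasks.put({"command": [sys.executable, "-c",
                       "open('marker.txt', 'w').write('done')"]})

tasks.put(None)  # tell the worker to exit once the queue drains
worker_thread.join()
```

Running commands in a separate worker also lets that worker run under a dedicated, restricted user, which addresses the permissions concern.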
how do I get the name of application launched by subprocess in python?
21,498,354
1
0
35
0
python,subprocess
Given you're working in a Linux/POSIX environment you could read the EDITOR environment variable using the os.environ map.
0
1
0
0
2014-02-01T12:57:00.000
1
1.2
true
21,498,342
1
0
0
1
I am launching the text editor, but for different users the default text editor could be different. How do I get the name of the text editor being used, so that if an error occurs I can switch to a different one?
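A sketch of that lookup with a fallback chain for users who have not set anything; the fallback editor names are just examples, and shutil.which needs Python 3.3+:

```python
import os
import shutil

def preferred_editor():
    """Honour $VISUAL then $EDITOR, then fall back to common editors that exist."""
    for candidate in (os.environ.get("VISUAL"),
                      os.environ.get("EDITOR"),
                      "nano", "vi"):
        # shutil.which checks the candidate is actually runnable on this system
        if candidate and shutil.which(candidate):
            return candidate
    return None
```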
Changing from PIL to PILLOW on a mac
22,077,355
2
0
301
0
python,macos,python-imaging-library,pillow
Reinstall X11 from XQuartz.org, install the latest Xcode, then install the command line tools: xcode-select --install. Worked for me on Mavericks.
0
1
0
0
2014-02-01T20:20:00.000
1
0.379949
false
21,503,147
0
0
0
1
I'm having trouble upgrading from PIL to PILLOW on my mac. I tried "brew install libtiff lbjpeg webp littlecms" but homebrew couldn't find the lbjpeg - any tips?
How do I tell Aptana Studio to use Python virtualenv?
24,506,641
5
3
2,520
0
python,eclipse,python-2.7,ubuntu
Configure Aptana Studio's Python interpreter (you can configure more than one): In Aptana, Window -> Preferences -> Interpreter - Python, and create a new interpreter. Select the python executable from the virtual environment (on Windows it is python.exe, which resides in the Scripts subfolder of the virtualenv, whereas on Ubuntu python is under the bin subfolder). Now Aptana will show a list of directories to add; also remember to check C:\Python27\Lib or the Ubuntu counterpart. Now, when creating a project, use this interpreter. Or, to use it with an existing project: Step 1. Open the project properties (File -> Properties, or by right-clicking on the project). Step 2. From PyDev Interpreter/Grammar, select the interpreter you configured above. Edit: In this way you can even configure both Python 3 and Python 2 for Aptana. You have to configure an interpreter for each of Python 3 and Python 2, then follow the steps above to select the interpreter.
0
1
0
0
2014-02-01T22:40:00.000
1
0.761594
false
21,504,617
1
0
0
1
I did some searches on this topic and the solutions didn't work for me. I am running both a Linux (Ubuntu) environment and Windows. My system is Windows 8.1 but I have virtualbox with Ubuntu on that. Starting with Windows... I created a venv directory off the root of the e drive. Created a project folder and then ran the activate command, which is in the venv>Scripts directory. So, after activating that (note, I had installed virtualenv already)... so after activating that I then changed into the folder with my module and it ran fine, with the shebang, I didn't even have to type python in front of my filename. However, in Aptana Studio, it cannot find the module I installed with pip. So, it doesn't work. In an earlier post it was recommended that one choose a different interpreter and browse to the env and select that. So, how does one get this installed and working with an IDE like Eclipse and Aptana Studio? I am having problems on Ubuntu. The instructions I found had me using package installer to install virtualenv, pip and a few other tools that package these. The problem is that on Ubuntu the default version of python is 2.7.x. I need 3.3 or 3.x. So, can someone point me in the direction of how to setup virtual environments for the 2.7.x branch of python and the 3.x branch. Also, how does one tell the IDE (Eclipse or Aptana Studio) to use the virtualenv? Thanks, Bruce
How do you return a Partial response in app engine python endpoints?
23,165,174
0
0
418
0
google-app-engine,python-2.7,google-cloud-endpoints
From what I gather, Google has enabled partial response for their APIs, but has not yet explained how to enable it for custom APIs. I'm assuming that if they do let us know, it might entail annotations, and possibly overriding a method or two. I've been looking also, to no avail. I've been looking into this due to a related question, where I'd like to know how to force the JSON object in the response from my Google Endpoints API to include even the members of the class that are null valued. I was trying to see if anything would be returned if I used a partial response with a null field specified: would the response have the property at least, or would it still not even exist as a property? Anyway, this led me into the same research, and I do not believe we can enable partial responses in our own APIs yet.
0
1
1
0
2014-02-02T21:10:00.000
2
1.2
true
21,516,287
0
0
1
1
I am learning endpoints and saw that other Google APIs have this "fields" query attribute. Also it appears in the api explorer. I would like to get a partial response for my api also, but when using the fields selector from the api explorer it is simply ignored by the server. Do I need to implement something in the server side? Haven't found anything in the docs. Any help is welcome.
Emacs freezing when asking Jedi/Auto Complete information while Interpreter is busy
21,522,470
2
3
673
0
python,python-2.7,python-3.x,emacs,autocomplete
Maybe disable auto-complete altogether? BTW, my feeling (relying on company, not jedi) is that the distraction from auto-complete in most cases far outweighs the gain. Emacs comes with a lot of great tools that make edits faster: abbrev, dabbrev, etc., which seem much more efficient. Well, if jedi delivers really intelligent completions, it might be part of the game.
0
1
0
1
2014-02-02T23:35:00.000
1
1.2
true
21,517,747
0
0
0
1
I just now realized what is causing the trouble: whenever the interpreter is busy, my Emacs buffer containing the python script gets stuck, as I suspect that Emacs is trying to get the information of a function and display it as a pop-up. My usual solution is to spam C-g, but that gets old quickly. It has been bothering me for months; did anyone find a solution (such as a separate thread for the python info)? Even simply ceasing Jedi work while the interpreter is busy really would save a lot of frustration. I am using Jedi, auto-complete, Python 2.7 and Python 3.3 (the problems occur in both), on Ubuntu.
Installing pycurl on mac
22,518,780
12
5
10,669
0
python,libcurl,pycurl
Use one of these two methods. Method 1: sudo easy_install pycurl. Method 2: pip install pycurl.
0
1
0
0
2014-02-03T07:09:00.000
2
1
false
21,521,587
1
0
0
1
I am very new to python and need help installing the pycurl library on my machine. I am running python 2.7 at the moment. A brief tutorial would be much appreciated.
uwsgi attach-daemon before python process starts
21,546,696
2
1
380
0
python,process,uwsgi
Use --lazy-apps; this way the app will be loaded by each worker after the master has been fully spawned (and its external daemons started).
0
1
0
0
2014-02-03T21:24:00.000
1
0.379949
false
21,538,086
0
0
0
1
I have a separate process that I want to run alongside the python process I have managed by uWSGI. I wanted to use the attach-daemon option to start this process, but it seems that bash command specified in attach-daemon does not get called until after the python process' app gets started up. However, I need the process to be running before the python process starts up in order for everything to run correctly. Is there any way to specify which order things get started in? It's not even necessary to me that I use attach-daemon, if there's a simpler way to initialize a set of managed processes in a defined order.
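For reference, a rough ini sketch of the arrangement the answer suggests (module name, daemon path, and process count are placeholders):

```ini
[uwsgi]
; the master is spawned first and starts the attached daemon ...
master = true
attach-daemon = /usr/local/bin/my-helper --serve

; ... and with lazy-apps each worker loads the Python app only afterwards
lazy-apps = true
module = myapp:application
processes = 4
```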
Pagination in Google App EngineSearch API
37,264,173
1
6
575
0
python,google-app-engine
Sorry to revive this old question, but I have a solution for this issue given a few constraints with possible workarounds. Basically, the cursors for previous pages can be stored and reused for revisiting that page. Constraints: This requires that pagination is done dynamically (e.g. with Javascript) so that older cursors are not lost. Workaround if pagination is done across html pages, the cursors would need to be passed along. Users would not be able to arbitrarily select a forward page, and would only be given next/back buttons. Though any previously visited page could easily be jumped to. Workaround could be to internally iterate and discard entries while generating cursors at pagination points until finally reaching the desired results. Then return the list of previous page cursors as well. All of this requires a lot of extra bookkeeping and complexity, which almost makes the solution purely academic, but I suppose that would depend on how much more efficient cursors are than simply limit/offset. This could be a worthwhile endeavor if your data is such that you don't expect your users to want to jump ahead more than one page at a time (which includes most type of searches).
0
1
0
0
2014-02-04T06:40:00.000
2
0.099668
false
21,545,635
0
0
1
1
I want to do pagination in google app engine search api using cursors (not offset). the forward pagination is straight forward , the problem is how to implement the backward pagination.
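Here is the bookkeeping from the answer reduced to a toy: paginate stands in for a cursor-returning search.Index.search call, and the pager keeps a stack of the cursors of visited pages so "back" can replay them (illustrative only; the real cursors would be opaque strings, not offsets):

```python
def paginate(items, page_size, cursor=0):
    """Stand-in for a cursor-based search call: (results, cursor_for_next_page)."""
    page = items[cursor:cursor + page_size]
    nxt = cursor + page_size
    return page, (nxt if nxt < len(items) else None)

class Pager(object):
    """Remembers the cursor of every visited page so 'back' can reuse it."""
    def __init__(self, items, page_size):
        self.items, self.page_size = items, page_size
        self.cursors = [0]  # top of the stack = cursor of the current page

    def page(self):
        results, self.next_cursor = paginate(self.items, self.page_size,
                                             self.cursors[-1])
        return results

    def forward(self):
        if self.next_cursor is not None:
            self.cursors.append(self.next_cursor)
        return self.page()

    def back(self):
        if len(self.cursors) > 1:
            self.cursors.pop()  # drop the current page, re-show the previous one
        return self.page()
```

In a web app the cursor stack would be carried in the session or in the page state rather than a long-lived object.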
can't execute pyw on windows
21,546,717
4
1
16,518
0
python,pythonw
Change the program that opens python files. Assuming you're using Windows, right click any python file (in your case any .pyw file, not .py), properties, change Opens with to pythonw instead of IDLE
0
1
0
0
2014-02-04T06:43:00.000
2
1.2
true
21,545,667
1
0
0
2
I want to hide the console window of a python program, so I change the file extensions to "pyw", but when I open it, the python IDLE show up even though I choose open it with "pythonw.exe" If I use "pythonw test.py" in cmd, it works. So I want to know what's wrong with this and how to solve this, thank you.
can't execute pyw on windows
60,558,957
0
1
16,518
0
python,pythonw
For me, I had multiple versions of Python installed that were causing issues. Once I had only one version, I made pythonw.exe the default for .pyw files and it worked.
0
1
0
0
2014-02-04T06:43:00.000
2
0
false
21,545,667
1
0
0
2
I want to hide the console window of a python program, so I change the file extensions to "pyw", but when I open it, the python IDLE show up even though I choose open it with "pythonw.exe" If I use "pythonw test.py" in cmd, it works. So I want to know what's wrong with this and how to solve this, thank you.
Executable shell file in Windows
21,559,703
0
1
262
0
python,bash,exe,samba
As you've said, this executable file would need to be something that runs on both Linux and Windows. That will exclude binary files, such as compiled C files. What you are left with would be an executable script, which could be Bash Ruby Python PHP Perl If need be the script could simply be a bootstrapper that loads the appropriate binary executable depending on the operating system.
0
1
0
1
2014-02-04T16:32:00.000
3
0
false
21,558,022
0
0
0
1
I have an executable file working in Ubuntu that runs a Python script, and it works fine. I also have a shared directory with a Samba server. The idea is that everyone (even Windows users) can execute this file located in the shared folder to run the script located on my computer. But how can I make an executable file that runs the Python script on MY computer for both Linux and Windows remote users?
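A sketch of the bootstrapper idea from the answer, written in Python; the per-OS binary names are invented:

```python
import subprocess
import sys

# map sys.platform prefixes to the per-OS program shipped in the share
BINARIES = {
    "linux": "./tool-linux",
    "darwin": "./tool-macos",
    "win": "tool-windows.exe",
}

def pick_binary(platform=None):
    """Return the right binary for this OS, or raise if unsupported."""
    platform = platform or sys.platform
    for prefix, binary in BINARIES.items():
        if platform.startswith(prefix):
            return binary
    raise RuntimeError("unsupported platform: %r" % platform)

# the real entry point would then hand over to the chosen program:
#   subprocess.call([pick_binary()] + sys.argv[1:])
```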
Keeping live audio stream synced
27,043,950
0
0
661
0
python,streaming,audio-streaming,ntp,gstreamer
Sorry for bringing up an old question, but this is something that I am looking into. I believe you need to look at the Real Time Streaming Protocol (RTSP), a network control protocol designed for use in entertainment and communications systems to control streaming media servers. The protocol is used for establishing and controlling media sessions between end points. This is a much better way of keeping things synchronized. As for sending it to multiple devices, look into multicast addresses. Hope this helps.
0
1
0
0
2014-02-04T19:18:00.000
2
0
false
21,561,405
0
0
1
2
I'm working on an application to provide multi-room audio to devices and have succeeded in keeping audio playing from a file (e.g. mp3) synced using GST and manually using NTP but I can't seem to get a live audio stream to sync. Essentially I want to be able to stream audio from one device to one or more other devices but rather than them buffering and getting out of sync I want them to all play at around the same time (close enough for any delay to not be noticeable anyway). Has anyone got any suggestions on ways that this can be achieved or can provide any material discussing the matter? (Search hasn't turned up much) It's worth noting that this application will be coded in Python.
Keeping live audio stream synced
21,566,534
1
0
661
0
python,streaming,audio-streaming,ntp,gstreamer
Unfortunately, delay as low as 10 milliseconds is noticeable to most folks. Musicians tend to appreciate even lower delay than that. And if you have any of the speakers from different devices within earshot of each other, you're going to run into phase issues at even the slightest unpredictable delay (which is inevitable on a computer). Basically, it is impossible to have a delay that isn't noticeable. Even if you do succeed in synchronizing the start times exactly, each device has a different sample clock on it, and they will drift apart over time. What is 44.1kHz to one device might be 44.103kHz on the other. If you have a more realistic expectation of synchronization... around 50-100ms, then this becomes more feasible. I would have one master device doing the decoding and then sending PCM samples out to the other devices for playback. Keep track of your audio device buffers and make sure they aren't getting too big (indicating that your device is behind) or underrunning (indicating a network problem or that your device is ahead). Have all the devices with the same buffer sizes and maybe even use broadcast packets to send the audio, since all devices are on the same network anyway.
0
1
0
0
2014-02-04T19:18:00.000
2
0.099668
false
21,561,405
0
0
1
2
I'm working on an application to provide multi-room audio to devices and have succeeded in keeping audio playing from a file (e.g. mp3) synced using GST and manually using NTP but I can't seem to get a live audio stream to sync. Essentially I want to be able to stream audio from one device to one or more other devices but rather than them buffering and getting out of sync I want them to all play at around the same time (close enough for any delay to not be noticeable anyway). Has anyone got any suggestions on ways that this can be achieved or can provide any material discussing the matter? (Search hasn't turned up much) It's worth noting that this application will be coded in Python.
How to use easy_install with python 3.3 while 2.7 is also installed
23,478,462
2
1
727
0
python-2.7,ubuntu,python-3.3,easy-install
Tested on Ubuntu 14.04: 1) Use pip instead of easy_install. It's the way of the future. ;-) 2) sudo apt-get install python3-pip 3) sudo pip3 install AWESOME-PACKAGE
0
1
0
0
2014-02-04T23:44:00.000
1
1.2
true
21,565,926
1
0
0
1
I am using ubuntu with root access and have python 3.3 and 2.7 installed in my system. When I use easy_install, by default it installs the package for 2.7. How can I use it to install to 3.3 instead?
How to find the breakpoint numbers in pdb (ipdb)?
21,582,431
11
13
4,097
0
python,breakpoints,ipdb
Use the break command with no arguments: instead of adding a breakpoint, it lists all existing breakpoints together with their numbers.
0
1
0
0
2014-02-05T16:10:00.000
2
1
false
21,582,358
0
0
0
2
Trying to find how to execute ipdb (or pdb) commands such as disable. Calling the h command on disable says disable bpnumber [bpnumber ...] Disables the breakpoints given as a space separated list of bp numbers. So how would I get those bp numbers? I was looking through the list of commands and couldn't get any to display the bp numbers [EDIT] The break, b and info breakpoints commands don't do anything, although in my module I clearly have 1 breakpoint set like this import pdb; pdb.set_trace( ) - same for ipdb. Moreover info is not defined. The output of help in pdb: Documented commands (type help ): ======================================== EOF bt cont enable jump pp run unt a c continue exit l q s until alias cl d h list quit step up args clear debug help n r tbreak w b commands disable ignore next restart u whatis break condition down j p return unalias where Miscellaneous help topics: ========================== exec pdb Undocumented commands: ====================== retval rv And for ipdb: Documented commands (type help ): ======================================== EOF bt cont enable jump pdef psource run unt a c continue exit l pdoc q s until alias cl d h list pfile quit step up args clear debug help n pinfo r tbreak w b commands disable ignore next pinfo2 restart u whatis break condition down j p pp return unalias where Miscellaneous help topics: ========================== exec pdb Undocumented commands: ====================== retval rv I have saved my module as pb3.py and am executing it within the command line like this python -m pb3 The execution does indeed stop at the breakpoint, but within the pdb (ipdb) console, the commands indicated don't display anything - or display a NameError If more info is needed, I will provide it.
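For completeness, the bookkeeping that the interactive break command prints can also be inspected programmatically: pdb's breakpoints are bdb.Breakpoint objects, each carrying the .number that commands like disable/enable expect. This is a sketch for illustration, not something you would normally need inside a live debugging session:

```python
import bdb
import pdb

dbg = pdb.Pdb()
# set_break returns None on success, or an error message string.
# Here we set a breakpoint on line 1 of the stdlib bdb.py, just to have
# a file that certainly exists.
err = dbg.set_break(bdb.__file__, 1)
assert err is None

# Every breakpoint created is registered in bdb.Breakpoint.bpbynumber
# (index 0 is a placeholder); .number is the "bp number".
for bp in bdb.Breakpoint.bpbynumber[1:]:
    if bp is not None:
        print(bp.number, bp.file, bp.line)
```

Inside a real session, typing break (or b) with no arguments shows the same table.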
How to find the breakpoint numbers in pdb (ipdb)?
21,582,459
-3
13
4,097
0
python,breakpoints,ipdb
info breakpoints or just info b lists all breakpoints.
0
1
0
0
2014-02-05T16:10:00.000
2
-0.291313
false
21,582,358
0
0
0
2
Trying to find how to execute ipdb (or pdb) commands such as disable. Calling the h command on disable says disable bpnumber [bpnumber ...] Disables the breakpoints given as a space separated list of bp numbers. So how would I get those bp numbers? I was looking through the list of commands and couldn't get any to display the bp numbers [EDIT] The break, b and info breakpoints commands don't do anything, although in my module I clearly have 1 breakpoint set like this import pdb; pdb.set_trace( ) - same for ipdb. Moreover info is not defined. The output of help in pdb: Documented commands (type help ): ======================================== EOF bt cont enable jump pp run unt a c continue exit l q s until alias cl d h list quit step up args clear debug help n r tbreak w b commands disable ignore next restart u whatis break condition down j p return unalias where Miscellaneous help topics: ========================== exec pdb Undocumented commands: ====================== retval rv And for ipdb: Documented commands (type help ): ======================================== EOF bt cont enable jump pdef psource run unt a c continue exit l pdoc q s until alias cl d h list pfile quit step up args clear debug help n pinfo r tbreak w b commands disable ignore next pinfo2 restart u whatis break condition down j p pp return unalias where Miscellaneous help topics: ========================== exec pdb Undocumented commands: ====================== retval rv I have saved my module as pb3.py and am executing it within the command line like this python -m pb3 The execution does indeed stop at the breakpoint, but within the pdb (ipdb) console, the commands indicated don't display anything - or display a NameError If more info is needed, I will provide it.
Using python2.7 with Emacs 24.3 and python-mode.el
21,590,370
0
2
1,019
0
python,python-2.7,emacs,emacs24
I don't use python, but from the source of python-mode, I think you should look into customizing the variable python-python-command; it seems to default to the first path command matching "python". Perhaps you can supply it with a custom path?
0
1
0
1
2014-02-05T21:03:00.000
2
0
false
21,588,464
0
0
0
1
I'm new to Emacs and I'm trying to set up my python environment. So far I've learned that using "python-mode.el" in a python buffer, C-c C-c loads the contents of the current buffer into an interactive python shell, apparently using what which python yields. In my case that is python 3.3.3. But since I need to get a python 2.7 shell, I'm trying to get Emacs to spawn such a shell on C-c C-c. Unfortunately I can't figure out how to do this. Setting py-shell-name to what which python2.7 yields (i.e. /usr/bin/python2.7) does not work. How can I get Emacs to do this, or how can I trace back what Emacs executes when I hit C-c C-c?
What does the Pydoc module do?
21,591,666
3
19
62,279
0
python,terminal,pydoc
Pydoc is the documentation generation system for Python. You document your functions with docstrings following its conventions, and Pydoc can then be used to generate documentation from your code.
0
1
0
0
2014-02-06T00:24:00.000
10
0.059928
false
21,591,572
1
0
0
4
New to programming and python altogether. In the book I'm learning from, the author suggested I find out the purpose of Pydoc. I did a google search on it, and found a match (from Gnome Terminal) but it didn't make much sense to me. Anyone mind simplifying a bit?
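As a quick illustration of what the module does, pydoc can render programmatically the same help text that python -m pydoc len or help(len) would print:

```python
import pydoc

# Render the plain-text documentation page for a live object.
text = pydoc.render_doc(len, renderer=pydoc.plaintext)
print(text.splitlines()[0])  # the title line names the documented object
```

Running python -m pydoc -b instead starts a local web server browsing the same generated documentation for every installed module.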
What does the Pydoc module do?
66,218,375
0
19
62,279
0
python,terminal,pydoc
pydoc generates online documentation from docstrings. For example, the online documentation of NumPy's histogram() function is actually generated from that function's docstring.
0
1
0
0
2014-02-06T00:24:00.000
10
0
false
21,591,572
1
0
0
4
New to programming and python altogether. In the book I'm learning from, the author suggested I find out the purpose of Pydoc. I did a google search on it, and found a match (from Gnome Terminal) but it didn't make much sense to me. Anyone mind simplifying a bit?
What does the Pydoc module do?
46,895,246
-1
19
62,279
0
python,terminal,pydoc
Concise description provided by Wikipedia: "Pydoc allows Python programmers to access Python's documentation help files, generate text and HTML pages with documentation specifics, and find the appropriate module for a particular job."
0
1
0
0
2014-02-06T00:24:00.000
10
-0.019997
false
21,591,572
1
0
0
4
New to programming and python altogether. In the book I'm learning from, the author suggested I find out the purpose of Pydoc. I did a google search on it, and found a match (from Gnome Terminal) but it didn't make much sense to me. Anyone mind simplifying a bit?
What does the Pydoc module do?
50,296,072
-1
19
62,279
0
python,terminal,pydoc
Just type pydoc in your terminal where you normally run python. It will give a simple explanation! :)
0
1
0
0
2014-02-06T00:24:00.000
10
-0.019997
false
21,591,572
1
0
0
4
New to programming and python altogether. In the book I'm learning from, the author suggested I find out the purpose of Pydoc. I did a google search on it, and found a match (from Gnome Terminal) but it didn't make much sense to me. Anyone mind simplifying a bit?
Running jobs on a cluster submitted via qsub from Python. Does it make sense?
21,597,874
1
3
5,882
0
python,cluster-computing,qsub
You obviously have built yourself a string cmd containing a command that you could enter in a shell for running the 2nd program. You are currently using subprocess.call(cmd, shell=True) for executing the 2nd program from a Python script (it then becomes executed within a process on the same machine as the calling script). I understand that you are asking how to submit a job to a cluster so that this 2nd program is run on the cluster instead of the calling machine. Well, this is pretty easy and the method is independent of Python, so there is no 'pythonic' solution, just an obvious one :-) : replace your current cmd with a command that defers the heavy work to the cluster. First of all, dig into the documentation of your cluster's qsub command (the underlying batch system might be SGE or LSF, or whatever, you need to get the corresponding docs) and try to find the shell command line that properly submits an example job of yours to the cluster. It might look as simple as qsub ...args... cmd, whereas cmd here is the content of the original cmd string. I assume that you now have the entire qsub command needed, let's call it qsubcmd (you have to come up with that on your own, we can't help there). Now all you need to do in your original Python script is calling subprocess.call(qsubcmd, shell=True) instead of subprocess.call(cmd, shell=True) Note that qsub likely only works on very few machines, typically known as your cluster 'head node(s)'. This means that your Python script that wants to submit these jobs should run on this machine (if that is not possible, you need to add an ssh login procedure to the submission process that we don't want to discuss here). Please also note that, if you have the time, you should look into the shell=True implications of your subprocess usage. If you can circumvent shell=True, this will be the more secure solution. This might however not be an issue in your environment.
0
1
0
0
2014-02-06T06:19:00.000
2
0.099668
false
21,595,488
0
0
0
1
I have the situation where I am doing some computation in Python, and based on the outcomes I have a list of target files that are candidates to be passed to a 2nd program. For example, I have 50,000 files which contain ~2000 items each. I want to filter for certain items and call a command line program to do some calculation on some of those. This Program #2 can be used via the shell command line, but also requires a lengthy set of arguments. For performance reasons I would have to run Program #2 on a cluster. Right now, I am running Program #2 via subprocess.call("...", shell=True), but I'd like to run it via qsub in future. I don't have much experience of how exactly this could be done in a reasonably efficient manner. Would it make sense to write temporary 'qsub' files and run them via subprocess() directly from the Python script? Is there a better, maybe more pythonic solution? Any ideas and suggestions are very welcome!
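To make the answer concrete, here is one way the submission command could be assembled in Python. The qsub flags shown are SGE-style and purely illustrative; check your own cluster's qsub documentation. Building an argument list also lets you avoid shell=True, as the answer suggests:

```python
import subprocess

def build_qsub_cmd(job_cmd, job_name, log_dir):
    """Wrap a program invocation in a (hypothetical, SGE-flavoured)
    qsub submission. Adjust flags to match your batch system."""
    return [
        "qsub",
        "-b", "y",       # submit job_cmd as a plain command, not a script
        "-N", job_name,  # human-readable job name
        "-cwd",          # run in the current working directory
        "-o", log_dir,   # stdout log destination
        "-e", log_dir,   # stderr log destination
        job_cmd,
    ]

cmd = build_qsub_cmd("program2 --input items.txt --lots-of-args",
                     "filter-job", "logs/")
print(cmd)
# On the cluster head node you would then submit it with:
# subprocess.call(cmd)
```

Passing a list of arguments means nothing in job_cmd's path or name is reinterpreted by a shell, which is the safer replacement for the shell=True call in the question.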
Where should I save the Amazon Manifest json file on an app hosted at PythonAnywhere?
21,627,045
2
1
155
0
json,pythonanywhere
You can get to /var/www/static in the File browser. Just click on the '/' in the path at the top of the page and then follow the links. You can also just copy things there from a Bash console. You may need to create the static folder in /var/www if it's not there already.
0
1
0
0
2014-02-07T01:25:00.000
2
0.197375
false
21,617,616
1
0
1
1
I am trying to have my app on the Amazon appstore. In order to do this Amazon needs to park a small json file (web-app-manifest.json). If I upload it to the root of my web site (as suggested), the Amazon bot says it cannot access the file. Amazon support mention I should save it to /var/www/static but either I don't know how to get there or I don't have access to this part of the server. Any ideas?
Enthought canopy python -lpython2.7 not found
21,994,749
0
0
241
0
python,debian,enthought,pycuda
Depending on what version of Canopy you're using, try to set your LIBRARY_PATH variable. Example: export LIBRARY_PATH=~/Enthought/lib Then try to build the package. This worked for me but I don't know the root cause as to why Canopy virtual environment isn't setting this variable.
0
1
0
0
2014-02-07T04:28:00.000
1
1.2
true
21,619,353
1
0
0
1
I am using Canopy enthought on a machine without su access. Whenever i try to build any package dependent on python I get this error: /usr/bin/ld: cannot find -lpython2.7 collect2: ld returned 1 exit status error: command 'g++' failed with exit status 1 Any idea what's going wrong? I am running Debian OS. Thanks
Exe created with Pyinstaller in windows 7 is not working in xp and linux
51,320,586
0
2
4,424
0
python,linux,installation,exe,pyinstaller
PyInstaller does not allow cross-compilation. So if you want an executable file you should first compile your project on the Linux OS, and for a Windows executable you may use Wine, under which you can compile the project.
0
1
0
0
2014-02-07T07:26:00.000
2
0
false
21,621,742
0
0
0
1
I am new to python. I have a python script for copying files from the local machine to an sftp location. The script uses the wxpython, pycrypto and ssh modules of python. I created an exe file using pyinstaller. My machine is windows 7 64-bit. I used pyinstaller 2.1 and python 2.7.6 amd64 for creating the exe file. It's working fine in windows 7 64-bit, but it's not working in xp or win7 32-bit. In linux I used wine for executing this exe, but there also it's not working. Then I created one more exe on a windows 7 32-bit machine. This exe is working fine in win7 32 and 64 bit versions, but it's not working in xp. Can anyone tell me what could be the reason and how to resolve it? I want one installer which can be installed on windows or linux. Thanks in advance.
Google App Engine (Python): Allow entity 'previewing before 'submit'
21,631,790
4
1
54
0
python,google-app-engine,google-cloud-datastore,app-engine-ndb
It ain't that difficult. Abstract: [User]-> Posts [Data] to the [EntityCreatorPreviewHandler] [EntityCreatorPreviewHandler]-> Receives the data and creates the entity, e.g. book = Book(title='Test'). [EntityCreatorPreviewHandler]-> Templates the html and basically shows the entity with all its attributes etc. [EntityCreatorPreviewHandler]-> Also hides the initial [Data] in a hidden post form [User]-> Accepts save after preview, and as soon as the save button is pressed the hidden form is submitted to an [EntitySaveHandler] [EntitySaveHandler]-> Saves the data
0
1
0
0
2014-02-07T15:25:00.000
1
1.2
true
21,631,528
0
0
1
1
I'd like users to create an entity, and preview it, before saving it in the datastore. For example: User completes entity form, then clicks "preview". Forwarded to an entity 'preview' page which allows the user to "submit" and save the entity in the datastore, or "go back" to edit the entity. How can I achieve this?
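The hidden-form round trip described in the answer can be sketched framework-free. All names here (preview_page, save_handler, the DATASTORE dict) are hypothetical stand-ins; on App Engine these would be webapp2 request handlers and an ndb model instead of a toy dict:

```python
import html

DATASTORE = {}  # toy stand-in for the real datastore

def preview_page(form_data):
    """Render the entity preview plus a hidden form that re-posts the
    same data to the save handler when the user clicks Submit."""
    preview = "".join(
        "<p>%s: %s</p>" % (html.escape(k), html.escape(v))
        for k, v in form_data.items()
    )
    hidden = "".join(
        '<input type="hidden" name="%s" value="%s">'
        % (html.escape(k, quote=True), html.escape(v, quote=True))
        for k, v in form_data.items()
    )
    return ('%s<form action="/save" method="post">%s'
            "<button>Submit</button></form>" % (preview, hidden))

def save_handler(form_data):
    """Only called once the user accepts the preview."""
    key = len(DATASTORE) + 1
    DATASTORE[key] = dict(form_data)
    return key

page = preview_page({"title": "Test"})
print('name="title"' in page)  # True: the data rides along in the hidden form
```

Nothing touches the datastore until save_handler runs, so "go back" simply means re-rendering the original edit form with the same data.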
Python DNS module import error
36,287,320
7
32
147,472
0
python,python-2.7,module,resolver
You could also install the package with pip by using this command: pip install git+https://github.com/rthalley/dnspython
0
1
0
1
2014-02-08T03:57:00.000
15
1
false
21,641,696
0
0
0
10
I have been using python dns module.I was trying to use it on a new Linux installation but the module is not getting loaded. I have tried to clean up and install but the installation does not seem to be working. $ python --version Python 2.7.3 $ sudo pip install dnspython Downloading/unpacking dnspython Downloading dnspython-1.11.1.zip (220Kb): 220Kb downloaded Running setup.py egg_info for package dnspython Installing collected packages: dnspython Running setup.py install for dnspython Successfully installed dnspython Cleaning up... $ python Python 2.7.3 (default, Sep 26 2013, 20:03:06) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import dns Traceback (most recent call last): File "", line 1, in ImportError: No module named dns Updated Output of python version and pip version command $ which python /usr/bin/python $ python --version Python 2.7.3 $ pip --version pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7) Thanks a lot for your help. Note:- I have firewall installed on the new machine. I am not sure if it should effect the import. but i have tried disabling it and still it does not seem to work.
Python DNS module import error
67,931,629
0
32
147,472
0
python,python-2.7,module,resolver
If you don't have (or don't want) pip installed, there is another way: install the package with the native OS package manager. For example, for Debian-based systems this would be the command: apt install python3-dnspython
0
1
0
1
2014-02-08T03:57:00.000
15
0
false
21,641,696
0
0
0
10
I have been using python dns module.I was trying to use it on a new Linux installation but the module is not getting loaded. I have tried to clean up and install but the installation does not seem to be working. $ python --version Python 2.7.3 $ sudo pip install dnspython Downloading/unpacking dnspython Downloading dnspython-1.11.1.zip (220Kb): 220Kb downloaded Running setup.py egg_info for package dnspython Installing collected packages: dnspython Running setup.py install for dnspython Successfully installed dnspython Cleaning up... $ python Python 2.7.3 (default, Sep 26 2013, 20:03:06) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import dns Traceback (most recent call last): File "", line 1, in ImportError: No module named dns Updated Output of python version and pip version command $ which python /usr/bin/python $ python --version Python 2.7.3 $ pip --version pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7) Thanks a lot for your help. Note:- I have firewall installed on the new machine. I am not sure if it should effect the import. but i have tried disabling it and still it does not seem to work.
Python DNS module import error
66,007,768
0
32
147,472
0
python,python-2.7,module,resolver
I have faced a similar issue when importing on mac. I have python 3.7.3 installed. The following steps helped me resolve it: pip3 uninstall dnspython; sudo -H pip3 install dnspython; import dns; import dns.resolver
0
1
0
1
2014-02-08T03:57:00.000
15
0
false
21,641,696
0
0
0
10
I have been using python dns module.I was trying to use it on a new Linux installation but the module is not getting loaded. I have tried to clean up and install but the installation does not seem to be working. $ python --version Python 2.7.3 $ sudo pip install dnspython Downloading/unpacking dnspython Downloading dnspython-1.11.1.zip (220Kb): 220Kb downloaded Running setup.py egg_info for package dnspython Installing collected packages: dnspython Running setup.py install for dnspython Successfully installed dnspython Cleaning up... $ python Python 2.7.3 (default, Sep 26 2013, 20:03:06) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import dns Traceback (most recent call last): File "", line 1, in ImportError: No module named dns Updated Output of python version and pip version command $ which python /usr/bin/python $ python --version Python 2.7.3 $ pip --version pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7) Thanks a lot for your help. Note:- I have firewall installed on the new machine. I am not sure if it should effect the import. but i have tried disabling it and still it does not seem to work.
Python DNS module import error
61,213,715
0
32
147,472
0
python,python-2.7,module,resolver
OK, to resolve this: first install dns for python from cmd using pip install dnspython (if you use conda, first type activate so that you go into base (in cmd), then type the above command). It will install it into the anaconda site-packages. Copy the location of that site-packages folder from cmd and open it. Now copy all dns folders and paste them into the python site-packages folder. That will resolve it. Actually, the thing is our code is not able to find the specified package in python\site-packages because it is in anaconda\site-packages, so you have to COPY IT (not cut).
0
1
0
1
2014-02-08T03:57:00.000
15
0
false
21,641,696
0
0
0
10
I have been using python dns module.I was trying to use it on a new Linux installation but the module is not getting loaded. I have tried to clean up and install but the installation does not seem to be working. $ python --version Python 2.7.3 $ sudo pip install dnspython Downloading/unpacking dnspython Downloading dnspython-1.11.1.zip (220Kb): 220Kb downloaded Running setup.py egg_info for package dnspython Installing collected packages: dnspython Running setup.py install for dnspython Successfully installed dnspython Cleaning up... $ python Python 2.7.3 (default, Sep 26 2013, 20:03:06) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import dns Traceback (most recent call last): File "", line 1, in ImportError: No module named dns Updated Output of python version and pip version command $ which python /usr/bin/python $ python --version Python 2.7.3 $ pip --version pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7) Thanks a lot for your help. Note:- I have firewall installed on the new machine. I am not sure if it should effect the import. but i have tried disabling it and still it does not seem to work.
Python DNS module import error
59,751,991
1
32
147,472
0
python,python-2.7,module,resolver
I faced the same problem and solved it as described below: As you have downloaded and installed dnspython successfully, enter the dnspython folder. You will find the dns directory; copy it, then paste it inside the site-packages directory. That's all; now your problem will go away. If dnspython isn't installed, you can install it this way: go to your python installation folder's site-packages directory, open cmd there and enter the command: pip install dnspython. Now dnspython will be installed successfully.
0
1
0
1
2014-02-08T03:57:00.000
15
0.013333
false
21,641,696
0
0
0
10
I have been using python dns module.I was trying to use it on a new Linux installation but the module is not getting loaded. I have tried to clean up and install but the installation does not seem to be working. $ python --version Python 2.7.3 $ sudo pip install dnspython Downloading/unpacking dnspython Downloading dnspython-1.11.1.zip (220Kb): 220Kb downloaded Running setup.py egg_info for package dnspython Installing collected packages: dnspython Running setup.py install for dnspython Successfully installed dnspython Cleaning up... $ python Python 2.7.3 (default, Sep 26 2013, 20:03:06) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import dns Traceback (most recent call last): File "", line 1, in ImportError: No module named dns Updated Output of python version and pip version command $ which python /usr/bin/python $ python --version Python 2.7.3 $ pip --version pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7) Thanks a lot for your help. Note:- I have firewall installed on the new machine. I am not sure if it should effect the import. but i have tried disabling it and still it does not seem to work.
Python DNS module import error
57,668,403
0
32
147,472
0
python,python-2.7,module,resolver
In my case, I had written my code in a file named "dns.py", which conflicts with the package, so I had to rename the script file.
0
1
0
1
2014-02-08T03:57:00.000
15
0
false
21,641,696
0
0
0
10
I have been using python dns module.I was trying to use it on a new Linux installation but the module is not getting loaded. I have tried to clean up and install but the installation does not seem to be working. $ python --version Python 2.7.3 $ sudo pip install dnspython Downloading/unpacking dnspython Downloading dnspython-1.11.1.zip (220Kb): 220Kb downloaded Running setup.py egg_info for package dnspython Installing collected packages: dnspython Running setup.py install for dnspython Successfully installed dnspython Cleaning up... $ python Python 2.7.3 (default, Sep 26 2013, 20:03:06) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import dns Traceback (most recent call last): File "", line 1, in ImportError: No module named dns Updated Output of python version and pip version command $ which python /usr/bin/python $ python --version Python 2.7.3 $ pip --version pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7) Thanks a lot for your help. Note:- I have firewall installed on the new machine. I am not sure if it should effect the import. but i have tried disabling it and still it does not seem to work.
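The shadowing problem this answer describes is easy to reproduce: any dns.py that sits earlier on sys.path than site-packages wins the import. A self-contained demonstration, using a temporary directory rather than a real project:

```python
import importlib
import os
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Create a local dns.py that shadows any installed dns package.
    with open(os.path.join(tmp, "dns.py"), "w") as f:
        f.write("SHADOW = True\n")
    sys.path.insert(0, tmp)
    sys.modules.pop("dns", None)  # forget any cached import
    try:
        dns = importlib.import_module("dns")
        print(getattr(dns, "SHADOW", False))  # True: the local file won
    finally:
        sys.path.remove(tmp)
        sys.modules.pop("dns", None)
```

The same mechanism explains why a script named dns.py in the current directory breaks import dns even when dnspython is correctly installed.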
Python DNS module import error
57,207,302
1
32
147,472
0
python,python-2.7,module,resolver
I was getting an error while using "import dns.resolver". I tried dnspython and py3dns but they failed; dns won't install. After much trial and error I installed the pubdns module and it solved my problem.
0
1
0
1
2014-02-08T03:57:00.000
15
0.013333
false
21,641,696
0
0
0
10
I have been using python dns module.I was trying to use it on a new Linux installation but the module is not getting loaded. I have tried to clean up and install but the installation does not seem to be working. $ python --version Python 2.7.3 $ sudo pip install dnspython Downloading/unpacking dnspython Downloading dnspython-1.11.1.zip (220Kb): 220Kb downloaded Running setup.py egg_info for package dnspython Installing collected packages: dnspython Running setup.py install for dnspython Successfully installed dnspython Cleaning up... $ python Python 2.7.3 (default, Sep 26 2013, 20:03:06) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import dns Traceback (most recent call last): File "", line 1, in ImportError: No module named dns Updated Output of python version and pip version command $ which python /usr/bin/python $ python --version Python 2.7.3 $ pip --version pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7) Thanks a lot for your help. Note:- I have firewall installed on the new machine. I am not sure if it should effect the import. but i have tried disabling it and still it does not seem to work.
Python DNS module import error
53,703,267
0
32
147,472
0
python,python-2.7,module,resolver
I installed dnspython 2.0.0 from the github source, but running 'pip list' showed the old version, dnspython 1.2.0. It only worked after I ran 'pip uninstall dnspython', which removed the old version leaving just 2.0.0, and then 'import dns' ran smoothly.
0
1
0
1
2014-02-08T03:57:00.000
15
0
false
21,641,696
0
0
0
10
I have been using python dns module.I was trying to use it on a new Linux installation but the module is not getting loaded. I have tried to clean up and install but the installation does not seem to be working. $ python --version Python 2.7.3 $ sudo pip install dnspython Downloading/unpacking dnspython Downloading dnspython-1.11.1.zip (220Kb): 220Kb downloaded Running setup.py egg_info for package dnspython Installing collected packages: dnspython Running setup.py install for dnspython Successfully installed dnspython Cleaning up... $ python Python 2.7.3 (default, Sep 26 2013, 20:03:06) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import dns Traceback (most recent call last): File "", line 1, in ImportError: No module named dns Updated Output of python version and pip version command $ which python /usr/bin/python $ python --version Python 2.7.3 $ pip --version pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7) Thanks a lot for your help. Note:- I have firewall installed on the new machine. I am not sure if it should effect the import. but i have tried disabling it and still it does not seem to work.
Python DNS module import error
40,167,343
0
32
147,472
0
python,python-2.7,module,resolver
This issue can be generated by Symantec Endpoint Protection (SEP), and I suspect most EPP products could potentially impact your running of scripts. If SEP is disabled, the script will run instantly. Therefore you may need to update the SEP policy to not block python scripts from accessing things.
0
1
0
1
2014-02-08T03:57:00.000
15
0
false
21,641,696
0
0
0
10
I have been using python dns module.I was trying to use it on a new Linux installation but the module is not getting loaded. I have tried to clean up and install but the installation does not seem to be working. $ python --version Python 2.7.3 $ sudo pip install dnspython Downloading/unpacking dnspython Downloading dnspython-1.11.1.zip (220Kb): 220Kb downloaded Running setup.py egg_info for package dnspython Installing collected packages: dnspython Running setup.py install for dnspython Successfully installed dnspython Cleaning up... $ python Python 2.7.3 (default, Sep 26 2013, 20:03:06) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import dns Traceback (most recent call last): File "", line 1, in ImportError: No module named dns Updated Output of python version and pip version command $ which python /usr/bin/python $ python --version Python 2.7.3 $ pip --version pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7) Thanks a lot for your help. Note:- I have firewall installed on the new machine. I am not sure if it should effect the import. but i have tried disabling it and still it does not seem to work.
Python DNS module import error
21,643,858
4
32
147,472
0
python,python-2.7,module,resolver
I installed dnspython 1.11.1 on my Ubuntu box using pip install dnspython. I was able to import the dns module without any problems. I am using Python 2.7.4 on an Ubuntu based server.
0
1
0
1
2014-02-08T03:57:00.000
15
0.053283
false
21,641,696
0
0
0
10
I have been using python dns module.I was trying to use it on a new Linux installation but the module is not getting loaded. I have tried to clean up and install but the installation does not seem to be working. $ python --version Python 2.7.3 $ sudo pip install dnspython Downloading/unpacking dnspython Downloading dnspython-1.11.1.zip (220Kb): 220Kb downloaded Running setup.py egg_info for package dnspython Installing collected packages: dnspython Running setup.py install for dnspython Successfully installed dnspython Cleaning up... $ python Python 2.7.3 (default, Sep 26 2013, 20:03:06) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import dns Traceback (most recent call last): File "", line 1, in ImportError: No module named dns Updated Output of python version and pip version command $ which python /usr/bin/python $ python --version Python 2.7.3 $ pip --version pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7) Thanks a lot for your help. Note:- I have firewall installed on the new machine. I am not sure if it should effect the import. but i have tried disabling it and still it does not seem to work.
Simple explanation of Google App Engine NDB Datastore
21,658,988
13
17
7,423
1
python,google-app-engine,app-engine-ndb
I think you've overcomplicating things in your mind. When you create an entity, you can either give it a named key that you've chosen yourself, or leave that out and let the datastore choose a numeric ID. Either way, when you call put, the datastore will return the key, which is stored in the form [<entity_kind>, <id_or_name>] (actually this also includes the application ID and any namespace, but I'll leave that out for clarity). You can make entities members of an entity group by giving them an ancestor. That ancestor doesn't actually have to refer to an existing entity, although it usually does. All that happens with an ancestor is that the entity's key includes the key of the ancestor: so it now looks like [<parent_entity_kind>, <parent_id_or_name>, <entity_kind>, <id_or_name>]. You can now only get the entity by including its parent key. So, in your example, the Shoe entity could be a child of the Person, whether or not that Person has previously been created: it's the child that knows about the ancestor, not the other way round. (Note that that ancestry path can be extended arbitrarily: the child entity can itself be an ancestor, and so on. In this case, the group is determined by the entity at the top of the tree.) Saving entities as part of a group has advantages in terms of consistency, in that a query inside an entity group is always guaranteed to be fully consistent, whereas outside the query is only eventually consistent. However, there are also disadvantages, in that the write rate of an entity group is limited to 1 per second for the whole group.
0
1
0
0
2014-02-09T05:53:00.000
2
1.2
true
21,655,862
0
0
1
1
I'm creating a Google App Engine application (python) and I'm learning about the general framework. I've been looking at the tutorial and documentation for the NDB datastore, and I'm having some difficulty wrapping my head around the concepts. I have a large background with SQL databases and I've never worked with any other type of data storage system, so I'm thinking that's where I'm running into trouble. My current understanding is this: The NDB datastore is a collection of entities (analogous to DB records) that have properties (analogous to DB fields/columns). Entities are created using a Model (analogous to a DB schema). Every entity has a key that is generated for it when it is stored. This is where I run into trouble because these keys do not seem to have an analogy to anything in SQL DB concepts. They seem similar to primary keys for tables, but those are more tightly bound to records, and in fact are fields themselves. These NDB keys are not properties of entities, but are considered separate objects from entities. If an entity is stored in the datastore, you can retrieve that entity using its key. One of my big questions is where do you get the keys for this? Some of the documentation I saw showed examples in which keys were simply created. I don't understand this. It seemed that when entities are stored, the put() method returns a key that can be used later. So how can you just create keys and define ids if the original keys are generated by the datastore? Another thing that I seem to be struggling with is the concept of ancestry with keys. You can define parent keys of whatever kind you want. Is there a predefined schema for this? For example, if I had a model subclass called 'Person', and I created a key of kind 'Person', can I use that key as a parent of any other type? Like if I wanted a 'Shoe' key to be a child of a 'Person' key, could I also then declare a 'Car' key to be a child of that same 'Person' key? 
Or will I be unable to after adding the 'Shoe' key? I'd really just like a simple explanation of the NDB datastore and its API for someone coming from a primarily SQL background.
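The key-path structure described in the answer can be modelled with a few lines of plain Python. This is an illustration of the [kind, id, kind, id, ...] layout only, not the real ndb API (there you would write ndb.Key('Person', 'alice', 'Shoe', 1)), and the names are made up:

```python
# Plain-Python illustration of the [kind, id, kind, id, ...] key paths
# described above (NOT the real ndb API -- there you'd write
# ndb.Key('Person', 'alice', 'Shoe', 1)).

def make_key(*pairs):
    """Flatten (kind, id_or_name) pairs into a key path tuple."""
    flat = []
    for kind, ident in pairs:
        flat.extend([kind, ident])
    return tuple(flat)

person = make_key(("Person", "alice"))
# It's the child key that records the ancestor, not the other way round:
shoe = make_key(("Person", "alice"), ("Shoe", 1))
car = make_key(("Person", "alice"), ("Car", 1))   # same parent, different kind

def root(key):
    """The entity group is determined by the entity at the top of the path."""
    return key[:2]

print(root(shoe) == root(car) == person)  # -> True
```

Both the Shoe and the Car key can name the same Person as ancestor, because nothing in the parent restricts what kinds of children may exist.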
How to make bash script use a particular python version for executing a python script?
21,670,472
0
0
921
0
python,bash
It is probably caused by the following: your script imports a third-party C extension that was compiled against a different Python build. The undefined PyUnicodeUCS2_DecodeUTF8 symbol specifically indicates a mismatch between a narrow (UCS-2) and a wide (UCS-4) Unicode build of Python. To fix this, reinstall the library so it is recompiled against the interpreter you actually want to run it with.
0
1
0
1
2014-02-10T06:16:00.000
4
1.2
true
21,670,272
0
0
0
2
I have python 2.6 and python installed on my Freebsd box. I want my bash script to execute a particular python script using python2.6 interpreter. It is showing import error.... Undefined symbol "PyUnicodeUCS2_DecodeUTF8"
How to make bash script use a particular python version for executing a python script?
21,670,553
0
0
921
0
python,bash
Use the absolute path to the python version you want.
0
1
0
1
2014-02-10T06:16:00.000
4
0
false
21,670,272
0
0
0
2
I have python 2.6 and python installed on my Freebsd box. I want my bash script to execute a particular python script using python2.6 interpreter. It is showing import error.... Undefined symbol "PyUnicodeUCS2_DecodeUTF8"
Google Drive API + App Engine = time out
21,677,121
0
0
588
0
python,google-app-engine,oauth-2.0,google-drive-api,httplib2
You mention "AppEngine's oauth2 library", but then you say "Drive API calls time out". So modifying the Oauth http library won't affect Drive. Are you using the Google library for your Drive calls, or making direct REST HTTP calls? If the former, try ... HttpRequest.setConnectTimeout(55000) , if the latter just ... request.getFetchOptions().setDeadline(55d) NB. Drive is having a brain fart today, so one would hope the underlying problem will go away of its own accord.
0
1
0
0
2014-02-10T10:53:00.000
1
1.2
true
21,675,209
0
0
1
1
So I built an app on App Engine that takes users files and move them to certain folders in the same domain. I made REST api that calls Drive API to list files, rename files, and change permissions etc. On app load, it fires 4 ajax calls to the server to get name and id of folders and checking if certain folder exists. The problem is front end ajax calls time out all the time in production. App engine url fetch has 60 sec limit. I used App engine's oauth2 library which uses a different httplib2. So I modified httplib2 source deadline to max 60 sec but it seems to time out after 30 sec. As a result, Drive API calls time out almost every time and app just doesn't work. I have read the guideline on optimizing drive api calls with partial response and implemented it but didn't see noticeable difference. It's driving me crazy.... please help
Google App Engine, Illegal string in dataset id when uploading to local datastore
21,805,320
0
0
141
0
python,google-app-engine
I managed to discover the problem on my own. The issue was with adding s~ before the app_id in the app.yaml file. Despite the Google App Engine documentation stating that s~ should be before the app_id for applications using the High Replication Datastore, this apparently causes an error when uploading to the development server.
0
1
0
0
2014-02-12T05:30:00.000
1
1.2
true
21,719,461
0
0
1
1
Using the bulk loader, I've downloaded the live datastore, and am now trying to upload it to the development server. When running the upload_data command to upload the datastore to the dev server I get the following error, BadRequestError: Illegal string "dev~s~app_id" in dataset id. The command I'm using to upload the data is appcfg.py upload_data --url=://localhost:8080/_ah/remote_api --filename=datastore_2-11-14 The command I used to download the data is appcfg.py download_data --url=://app_id.appspot.com/_ah/remote_api --filename=datastore_2-11-14
Close all (keep-alive) socket connections in tornado?
21,735,934
1
1
1,783
0
python,sockets,tornado
finish() doesn't apply here because a connection in the "keep-alive" state is not associated with a RequestHandler. In general there's nothing you can (or need to) do with a keep-alive connection except close it, since the browser isn't listening for a response. Websockets are another story - in that case you may want to close the connections yourself before shutting down (but don't have to - your clients should be robust against the connection just going away).
0
1
1
0
2014-02-12T06:29:00.000
2
0.099668
false
21,720,346
0
0
0
1
I'm running a set of tornado instances that handles many requests from a small set of keep-alive connections. When I take down the server for maintenance I want to gracefully close the keep-alive requests so I can take the server down. Is there a way to tell clients "Hey this socket is closing" with Tornado? I looked around and self.finish() just flushes the connection.
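A sketch of that shutdown sequence. server and io_loop stand for Tornado's HTTPServer and IOLoop; only their stop(), time(), and add_timeout() methods are assumed, and stand-in objects are used below so the sketch runs on its own without Tornado installed:

```python
# Graceful-shutdown sketch: stop accepting new connections, give in-flight
# requests a grace period, then stop the loop. Idle keep-alive sockets are
# simply dropped when the loop stops -- which, as the answer notes, is all
# you can do with them.

def graceful_shutdown(server, io_loop, grace_seconds=5):
    server.stop()  # HTTPServer.stop(): no new connections are accepted
    # After the grace period, stop the loop; lingering keep-alive
    # connections go away with it.
    io_loop.add_timeout(io_loop.time() + grace_seconds, io_loop.stop)

# Quick self-check with stand-in objects (no Tornado needed):
class FakeServer(object):
    def __init__(self):
        self.stopped = False
    def stop(self):
        self.stopped = True

class FakeLoop(object):
    def __init__(self):
        self.scheduled = []
    def time(self):
        return 100.0
    def stop(self):
        pass
    def add_timeout(self, deadline, callback):
        self.scheduled.append(deadline)

server, loop = FakeServer(), FakeLoop()
graceful_shutdown(server, loop, grace_seconds=5)
print(server.stopped, loop.scheduled)  # -> True [105.0]
```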
Issue installing python windows extension
21,722,581
1
1
149
0
python-2.7
You should run the exe file as "Administrator". Even if you are in the administrator account, you have to explicitly run it with administrator permission by right clicking on the exe.
0
1
0
0
2014-02-12T07:54:00.000
2
0.099668
false
21,721,827
1
0
0
2
I installed Python 2.7.6 Windows Installer (Windows binary) and then, I was trying to install the extension pywin32-218.win-amd64-py2.7.exe. But everytime I run this extension, I get the issue of "pywin32-218.win-amd64-py2.7.exe has stopped working".
Issue installing python windows extension
28,891,881
0
1
149
0
python-2.7
You need to run it as administrator; anything that modifies folders outside its own directory requires elevated access, which means right click -> Run as administrator on Windows, or sudo on Mac and Linux.
0
1
0
0
2014-02-12T07:54:00.000
2
0
false
21,721,827
1
0
0
2
I installed Python 2.7.6 Windows Installer (Windows binary) and then, I was trying to install the extension pywin32-218.win-amd64-py2.7.exe. But everytime I run this extension, I get the issue of "pywin32-218.win-amd64-py2.7.exe has stopped working".
How to have a natural MacOSX .app of a complex python application (including custom interpreter) via a shell initialization script?
21,756,168
0
0
138
0
python,macos,bundle,.app
I solved the problem, and in hindsight it was rather trivial. In the shell script, I need to invoke my binary with exec, so that the running bash process is replaced (a la execve()) rather than spawning a new process. The only problem is that my interpreter now replaces the icon with the stock one, but I have only one icon in the dock now, and it behaves naturally.
1
1
0
0
2014-02-13T13:11:00.000
2
0
false
21,755,235
0
0
0
1
I am trying to integrate a complex python application (with a custom python interpreter shipped along) for OSX. In order to handle a set of issues due to cross platform requirements, I created a .app bundle pointing at a shell script with its CFBundleExecutable entry in Info.plist. This works, and the invoked shell script starts up the actual application binary. However, I have the following problems: The .app icon bounces endlessly on the dock, never reaching the "activated" status. I guess it's because the shell script does not terminate. This dock entry has the correct application icon. When the binary executable is invoked by the script, a new Dock entry appears with a generic python icon. This icon successfully starts up and stops bouncing as the application starts up. When I try to kill the first Dock entry via Force Quit, the actual application still keeps running, as it's clearly controlled by the second entry on the dock. Is there a way to have this setup behave more naturally? Do I need to ditch the shell script for an Objective-C wrapper? If I have to use an Obj-C wrapper (instead of a shell script) to spawn my application, how can I prevent the same spawning of a secondary icon from happening? Edit: note, I am not running a python script. I am running a custom made python interpreter. py2app is not what I need.
PyDev installation not working. No editor. No preferences
39,937,221
0
0
869
0
python,linux,eclipse,pydev,java-7
Debian Jessie . Eclipse Mars 4.1 . I installed whilst my java environment was set to /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java and after the restart no reference to the PyDev install could be found other than in the installation details. After Changing to java 1.8 ( sudo update-alternatives --config java ) and restarting eclipse all PyDev components appeared.
0
1
0
0
2014-02-13T19:29:00.000
2
0
false
21,763,762
1
0
0
1
Hope it helps someone else. So the problem I had was: I installed PyDev into Eclipse Kepler using Eclipse Marketplace. Everything goes on fine and ends successfully. But PyDev doesn't show up anywhere after restart. E.g. no Python editor, no "PyDev" in the preferences, no PyDev perspective, ... It's as if PyDev isn't installed. The only place where it shows up is in the Eclipse Marketplace, where I can see it under the Installed tab. Tried to reinstall (uninstall from Marketplace) via update site. Same result. I was using Java 1.6 with Eclipse Kepler and installing the latest version of PyDev 3.3.3. No errors reported in the Eclipse logs.
Simple way to send emails asynchronously
21,764,727
0
0
345
0
django,python-multithreading,django-commands
If you don't want to implement celery (which in my opinion isn't terribly difficult to set up), then your best bet is probably implementing a very simple queue using your database. It would probably work along the lines of this: The system determines that an email needs to be sent and creates a row in the database with a status of 'created' or 'queued'. On the other side there will be a process that scans your "queue" periodically. If it finds anything to send (in this case, any rows with status 'created'/'queued'), it will update the status to 'sending'. The process will then proceed to send the email and finally update the status to 'sent'. This will take care of both asynchronously sending the emails and keeping track of the statuses of all emails should things go awry. You could potentially go with a Redis backend for your queue if the additional updates are too taxing on your database as well.
0
1
0
0
2014-02-13T19:38:00.000
2
0
false
21,763,924
0
0
1
1
I'm running a django app and when some event occurs I'd like to send email to a list of recipients. I know that using Celery would be an intelligent choice, but I'd like to know if there's another, most simple way to do it without having to install a broker server, supervisor to handle the daemon process running in the background... I'd like to find a more simple way to do it and change it to celery when needed. I'm not in charge of the production server and I know the guy who's running it will have big troubles setting all the configuration to work. I was thinking about firing a django command which opens several processes using multiprocessing library or something like that.
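The database-backed queue sketched in the answer reduces to a loop like the following. This is schematic: QUEUE stands in for rows of a Django model with a status field, and send() for django.core.mail.send_mail.

```python
# QUEUE stands in for rows of a Django model with a `status` field, and
# send() for django.core.mail.send_mail.

QUEUE = [{"to": "a@example.com", "status": "queued"}]

def send(msg):
    pass  # placeholder for the real send_mail call

def process_queue_once():
    for msg in QUEUE:
        if msg["status"] == "queued":
            msg["status"] = "sending"  # claim the row first, so a crash is visible
            send(msg)
            msg["status"] = "sent"

process_queue_once()  # in practice, run periodically (cron or a management command)
```

The three-state status field is what makes failures observable: anything stuck in 'sending' after a crash can be retried or investigated.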
Does Django Block When Celery Queue Fills?
21,765,816
1
0
400
0
python,django,multithreading,rabbitmq,celery
It's impossible to really answer your question without an in-depth analysis of your actual code AND benchmark protocol, and while having some working experience with Python, Django and Celery I wouldn't be able to do such an in-depth analysis. Now there are a couple very obvious points : if your workers are running on the same computer as your Django instance, they will compete with Django process(es) for CPU, RAM and IO. if the benchmark "client" is also running on the same computer then you have a "heisenbench" case - bombing a server with 100s of HTTP request per second also uses a serious amount of resources... To make a long story short: concurrent / parallel programming won't give you more processing power, it will only allow you to (more or less) easily scale horizontally.
0
1
0
0
2014-02-13T20:54:00.000
2
1.2
true
21,765,266
0
0
1
2
I'm doing some metric analysis on on my web app, which makes extensive use of celery. I have one metric which measures the full trip from a post_save signal through a celery task (which itself calls a number of different celery tasks) to the end of that task. I've been hitting the server with up to 100 requests in 5 seconds. What I find interesting is that when I hit the server with hundreds of requests (which entails thousands of celery worker processes being queued), the time it takes for the trip from post save to the end of the main celery task increases significantly, even though I never do any additional database calls, and none of the celery tasks should be blocking the main task. Could the fact that there are so many celery tasks in the queue when I make a bunch of requests really quickly be slowing down the logic in my post_save function and main celery task? That is, could the processing associated with getting the sub-tasks that the main celery task creates onto a crowded queue be having a significant impact on the time it takes to reach the end of the main celery task?
Does Django Block When Celery Queue Fills?
34,550,948
0
0
400
0
python,django,multithreading,rabbitmq,celery
I'm not sure about slowing down, but it can cause your application to hang. I've had this problem where one application would back up several other queues with no workers. My application could then no longer queue messages. If you open up a Django shell, try to queue a task, and then hit Ctrl+C. I can't quite remember what the stack trace should be, but if you post it here I could confirm it.
0
1
0
0
2014-02-13T20:54:00.000
2
0
false
21,765,266
0
0
1
2
I'm doing some metric analysis on on my web app, which makes extensive use of celery. I have one metric which measures the full trip from a post_save signal through a celery task (which itself calls a number of different celery tasks) to the end of that task. I've been hitting the server with up to 100 requests in 5 seconds. What I find interesting is that when I hit the server with hundreds of requests (which entails thousands of celery worker processes being queued), the time it takes for the trip from post save to the end of the main celery task increases significantly, even though I never do any additional database calls, and none of the celery tasks should be blocking the main task. Could the fact that there are so many celery tasks in the queue when I make a bunch of requests really quickly be slowing down the logic in my post_save function and main celery task? That is, could the processing associated with getting the sub-tasks that the main celery task creates onto a crowded queue be having a significant impact on the time it takes to reach the end of the main celery task?
Python send jobs to queue processed by Popen
21,790,418
-1
0
222
0
python,queue,popen
Use subprocess.call() instead of Popen, or use Popen.wait().
0
1
0
0
2014-02-14T21:59:00.000
2
-0.099668
false
21,790,271
0
0
0
2
I currently have a working python application, gui with wxpython. I send this application a folder which then gets processed by a command line application via Popen. Each time I run this application it take about 40 mins+ to process before it finishes. While a single job processes I would like to queue up another job, I don't want to submit multiple jobs at the same time, I want to submit one job, while it's processing I want to submit another job, so when the first one finishes it would then just process the next, and so on, but I am unsure of how to go about this and would appreciate some suggestions.
Python send jobs to queue processed by Popen
21,790,443
1
0
222
0
python,queue,popen
Presumably you either have a notification passed back to the GUI when the task has finished, or the GUI is checking the state of the task periodically. In either case you can allow the user to just add to a list of directories to be processed; when your popen task has finished, take the first one off the list and start a new popen task (remembering to remove the started one from the list).
0
1
0
0
2014-02-14T21:59:00.000
2
1.2
true
21,790,271
0
0
0
2
I currently have a working python application, gui with wxpython. I send this application a folder which then gets processed by a command line application via Popen. Each time I run this application it take about 40 mins+ to process before it finishes. While a single job processes I would like to queue up another job, I don't want to submit multiple jobs at the same time, I want to submit one job, while it's processing I want to submit another job, so when the first one finishes it would then just process the next, and so on, but I am unsure of how to go about this and would appreciate some suggestions.
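A minimal sketch of that pattern: the GUI thread appends folders to a queue and a single worker thread runs one job at a time, so new submissions simply wait behind the running one. The subprocess call here is a harmless stand-in (it runs the Python interpreter with an empty program) for the real 40-minute command line tool, and the folder paths are made up:

```python
import subprocess
import sys
import threading
try:
    import queue              # Python 3
except ImportError:
    import Queue as queue     # Python 2 (the wxPython era)

jobs = queue.Queue()
processed = []                # record of finished jobs, for illustration

def worker():
    while True:
        folder = jobs.get()
        if folder is None:    # sentinel: shut the worker down
            break
        # Stand-in for the real long-running tool; call() blocks until the
        # command finishes, which is exactly what serialises the jobs.
        subprocess.call([sys.executable, "-c", "pass"])
        processed.append(folder)
        jobs.task_done()

t = threading.Thread(target=worker)
t.daemon = True               # don't keep the GUI process alive on exit
t.start()

jobs.put("/data/job1")        # submitted while nothing is running
jobs.put("/data/job2")        # queued behind job1
jobs.join()                   # wait until both have been processed
jobs.put(None)
```

In the GUI version you would call jobs.put(folder) from the submit button's handler and skip the blocking jobs.join(), letting the worker drain the queue in the background.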
Python development on Mac OS X: pure Mac OS or linux in virtualbox
21,791,729
2
1
1,076
0
python,macos
I do all of my main development on OSX. I deploy on a linux box. Pycharm (CE) is your friend.
0
1
0
0
2014-02-14T23:57:00.000
2
0.197375
false
21,791,565
1
0
0
2
I'm new to Mac, and I have OS X 10.9.1. The main question is whether it is better to create a virtual machine with Linux and do port forwarding or set all packages directly to the Mac OS and work with it directly? If I create a virtual machine, I'm not sure how it will affect the health of SSD and ease of development. On the other hand, I also do not know how to affect the stability and performance of Mac OS installation packages directly into it. Surely there are some best practices, but I do not know them.
Python development on Mac OS X: pure Mac OS or linux in virtualbox
21,791,847
3
1
1,076
0
python,macos
On my Mac, I use Python and PyCharm and all the usual Unix tools, and I've always done just fine. Regard OS X as a Unix machine with a very nice GUI on top of it, because it basically is -- Mac OS X is POSIX-compliant, with BSD underpinnings. Why would you even consider doing VirtualBox'd Linux? Even if you don't want to relearn the hotkeys, PyCharm provides a non-OS X mapping, and in Terminal, CTRL and ALT work like you expect. If you're used to developing on Windows but interfacing with Unix machines through Cygwin, you'll be happy to use Terminal, which is a normal bash shell and has (or can easily get through Homebrew) all the tools you're used to. Plus the slashes go the right way and line endings don't need conversion. If you're used to developing on a Linux distro, you'll be happy with all the things that "just work" and let you move on with your life. So in answer to your question, do straight Mac OS X. Working in a virtualized Linux environment imparts a cost and gains you nothing.
0
1
0
0
2014-02-14T23:57:00.000
2
1.2
true
21,791,565
1
0
0
2
I'm new to Mac, and I have OS X 10.9.1. The main question is whether it is better to create a virtual machine with Linux and do port forwarding or set all packages directly to the Mac OS and work with it directly? If I create a virtual machine, I'm not sure how it will affect the health of SSD and ease of development. On the other hand, I also do not know how to affect the stability and performance of Mac OS installation packages directly into it. Surely there are some best practices, but I do not know them.
GAE Request Timeout when user uploads csv file and receives new csv file as response
21,802,072
2
0
79
0
python,google-app-engine
You have many options: Use a timer in your client to check periodically (i.e. every 15 seconds) if the file is ready. This is the simplest option that requires only a few lines of code. Use the Channel API. It's elegant, but it's an overkill unless you face similar problems frequently. Email the results to the user.
0
1
0
0
2014-02-15T17:05:00.000
2
0.197375
false
21,800,806
0
0
1
1
I have an app on GAE that takes csv input from a web form and stores it to a blob, does some stuff to obtain new information using input from the csv file, then uses csv.writer on self.response.out to write a new csv file and prompt the user to download it. It works well, but my problem is if it takes over 60 seconds it times out. I've tried to setup the do some stuff part as a task in task queue, and it would work, except I can't make the user wait while this is running, and there's no way of calling the post that would write out the new csv file automatically when the task queue is complete, and having the user periodically push a button to see if it is done is less than optimal. Is there a better solution to a problem like this other than using the task queue and having the user have to manually push a button periodically to see if the task is complete?
What is the difference between mod_wsgi and uwsgi?
21,814,847
3
6
5,775
0
python,apache,nginx,wsgi,uwsgi
They are just 2 different ways of running WSGI applications. Have you tried googling for mod_wsgi nginx? Any WSGI-compliant server exposes that entry point; that's what the WSGI specification requires. Yes — the uwsgi protocol is only how the uwsgi server communicates with Nginx. With mod_wsgi the Python code runs embedded inside the web server process; with uwsgi you run a separate application server that the web server proxies to.
0
1
0
1
2014-02-16T17:14:00.000
1
1.2
true
21,814,585
0
0
1
1
There seems to be mod_wsgi module in Apache and uwsgi module in Nginx. And there also seems to be the wsgi protocol and uwsgi protocol. I have the following questions. Are mod_wsgi and uwsgi just different implementations to provide WSGI capabilities to the Python web developer? Is there a mod_wsgi for Nginx? Does uwsgi also offer the application(environ, start_response) entry point to the developers? Is uwsgi also a separate protocol apart from wsgi? In this case, how is the uwsgi protocol different from the wsgi protocol?
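The application(environ, start_response) entry point asked about looks the same no matter which server loads it. A minimal WSGI callable, exercised here with the stdlib wsgiref testing helpers instead of a real server:

```python
from wsgiref.util import setup_testing_defaults

def application(environ, start_response):
    """The entry point every WSGI server -- mod_wsgi or uwsgi -- calls."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from a WSGI app\n"]

# Exercise it without any server, using wsgiref's testing helpers:
environ = {}
setup_testing_defaults(environ)

collected = {}
def start_response(status, headers):
    collected["status"] = status

body = b"".join(application(environ, start_response))
print(collected["status"], body)
```

mod_wsgi would import this module inside the web server process; uwsgi would load it in its own worker processes and speak the uwsgi protocol back to Nginx — the callable itself is unchanged.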
Pull files into script from two directories back
21,838,385
0
0
2,992
0
python,path
I would use your suggested method of os.chdir(r'..\..') to make sure your current working directory is in folder2. I'm not really sure what you're asking though, so maybe clarify why you think this ISN'T the right solution?
0
1
0
0
2014-02-17T20:03:00.000
2
0
false
21,838,287
1
0
0
1
I have a script that will pull files from two directories back, so the script resides at: /folder2/folder1/folder0/script.py and the files that will be processed will be in folder2. I can get back one level with "..//" (I'm making a Windows executable with cx_free) but I'm thinking this isn't the best way to do this. I am setting an input directory and an output directory. I want to keep the paths relative to the location of the script so that "folder2" can be moved without screwing up the functionality of the script or force rewriting of it. thanks
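A common alternative to relative strings like "..//" is to anchor all paths to the script's own location via __file__. A sketch — the input/output folder names are made up:

```python
import os

# Anchor everything to the script's own location, not the current working
# directory, so the tree under folder2 can move without breaking anything.
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
# two levels up: .../folder2/folder1/folder0 -> .../folder2
BASE_DIR = os.path.abspath(os.path.join(SCRIPT_DIR, os.pardir, os.pardir))

input_path = os.path.join(BASE_DIR, "input")    # hypothetical folder names
output_path = os.path.join(BASE_DIR, "output")
```

Note that when the script is frozen (e.g. with cx_Freeze), __file__ may not point where you expect; a common fallback in that case is os.path.dirname(sys.executable).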
OpenShift, Python Application run script every 10 min
21,893,287
3
1
1,892
0
python,openshift
You are looking for the add-on cartridge that is called cron. However, by default the cron cartridge only supports jobs that run every minute or every hour. You would have to write a job that runs minutely to determine if its a 10 minute interval and then execute your script. Make sense? rhc cartridge add cron -a yourAppName Then you will have a cron directory in application directory under .openshift for placing the cron job.
0
1
0
1
2014-02-19T19:54:00.000
2
1.2
true
21,890,973
0
0
0
1
How do I create a schedule on OpenShift hosting to run a python script that parses RSS feeds and sends filtered information to my email? Is this feature available? Please help, anyone who works with the free version of this hosting. I have a script that works fine, but I don't know how to run it every 10 min to catch freelance jobs. Or does anyone know a free hosting with python that can schedule scripts?
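The "run minutely, act every 10 minutes" trick from the answer only needs a small guard in the script the cron cartridge fires each minute. A sketch; the actual RSS/email work is left as a placeholder:

```python
from datetime import datetime

def should_run(now=None):
    """True on the minutes (0, 10, 20, ...) when the real work should fire."""
    if now is None:
        now = datetime.now()
    return now.minute % 10 == 0

if should_run():
    pass  # parse the RSS feeds and send the filtered email here
```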
Using both Python 2 and 3 in Vim (on Windows)
21,903,485
1
1
1,174
0
python,vim
Vim's Python integration (i.e. the :python[3] commands that most plugins use) does not depend on the python interpreter binary (from PATH); instead, Vim must have been compiled with the Python library(-ies), which you can check in the :version output (look for +python, and the -DDYNAMIC_PYTHON_DLL=...). To be able to use both Python versions, you need both +python/dyn and +python3/dyn, and the corresponding DLLs accessible. You can check with the :py / :py3 commands.
0
1
0
0
2014-02-20T09:18:00.000
2
1.2
true
21,903,246
1
0
0
1
I'm using Vim and lots of Vim plugins, on a Windows machine. Some of these plugins use Python 2, and some use Python 3. I can use only one in the system %PATH% environment variable, how can I overcome this limitation?
Install "elasticsearch" instead of "pyelasticsearch"
21,909,665
1
2
2,082
0
python,django,elasticsearch,django-haystack
I used haystack in my last project. I checked my virtualenv and I have only 'pyelasticsearch==0.5'. Keep in mind that documentation can be outdated.
0
1
0
0
2014-02-20T13:31:00.000
2
1.2
true
21,909,346
1
0
0
1
How can I install official elasticsearch binding for python instead of pyelasticsearch? Haystack documentation says: You’ll also need an Elasticsearch binding: elasticsearch-py (NOT pyes). Place elasticsearch somewhere on your PYTHONPATH (usually python setup.py install or pip install elasticsearch). But when I install elasticsearch with pip, haystack still asks for pyelasticsearch.
Reading values over ssh in python
21,923,164
0
1
1,205
0
python
If you can put your own programs or scripts on the remote machine there are a couple of things you can do: Write a script on the remote machine that outputs just what you want, and execute that over ssh. Use ssh to tunnel a port on the other machine and communicate with a server on the remote machine which will respond to requests for information with the data you want over a socket.
0
1
0
1
2014-02-21T00:56:00.000
4
0
false
21,923,046
0
0
0
1
I would like to be able to gather the values for number of CPUs on a server and stuff like storage space etc and assign them to local variables in a python script. I have paramiko set up, so I can SSH to remote Linux nodes and run arbitrary commands on them, and then have the output returned to the script. However, many commands are very verbose "such as df -h", when all I want to assign is a single integer or value. For the case of number of CPUs, there is Python functionality such as through the psutil module to get this value. Such as 'psutil.NUM_CPUS' which returns an integer. However, while I can run this locally, I can't exactly execute it on remote nodes as they don't have the python environment configured. I am wondering how common it is to manually parse output of linux commands (such as df -h etc) and then grab an integer from it (similar to how bash has a "cut" function). Or whether it is somehow better to set up an environment on each remote server (or a better way).
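Parsing command output by hand is very common when you can't install a Python environment on the remote side. A sketch of the run-a-command-and-keep-one-value pattern; run_remote here is a local stub standing in for paramiko's ssh.exec_command (its canned outputs are made up), so only the parsing is real:

```python
# run_remote is a local stub for paramiko's exec_command, so the parsing can
# be exercised without an SSH connection. A real version would be roughly:
#   stdin, stdout, stderr = ssh.exec_command(command); output = stdout.read()

def run_remote(command):
    fake_output = {
        "nproc": "8\n",
        "df -k /": ("Filesystem 1K-blocks Used Available Use% Mounted on\n"
                    "/dev/sda1 100 50 50 50% /\n"),
    }
    return fake_output[command]

def cpu_count():
    # `nproc` prints a single integer, so no cutting is needed
    return int(run_remote("nproc").strip())

def free_kb(path):
    # keep only the "Available" column of the last df line (like cut/awk)
    last_line = run_remote("df -k " + path).splitlines()[-1]
    return int(last_line.split()[3])

print(cpu_count(), free_kb("/"))  # -> 8 50
```

Picking single-value commands (nproc rather than the verbose df -h) keeps the parsing down to a strip() or a split().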
Use python 2 module in python 3 in mac OS
21,923,496
1
0
333
0
python,macos,python-2.7,python-3.x
In general, no, you can't do that easily. Just bite the bullet and install new copies of the modules you need for your Python 3 installation. Remember to first install a new copy of pip (or, if you must, easy_install) using your Python 3.3 and use it to install the modules you need for Python 3. One of the reasons you can't is that for many packages that support both Python 2 and 3 by using 2to3 require the source distribution to do so. The resultant Python 2 installed distribution will not necessarily have everything needed to produce a new Python 3 installation.
0
1
0
0
2014-02-21T01:37:00.000
1
1.2
true
21,923,479
1
0
0
1
I would like to use installed Python 2 modules in Python 3. One step would be to add to the PythonPath3 the directories where the Python2 modules are installed. Of course this would work only if the modules are coded for Python3 compatibility. Is there a way that I can import modules in Python3 and have them automatically converted (using 2to3) to usable Python3 code? Specs: Mac OS 10.9.1 Python2 = python 2.7.6 Python3 = python 3.3.3