Dataset columns (dtype, observed min to max):
Title: string, lengths 15 to 150
A_Id: int64, 2.98k to 72.4M
Users Score: int64, -17 to 470
Q_Score: int64, 0 to 5.69k
ViewCount: int64, 18 to 4.06M
Database and SQL: int64, 0 to 1
Tags: string, lengths 6 to 105
Answer: string, lengths 11 to 6.38k
GUI and Desktop Applications: int64, 0 to 1
System Administration and DevOps: int64, 1 to 1
Networking and APIs: int64, 0 to 1
Other: int64, 0 to 1
CreationDate: string, lengths 23 to 23
AnswerCount: int64, 1 to 64
Score: float64, -1 to 1.2
is_accepted: bool, 2 classes
Q_Id: int64, 1.85k to 44.1M
Python Basics and Environment: int64, 0 to 1
Data Science and Machine Learning: int64, 0 to 1
Web Development: int64, 0 to 1
Available Count: int64, 1 to 17
Question: string, lengths 41 to 29k
How to generate path of a directory in python
23,785,492
0
0
1,068
0
python,path,operating-system,listdir
I think you are asking how to get the relative path instead of the absolute one. An absolute path looks like "/home/workspace"; a relative path looks like "./../workspace". You should construct the path from the directory where your script is (/home/workspace/tests) to the directory that you want to access (/home/workspace), which in this case means going one step up in the directory tree. You can get this by executing: os.path.dirname(os.path.dirname(os.path.abspath(__file__))). The same result may be achieved by going two steps up and one step down into the workspace dir: os.path.normpath(os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "..", "workspace")). In this manner you can actually access any directory without knowing its absolute path, only where it resides relative to your executed file.
0
1
0
1
2014-05-21T12:28:00.000
2
0
false
23,783,185
0
0
0
1
I have a file abc.py under the workspace dir. I am using os.listdir('/home/workspace/tests') in abc.py to list all the files (test1.py, test2.py, ...). I want to generate the path '/home/workspace/tests' or even '/home/workspace' instead of hardcoding it. I tried os.getcwd() and os.path.dirname(os.path.abspath(__file__)), but these instead generate the path where the test script is being run. How to go about it?
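The approach in the answer above can be sketched as a small, self-contained helper; parent_of and the literal sample path are illustrative stand-ins (in abc.py you would pass __file__):

```python
import os

# A minimal sketch of deriving a directory's parent without hardcoding it.
def parent_of(path):
    """Return the directory one level above the given file's directory."""
    return os.path.dirname(os.path.dirname(os.path.abspath(path)))

# With the layout from the question (/home/workspace/tests/abc.py):
print(parent_of("/home/workspace/tests/abc.py"))  # → /home/workspace
```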
Programmatically launch and interact with python virtual machine
30,846,061
1
3
923
0
python,vm-implementation
It depends on the Python implementation, such as PyPy or Jython. In CPython, you have to use a separate process if you want an independent interpreter; otherwise, at the very least, the GIL is shared. The multiprocessing and concurrent.futures modules allow you to run arbitrary Python code in separate processes and to communicate with the parent easily.
0
1
0
0
2014-05-22T12:55:00.000
2
0.099668
false
23,807,459
1
0
0
1
Does anyone know how to launch a new python virtual machine from inside a python script, and then interact with it to execute code in a completely separate object space? In addition to code execution, I'd like to be able to access the objects and namespace on this virtual machine, look at exception information, etc. I'm looking for something similar to python's InteractiveInterpreter (in the code module), but as far as I've been able to see, even if you provide a separate namespace for the interpreter to run in (through the locals parameter), it still shares the same object space with the script that launched it. For instance, if I change an attribute of the sys module from inside InteractiveInterpreter, the change takes effect in the script as well. I want to completely isolate the two, just as if I were running two different instances of the python interpreter to run two different scripts on the same machine. I know I can use subprocess to actually launch python in a separate process, but I haven't found any good way to interact with it the way I want. I imagine I could probably invoke it with '-i' and push code to it through its stdin stream, but I don't think I can get access to its objects at all.
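The answer's suggestion (only a separate process gives a fully separate object space in CPython) can be sketched with the multiprocessing module; the code string and the queue-based result protocol here are illustrative choices, not from the original post:

```python
import multiprocessing

def run_isolated(code, queue):
    # Executes arbitrary code in this (child) process; any changes to
    # modules or globals here do not affect the parent interpreter.
    namespace = {}
    exec(code, namespace)
    queue.put(namespace.get("result"))

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=run_isolated, args=("result = 2 + 2", q))
    p.start()
    p.join()
    print(q.get())  # → 4
```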
Does cron.yaml support conditions?
23,835,748
2
0
134
0
python,google-app-engine,python-2.7,cron
As far as I know, that isn't possible. The cron.yaml file is only for defining the jobs, not for code. I'd recommend putting your logic inside the job that you're calling, as you mentioned. Hope this helps.
0
1
0
0
2014-05-23T15:57:00.000
1
1.2
true
23,833,693
0
0
1
1
Is it possible to have conditions (if ... else ...) in GAE cron.yaml? For example, to have something like: if app_identity.get_application_id() == 'my-appid' then run the job. I understand that I can probably get the same result by implementing it in the job handler; I'm just interested in whether it could be done within cron.yaml.
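The handler-side workaround the answer suggests can be sketched like this; run_job_if and its arguments are hypothetical stand-ins (in a real GAE handler the check would call google.appengine.api.app_identity.get_application_id()):

```python
# cron.yaml cannot express conditions, so the condition lives in the job itself.
def run_job_if(current_app_id, wanted_app_id='my-appid', job=lambda: 'ran'):
    """Run the cron job's work only when the app id matches."""
    if current_app_id != wanted_app_id:
        return 'skipped'
    return job()

print(run_job_if('my-appid'))    # → ran
print(run_job_if('other-app'))   # → skipped
```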
running python tests via teamcity: Error: Source not found
23,838,949
1
1
1,992
0
python,continuous-integration,teamcity,virtualenv
OK, the solution seems to be: implement custom bash scripts, e.g. tests.sh, and create a build step which executes these files with bash tests.sh.
0
1
0
0
2014-05-23T19:58:00.000
3
1.2
true
23,837,487
1
0
0
2
I want to use teamcity as CI for my python project. My project uses virtualenv to store project-related dependencies, so I created a venv folder under the project root and put the environment-related stuff there. But when I try to create a build step with source venv/bin/activate as a custom script, it fails with source: not found. If I create this step as a command line with an executable file instead, putting source as the file and venv/bin/activate as the parameter, it fails with Cannot run process source venv/bin/activate: file not found. How do I solve this?
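The likely root cause behind both answers to this question: source is a bash built-in that plain /bin/sh (which TeamCity command-line steps may invoke) does not provide; the portable POSIX spelling is ".". A sketch, using a fake activate file as a stand-in for venv/bin/activate:

```shell
#!/bin/sh
# "source venv/bin/activate" fails under plain sh with "source: not found";
# the portable spelling is ". venv/bin/activate".
echo 'ACTIVATED=yes' > /tmp/fake_activate   # stand-in for venv/bin/activate
. /tmp/fake_activate
echo "$ACTIVATED"   # → yes
```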
running python tests via teamcity: Error: Source not found
62,431,055
0
1
1,992
0
python,continuous-integration,teamcity,virtualenv
Actually I solved it by adding #!/bin/sh at the beginning. :) Thank you for your answers as well.
0
1
0
0
2014-05-23T19:58:00.000
3
0
false
23,837,487
1
0
0
2
I want to use teamcity as CI for my python project. My project uses virtualenv to store project-related dependencies, so I created a venv folder under the project root and put the environment-related stuff there. But when I try to create a build step with source venv/bin/activate as a custom script, it fails with source: not found. If I create this step as a command line with an executable file instead, putting source as the file and venv/bin/activate as the parameter, it fails with Cannot run process source venv/bin/activate: file not found. How do I solve this?
Python - Impersonate currently logged user (from system user)
23,874,294
0
0
881
0
python,winapi,impersonation,python-2.6
Your script must be running as a service. I believe that since Windows Vista you must have a separate app in the user's session to show a GUI.
0
1
0
0
2014-05-26T15:49:00.000
1
0
false
23,873,744
0
0
0
1
I have to write a Python script on Windows (using the win32 API), executed with system privileges. I need to impersonate the currently logged-in user to show him a popup (because the system user can't). So I'm searching for a way to do this. I found this method: win32security.ImpersonateLoggedOnUser, but it requires a handle that can be obtained with win32security.LogonUser. The latter method, however, requires the user's password, which I don't have. Is there a way to get this handle (or another way to impersonate the currently logged-in user, or another way to show a popup from the system user) without the user's password? I am the system user, so I have full privileges on the machine... Thanks a lot, and sorry for my bad English! Federico
is it possible to install python on windows RT?
23,879,072
0
0
896
0
python
Portable Python and Python(x,y) both work on RT.
0
1
0
0
2014-05-26T23:06:00.000
1
0
false
23,878,794
1
0
0
1
I would like to install and run Python on Windows RT. Is it possible? I have tried python.org but it doesn't seem to have a specific version for it. I wonder whether there is anything I could use instead?
Python wsgi OSError: [Errno 10] No child process
23,920,537
1
0
1,426
0
python,linux,mod-wsgi,wsgi
OSError [Errno 10] No child processes can mean the program ran but took too much memory and died. Starting jobs within Apache is fine. Running as root is a bit sketchy, but isn't that big of a deal. Note that the 'root' account's setup, like PATH, might be different from your account's; this would explain why it runs from the shell but not from Apache. In your program, log the current directory. If the script requires a certain module in a certain location, that could cause weird problems. Also, 'root' tends not to have the current directory (i.e. ".") on sys.path.
0
1
0
0
2014-05-28T13:56:00.000
1
0.197375
false
23,913,689
0
0
1
1
I have a python wsgi script that is attempting to make a call to generate an openssl script. Using subprocess.check_call(args), the process throws an OSError [Errno 10] No child processes. The owner of the openssl bin is root:root. Could this be the problem? Or does apache not allow child processes? Using just subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE) seems to work fine; I just want to wait and make sure the process finishes before moving on. communicate() and wait() both fail with the same error. Running it outside of wsgi, the code works fine. This is python 2.6, btw.
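For reference, the wait-and-capture pattern the asker wants looks like this in a normal shell environment (echo here is a stand-in for the openssl invocation; the Errno 10 in the question is specific to the mod_wsgi environment and is not reproduced by this sketch):

```python
import subprocess

# Run a command, wait for it to finish, and capture its output.
proc = subprocess.Popen(
    ["echo", "hello"],                # stand-in for the openssl command
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
out, err = proc.communicate()         # blocks until the child exits
print(out.decode().strip())           # → hello
print(proc.returncode)                # → 0
```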
How to run GUI Python script on Apache?
23,967,560
1
2
417
0
python,apache,web-deployment
It depends on how the GUI is written, what abc.exe does, and how you want to use the web interface. In general, what you want is not possible. While a local application has only one user, and it is clear when the user terminates the program, a web application can have millions of users at the same time, and when the application doesn't hear anything from a user it is not clear whether the user closed the window, the network connection broke, or anything else. That's why web applications are as far as possible stateless, or write session information to databases. This is not the case for local applications, so you will probably have to rewrite large parts of the C code.
0
1
0
1
2014-05-31T06:44:00.000
1
1.2
true
23,967,242
0
0
0
1
I wrote a program in C and designed its GUI using Python. Now I want to convert it to a web app. I have a GUI.py and an abc.exe file. Can I directly execute the GUI Python script (GUI.py) on an 'Apache2' local server? If yes, then how?
File is locked by Python debugger
23,990,975
0
0
79
0
python,debugging,file-io,lsof
I never worked with Mac OS, but I could imagine this: maybe Python locks the file on open, and the hex editor fails if you try to open it afterwards. As for the system hanging and getting slow (even after killing all processes): I think that's some kind of caching which fills up your memory until your computer starts using the hard disk as memory (and turns really slow). I think you should find out how files are opened with Python on Mac OS (whether there is some kind of lock), and you should make sure this large file is never stored completely in memory (there are different methods for reading large files in chunks). Greetings, Kuishi. PS: I apologize for my English; it isn't my native language.
0
1
0
0
2014-06-02T05:37:00.000
1
0
false
23,987,917
0
0
0
1
I have a problem with understanding a strange file-locking behavior in the Python debugger. I have a 2TB image file which my script reads. Everything works perfectly until I want to read the same file with a different hex editor. If the file is opened in the hex editor before I start my script, everything is fine. If I try to open the file while the script is paused at a breakpoint, my system almost hangs and becomes very slow. I normally can kill Python and the hex editor from the terminal, but it is very slow and takes up to 10 minutes. The same problem appears AFTER I stop the script, and even after extensively killing all Python instances. The disk where this image is situated remains locked and it's not possible to unmount it (only with the diskutil force command); the system hangs if I try to open the file anywhere else. Also, I can't start scripts one after another; the next script just stops working and hangs my system. I have to wait up to 10 minutes to be able to work with the file again. I tried to find the process which locks the file with the "sudo lsof +D" command, but it doesn't list anything. Here are some more details: — My system is Mac OS X 10.9. Python is 3.4. I use Eclipse with PyDev to develop the script. — I use open('image.dmg', mode='rb') to open the file in python and close() to close it. — The file is a 2TB disk image on an external ExFAT-formatted drive. Other files don't have such problems. The file is write-protected in Finder settings. Can anyone point me in the proper direction to locate the source of this problem?
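The answer's advice (never hold the whole 2TB file in memory, and close it deterministically) can be sketched like this; checksum is an illustrative stand-in for whatever processing the asker's script does:

```python
CHUNK = 1024 * 1024  # 1 MiB per read; the size is an arbitrary choice

def checksum(path):
    """Process a large file in fixed-size chunks, never all at once."""
    total = 0
    with open(path, mode='rb') as f:  # "with" guarantees close(), even on errors
        while True:
            block = f.read(CHUNK)
            if not block:
                break
            total = (total + sum(block)) % 2**32
    return total
```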
Hadoop streaming jobs SUCCEEDED but killed by ApplicationMaster
25,421,497
0
13
3,350
0
python,hadoop
As far as I know, the same task is run on many nodes. As soon as one node returns the result, the tasks on the other nodes are killed. That's why the job SUCCEEDED but individual tasks are in the KILLED state.
0
1
0
0
2014-06-02T11:39:00.000
3
0
false
23,993,638
0
0
0
1
I just finished setting up a small hadoop cluster (using 3 ubuntu machines and apache hadoop 2.2.0) and am now trying to run python streaming jobs. Running a test job I encounter the following problem: Almost all map tasks are marked as successful but with note saying Container killed. On the online interface the log for the map jobs says: Progress 100.00 State SUCCEEDED but under Note it says for almost every attempt (~200) Container killed by the ApplicationMaster. or Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 In the log file associated with the attempt I can see a log saying Task 'attempt_xxxxxxxxx_0' done. I also get 3 attempts with the same log, only those 3 have State KILLED which are under killed jobs. stderr output is empty for all jobs/attempts. When looking at the application master log and following one of the successful (but killed) attempts I find the following logs: Transitioned from NEW to UNASSIGNED Transitioned from UNASSIGNED to ASSIGNED several progress updates, including: 1.0 Done acknowledgement RUNNING to SUCCESS_CONTAINER_CLEANUP CONTAINER_REMOTE_CLEANUP KILLING attempt_xxxx Transitioned from SUCCESS_CONTAINER_CLEANUP to SUCCEEDED Task Transitioned from RUNNING to SUCCEEDED All the attempts are numbered xxxx_0 so I assume they are not killed as a result of speculative execution. Should I be worried about this? And what causes the containers to be killed? Any suggestions would be greatly appreciated!
How can I manage to install packages within python-dev on windows systems just like that on Linux?
23,996,033
1
0
7,578
0
python,windows,pip
python-dev installs the headers and libraries on Linux, allowing you to write and compile extensions. On Windows, the standard installer provides this by default. The header files are in the $installroot\include directory, and the link libraries are in $installroot\libs. For example, I've installed python 2.7 into c:\python27, which means my include files are located in c:\python27\include and the link libraries are in c:\python27\libs.
0
1
0
0
2014-06-02T12:58:00.000
1
1.2
true
23,995,129
1
0
0
1
Actually, we can simply install python-dev on Linux with the following command: sudo apt-get install python-dev, since python is integrated with Linux. But no such luck on Windows: for "pip install python-dev" there is no corresponding package on pypi.python.org. What packages should I install on Windows to match those installed by the "apt-get install python-dev" command under Linux?
multimech is not recognized on Windows
23,999,339
0
1
79
0
python
I just solved the issue with these steps: from Python26/Scripts/ I ran the command easy_install pip, then I ran pip install -U multi-mechanize, and it works...
0
1
0
0
2014-06-02T16:10:00.000
1
0
false
23,998,849
0
0
0
1
I just followed the steps to install multimechanize on Windows. I didn't get errors during the installation; I tried with python 2.7 and 2.6... but I'm getting the following error when I try to create a new project: C:\multi-mechanize-1.2.0>multimech-newproject msilesMultimech 'multimech-newproject' is not recognized as an internal or external command, operable program or batch file. Is there something else that I need to do or install?
Python: Installing OmniOrbpy in Windows64(Windows 7) environment
24,046,108
3
2
801
0
python,corba,omniorb
With help from Duncan Grisby: The version of omniORBpy must match the Win32/Win64 status of your environment. Copy the distribution to a directory (I used python27/lib/site-packages/omniORB). Add to or create a PYTHONPATH environment variable that points to ../omniORB/lib/python and ../omniORB/lib/x86_win32. Merge the contents of sample.reg into your Windows Registry. Add an explicit PATH environment entry for ../omniORB/bin/x86_win32. Please note that omniORB is case-sensitive about the paths, even though Windows is not.
0
1
0
0
2014-06-02T16:19:00.000
1
1.2
true
23,999,013
1
0
0
1
I'd like to experiment with a Python (v2.7) app accessing a CORBA API, but I keep going around in circles about which omniORB pieces are necessary and where they should be placed. I've downloaded omniORBpy-4.2.0-win64-py27, which I thought should contain the bits I needed. Is it as simple as adding the files in the bin\x86_win32 directory into my Python lib\site-packages directory? I've found conflicting information about using the PYTHONPATH environment variable (I don't have one now); is it necessary?
How can I find the tcp address for another computer to transfer over ethernet?
24,155,570
0
1
223
0
python,network-programming,zeromq,ethernet,pyzmq
Maybe you could periodically send a datagram message containing the peer's IP address (or some other useful information) to the broadcast address, to allow other peers to discover it. And after the peer's address is discovered, you can establish a connection via ZeroMQ or another kind of... connection. :)
0
1
1
0
2014-06-03T15:40:00.000
2
0
false
24,019,331
0
0
0
1
I need to transfer data via pyzmq between two computers connected by an ethernet cable. I have already set up a script that runs correctly on a single computer, but I need to find the TCP address of the other computer in order to communicate. They both run Ubuntu 14.04. One of them should be a server processing requests while the other sends requests. How do I transfer data over TCP through ethernet? I simply need a way to find the address. EDIT (clarification): I am running a behavioural study. I have a program called OpenSesame which runs in python and takes python scripts. I need a participant to be able to sit at a computer and ask another person questions (specifically for help in a task). I need a server (preferably using pyzmq) connected by ethernet to communicate with that computer. I wrote a script; it works on the same computer, but not over ethernet. I need to find the address.
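The broadcast-discovery idea from the answer can be sketched with stdlib sockets; the port number and the bare-IP message format are arbitrary choices. Once the peer's IP is known, the existing pyzmq script can connect to tcp://<that-ip>:<port>:

```python
import socket

DISCOVERY_PORT = 9999  # arbitrary; must be free on both machines

def announce(my_ip):
    """Broadcast this machine's IP so peers on the LAN can discover it."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(my_ip.encode(), ("<broadcast>", DISCOVERY_PORT))
    s.close()

def listen(timeout=5.0):
    """Wait for one announcement and return the advertised IP string."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.bind(("", DISCOVERY_PORT))
    data, _addr = s.recvfrom(1024)
    s.close()
    return data.decode()
```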
How does protobuf-net serialize DateTime?
24,038,019
11
8
3,765
0
c#,python,protocol-buffers,protobuf-net
DateTime is spoofed via a multi-field message that is not trivial, but not impossible, to understand. In hindsight, I wish I had done it a different way, but it is what it is. The definition is available in bcl.proto in the protobuf-net project. However! If you are targeting multiple platforms, I strongly recommend you simply use a long etc. in your DTO model, representing some time granularity from some epoch (seconds or milliseconds since 1970, for example).
0
1
0
0
2014-06-04T11:33:00.000
1
1.2
true
24,036,291
0
0
0
1
I'm working on a project consisting of a client and a server. The client is written in Python (it will run on linux) and the server in C#. I'm communicating through standard sockets and using protobuf-net for the protocol definition. However, I'm wondering how protobuf-net would handle DateTime serialization. Unix datetime differs from the standard .NET DateTime, so how should I handle this situation? Thanks
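The answer's recommendation (exchange a long holding milliseconds since 1970 rather than a DateTime) looks like this on the Python side; the C# side would do the mirror-image arithmetic against the same epoch:

```python
from datetime import datetime, timedelta

EPOCH = datetime(1970, 1, 1)  # Unix epoch, treated as UTC

def to_millis(dt):
    """DateTime -> milliseconds since the epoch (a plain int64 on the wire)."""
    return int((dt - EPOCH).total_seconds() * 1000)

def from_millis(ms):
    """Milliseconds since the epoch -> DateTime."""
    return EPOCH + timedelta(milliseconds=ms)

print(to_millis(datetime(2014, 6, 4)))  # → 1401840000000
```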
How do you test the consistency models and race conditions on Google App Engine / NDB?
32,898,743
1
1
261
0
python,google-app-engine,app-engine-ndb
I am answering this over a year after it was asked. The only way to test these sorts of things is by deploying an app on GAE. What I sometimes do when I run across these challenges is to just "whip up" a quick application that is tailor-made to test the scenario under consideration. And then, as you put it, you just have to 'script' the doing of stuff using some combination of tasks, cron, and client-side curl-type operations. The particular tradeoff in the original question is write throughput versus consistency. This is actually pretty straightforward once you get the hang of it: a strongly consistent query requires that the entities are in the same entity group, and, at the same time, there is the constraint that a given entity group may only sustain approximately 1 write per second. So you have to look at your needs and usage pattern to figure out whether you can use an entity group.
0
1
0
0
2014-06-05T01:20:00.000
4
0.049958
false
24,050,155
0
0
1
3
Setup: Python, NDB, the GAE datastore. I'm trying to make sure I understand some of the constraints around my data model and its tradeoffs for consistency and max write rate. Is there any way to test/benchmark this on staging or on my dev box, or should I just bite the bullet, push to prod on a shadow site of sorts, and write a bunch of scripts?
How do you test the consistency models and race conditions on Google App Engine / NDB?
24,050,936
1
1
261
0
python,google-app-engine,app-engine-ndb
I am not sure it can be tested; the inconsistencies are themselves inconsistent. I think you just have to know that datastore operations have inconsistencies and code around them. You don't want to plan on observations from your tests being dependable in the future.
0
1
0
0
2014-06-05T01:20:00.000
4
0.049958
false
24,050,155
0
0
1
3
Setup: Python, NDB, the GAE datastore. I'm trying to make sure I understand some of the constraints around my data model and its tradeoffs for consistency and max write rate. Is there any way to test/benchmark this on staging or on my dev box, or should I just bite the bullet, push to prod on a shadow site of sorts, and write a bunch of scripts?
How do you test the consistency models and race conditions on Google App Engine / NDB?
24,050,360
2
1
261
0
python,google-app-engine,app-engine-ndb
You really need to do testing in the real environment; at best, the dev environment is an approximation of production. You certainly can't draw any conclusions about performance by just using the SDK. In many cases the SDK is both faster (startup times) and slower (queries on large datasets), and eventual consistency is emulated, not 100% the same as in production.
0
1
0
0
2014-06-05T01:20:00.000
4
0.099668
false
24,050,155
0
0
1
3
Setup: Python, NDB, the GAE datastore. I'm trying to make sure I understand some of the constraints around my data model and its tradeoffs for consistency and max write rate. Is there any way to test/benchmark this on staging or on my dev box, or should I just bite the bullet, push to prod on a shadow site of sorts, and write a bunch of scripts?
How to play a sound onto an input stream
24,119,286
-1
0
3,682
0
python,linux,audio,alsa,pulseaudio
I guess it depends on what you would like to do with it after you have got it "into" python. I would definitely look at the scikits.audiolab library. That's what you might use if you wanted to draw up spectrograms of whatever sound you are trying to process (I'm guessing that's what you want to do?).
0
1
0
0
2014-06-06T05:19:00.000
2
-0.099668
false
24,074,684
0
0
0
1
I was wondering if it is possible to play a sound directly into an input from python. I am using linux, and with it OSS, ALSA, and PulseAudio.
Google App Engine Check Success of backup programmatically
24,088,125
1
0
69
0
python,google-app-engine
Unfortunately, there is not currently a well-supported way to do this. However, with the disclaimer that this is likely to break at some point in the future, as it depends on internal implementation details: you can fetch the relevant _AE_Backup_Information and _AE_DatastoreAdmin_Operation entities from your datastore and inspect them for information regarding the backup. In particular, _AE_DatastoreAdmin_Operation has the fields active_jobs, completed_jobs, and status.
0
1
0
0
2014-06-06T08:05:00.000
1
1.2
true
24,077,041
0
0
1
1
I am taking a backup of the datastore using task queues. I want to check whether the backup has completed successfully or not. I can detect the end of the backup job by checking the task queue, but how can I check whether the backup was successful or failed due to some errors?
Python, How to check windows application progress
24,079,238
1
1
81
0
python,windows
I guess a workaround for this would be to check the process's memory usage with a shell command. If the specific process does not matter, you could run a shell command and get the general system memory status. But this will only work for memory-hungry processes.
0
1
0
0
2014-06-06T08:27:00.000
1
1.2
true
24,077,365
0
0
0
1
In python, is there any way to check the progress of another Windows application? That is to say, for example, downloading a file in Chrome or converting a file in HandBrake: is there any way to get the current status of these processes? Specifically, I want my script to wait until another program finishes a conversion, then continue.
Google App Engine SDK Fatal Error
24,108,384
0
0
194
0
python,google-app-engine
App Engine does not support Python 3.x. Do you still have 2.x installed? Go to Google App Engine Launcher > Preferences, and make sure you have the proper Python Path to your 2.x version. It should be something like "/usr/bin/python2.7". From Terminal, type whereis python to help find it. If you know you were using version 2.7, try: whereis python2.7
0
1
0
0
2014-06-08T16:27:00.000
1
0
false
24,108,241
0
0
1
1
I installed Python 3.4 on my Mac (OS 10.9.3), and my command for running Google App Engine from the terminal via /usr/local/dev_appengine stopped working. I then (stupidly) did some rather arbitrary things from online help forums, and now Google App Engine itself has stopped working as well. When I open it, it says: Sorry, pieces of GoogleAppEngineLauncher.app appear missing or corrupted, or I can't run python2.5 properly. Output was: I have tried deleting the application and all related files and reinstalling, but nothing has worked for me. It now fails to make the command symlinks as well, so when I try to run from the terminal I get /usr/local/bin/dev_appserver.py: No such file or directory.
Long running task scalability EC2
24,115,986
0
0
63
0
python,amazon-web-services,amazon-ec2,flask,scalability
Autoscaling is tailor-made for situations like these. You could run an initial diagnostic to see what the CPU usage usually is when a single server is running its maximum allowable tasks (let's say it's above X%). You can then set up an autoscaling rule to spin up more instances once this threshold is crossed; your rule could ensure a new instance is created every time an instance crosses X%. Further, you can also add a rule to scale down (setting the minimum instances to 1) based on a similar usage threshold.
0
1
0
1
2014-06-09T04:15:00.000
1
0
false
24,113,602
0
0
1
1
There is a long-running task (20m to 50m) which is invoked by an HTTP call to a webserver. Since this task is compute-intensive, the webserver cannot take on more than 4-5 tasks in parallel (on m3.medium). How can this be scaled? Can the auto-scaling feature of EC2 be used in this scenario? Are there any other frameworks available which can help with scaling up and down, preferably on AWS EC2?
Launch Python script in Git Bash
24,123,328
0
3
28,190
0
python,git
At the top of your python file add #!/usr/bin/python. Then you can rename the file (mv myScript.py myScript) and run chmod 755 myScript. This makes it so you can run the file with ./myScript. Look into adding the file's directory to your path, or linking the file into the path, if you want to be able to run it from anywhere.
0
1
0
1
2014-06-09T15:05:00.000
2
0
false
24,123,128
0
0
0
1
I'm writing git commands through a Python script (on Windows). When I double-click on myScript.py, the commands are launched in the Windows Command Prompt. I would like to execute them in Git Bash. Any idea how to do that without opening Git Bash and writing python myScript.py?
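The answer's steps, written out as Git Bash commands. The script body is illustrative, and the shebang here uses env python3 (a portable modern spelling); the answer's original #!/usr/bin/python fits a Python 2 install:

```shell
# Create a script with a shebang, strip the extension, mark it executable.
printf '#!/usr/bin/env python3\nprint("hello")\n' > myScript
chmod 755 myScript
./myScript   # the shebang picks the interpreter, so no "python myScript" needed
```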
Custom domain routing to Flask server with custom domain always showing in address bar
24,125,918
5
6
2,809
0
python,web-services,dns,flask,tornado
I managed to solve it by myself, but I'll add this as an answer since evidently someone thought it was a worthwhile question. It turns out it was just me not understanding how DNS works and what the difference between DNS and domain forwarding is. At most domain hosts you can configure "domain forwarding", which sounds like precisely what you need but is NOT. Rather, for the simple use case above, I went into the DNS zone records in the options and created a DNS zone record of type A that points xyz.com to a.b.c.d. The change does not seem to have propagated entirely yet, but on some devices I can already see it working exactly how I want it to, so I will consider this issue resolved.
0
1
0
0
2014-06-09T15:19:00.000
1
1.2
true
24,123,389
0
0
1
1
I have a small home-server running Flask set up at IP a.b.c.d. I also have a domain name xyz.com. Now I would like it so that when going to xyz.com, the user is served the content from a.b.c.d, with xyz.com still showing in the address bar. Similarly, when going to xyz.com/foo the content from a.b.c.d/foo should be shown, with xyz.com/foo showing in the address bar. I have path forwarding activated at my domain name provider, so xyz.com/foo is correctly forwarded to a.b.c.d/foo, but when going there a.b.c.d/foo is shown in the address bar. I'm currently running tornado, but I can switch to another server if it is necessary. Is it possible to set up this kind of solution? Or is my only option to buy some kind of hosting?
Google Appengine runs Cron tasks even if there is no cron.yaml file defined
49,346,121
1
1
645
0
google-app-engine,python-2.7,cron
The tip from @Greg above solved the problem for me. Note the full sequence of events: A past version of the application included a cron.yaml file that ran every hour. In a later version, I removed the cron.yaml file, thinking that was enough, but then I discovered that the cron jobs were still running! Uploading an EMPTY cron.yaml file didn't change things either. Uploading a cron.yaml file with only "cron:" in it did it: the cron jobs just stopped. From the above I reckon that things work like this: when a cron.yaml file is found, it is parsed, and if the syntax is correct its cron jobs are loaded in the app server. Apparently, just removing the cron.yaml file in a later version, or uploading an empty one (which is un-parseable, bad syntax), doesn't remove the cron jobs. The only way to remove the cron jobs is to upload a new, PARSEABLE cron.yaml file, i.e. one with the "cron:" line but no actual jobs after it. And the proof of the pudding is that after you do that, you can remove cron.yaml from later versions and those old cron jobs will not come back any more.
0
1
0
0
2014-06-10T08:07:00.000
1
1.2
true
24,135,908
0
0
1
1
I start receiving errors from the CRON service even if I have not a single cron.yaml file defined. The cron task runs every 4 hours. I really don't know where to look at in order to correct such behaviour. Please tell me what kind of information is needed to correct the error. Cron jobs First Cron error Cron Job : /admin/push/feedbackservice/process - Query APNS Feedback service and remove inactive devices Schedule/Last Run/Last Status (All times are UTC) : every 4 hours (UTC) 2014/06/10 07:00:23 on time Failed Second Cron error Cron job: /admin/push/notifications/cleanup - Remove no longer needed records of processed notifications Schedule/Last Run/Last Status (All times are UTC) : every day 04:45 (America/New_York) - 2014/06/09 04:45:01 on time Failed Console log 2014-06-10 09:00:24.064 /admin/push/feedbackservice/process 404 626ms 0kb AppEngine-Google; (+http://code.google.com/appengine) module=default version=1 0.1.0.1 - - [10/Jun/2014:00:00:24 -0700] "GET /admin/push/feedbackservice/process HTTP/1.1" 404 113 - "AppEngine-Google; (+http://code.google.com/appengine)" "xxx-dev.appspot.com" ms=627 cpu_ms=353 cpm_usd=0.000013 queue_name=__cron task_name=471b6c0016980883f8225c35b96 loading_request=1 app_engine_release=1.9.5 instance=00c61b17c3c8be02ef95578ba43 I 2014-06-10 09:00:24.063 This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time. This request may thus take longer and use more CPU than a typical request for your application.
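For reference, the minimal parseable-but-empty cron.yaml that, per the answer above, actually clears previously registered jobs on deploy is just:

```yaml
cron:
```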
Trouble with adding pip's system environment variable
24,155,174
0
2
1,824
0
python,path,pip
Try using easy_install: easy_install -U pip. This will reinstall pip over the broken copy. If you don't have easy_install installed, do that first. Alternatively, after uninstalling pip, simply reinstall it using the get-pip.py file.
0
1
0
0
2014-06-11T05:14:00.000
1
1.2
true
24,155,105
1
0
0
1
I was able to install pip by using distribute_setup.py and get-pip.py. However, I had to add pip to path by adding C:\Python27\Scripts. This worked for a little while but I had to do a system restore for other reasons and now when I type pip in the command prompt it returns: 'pip' is not recognized as an internal or external command. I tried adding C:\Python27\Scripts to path again both to the user variable and system variable but to no avail. I also tried re-installing pip but it just said I had the latest version installed already. Pip can be imported without any error within python so I am at a loss here. Can anyone help? Thanks in advance.
How do I communicate and share data between python and other applications?
24,171,852
1
0
2,813
0
python,database,web-applications,ipc
Python has had, since its early days, a very comfortable PyZMQ binding for ZeroMQ. MATLAB can have the same: a direct ZeroMQ at work for your many-to-many communications. Let me take a slightly broader view, based on a few KEY PRINCIPAL POINTS that are not so common in other software-engineering products and stacks we meet today: [1] ZeroMQ is first of all a very powerful concept, rather than a piece of code or a DIY kit. [2] ZeroMQ's biggest plus for any professional-grade project is in using the genuine Scalable Formal Communication Patterns end-to-end, not in the ability to code pieces or to trick/mod the published internals. [3] The ZeroMQ team has done a terrific job: it saves users from re-inventing wheels and lets them stay on the productive side by re-using the knowledge (elaborated, polished and tested by the ZeroMQ gurus, supporters and team members) from behind the ZMQ abstraction horizon. Having said these few principles, my recommendation would be to spend some time on the concepts in Pieter Hintjens' published book on ZeroMQ (also available as a PDF). That is a worthwhile place to start from, to get the bigger picture. Then it is literally a matter of a few SLOCs to put these patterns to work (and believe me, that sounds bold only at first sight, as there are not many real alternatives to compare ZeroMQ with; ZeroMQ co-architect Martin Sustrik's nanomsg is at least one case worth mentioning, if you need to go even higher in speed or lower in latency, but the key principal points above hold there as well). Having used a ZeroMQ-orchestrated Python & MQL4 & AI/ML system in a high-speed FOREX trading infrastructure is just a small example, where microseconds matter and nanoseconds make a difference in the queue.
Presented in the hope that your interest in the ZeroMQ library will only grow, and that you will benefit, as many other users of this brilliant piece of art have, from whichever of the formal patterns (PUB/SUB, PAIR/PAIR, REQ/REP) best matches the communication needs of your MATLAB / Python / * heterogeneous multi-party, multi-host project.
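If pulling in a dependency is an option, PyZMQ's REQ/REP pattern takes only a few lines. But even without it, the request/acknowledge idea recommended above can be sketched with nothing but the standard library; the "ACK:" framing, the OS-picked port, and the single-exchange server below are my own illustrative choices, not anything ZeroMQ-specific:

```python
import socket
import threading

def start_reply_server(host="127.0.0.1"):
    """Serve one request/reply exchange over a plain TCP socket."""
    srv = socket.socket()
    srv.bind((host, 0))          # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve_once():
        conn, _ = srv.accept()
        with conn:
            command = conn.recv(1024).decode()
            # ... do the rare, slow work triggered by the command here ...
            conn.sendall(("ACK:" + command).encode())
        srv.close()

    threading.Thread(target=serve_once, daemon=True).start()
    return port

def send_command(port, command):
    """What MATLAB / PHP / another Python process would do, in stdlib form."""
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(command.encode())
        return client.recv(1024).decode()
```

Because the sender blocks until it reads the acknowledgement, there is no shared file to corrupt and no ambiguity about whether the command was received, which addresses both concerns raised in the question.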
0
1
0
1
2014-06-11T17:59:00.000
2
0.099668
false
24,169,539
0
0
0
1
At a high level, what I need to do is have a python script that does a few things based on the commands it receives from various applications. At this stage, it's not clear what the application may be. It could be another python program, a MATLAB application, or a LAMP configuration. The commands will be sent rarely, something like a few times every hour. The problem is - What is the best way for my python script to receive these commands, and indicate to these applications that it has received them? Right now, what I'm trying to do is have a simple .txt file. The application(s) will write commands to the file. The python script will read it, do its thing, and remove the command from the file. I didn't like this approach for 2 reasons- 1) What happens if the file is being written/read by python and a new command is sent by an application? 2) This is a complicated approach which does not lead to anything robust and significant.
Using Google App Engine to update files on Google Compute Engine
24,194,583
0
0
273
0
python,google-app-engine,file-transfer,google-compute-engine
The most straightforward approach seems to be: (1) a user submits a form on the App Engine instance; (2) the App Engine instance makes a POST call to a handler on the GCE instance with the new data; (3) the GCE instance updates its own file and processes it.
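A rough sketch of the GCE-side handler this answer describes, using only the standard library. On a real instance you would bind a public port and add authentication; the /update path, the TARGET location, and the "updated" reply text are all invented for illustration:

```python
import os
import tempfile
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the hard-coded path mentioned in the question.
TARGET = os.path.join(tempfile.gettempdir(), "gae_pushed.txt")

class UpdateHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        with open(TARGET, "wb") as f:     # step 3: replace the file contents
            f.write(body)
        # step 4: process the new file right here instead of polling for it
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"updated")

    def log_message(self, *args):         # silence per-request logging
        pass

def start_gce_handler():
    srv = HTTPServer(("127.0.0.1", 0), UpdateHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv.server_address[1]

def push_text(port, text):
    """The App Engine side: POST the user's form text to the GCE handler."""
    req = urllib.request.Request("http://127.0.0.1:%d/update" % port,
                                 data=text.encode(), method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```

On App Engine itself the POST would go through urlfetch or urllib against the GCE instance's address rather than localhost.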
0
1
0
0
2014-06-12T21:28:00.000
4
0
false
24,194,217
0
0
1
2
I am working on a project that involves using an Google App Engine (GAE) server to control one or more Google Compute Engine (GCE) instances. The rest of the project is working well, but I am having a problem with one specific aspect: file management. I want my GAE to edit a file on my GCE instance, and after days of research I have come up blank on how to do that. The most straightforward example of this is: Step 1) User enters text into a GAE form. Step 2) User clicks a button to indicate they would like to "submit" the text to GCE Step 3) GAE replaces the contents of a particular (hard-coded path) text file on the GCE with the user's new content. Step 4) (bonus step) GCE notices that the file has changed (either by detecting a change or by way of GAE alerting it when the new content is pushed) and runs a script to process the new file. I understand that this is easy to do using SCP or other terminal commands. I have already done that, and that works fine. What I need is a way for GAE to send that content directly, without my intervention. I have full access to all instances of GAE and GCE involved in this project, and can set up whatever code is needed on either of the platforms. Thank you in advance for your help!
Using Google App Engine to update files on Google Compute Engine
24,215,374
0
0
273
0
python,google-app-engine,file-transfer,google-compute-engine
You can set an action URL in your form to point to the GCE instance (it can be load-balanced if you have more than one). Then all data will be uploaded directly to the GCE instance, and you don't have to worry about transferring data from your App Engine instance to GCE instance.
0
1
0
0
2014-06-12T21:28:00.000
4
0
false
24,194,217
0
0
1
2
I am working on a project that involves using an Google App Engine (GAE) server to control one or more Google Compute Engine (GCE) instances. The rest of the project is working well, but I am having a problem with one specific aspect: file management. I want my GAE to edit a file on my GCE instance, and after days of research I have come up blank on how to do that. The most straightforward example of this is: Step 1) User enters text into a GAE form. Step 2) User clicks a button to indicate they would like to "submit" the text to GCE Step 3) GAE replaces the contents of a particular (hard-coded path) text file on the GCE with the user's new content. Step 4) (bonus step) GCE notices that the file has changed (either by detecting a change or by way of GAE alerting it when the new content is pushed) and runs a script to process the new file. I understand that this is easy to do using SCP or other terminal commands. I have already done that, and that works fine. What I need is a way for GAE to send that content directly, without my intervention. I have full access to all instances of GAE and GCE involved in this project, and can set up whatever code is needed on either of the platforms. Thank you in advance for your help!
Running Python from PHP on one.com
34,741,305
5
0
3,118
0
php,python
I know this post is old, but for future reference (I assume you have moved on): I contacted One.com (12 Jan 2016) and they said that they do not support Python and are not planning to in the near future.
0
1
0
1
2014-06-13T19:37:00.000
1
0.761594
false
24,212,724
0
0
0
1
I am trying to run a python script on one.com after a user completes an action on my website. If I run it using a shell file (every couple of minutes) when I run it in the background and end the ssh session it ends the script. I have tried running if from php using shell_exec and system but these are blocked by one.com. I was wondering if anyone had any success with this?
What's the proper Tornado response for a log in success?
24,232,201
1
1
246
0
python,ios,tornado
You can send your response with either self.write() or self.finish() (the main difference is that with write() you can assemble your response in several pieces, while finish() can only be called once. You also have to call finish() once if you're using asynchronous functions that are not coroutines, but in most cases it is done automatically). As for what to send, it doesn't really matter if it's a non-browser application that only looks at the status code, but I generally send an empty json dictionary for cases like this so there is well-defined space for future expansion.
0
1
0
0
2014-06-14T21:38:00.000
1
1.2
true
24,224,539
0
0
1
1
So far I have a pretty basic server (I haven't built in any security features yet, like cookie authentication). What I've got so far is an iOS app where you enter a username and password and those arguments are plugged into a URL and passed to a server. The server checks to see if the username is in the database and then sends a confirmation to the app. Pretty basic but what I can't figure out is what the confirmation should look like? The server is a Python Tornado server with a MySQL dbms.. What I'm unsure of is what Tornado should/can send in response? Do I use self.write or self.response or self.render? I don't think it's self.render because I'm not rendering an HTML file, I'm just sending the native iOS app a confirmation response which, once received by the app, will prompt it to load the next View Controller. After a lot of googling I can't seem to find the answer (probably because I don't know how to word the question correctly). I'm new to servers so I appreciate your patience.
How to use Google Cloud Datastore Statistics
24,227,785
1
0
68
1
python,google-app-engine,google-cloud-datastore
You can't, that's not what it's for at all. It's only for very broad-grained statistics about the number of each types in the datastore. It'll give you a rough estimate of how many Person objects there are in total, that's all.
0
1
0
0
2014-06-15T07:34:00.000
2
0.099668
false
24,227,510
0
0
1
1
How can I use Google Cloud Datastore stats object (in Python ) to get the number of entities of one kind (i.e. Person) in my database satisfying a given constraint (i.e. age>20)?
Does Google App Engine support Python 3?
45,396,993
1
50
20,010
0
python,google-app-engine
YES! Google App Engine supports Python 3; you need to set up the flexible environment. I got a chance to deploy my application on App Engine, and it uses the Python 3.6 runtime and works smoothly... :)
0
1
0
0
2014-06-15T11:37:00.000
7
0.028564
false
24,229,203
1
0
1
1
I started learning Python 3.4 and would like to start using libraries as well as Google App Engine, but the majority of Python libraries only support Python 2.7 and the same with Google App Engine. Should I learn 2.7 instead or is there an easier way? (Is it possible to have 2 Python versions on my machine at the same time?)
Poll the linux cp command to GET progress
24,233,353
1
0
606
0
python,linux,bash,raspberry-pi,cp
You can use rsync, which can be used pretty much like cp, but it offers a progress-indicator option (--progress). The progress is sent to standard output, and you ought to be able to intercept it for your own fancies.
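To feed a custom LED bar, you could launch rsync --progress via subprocess, read its output line by line, and parse out the percentage. The parsing half is sketched below; note that the exact progress-line format varies across rsync versions, so treat the regex as an assumption to adapt:

```python
import re

# An rsync --progress line looks roughly like (format varies by version):
#     1,234,567  42%    1.23MB/s    0:00:31
PERCENT = re.compile(r"(\d{1,3})%")

def percent_from_line(line):
    """Pull the transfer percentage out of one rsync progress line."""
    match = PERCENT.search(line)
    return int(match.group(1)) if match else None

def leds_to_light(percent, total_leds=8):
    """Map 0-100% onto a fixed-size LED bar."""
    return min(total_leds, percent * total_leds // 100)
```

The caller would feed each line from the subprocess's stdout through percent_from_line, ignore the None results (rsync prints non-progress lines too), and drive the GPIO pins from leds_to_light.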
0
1
0
1
2014-06-15T19:25:00.000
3
0.066568
false
24,233,264
0
0
0
1
Is there a way to poll the cp command to get its current progress? I understand there's a modified/Advanced copy utility that adds a small little ASCII progress bar, but I want to build my own progress bar using led lights and whatnot, and need to be able to see the current percentage of the file activity in order to determine how many LEDs to light up on the progress bar.
Need to ignore proxy without disabling it on control panel
24,250,751
0
0
27
0
python,google-app-engine,python-2.7
If you are running your app from the terminal, using dev_appserver.py try using the --skip_sdk_update_check switch, it may be the SDK update check that is failing.
0
1
1
0
2014-06-16T12:35:00.000
2
0
false
24,243,891
0
0
0
1
I trying to run a hello world app on my device environment using ga and Python, however even not doing any explicit url request, the urllib2 is having some problems with my proxy server. I tried adding the localhost to the list of exclusion and it didn't work. If I disable the proxy on machine it works perfectly. How can I make it work without disabling the proxy for all programs?
Copying Django project to root server is not working
24,258,210
0
0
103
0
python,django,deployment
I know some of these tips may be obvious, but you never know: Did you update all your settings in settings.py (paths to static files, path to the project...)? Which server are you using: the Django dev server, Apache, nginx? Do you have permissions over all files in the project? You should check that the owner of the files is your user, not root; if the owner is root you'll have this permission problem in every file owned by root. Are you using uWSGI? Have you installed all the apps you had in your VM? Have you installed the SAME versions you had in your VM? When I move a project from a VM to a real server I go over these steps: review settings.py and update paths; check permissions in the folders the web server may use; keep a list of the packages and versions in a txt file, let's call it packages.txt; install all those packages using pip install -r packages.txt; I always use Apache/nginx, so I update the virtual host to the new paths; if I'm using uWSGI, I update the uWSGI settings. To downgrade some pip packages you may need to delete the egg files, because if you uninstall a package and reinstall it, even using pip install package==VERSION, pip will install an already-downloaded copy if one exists, even if the VERSION is different. To check the actual versions of pip packages use pip freeze. To export all pip packages to a file, to import them elsewhere: pip freeze > packages.txt, and to install packages from this file: pip install -r packages.txt
0
1
0
0
2014-06-17T01:00:00.000
2
0
false
24,254,300
0
0
1
1
I hope you can help me. I have been building this webshop for the company I work for with Django and Lightning Fast Shop. It's basically finished now and I have been running it off a virtual Ubuntu machine on my PC. Since it got annoying leaving my PC on the entire time so others could access the site, I wanted to deploy it on a root server. So I got a JiffyBox and installed Ubuntu on it. I managed to get Gnome working on it and to connect to it with VNC. I then uploaded my finished project via FTP to the server. Now I thought I would only need to download Django-LFS, create a new project and replace the project files with my finished ones. This worked when I tested it on my virtual machine. To my disappointment it did not work on the root server. When I tried running "bin/django runserver" I got an error message saying "bash: bin/django: Permission denied", and when I try it with 'sudo' I get "sudo: bin/django: command not found". I then realized that I had downloaded a newer version of Django-LFS and tried it with the same version, to no avail. I am starting to get really frustrated and would appreciate it very much if somebody could help me with my problem. Greetings, Krytos.
DistutilsOptionError: must supply either home or prefix/exec-prefix -- not both
37,644,135
12
147
56,894
0
python,google-app-engine,installation,pip,distutils
Another solution* for Homebrew users is simply to use a virtualenv. Of course, that may remove the need for the target directory anyway - but even if it doesn't, I've found --target works by default (as in, without creating/modifying a config file) when in a virtual environment. *I say solution; perhaps it's just another motivation to meticulously use venvs...
0
1
0
0
2014-06-17T07:16:00.000
8
1
false
24,257,803
0
0
1
3
I've been usually installed python packages through pip. For Google App Engine, I need to install packages to another target directory. I've tried: pip install -I flask-restful --target ./lib but it fails with: must supply either home or prefix/exec-prefix -- not both How can I get this to work?
DistutilsOptionError: must supply either home or prefix/exec-prefix -- not both
48,954,780
2
147
56,894
0
python,google-app-engine,installation,pip,distutils
If you're using virtualenv*, it might be a good idea to double check which pip you're using. If you see something like /usr/local/bin/pip you've broken out of your environment. Reactivating your virtualenv will fix this: VirtualEnv: $ source bin/activate VirtualFish: $ vf activate [environ] *: I use virtualfish, but I assume this tip is relevant to both.
0
1
0
0
2014-06-17T07:16:00.000
8
0.049958
false
24,257,803
0
0
1
3
I've been usually installed python packages through pip. For Google App Engine, I need to install packages to another target directory. I've tried: pip install -I flask-restful --target ./lib but it fails with: must supply either home or prefix/exec-prefix -- not both How can I get this to work?
DistutilsOptionError: must supply either home or prefix/exec-prefix -- not both
45,668,067
24
147
56,894
0
python,google-app-engine,installation,pip,distutils
On OS X (Mac), assuming a project folder called /var/myproject: (1) cd /var/myproject; (2) create a file called setup.cfg containing the section header [install] followed by an empty prefix= line; (3) run pip install <packagename> -t .
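For step 2, the setup.cfg would look like this; the empty prefix is the whole trick, since it overrides the prefix that Homebrew-style Python installs bake into their distutils configuration, which is what conflicts with --target:

```ini
[install]
prefix=
```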
0
1
0
0
2014-06-17T07:16:00.000
8
1
false
24,257,803
0
0
1
3
I've been usually installed python packages through pip. For Google App Engine, I need to install packages to another target directory. I've tried: pip install -I flask-restful --target ./lib but it fails with: must supply either home or prefix/exec-prefix -- not both How can I get this to work?
Why isn't serialport.py installed by default?
24,295,427
1
2
91
0
ubuntu,python-3.x,twisted
Twisted has not been entirely ported to Python 3. Only parts of it have been ported. When you install Twisted using Python 3, only the parts that have been ported are installed. The unported modules are not installed because they are not expected to work. As you observed, this code does not actually work on Python 3 because it uses implicit relative imports - a feature which has been removed from Python 3.
0
1
0
1
2014-06-18T17:12:00.000
1
1.2
true
24,291,443
0
0
0
1
I'm using Ubuntu in several PCs (versions 12.04 and 14.04), and I noticed that serialprotocol.py is not being installed when I run "sudo python3 setup3.py install" in the default source tar package for twisted 14.0.0. I had to manually copy the file in my computers. I also tried installing the default ubuntu package python3-twisted-experimental with the same results. So I always end up copying "serialprotocol.py" and "_posixserialport.py" manually. And they work fine after that. As a side note: _posixserialport.py fails to import BaseSerialPort because it says: from serialport import BaseSerialPort but it should be: from twisted.internet.serialport import BaseSerialPort
Removing characters from filename in batch
32,706,802
2
7
19,762
0
python,batch-file
I don't have enough reputation to comment on nicholas's solution, but that code breaks if any of the folder names contains the character you want to replace. For instance, if you use newname = path.replace('_', '') but your path looks like /path/to/data_dir/control_43.csv, you will get an OSError: [Errno 2] No such file or directory
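Applied to the original question, here is a sketch that sidesteps that failure mode by splitting the directory off first and renaming only the basename. The marker string comes from the question; the function name and the .kml filter are my own choices, and error handling is omitted:

```python
import os

def strip_marker(folder, marker="_intsect_d"):
    """Rename e.g. ALB_01_00000_intsect_d.kml -> ALB_01_00000.kml.
    Only the basename is edited, so folder names that happen to
    contain the marker (the pitfall described above) are untouched."""
    for name in os.listdir(folder):
        base, ext = os.path.splitext(name)
        if ext == ".kml" and base.endswith(marker):
            os.rename(os.path.join(folder, name),
                      os.path.join(folder, base[:-len(marker)] + ext))
```

Run it once per top-level folder, or wrap the loop in os.walk to cover nested directories.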
0
1
0
0
2014-06-19T01:34:00.000
4
0.099668
false
24,297,468
1
0
0
1
I have 3 main folders in Windows Explorer that contain files with naming like ALB_01_00000_intsect_d.kml or Baxters_Creek_AL_intsect_d.kml. Even though the first part of the name changes, the consistent thing that I would like to remove from all these files is "_intsect_d". I would like to do this for all files within each of the folders. The files have the extension .kml. The result I am expecting, as per the example above, is ALB_01_00000.kml, and the other one would be Baxters_Creek_AL.kml. I don't know much about programming in Python, but would like help to write a script that can achieve the result mentioned above. Thanks
How to resume file transferring with paramiko
27,151,379
2
2
1,637
0
python,sftp,file-transfer,paramiko,resume
Paramiko doesn't offer an out-of-the-box 'resume' function. However, Syncrify, DeltaCopy's big successor, has a retry built in, and if the backup goes down the server waits up to six hours for a reconnect. Pretty trusty, easy to use, and it does data diff by default.
0
1
0
0
2014-06-19T18:53:00.000
2
1.2
true
24,314,270
0
0
0
2
I'm working on a Python project that is required some file transferring. One side of the connection is highly available ( REHL 6 ) and always online. But the other side is going on and off ( Windows 7 ) and the connection period is not guaranteed. The files are transporting on both directions and sizes are between 10MB to 2GB. Is it possible to resume the file transferring with paramiko instead of transferring the entire file from the beginning. I would like to use rSync but one side is windows and I would like to avoid cwRsync and DeltaCopy
How to resume file transferring with paramiko
50,497,310
2
2
1,637
0
python,sftp,file-transfer,paramiko,resume
paramiko.sftp_client.SFTPClient contains an open function, which functions exactly like python's built-in open function. You can use this to open both a local and remote file, and manually transfer data from one to the other, all the while recording how much data has been transferred. When the connection is interrupted, you should be able to pick up right where you left off (assuming that neither file has been changed by a 3rd party) by using the seek method. Keep in mind that a naive implementation of this is likely to be slower than paramiko's get and put functions.
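The seek-based resume described above can be sketched as follows. It is shown with local files so the sketch is runnable; swapping either open() for sftp.open() is the idea, since paramiko's file objects also support seek(). The chunk size is arbitrary, and there is no checksum of the already-transferred prefix, which a robust resume would add:

```python
import os

def resume_copy(src, dst, chunk=32 * 1024):
    """Copy src -> dst, skipping whatever dst already holds.
    Assumes the partial dst came from an earlier, interrupted
    run of this same transfer and that src has not changed."""
    done = os.path.getsize(dst) if os.path.exists(dst) else 0
    with open(src, "rb") as fsrc, open(dst, "ab") as fdst:
        fsrc.seek(done)                 # resume where the last run stopped
        while True:
            block = fsrc.read(chunk)
            if not block:
                break
            fdst.write(block)
            done += len(block)
    return done
```

Recording `done` somewhere visible (a progress file, a log line) also gives you the transfer-progress reporting for free.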
0
1
0
0
2014-06-19T18:53:00.000
2
0.197375
false
24,314,270
0
0
0
2
I'm working on a Python project that is required some file transferring. One side of the connection is highly available ( REHL 6 ) and always online. But the other side is going on and off ( Windows 7 ) and the connection period is not guaranteed. The files are transporting on both directions and sizes are between 10MB to 2GB. Is it possible to resume the file transferring with paramiko instead of transferring the entire file from the beginning. I would like to use rSync but one side is windows and I would like to avoid cwRsync and DeltaCopy
Tornado: Pre-forking with unix sockets
24,319,467
1
0
583
0
python,webserver,tornado
The error has nothing to do with unix sockets. IOLoops do not survive a fork gracefully, so if you are going to fork you must do it before initializing any global IOLoop (but after binding any sockets). In general, you must do as little as possible before the fork, since many Tornado components implicitly start the IOLoop. If you are using multiple TCPServers, be sure to only fork from the first one you start; all the others should be in single-process mode.
0
1
0
0
2014-06-19T19:43:00.000
2
0.099668
false
24,315,020
0
0
0
1
Using Tornado Web Server, I'm attempting to use their pre-fork after binding to a unix socket, but I get the following error: RuntimeError: Cannot run in multiple processes: IOLoop instance has already been initialized. You cannot call IOLoop.instance() before calling start_processes() Is there a reason tornado throws this issue when binding unix sockets and using: myserver.start(0) vs using an TCP Port?
sshfs mount failing using fabric run command
24,329,791
2
1
295
0
python,ssh,fabric,sshfs
I figured out finally there is an issue with SSH and need to pass pty=False flag. run("sshfs -o reconnect -C -o workaround=all localhost:/home/test/ /mnt",pty=False)
0
1
0
1
2014-06-19T22:41:00.000
1
1.2
true
24,317,368
0
0
0
1
I am trying to mount the SSHFS using the following run command run("sshfs -o reconnect -C -o workaround=all localhost:/home/test/ /mnt") and it is failing with the following error fuse: bad mount point `/mnt': Transport endpoint is not connected However if i demonize it works. Is there any work around?.
How to tell a file is renamed or not after opened in Python?
24,321,323
0
0
193
0
python,file
You are trying to defeat the purpose of log rotation if you keep populating the same log file even after it has been rotated. One of the reasons for doing log rotation is to keep the log size from growing too much, so that we don't face difficulties opening and searching log information, and your case defeats that purpose. Still, if you want to do it: if rotation renames files by appending a number, you can check the folder where rotated log files are kept and find the latest rotated log file, say application.log.x (where x is a number, i.e. 1, 2, 3, ...). Then, before performing a write operation, check the log directory again to see what the latest rotated file is; if there is a file later than application.log.x, that means the log file you were writing to has been rotated, and you should write to the file named application.log.x+1. On the other hand, if log rotation appends a timestamp to the logfile name, then before opening the log file for writing you need to find the latest rotated file (say app.log.timestamp), and before writing again check the log directory for a rotated log file with a greater timestamp; if you find one, you should use the file named app.log.(timestamp + the time duration between rotations). Note: rotation happens mainly on two bases, size or time, and usually, in my observation, a file is renamed after rotation by appending a number or a timestamp to its name. E.g. if the log name is application.log, then after rotation its name becomes application.log.x (where x is a number 1, 2, 3, ...) or application.log.timestamp, where timestamp is the date-time when the rotation happened.
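As an alternative to scanning the directory for the newest rotated name, the watching tool can notice the rename directly. On Unix, a renamed file keeps its inode while a freshly created file under the old name gets a new one, so comparing os.stat of the path with os.fstat of the open handle tells you when to reopen. This is a POSIX-only sketch (on Windows st_ino is not reliable on older Pythons):

```python
import os

def was_rotated(path, open_file):
    """True if the name `path` no longer refers to the file we hold
    open, i.e. the old log was renamed away and a new one created
    (POSIX-style rotation by rename)."""
    try:
        return os.stat(path).st_ino != os.fstat(open_file.fileno()).st_ino
    except FileNotFoundError:
        # Rotation in progress: old file renamed, new one not yet created.
        return True
```

A tail-style loop would call this whenever readline() returns nothing, and reopen `path` when it returns True.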
0
1
0
0
2014-06-20T05:47:00.000
1
0
false
24,320,713
1
0
0
1
The story is that there is a log file that is rotated repeatedly at some interval. I need to write a small tool in Python that always prints new logs from that file, even after it is rotated. How can I tell that the old log file has been renamed, and open the new one, in Python?
How to package a Mac OS app with Pyinstaller that shows both a console and a GUI?
53,956,162
0
6
425
0
python,macos,pyinstaller
While you create the application, don't add the --windowed and --noconsole options.
1
1
0
0
2014-06-20T18:15:00.000
1
0
false
24,333,323
1
0
0
1
I'm packaging a GUI app for MacOS with Pyinstaller, using --windowed flag. Is it possible to package it so that it would show a console in addition to the GUI? When I tried to set console=True, the GUI part fails. In other words, when I start the App from the terminal by typing "open My.App/Contents/MacOS/myapp", then I do get both GUI and console. I'd like to get similar behaviour by just double-clicking on the App without starting the terminal. Is there a way to do it?
How to determine the default executable for a specific file format?
24,344,493
0
0
33
0
python,cross-platform
No. You can only write assumptions into your program, which is what all developers do to handle these formats. It doesn't matter what extension a file has; its contents can be in any format regardless. Take for example an XML file: if you put that XML data into a .txt file, or simply rename the .xml file to .txt, reading from that file and parsing the data within will still yield XML.
0
1
0
0
2014-06-21T18:11:00.000
1
0
false
24,344,448
0
0
0
1
Is there a way to do this, just by relying on the file's extension? For example: os.system(filepath) opens the given filepath using the default application, but what is the executable's filepath?
How to stop multiple processes from creating multiple instances of another process?
24,356,913
0
1
572
0
python,concurrency,race-condition
There are likely more Pythonic ways, but I would use a process supervisor like daemontools, systemd, or runit to start and supervise the Status process, ensuring there is one and only one.
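If you would rather close the race in-process than add a supervisor, the classic trick is an atomic advisory lock: acquiring it either succeeds or fails in a single step, so there is no check-then-act window like the port-probing sequence in the question. A POSIX-only sketch; the lock path is a placeholder:

```python
import fcntl   # POSIX only
import os
import tempfile

# Placeholder location; a real deployment would pick a fixed path.
LOCK_PATH = os.path.join(tempfile.gettempdir(), "status.lock")

def acquire_single_instance_lock(path=LOCK_PATH):
    """Return an open, locked file handle if we are the one true
    Status process, or None if another instance already holds the
    lock. flock() is atomic, so two racing processes cannot both
    succeed."""
    handle = open(path, "w")
    try:
        fcntl.flock(handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return handle              # keep it open for the process lifetime
    except OSError:
        handle.close()
        return None
```

A Start process would call this, proceed to launch Status only on success, and exit quietly on None. The lock is released automatically if the Status process dies, so there is no stale-lock cleanup.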
0
1
0
0
2014-06-22T23:50:00.000
2
0
false
24,356,820
1
0
0
1
I have 2 processes: Start and Status. There can be multiple Start processes executing at the same time, and there should only be 1 instance of the Status process. On startup, a Start process will attempt to start Status. At the moment, I try to stop multiple Statuses from starting by having the Status process check whether Status's server port has been bound, to determine if another Status already exists; if so, it shuts down gracefully. However, this has a race condition: at the moment it checks the port, there might be another Status that has already done that check and is in the process of binding the port, hence 2 Statuses will be created. Is there a process-level solution to this? I have considered having another process monitoring the number of Statuses in the system, but is there another approach? Edit: This is done in Python 2.6 Edit2: Both Start and Status are executed from the shell.
How to fix corrupted Python search path in Debian 7.5?
24,382,572
0
0
175
0
python,debian
I fixed it with the following reinstall: apt-get install python2.7-minimal --reinstall. Reinstalling python and python-dev didn't solve it, but python2.7-minimal did the job.
0
1
0
1
2014-06-23T12:22:00.000
1
1.2
true
24,365,844
0
0
0
1
I'm configuring a Debian 7.5 server, and up to yesterday the mail server and the policyd-spf Python plugin were running fine. I added some more Python-related libraries in order to configure Plone (python-setuptools, python-dev, python-imaging), and now the Python setup seems corrupted for some reason. If I now run policyd-spf manually, I get an ImportError on the spf module. Opening a Python interpreter and checking the sys.path, I get the following: ['', '/usr/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg', '/usr/lib/python2.7/site-packages/virtualenv-1.11.6-py2.7.egg', '/usr/lib/python27.zip', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/lib/python2.7/site-packages'] I noticed that /usr/lib/python2.7/site-packages is there, but /usr/lib/python2.7/dist-packages is missing, and that's the reason for the import error. I already tried re-installing the python and python-all packages, hoping that a reinstall would have fixed it, but I still have the same problem. Does anyone know where exactly Debian configured dist-packages to be included in the search path, and how can I recover it? thanks!
Mac Address Timestamp python
24,386,429
-2
1
334
0
python,network-programming,timestamp
The timestamp is in seconds. You can import datetime in Python and use its fromtimestamp method to get it in an easier-to-read format, like so: import datetime; ts = datetime.datetime.fromtimestamp(1305354670.602149); print ts gives 2011-05-14 02:31:10.602149. Hope this helped.
0
1
0
0
2014-06-24T11:58:00.000
2
-0.197375
false
24,386,080
0
0
0
1
I have this silly question. I analyze data packets with scapy, and there is a variable inside the packet called timestamp (TSFT), which is the time the packet was constructed. So I grab that variable (packet[RadioTap].TSFT), but I do not know if the value is in nanoseconds or in microseconds. Could anyone inform me? I haven't seen it documented anywhere. Thanks in advance.
Python interpreter does not recognize control keys
24,401,731
1
1
54
0
python,bash
If you compiled Python 3.4 from source, you are probably missing the development libraries for readline. The package is typically called libreadline-dev.
0
1
0
0
2014-06-25T06:20:00.000
1
0.197375
false
24,401,550
0
0
0
1
Previously I ran python 2.7 on Debian Linux terminal (bash). I conveniently use control-f, control-b to move forward/back word. But it does not work on updated 3.4 version, which generates unreadable symbol. Is there a way to configure the control-key recognition?
Python multiprocessing broken pipe, access namespace
24,442,244
0
0
665
0
python,multiprocessing,pipe
A few suggestions for transferring unpicklable raw data back from multiprocessing workers: 1) have each worker write to a database or file (or print to the console) 2) translate the raw data into a string, to return to the parent. If the parent is just logging things then this is the easiest. 3) translate to JSON, to return to the parent. This solution is best if the parent is aggregating data, not just logging it.
0
1
0
0
2014-06-26T21:04:00.000
1
0
false
24,440,210
1
0
0
1
I have a long-running process running a simulation using python's multiprocessing. At the end, the process sends the results through a pipe back to the main process. The problem is that, I had redefined the class for the results object, so I knew it would give me an unpickling error. In an attempt to head this off, I got the file descriptor of the pipe and tried to open it with os.fdopen. Unfortunately, I got a "bad file descriptor" error, and now I get the same if I try to receive from the pipe. Because this is a very long simulation, I don't want to kill the process and start over. Is there any way to get the object out of the pipe OR just access the namespace of the child process so that I can save it to disk? Thanks so much.
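Suggestions 2 and 3 in the answer come down to serializing the worker's raw data into plain text before it crosses the pipe, so the parent never needs the (redefined) class definitions to unpickle anything. A minimal sketch of the JSON round trip, with made-up data and without spawning a real worker process, to keep it self-contained:

```python
import json

# Pretend this is the raw result object assembled inside a worker process.
# Anything JSON-serializable (dicts, lists, numbers, strings, bools) works.
raw_result = {"trial": 7, "energies": [1.25, 0.5, -3.0], "converged": True}

# Worker side: serialize before sending through the pipe, so the parent
# never needs the worker's class definitions to unpickle anything.
payload = json.dumps(raw_result)

# Parent side: deserialize into plain built-in types and aggregate or log.
recovered = json.loads(payload)
print(recovered["energies"])  # [1.25, 0.5, -3.0]
```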
Can't find launcher in ~/anaconda 2.0 in mac osx
24,472,251
0
1
1,398
0
ipython,spyder,qtconsole
You have two problems here: The Anaconda launcher hasn't been ported to Python 3 yet, so that's why you can't find it. To fix the ValueError: unknown locale: UTF-8 problem, you need to: Open a terminal Write this command on it nano ~/.bashrc (nano is a terminal-based editor) Paste this text in nano: export LANG=en_US.UTF-8 export LC_ALL=en_US.UTF-8 Hit these keys to save: Ctrl+O+Enter, then Ctrl+X to exit. Close that terminal, open a new one and try to start spyder. Everything should be fixed now.
0
1
0
0
2014-06-27T00:28:00.000
1
1.2
true
24,442,307
1
0
0
1
I just did a clean install of ananconda 2.0 (python 3.4) on my mac osx after uninstalling the previous version of anaconda. I used the graphical installer but the launcher is missing in the ~/anaconda directory. I tried running spyder and ipython from the terminal but i got long error messages that ended with: ValueError: unknown locale: UTF-8 I am a newbie to python programming and this is quite unnerving for me. I have gone through related answers but I still need help. Guys, please kindly point me in the right direction. Thanks.
How does cx_freeze compile a Python script?
24,457,483
2
4
226
0
python,cx-freeze
cx_Freeze doesn't really compile your code. It really just packages up your Python code along with the Python interpreter, so that when you launch your application, it sets up a Python interpreter and starts running your Python code. It has the necessary machinery to run from either Python source code or bytecode, but it mostly stores modules as bytecode, because that's quicker to load. Options like Cython and Nuitka go a step further - they translate your code to C and compile it to machine code, but they still use the Python VM machinery. It's just compiled code calling Python functionality rather than the VM running Python bytecode.
0
1
0
0
2014-06-27T03:48:00.000
1
1.2
true
24,443,621
0
0
0
1
Does cx_freeze contain its own compiler that goes from Python -> binary? Or does it translate it (e.g. to C), and compile the translated code? Edit: It appears to be compiled to byte-code. So does this mean a cx_freeze exe is just the byte-code -> binary part of the Python interpreter?
What do scripts(stored in bin directory of the project) do in addition to modules in a python project?
24,447,080
1
0
59
0
python,project
Scripts can be used as stand-alone programs for tasks both simple and complex. When you put them in a bin directory, and have the bin directory in your PATH, you can execute them just like an exe, assuming you have configured the interpreter correctly (in Windows), or have put #!/usr/bin/python as the top line for Linux. For example, you might write a Python script that computes the mean of a list of numbers passed into stdin, stick it in your bin directory, and execute it just like you would a C program for the same purpose.
0
1
0
1
2014-06-27T08:09:00.000
1
0.197375
false
24,446,966
1
0
0
1
I have been following LPTHW ex. 46 in which it says to put a script in bin directory that you can run. I don't get the idea of using script when you have modules. What extra significance do scripts provide? Are scripts executable *.exe files(in case of windows) rather than modules which are compiled by python? If modules provide all the code needed for the project then do scripts provide the code needed to execute them? How are scripts and modules linked to each other, if they do so?
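The stand-alone stdin-mean script the answer describes might look like the following sketch. The filename and shebang are illustrative; on Linux you would mark the file executable and drop it into a bin directory on your PATH.

```python
#!/usr/bin/env python
"""mean.py - print the mean of whitespace-separated numbers read from stdin."""
import sys


def mean(numbers):
    """Return the arithmetic mean of a non-empty sequence of numbers."""
    return sum(numbers) / float(len(numbers))


# Only behave as a command-line tool when actually fed piped input.
if __name__ == "__main__" and not sys.stdin.isatty():
    tokens = sys.stdin.read().split()
    if tokens:
        print(mean([float(tok) for tok in tokens]))
```

After `chmod +x mean.py` and putting it on your PATH, `echo 1 2 3 | mean.py` prints 2.0, much like a compiled program for the same purpose would.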
Finding a line of a C module called by a python script that segfaults
24,473,958
-1
0
77
0
python,c,segmentation-fault
segfault... Check if the number of variables or the types of variables you passed to that C function (in the .so) are correct. If they are not aligned, a segfault is the usual result.
0
1
0
1
2014-06-29T06:40:00.000
3
-0.066568
false
24,473,765
0
0
0
1
I have a caller.py which repeatedly calls routines from some_c_thing.so, which was created from some_c_thing.c. When I run it, it segfaults - is there a way for me to detect which line of c code is segfaulting?
cx_Freeze Unfreeze. Is it possible? [python]
24,698,264
0
1
1,104
0
python,cx-freeze,panda3d
No, it is not possible to recover the original source code. If the application used CPython, though, it is always possible to recover the CPython bytecode, which you can use a disassembler on to make a reconstruction of the Python code, but a lot of information will be lost; the resulting code will look rather unreadable and obfuscated, depending on the degree to which the bytecode was optimised. If you want to go down that path, though, I advise looking into CPython's "dis" module. There are also numerous other utilities available that can reconstruct Python code from CPython bytecode.
0
1
0
1
2014-06-30T02:29:00.000
1
0
false
24,482,222
0
0
0
1
need help with something... I had this python program which i made. The thing is, i need the source of it, but the thing is, the hdd i had with it is dead , and when i tried to lookup any backups, it wasn't there. The only thing i have the binary, which i think, was compiled in cx_Freeze. I'm really desperate about it, and i tried any avialble ways to do it, and there was none or almost little. Is there a way to ''unfreeze'' the executable or at least get the pyc out of it?
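As the answer notes, the dis module is the usual starting point for inspecting CPython bytecode. A tiny illustration of what a bytecode-to-source reconstruction tool has to work from (the function here is just a stand-in):

```python
import dis


def add_one(x):
    return x + 1


# List the opcode names making up the function's bytecode. The exact opcodes
# vary between CPython versions, but loads, an add, and a return are always
# in there somewhere; this is all a decompiler gets to work from.
ops = [instr.opname for instr in dis.Bytecode(add_one)]
print(ops)
```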
Google App Engine NDB Query on Many Locations
24,501,164
0
0
144
1
javascript,python,google-maps,google-app-engine
You didn't say how frequently the data points are updated, but assuming 1) they're updated infrequently and 2) there are only hundreds of points, then consider just querying them all once, and storing them sorted in memcache. Then your handler function would just fetch from memcache and filter in memory. This wouldn't scale indefinitely but it would likely be cheaper than querying the Datastore every time, due to the way App Engine pricing works.
0
1
0
0
2014-06-30T19:09:00.000
2
0
false
24,497,219
0
0
1
1
I am developing a web app based on the Google App Engine. It has some hundreds of places (name, latitude, longitude) stored in the Data Store. My aim is to show them on google map. Since they are many I have registered a javascript function to the idle event of the map and, when executed, it posts the map boundaries (minLat,maxLat,minLng,maxLng) to a request handler which should retrieve from the data store only the places in the specified boundaries. The problem is that it doesn't allow me to execute more than one inequality in the query (i.e. Place.latminLat, Place.lntminLng). How should I do that? (trying also to minimize the number of required queries)
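The in-memory filtering step the answer proposes sidesteps the Datastore's one-inequality-property-per-query limit, since plain Python can apply as many comparisons as needed. A sketch with a hypothetical list of (name, lat, lng) tuples standing in for the cached query results:

```python
# Hypothetical cached list of places, e.g. fetched once from the Datastore
# and kept in memcache between requests (names and coordinates are made up).
places = [
    ("Alpha", 40.7, -74.0),
    ("Bravo", 41.9, -87.6),
    ("Charlie", 34.1, -118.2),
]


def in_bounds(places, min_lat, max_lat, min_lng, max_lng):
    """Return the places whose coordinates fall inside the map viewport."""
    return [
        (name, lat, lng)
        for name, lat, lng in places
        if min_lat <= lat <= max_lat and min_lng <= lng <= max_lng
    ]


visible = in_bounds(places, 39.0, 43.0, -90.0, -70.0)
print([name for name, _, _ in visible])  # ['Alpha', 'Bravo']
```

One caveat: a viewport that crosses the antimeridian needs the longitude test split into two ranges.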
Does enabling developer mode in ChromeOS disable automatic updates?
25,711,026
3
4
1,800
0
python,terminal,google-chrome-os
No, developer mode does not disable automatic updates. My Chromebook has been in dev mode for over a year and I haven't missed an update yet.
0
1
0
0
2014-06-30T22:44:00.000
4
0.148885
false
24,500,025
0
0
0
2
I want to enable full access to the terminal (to install Python), so I need to enable developer mode. But I don't want to lose automatic updates to ChromeOS. Does enabling developer mode in ChromeOS disable automatic updates?
Does enabling developer mode in ChromeOS disable automatic updates?
25,073,166
2
4
1,800
0
python,terminal,google-chrome-os
I receive automatic canary updates every day in dev mode. That info must be outdated.
0
1
0
0
2014-06-30T22:44:00.000
4
0.099668
false
24,500,025
0
0
0
2
I want to enable full access to the terminal (to install Python), so I need to enable developer mode. But I don't want to lose automatic updates to ChromeOS. Does enabling developer mode in ChromeOS disable automatic updates?
Start a virtualenv inside a shell script
24,505,623
1
0
267
0
python,shell,virtualenv
Use a function instead of a separate script. A function executes in the context of your current shell.
0
1
0
0
2014-07-01T06:58:00.000
1
1.2
true
24,504,185
1
0
0
1
I'm working on a Python project that's wrapped in a virtualenv. I'd like to have a script that does all the "footwork" of getting set up as soon as I clone my git repo -- namely make the virtualenv, download my requirements, and stay in the virtualenv after exiting. However, once the shell script finishes, I'm no longer in my virtualenv, since the changes it makes to its shell don't propagate to mine. How can I have the virtualenv "stick" to the parent shell that ran the script?
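The function approach the answer suggests might be sketched as follows. The function name and layout are illustrative; it belongs in your ~/.bashrc or ~/.zshrc so it runs in the current shell rather than a subshell, which is why a separate script cannot make the activation stick.

```shell
# Illustrative bootstrap function for ~/.bashrc / ~/.zshrc. Because it is a
# shell function, sourcing the activate script affects the CURRENT shell,
# so the virtualenv stays active after the function returns.
bootstrap_env() {
    local envdir="${1:-venv}"
    # Create the virtualenv only if it does not exist yet.
    if [ ! -d "$envdir" ]; then
        virtualenv "$envdir" || return 1
    fi
    # Activate in this shell (a child script could not do this for us).
    . "$envdir/bin/activate"
    # Install dependencies if the project ships a requirements file.
    if [ -f requirements.txt ]; then
        pip install -r requirements.txt
    fi
}
```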
Python subprocess calls hang?
24,514,261
2
1
877
0
python
subprocesses run in the background. In the subprocess module, there is a class called Popen that starts a process in the background. It has a wait() method you can use to wait for the process to finish. It also has a communicate() helper method that will handle stdin/stdout/stderr plus wait for the process to complete. It also has convenience functions like call() and check_call() that create a Popen object and then wait for it to complete. So, subprocess implements a non-blocking model but also gives you blocking helper functions.
0
1
0
0
2014-07-01T15:32:00.000
2
0.197375
false
24,514,129
1
0
0
1
Do subprocess calls in Python hang? That is, do subprocess calls operate in the same thread as the rest of the Python code, or is it a non-blocking model? I couldn't find anything in the docs or on SO on the matter. Thanks!
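The blocking/non-blocking split the answer describes can be seen directly: Popen returns immediately while the child runs, and communicate() or call() block until it finishes. A small self-contained demonstration using the current interpreter as the child process:

```python
import subprocess
import sys

# Non-blocking: Popen starts the child and returns immediately, so the
# parent could keep doing other work here while the child runs.
child = subprocess.Popen(
    [sys.executable, "-c", "print('hello from child')"],
    stdout=subprocess.PIPE,
)

# communicate() then blocks until the child exits, collecting its output.
out, _ = child.communicate()
print(out.decode().strip())  # hello from child

# Blocking convenience wrapper: call() waits for the child to finish and
# returns its exit code.
rc = subprocess.call([sys.executable, "-c", "pass"])
print(rc)  # 0
```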
Gunicorn, Django, Gevent: Spawned threads are blocking
24,544,667
-2
8
5,406
0
python,django,multithreading,gunicorn,gevent
I have settled on using a synchronous (standard) worker and making use of the multiprocessing library. This seems to be the easiest solution for now. I have also implemented a global pool that (ab)uses a memcached cache to provide locks, so only two tasks can run at once.
0
1
0
0
2014-07-02T01:42:00.000
3
1.2
true
24,521,661
0
0
1
2
we recently switched to Gunicorn using the gevent worker. On our website, we have a few tasks that take a while to do. Longer than 30 seconds. Preamble We did the whole celery thing already, but these tasks are run so rarely that its just not feasible to keep celery and redis running all the time. We just do not want that. We also do not want to start celery and redis on demand. We want to get rid of it. (I'm sorry for this, but I want to prevent answers that go like: "Why dont you use celery, it's great!") The tasks we want to run asynchronously I'm talking about tasks that perform 3000 SQL queries (inserts) that have to be performed one after each other. This is not done all too often. We limited to running only 2 of these tasks at once as well. They should take like 2-3 minutes. The approach Now, what we are doing now is taking advantage of the gevent worker and gevent.spawn the task and return the response. The problem I found that the spawned threads are actually blocking. As soon as the response is returned, the task starts running and no other requests get processed until the task stops running. The task will be killed after 30s, the gunicorn timeout. In order to prevent that, I use time.sleep() after every other SQL query, so the server gets a chance to respond to requests, but I dont feel like this is the point. The setup We run gunicorn, django and use gevent. The behaviour described occurs in my dev environment and using 1 gevent worker. In production, we will also run only 1 worker (for now). Also, running 2 workers did not seem to help in serving more requests while a task was blocking. TLDR We consider it feasible to use a gevent thread for our 2 minute task (over celery) We use gunicorn with gevent and wonder why a thread spawned with gevent.spawn is blocking Is the blocking intended or is our setup wrong? Thank you!
Gunicorn, Django, Gevent: Spawned threads are blocking
24,769,760
0
8
5,406
0
python,django,multithreading,gunicorn,gevent
It would appear no one here gave an actual answer to your question. Is the blocking intended or is our setup wrong? There is something wrong with your setup. SQL queries are almost entirely I/O bound and should not be blocking any greenlets. You are either using a SQL/ORM library that is not gevent-friendly, or something else in your code is causing the blocking. You should not need to use multiprocessing for this kind of task. Unless you are explicitly doing a join on the greenlets, the server response should not be blocking.
0
1
0
0
2014-07-02T01:42:00.000
3
0
false
24,521,661
0
0
1
2
we recently switched to Gunicorn using the gevent worker. On our website, we have a few tasks that take a while to do. Longer than 30 seconds. Preamble We did the whole celery thing already, but these tasks are run so rarely that its just not feasible to keep celery and redis running all the time. We just do not want that. We also do not want to start celery and redis on demand. We want to get rid of it. (I'm sorry for this, but I want to prevent answers that go like: "Why dont you use celery, it's great!") The tasks we want to run asynchronously I'm talking about tasks that perform 3000 SQL queries (inserts) that have to be performed one after each other. This is not done all too often. We limited to running only 2 of these tasks at once as well. They should take like 2-3 minutes. The approach Now, what we are doing now is taking advantage of the gevent worker and gevent.spawn the task and return the response. The problem I found that the spawned threads are actually blocking. As soon as the response is returned, the task starts running and no other requests get processed until the task stops running. The task will be killed after 30s, the gunicorn timeout. In order to prevent that, I use time.sleep() after every other SQL query, so the server gets a chance to respond to requests, but I dont feel like this is the point. The setup We run gunicorn, django and use gevent. The behaviour described occurs in my dev environment and using 1 gevent worker. In production, we will also run only 1 worker (for now). Also, running 2 workers did not seem to help in serving more requests while a task was blocking. TLDR We consider it feasible to use a gevent thread for our 2 minute task (over celery) We use gunicorn with gevent and wonder why a thread spawned with gevent.spawn is blocking Is the blocking intended or is our setup wrong? Thank you!
Python - Calling a Procedure in a Windows DLL (by address, not by name)
24,601,961
0
2
391
0
python,function,dll,loadlibrary
I was able to modify the export table, changing the base address of an already exported routine to my own routine. This allowed me to execute the subroutine I was interested in via Python by using the exported name.
0
1
0
0
2014-07-02T13:24:00.000
2
1.2
true
24,532,338
1
0
0
1
I would like to know if it is possible (and if so, how) to call a routine from a DLL by the Proc address instead of by name - in Python. Case in point: I am analyzing a malicious dll, and one of the routines I want to call is not exported (name to reference it by), however I do know the address to the base of the routine. This is possible in C/C++ by casting the function pointer to a typedef'ed function prototype. Is there a similar way to do this in Python? If not, are there any concerns with modifying the export table of the dll, to make a known exported name map to the address.
Console output delay with Python but not Java using PsExec
24,534,244
0
0
156
0
java,python,psexec
Are you sure the remote python script flushes the stdout? It should get flushed every time you print a new line, or when you explicitly call sys.stdout.flush().
0
1
0
0
2014-07-02T14:00:00.000
1
1.2
true
24,533,128
0
0
1
1
I have two files on a remote machine that I am running with PsExec, one is a Java program and the other Python. For the Python file any outputs to screen (print() or sys.stdout.write()) are not sent back to my local machine until the script has terminated; for the Java program I see the output (System.out.println()) on my local machine as soon as it is created on the remote machine. If anyone can explain to me why there is this difference and how to see the Python outputs as they are created I would be very grateful! (Python 3.1, Remote Machine: Windows Server 2012, Local: Windows 7 32-bit)
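A quick way to rule out buffering on the Python side, per the answer, is to flush after every progress line. The helper function here is just for illustration:

```python
import sys
import time


def report(stream, step):
    """Write one progress line and flush it out of the buffer immediately."""
    line = "step %d done\n" % step
    stream.write(line)
    stream.flush()
    return line


for step in range(3):
    report(sys.stdout, step)
    time.sleep(0.1)  # stand-in for real work between progress messages
```

Running the script with python -u (fully unbuffered stdout) is another common workaround when output is redirected rather than attached to a terminal.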
Cygwin not same python version as windows
24,540,295
1
2
736
0
python,cygwin
Cygwin has its own option to install its own version of Python. If you run setup.exe and poke through the Development packages, you'll find it. You probably installed Python here as well, and are running it in Bash. If you use CMD, you're running a different version. The fact that the version numbers overlap is just coincidental.
0
1
0
0
2014-07-02T20:10:00.000
1
0.197375
false
24,540,192
1
0
0
1
Background: I am a .NET developer trying to set up a python programming environment. I have installed python 2.7.5. However I changed my mind and uninstalled 2.7.5 and installed python 2.7.6. If I CMD in windows command promopt, the python version is 2.7.6 When I start the cygwin shell and type: python --version It says 2.7.5, this version is was uninstalled. How do I get cygwin to understand it should use the new version. 2.7.6? I believe there is commands to type in cygwin shell to solve this? Thanks on advance!
Difficulty accessing local webserver
24,558,290
0
1
184
0
android,python,solr,webserver,tokyo-tyrant
Both "localhost" and "127.0.0.1" are local loopback interfaces only: they only make sense within the same machine. From your Android device, assuming it's on the same wifi network as your machine, you'll need to use the actual IP address of your main machine: you can either find that from the network settings of that machine, or from your router's web interface.
0
1
1
0
2014-07-03T15:25:00.000
4
0
false
24,557,707
0
0
1
3
I have an Echoprint local webserver (uses tokyotyrant, python, solr) set up on a Linux virtual machine. I can access it through the browser or curl in the virtual machine using http://localhost:8080 and in the non-virtual machine (couldn't find out how to say it better) I use the IP on the virtual machine also with the 8080 port. However, when I try to access it through my android on the same wifi I get a connection refused error.
Difficulty accessing local webserver
24,635,024
0
1
184
0
android,python,solr,webserver,tokyo-tyrant
In case someone has the same problem, I solved it. The connection has to be by cable and on the VMware Player settings the network connection has to be bridged, also you must click "Configure adapters" and uncheck the "VirtualBox Host-Only Ethernet Adapter".
0
1
1
0
2014-07-03T15:25:00.000
4
1.2
true
24,557,707
0
0
1
3
I have an Echoprint local webserver (uses tokyotyrant, python, solr) set up on a Linux virtual machine. I can access it through the browser or curl in the virtual machine using http://localhost:8080 and in the non-virtual machine (couldn't find out how to say it better) I use the IP on the virtual machine also with the 8080 port. However, when I try to access it through my android on the same wifi I get a connection refused error.
Difficulty accessing local webserver
24,557,803
0
1
184
0
android,python,solr,webserver,tokyo-tyrant
Is the server bound to localhost or 0.0.0.0? Maybe your host resolves that IP to some kind of localhost as well, due to bridging.
0
1
1
0
2014-07-03T15:25:00.000
4
0
false
24,557,707
0
0
1
3
I have an Echoprint local webserver (uses tokyotyrant, python, solr) set up on a Linux virtual machine. I can access it through the browser or curl in the virtual machine using http://localhost:8080 and in the non-virtual machine (couldn't find out how to say it better) I use the IP on the virtual machine also with the 8080 port. However, when I try to access it through my android on the same wifi I get a connection refused error.
Why does virtualenv inherit $PYTHONPATH from my shell?
39,600,194
0
7
8,833
0
python,virtualenv,pythonpath,virtualenvwrapper,zshrc
The $PYTHONPATH appears in your virtualenv because that virtualenv is just a part of your shell environment, and you (somewhere) told your shell to export the value of PYTHONPATH to child shells. One of the joys of working in virtual environments is that there is much less need to put additional directories on your PYTHONPATH, but it appears as though you have unwittingly been treating it as a global (for all shells) setting, when it's more suited to being a per-project setting.
0
1
0
1
2014-07-05T06:44:00.000
3
0
false
24,583,777
1
0
0
1
So I'm migrating all my tools from python2 to python3.4 on an Ubuntu 14.04 machine. So far I've done the following: aliased python to python3 in my zshrc for just my user installed pip3 on the system itself (but I'll just be using virtualenvs for everything anyway so I won't really use it) changed my virtualenvwrapper "make" alias to mkvirtualenv --python=/usr/bin/python3 ('workon' is invoked below as 'v') Now curiously, and you can clearly see it below, running python3 from a virtualenv activated environment still inherits my $PYTHONPATH which is still setup for all my python2 paths. This wreaks havoc when installing/running programs in my virtualenv because the python3 paths show up AFTER the old python2 paths, so python2 modules are imported first in my programs. Nulling my $PYTHONPATH to '' before starting the virtualenv fixes this and my programs start as expected. But my questions are: Is this inheritance of $PYTHONPATH in virtualenvs normal? Doesn't that defeat the entire purpose? Why set $PYTHONPATH as an env-var in the shell when python already handles its own paths internally? Am I using $PYTHONPATH correctly? Should I just be setting it in my 'zshrc' to only list my personal additions ($HOME/dev) and not the redundant '/usr/local/lib/' locations? I can very easily export an alternate python3 path for use with my virtualenvs just before invoking them, and reset them when done, but is this the best way to fix this? ○ echo $PYTHONPATH /usr/local/lib/python2.7/site-packages:/usr/local/lib/python2.7/dist-packages:/usr/lib/python2.7/dist-packages:/home/brian/dev brian@zeus:~/.virtualenvs ○ python2 Python 2.7.6 (default, Mar 22 2014, 22:59:56) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import sys, pprint >>> pprint.pprint(sys.path) ['', '/usr/local/lib/python2.7/dist-packages/pudb-2013.3.4-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/Pygments-1.6-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/urwid-1.1.1-py2.7-linux-x86_64.egg', '/usr/local/lib/python2.7/dist-packages/pythoscope-0.4.3-py2.7.egg', '/usr/local/lib/python2.7/site-packages', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/home/brian/dev', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/lib/python2.7/dist-packages/PILcompat', '/usr/lib/python2.7/dist-packages/gst-0.10', '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7', '/usr/lib/python2.7/dist-packages/ubuntu-sso-client', '/usr/lib/python2.7/dist-packages/ubuntuone-client', '/usr/lib/python2.7/dist-packages/ubuntuone-storage-protocol', '/usr/lib/python2.7/dist-packages/wx-2.8-gtk2-unicode'] >>> brian@zeus:~/.virtualenvs ○ v py3venv (py3venv) brian@zeus:~/.virtualenvs ○ python3 Python 3.4.0 (default, Apr 11 2014, 13:05:11) [GCC 4.8.2] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import sys, pprint >>> pprint.pprint(sys.path) ['', '/usr/local/lib/python2.7/site-packages', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/home/brian/dev', '/home/brian/.virtualenvs/py3venv/lib/python3.4', '/home/brian/.virtualenvs/py3venv/lib/python3.4/plat-x86_64-linux-gnu', '/home/brian/.virtualenvs/py3venv/lib/python3.4/lib-dynload', '/usr/lib/python3.4', '/usr/lib/python3.4/plat-x86_64-linux-gnu', '/home/brian/.virtualenvs/py3venv/lib/python3.4/site-packages'] >>> (py3venv)
Turning .py file into an executable?
24,598,315
0
1
237
0
python,executable
py2exe is what you need for Windows.
0
1
0
0
2014-07-06T17:08:00.000
1
0
false
24,598,269
1
0
0
1
I have an extremely basic text based game that I have been coding and want to turn into an executable for both Windows and Mac. I am an extreme beginner and am not quite sure how this works. Thus far, in the coding process, I've been running the game in terminal (I have a Mac), in order to test it and debug. I've installed PyInstaller to my computer, tried to follow the directions to make it work, yet when I finally get the Game.app (again, for a Mac because I was testing the process), it does not open. The game is all contained between two files, ChanceGame.py (the one with the actual game), and ChanceGameSetup.py (one that contains a command to setup the game.) ChanceGame.py imports ChanceGameSetup.py at the start so that it can use the functions in ChanceGameSetup.py where needed. My point in this is that I don't actually have to be able to run ChanceGameSetup.py, it only needs to be able to be imported by ChanceGame.py. Is there a way to turn ChanceGame.py into an executable? Or is it just too simple of a file? I'm an extreme beginner, therefore I have no experience on the subject. Thanks in advance for any help! P.S. I just want to be able to email the game to some friends to try out, and I assume this is the only way of doing so without them having their own compiler, etc. If there is actually an easier way, I would appreciate hearing that as well. Thanks!
How do I create a double SYN packet?
24,612,638
0
0
270
0
python,tcp,scapy
I don't know if I understand you correctly. Is there any difference between your two SYN packets? If so, just create the two SYNs as you want and then send them together. If not, send the same packet twice using scapy.send(pkt, 2). I don't remember the specific parameters, but I'm sure scapy.send can send as many packets, as fast, as you like.
0
1
1
0
2014-07-07T12:38:00.000
1
0
false
24,610,812
0
0
0
1
I am doing allot of network developing and I am starting a new research. I need to send a packet which will then cause another SYN packet to be sent. This is how I want it to look: I send syn --> --> sends another SYN before SYN/ACK packet. How can I cause? I am using Scapy + Python.
Multiprocessing with Screen and Bash
24,635,814
1
4
869
0
python,bash,multiprocessing,numerical-methods
I would think they are about the same. I would prefer screen just because I have an easier time managing it. Depending on the script's usage, that could also have some effect on time to process.
0
1
0
0
2014-07-07T20:31:00.000
2
0.099668
false
24,619,330
1
0
0
1
Running a python script on different nodes at school using SSH. Each node has 8 cores. I use GNU Screen to be able to detach from a single process. Is it more desirable to: Run several different sessions of screen. Run a single screen process and use & in a bash terminal. Are they equivalent? I am not sure if my experiments are poorly coded and taking an inordinate amount of time (very possible) OR my choice to use 1. is slowing the process down considerably. Thank you!
Execute Python Script from Django
24,620,400
0
0
292
0
python,django,subprocess,popen,django-celery
I suggest using Celery. subprocess, multiprocessing, and threading all are powerful tools, but are in general hard to get working. They're more useful if you already have a working system, are running at the limit of the hardware, and don't mind spending a good deal of effort to get lower latency or parallel processing or higher throughput.
0
1
0
0
2014-07-07T21:31:00.000
1
1.2
true
24,620,225
0
0
1
1
I am trying to execute a python script from a webpage through a Django view. Other questions related to a known script from within the Django project directory. I need to be able to execute a script anywhere on the system given the file path. Eventually, multiple scripts will be run in parallel using Celery or a similar method. Should I be using some permutation of popen or sub-processing?
Run .exe from a python script called by mercurial in cmd shell
24,623,014
0
0
136
0
python,mercurial,cmd,executable
So you have mercurial calling a hook that runs a python script that launches an executable that is a python script compiled to an exe? Likely the 3-layer deep script is being run w/o a "terminal" (headless), but it sounds like if you un-snarled a few of those layers you might be better off.
0
1
0
0
2014-07-08T02:03:00.000
1
1.2
true
24,622,614
0
0
0
1
I have tied a python (2.7) script to a commit in mercurial. In this script, a .exe is called (via the subprocess module), which has previously been generated via the cx_freeze. This .exe basically opens a cmd prompt for receiving user inputs. When I run a commit through the hg workbench, everything works as intended... the Python script runs, calls the executable, and does its stuff, and the commit works without a hitch. However, when running a commit via "hg commit" in an initial cmd prompt, the executable portion of this setup never appears. I know the python script still runs. No errors are ever displayed/returned. Am I missing something obvious, and is there a simple way to get this executable to run properly even when called from a commit in cmd prompt?
Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe""
34,416,503
20
231
338,555
0
python,pip
python -m pip really works for the problem Fatal error in launcher: Unable to create process using '"'. Worked on Windows 10.
0
1
0
0
2014-07-08T08:51:00.000
28
1
false
24,627,525
1
0
0
15
Searching the net this seems to be a problem caused by spaces in the Python installation path. How do I get pip to work without having to reinstall everything in a path without spaces ?
Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe""
36,456,213
1
231
338,555
0
python,pip
I solved my problem on Windows. If you have both Python 2 and Python 3 installed, go into the \Scripts folder and rename the executables, e.g. change file.exe to file27.exe. In my case, in D:\Python27\Scripts I renamed django-admin.exe to django-admin27.exe, and that solved it.
0
1
0
0
2014-07-08T08:51:00.000
28
0.007143
false
24,627,525
1
0
0
15
Searching the net this seems to be a problem caused by spaces in the Python installation path. How do I get pip to work without having to reinstall everything in a path without spaces ?
Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe""
41,300,328
2
231
338,555
0
python,pip
I had the same issue on Windows 10. After trying all the previous solutions the problem persisted, so I uninstalled Python 2.7, installed version 2.7.13, and it works perfectly.
0
1
0
0
2014-07-08T08:51:00.000
28
0.014285
false
24,627,525
1
0
0
15
Searching the net this seems to be a problem caused by spaces in the Python installation path. How do I get pip to work without having to reinstall everything in a path without spaces ?
Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe""
25,314,022
5
231
338,555
0
python,pip
Here's how I solved it: 1) open pip.exe in 7-Zip and extract __main__.py to the Python\Scripts folder (in my case C:\Program Files (x86)\Python27\Scripts); 2) rename __main__.py to pip.py; 3) run it: python pip.py install something. EDIT: If you want to be able to do pip install something from anywhere, do this too: 4) rename pip.py to pip2.py (to avoid import pip errors); 5) create C:\Program Files (x86)\Python27\pip.bat with the following contents: python "C:\Program Files (x86)\Python27\Scripts\pip2.py" %1 %2 %3 %4 %5 %6 %7 %8 %9; 6) add C:\Program Files (x86)\Python27 to your PATH (if it is not already); 7) run it: pip install something.
0
1
0
0
2014-07-08T08:51:00.000
28
0.035699
false
24,627,525
1
0
0
15
Searching the net this seems to be a problem caused by spaces in the Python installation path. How do I get pip to work without having to reinstall everything in a path without spaces ?
Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe""
32,795,747
1
231
338,555
0
python,pip
Add this address, C:\Program Files (x86)\Python33, to the Windows PATH variable. First make sure this is the folder where the Python exe file resides; only then add this path to the PATH variable. To append addresses to the PATH variable, go to Control Panel -> System -> Advanced System Settings -> Environment Variables -> System Variables -> Path -> Edit, then append the above-mentioned path and click Save.
0
1
0
0
2014-07-08T08:51:00.000
28
0.007143
false
24,627,525
1
0
0
15
Searching the net this seems to be a problem caused by spaces in the Python installation path. How do I get pip to work without having to reinstall everything in a path without spaces ?
Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe""
32,889,820
-2
231
338,555
0
python,pip
Instead of calling ipython directly, load it through Python, e.g. $ python "full path to ipython.exe"
0
1
0
0
2014-07-08T08:51:00.000
28
-0.014285
false
24,627,525
1
0
0
15
Searching the net this seems to be a problem caused by spaces in the Python installation path. How do I get pip to work without having to reinstall everything in a path without spaces ?
Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe""
72,440,994
0
231
338,555
0
python,pip
I had this problem when using Django REST Framework and simplejwt. All I had to do was upgrade pip and reinstall the packages.
0
1
0
0
2014-07-08T08:51:00.000
28
0
false
24,627,525
1
0
0
15
Searching the net this seems to be a problem caused by spaces in the Python installation path. How do I get pip to work without having to reinstall everything in a path without spaces ?
Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe""
60,451,992
0
231
338,555
0
python,pip
You can remove the previous Python folder and its environment-variable path entries from your PC, then reinstall Python. That will solve it.
0
1
0
0
2014-07-08T08:51:00.000
28
0
false
24,627,525
1
0
0
15
Searching the net this seems to be a problem caused by spaces in the Python installation path. How do I get pip to work without having to reinstall everything in a path without spaces ?
Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe""
49,551,968
0
231
338,555
0
python,pip
For me this problem appeared when I changed the environment path to point to v2.7, which had initially pointed to v3.6. After that, to run pip or virtualenv commands, I had to use python -m pip install XXX as mentioned in the answers below. To get rid of this, I ran the v2.7 installer again, chose the Change option, made sure the add-to-PATH option was enabled, and let the installer run. After that everything works as it should.
0
1
0
0
2014-07-08T08:51:00.000
28
0
false
24,627,525
1
0
0
15
Searching the net this seems to be a problem caused by spaces in the Python installation path. How do I get pip to work without having to reinstall everything in a path without spaces ?
Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe""
49,562,184
1
231
338,555
0
python,pip
I have chosen to install Python for Windows (64bit) not for all users, but just for me. Reinstalling Python-x64 and checking the advanced option "for all users" solved the pip problem for me.
0
1
0
0
2014-07-08T08:51:00.000
28
0.007143
false
24,627,525
1
0
0
15
Searching the net this seems to be a problem caused by spaces in the Python installation path. How do I get pip to work without having to reinstall everything in a path without spaces ?
Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe""
51,133,921
3
231
338,555
0
python,pip
I had the same issue and fixed it with a pip upgrade using the following; now it works fine: python -m pip install --upgrade pip
0
1
0
0
2014-07-08T08:51:00.000
28
0.021425
false
24,627,525
1
0
0
15
Searching the net this seems to be a problem caused by spaces in the Python installation path. How do I get pip to work without having to reinstall everything in a path without spaces ?
Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe""
51,287,625
0
231
338,555
0
python,pip
I had this issue and the other fixes on this page didn't fully solve the problem. What did solve it was going into my system environment variables and looking at the PATH: I had uninstalled Python 3, but the old path to the Python 3 folder was still there. I'm running only Python 2 on my PC and used Python 2 to install pip. Deleting the references to the nonexistent Python 3 folders from PATH, in addition to upgrading to the latest version of pip, fixed the issue.
0
1
0
0
2014-07-08T08:51:00.000
28
0
false
24,627,525
1
0
0
15
Searching the net this seems to be a problem caused by spaces in the Python installation path. How do I get pip to work without having to reinstall everything in a path without spaces ?
Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe""
53,663,298
0
231
338,555
0
python,pip
I had a simpler solution. Follow @apple's approach, but rename __main__.py to pip.py, put it in your Python version's Scripts folder, and add the Scripts folder to your PATH to access it globally. If you don't want to add it to PATH, you have to cd to Scripts and then run the pip command.
0
1
0
0
2014-07-08T08:51:00.000
28
0
false
24,627,525
1
0
0
15
Searching the net this seems to be a problem caused by spaces in the Python installation path. How do I get pip to work without having to reinstall everything in a path without spaces ?
Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe""
38,163,927
1
231
338,555
0
python,pip
My exact problem was (Fatal error in launcher: Unable to create process using '"') on windows 10. So I navigated to the "C:\Python33\Lib\site-packages" and deleted django folder and pip folders then reinstalled django using pip and my problem was solved.
0
1
0
0
2014-07-08T08:51:00.000
28
0.007143
false
24,627,525
1
0
0
15
Searching the net this seems to be a problem caused by spaces in the Python installation path. How do I get pip to work without having to reinstall everything in a path without spaces ?
Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe""
58,481,723
0
231
338,555
0
python,pip
I had a similar problem when I reinstalled Python, uninstalling Python 3.7 and installing Python 3.8. I solved it by removing the previous version's directory. For me it was located at C:\Users\your-username\AppData\Local\Programs\Python; I deleted the folder named Python37 (the previous version) and kept Python38 (the updated version). This worked because Python itself seems to have trouble finding the right directory for your Python scripts.
0
1
0
0
2014-07-08T08:51:00.000
28
0
false
24,627,525
1
0
0
15
Searching the net this seems to be a problem caused by spaces in the Python installation path. How do I get pip to work without having to reinstall everything in a path without spaces ?
Using PhoneGap + Google App Engine to Upload and Save Images
24,657,475
0
1
620
1
python,google-app-engine,cordova,google-cloud-storage
Yes, that is a fine use for GAE and GCS. You do not need an <input type=file>, per se. You can just set up POST parameters in your call to your GAE url. Make sure you send a hidden key as well, and work from SSL-secured urls, to prevent spammers from posting to your app.
0
1
0
0
2014-07-09T14:05:00.000
2
0
false
24,655,877
0
0
1
1
Goal: Take/attach pictures in a PhoneGap application and send a public URL for each picture to a Google Cloud SQL database. Question 1: Is there a way to create a Google Cloud Storage object from a base64 encoded image (in Python), then upload that object to a bucket and return a public link? I'm looking to use PhoneGap to send images to a Python Google App Engine application, then have that application send the images to a Google Cloud Storage bucket I have set up, then return a public link back to the PhoneGap app. These images can either be taken directly from the app, or attached from existing photos on the user's device. I use PhoneGap's FileTransfer plugin to upload the images to GAE, which are sent as base64 encoded images (this isn't something I can control). Based on what I've found in Google Docs, I can upload the images to Blobstore; however, it requires <input type='file'> elements in a form. I don't have 'file' input elements; I just take the image URI returned from PhoneGap's camera object and display a thumbnail of the picture that was taken (or attached). Question 2: Is it possible to have an <input type='file'> element and control its value? As in, is it possible to set its value based on whether the user chooses a file, or takes a picture? Thanks in advance!
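The first step the answer implies, decoding the base64 payload that PhoneGap's FileTransfer sends before handing bytes to a bucket, can be sketched independently of App Engine. The Cloud Storage write is replaced by an in-memory buffer here; the function name and the sink argument are illustrative, not a Google API:

```python
import base64
import io

def save_upload(b64_data, sink):
    """Decode a base64-encoded image and write the raw bytes to sink.

    On App Engine you would pass a Cloud Storage file object as the
    sink instead of the BytesIO used below (that part is assumed).
    """
    raw = base64.b64decode(b64_data)
    sink.write(raw)
    return len(raw)

# Simulate what FileTransfer would POST to the GAE handler:
payload = base64.b64encode(b"fake-image-bytes")
buf = io.BytesIO()
written = save_upload(payload, buf)
```

Once the raw bytes are in the bucket, the public link is just a matter of the bucket's ACL and object name, so no `<input type='file'>` element is needed on the client at all.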
Kafka Consumer: How to start consuming from the last message in Python
27,436,961
3
8
17,187
0
python,apache-kafka,kafka-consumer-api,kafka-python
kafka-python stores offsets with the kafka server, not on a separate zookeeper connection. Unfortunately, the kafka server apis to support commit/fetching offsets were not fully functional until apache kafka 0.8.1.1. If you upgrade your kafka server, your setup should work. I'd also suggest upgrading kafka-python to 0.9.4. [kafka-python maintainer]
0
1
0
0
2014-07-09T18:46:00.000
5
0.119427
false
24,661,533
0
0
0
1
I am using Kafka 0.8.1 and Kafka python-0.9.0. In my setup, I have 2 kafka brokers setup. When I run my kafka consumer, I can see it retrieving messages from the queue and keeping track of offsets for both the brokers. Everything works great! My issue is that when I restart the consumer, it starts consuming messages from the beginning. What I was expecting was that upon restart, the consumer would start consuming messages from where it left off before it died. I did try keeping track of the message offsets in Redis and then calling consumer.seek before reading a message from the queue to ensure that I was only getting the messages that I hadn't seen before. While this worked, before deploying this solution, I wanted to check with y'all ... perhaps there is something I am misunderstanding about Kafka or the python-Kafka client. Seems like the consumer being able to restart reading from where it left off is pretty basic functionality. Thanks!
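The resume-where-you-left-off behaviour the question describes can be sketched independently of any Kafka client. The class and function names below are illustrative stand-ins (for Kafka-side or Redis-side offset storage), not the kafka-python API:

```python
class OffsetStore:
    """Toy stand-in for wherever committed offsets live (Kafka, Redis, ...)."""
    def __init__(self):
        self._offsets = {}

    def commit(self, topic, partition, offset):
        self._offsets[(topic, partition)] = offset

    def last(self, topic, partition):
        # -1 means "nothing consumed yet", so consumption starts at offset 0.
        return self._offsets.get((topic, partition), -1)

def consume(log, store, topic="events", partition=0):
    """Yield only messages past the last committed offset, committing as we go."""
    start = store.last(topic, partition) + 1
    for offset, payload in enumerate(log):
        if offset < start:
            continue
        yield payload
        store.commit(topic, partition, offset)

store = OffsetStore()
first_run = list(consume(["a", "b", "c"], store))      # consumes everything
after_restart = list(consume(["a", "b", "c"], store))  # nothing new
```

This is exactly the seek-before-read pattern the asker built with Redis; newer broker/client versions just move the `OffsetStore` role onto the Kafka server itself.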
Python script requires input in command line
24,663,846
0
0
352
0
python,input,command-line,command,command-prompt
If you pasted the code here that would help, but the answer you are most likely looking for is command-line arguments. If I were to guess, the command line input would look something like: python name_of_script.py "c:\thefilepath\totheinputfile" {enter} ({enter} being the actual key pressed on the keyboard, not typed in as the word). Hopefully this starts you toward the right answer :)
0
1
0
0
2014-07-09T21:03:00.000
2
0
false
24,663,772
1
0
0
2
I'm new to python and I'm attempting to run a script provided to me that requires to input the name of a text file to run. I changed my pathing to include the Python directory and my input in the command line - "python name_of_script.py" - is seemingly working. However, I'm getting the error: "the following arguments are required: --input". This makes sense, as I need this other text file for the program to run, but I don't know how to input it on the command line, as I'm never prompted to enter any input. I tried just adding it to the end of my command prompt line, but to no avail. Does anybody know how this could be achieved? Thanks tons
Python script requires input in command line
24,665,527
0
0
352
0
python,input,command-line,command,command-prompt
Without reading your code, I guess "I tried just adding it to the end of my command prompt line, but to no avail" means you need to make your code aware of the command-line argument. Unless you do some fancy command-line processing, for which you need to import optparse or argparse, try: import sys # then do something with sys.argv[-1] (i.e., the last argument)
0
1
0
0
2014-07-09T21:03:00.000
2
0
false
24,663,772
1
0
0
2
I'm new to python and I'm attempting to run a script provided to me that requires to input the name of a text file to run. I changed my pathing to include the Python directory and my input in the command line - "python name_of_script.py" - is seemingly working. However, I'm getting the error: "the following arguments are required: --input". This makes sense, as I need this other text file for the program to run, but I don't know how to input it on the command line, as I'm never prompted to enter any input. I tried just adding it to the end of my command prompt line, but to no avail. Does anybody know how this could be achieved? Thanks tons
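The "the following arguments are required: --input" message is the signature of Python's argparse module, so the script very likely expects the file as a named flag rather than prompting for it. A minimal sketch of such a script (the file names are illustrative):

```python
import argparse

def parse(argv=None):
    """argv=None makes argparse fall back to sys.argv[1:] when run for real."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--input", required=True,
                        help="path to the input text file")
    return parser.parse_args(argv)

# Equivalent to running: python name_of_script.py --input "c:\path\to\file.txt"
args = parse(["--input", r"c:\path\to\file.txt"])
```

So the fix at the prompt is to pass the flag explicitly, e.g. `python name_of_script.py --input myfile.txt`, rather than waiting for an input prompt that never appears.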
Sonar python files without .py suffix
24,698,064
2
1
561
0
python,sonarqube,filenames,executable
It is not possible to do so. Empty string as value of property "sonar.python.file.suffixes" is ignored.
0
1
0
1
2014-07-09T23:58:00.000
1
0.379949
false
24,665,515
0
0
0
1
When I specify a Python executable script file that does not end in a .py suffix, Sonar runs successfully but the report has no content. I have tried specifying -Dsonar.python.file.suffixes="" but that makes no difference. sonar-runner -Dsonar.sources=/users/av/bin -Dsonar.inclusions=gsave -Dsonar.issuesReport.html.location=/ws/av-rcd/SA_Reports/PY-SA_report-2014-7-2-15-13-44.html -Dsonar.language=py -Dsonar.python.file.suffixes="" How can I make Sonar analyze a Python executable script that does not have a .py suffix?
without using fab commandline argument can I use fabric api automated
24,673,514
0
0
74
0
python,ssh,fabric
You can call your functions by importing your fabfile.py. In the end, a fabfile is just another Python script you can import. I saw a case where a Django project made an API call to a function from its fabfile. Just import and call, simple as Python :)
0
1
0
1
2014-07-10T09:49:00.000
1
0
false
24,673,386
0
0
0
1
I do not want to use the fab command, the fabfile, or command-line arguments. I want to automate remote SSH using the Fabric API by writing a Python script. Can I automate this by writing a Python script?
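The answer's point, that a fabfile is just an importable Python module, can be sketched without a live SSH target. The `run` stand-in below stubs what fabric.api.run would do over SSH; only the import-and-call pattern is the takeaway, not a real Fabric session:

```python
# --- what would normally live in fabfile.py -------------------------
def run(command):
    """Stand-in for fabric.api.run; a real fabfile executes this over SSH."""
    return "ran: " + command

def uptime():
    """A task you would otherwise invoke from the shell as `fab uptime`."""
    return run("uptime")

# --- an ordinary script: import the fabfile and call the task -------
result = uptime()  # no `fab` CLI, no command-line arguments needed
```

In a real script you would `import fabfile` (or `from fabfile import uptime`) and set connection details on Fabric's env object before calling the task; the calling pattern stays the same.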