Dataset schema (each record below lists these fields in this order, separated by | markers):

- Title: string, lengths 15 to 150
- A_Id: int64, 2.98k to 72.4M
- Users Score: int64, -17 to 470
- Q_Score: int64, 0 to 5.69k
- ViewCount: int64, 18 to 4.06M
- Database and SQL: int64, 0 to 1
- Tags: string, lengths 6 to 105
- Answer: string, lengths 11 to 6.38k
- GUI and Desktop Applications: int64, 0 to 1
- System Administration and DevOps: int64, 1 to 1
- Networking and APIs: int64, 0 to 1
- Other: int64, 0 to 1
- CreationDate: string, lengths 23 to 23
- AnswerCount: int64, 1 to 64
- Score: float64, -1 to 1.2
- is_accepted: bool, 2 classes
- Q_Id: int64, 1.85k to 44.1M
- Python Basics and Environment: int64, 0 to 1
- Data Science and Machine Learning: int64, 0 to 1
- Web Development: int64, 0 to 1
- Available Count: int64, 1 to 17
- Question: string, lengths 41 to 29k
'python' is not recognized as an internal or external command
| 49,065,296
| 6
| 124
| 662,372
| 0
|
python,cmd
|
I solved this by running CMD in administrator mode, so try that.
| 0
| 1
| 0
| 0
|
2013-07-30T17:04:00.000
| 15
| 1
| false
| 17,953,124
| 1
| 0
| 0
| 7
|
So I have recently installed Python Version 2.7.5 and I have made a little loop thing with it but the problem is, when I go to cmd and type python testloop.py I get the error:
'python' is not recognized as an internal or external command
I have tried setting the path, but to no avail.
Here is my path:
C:\Program Files\Python27
As you can see, this is where my Python is installed. I don't know what else to do. Can someone help?
|
'python' is not recognized as an internal or external command
| 53,706,895
| 12
| 124
| 662,372
| 0
|
python,cmd
|
If you want to see the Python version, you should use py -V instead of python -V:
C:\Users\ghasan>py -V
Python 3.7.1
If you want to enter Python's interactive environment, you should use py instead of python:
C:\Users\ghasan>py
Python 3.7.1 (v3.7.1:260ec2c36a, Oct 20 2018, 14:57:15) [MSC v.1915 64
bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Here you can run Python statements, for example:
print('Hello Python')
Hello Python
| 0
| 1
| 0
| 0
|
2013-07-30T17:04:00.000
| 15
| 1
| false
| 17,953,124
| 1
| 0
| 0
| 7
|
So I have recently installed Python Version 2.7.5 and I have made a little loop thing with it but the problem is, when I go to cmd and type python testloop.py I get the error:
'python' is not recognized as an internal or external command
I have tried setting the path, but to no avail.
Here is my path:
C:\Program Files\Python27
As you can see, this is where my Python is installed. I don't know what else to do. Can someone help?
|
Using python to open a file from an existing running application?
| 17,960,919
| 2
| 0
| 74
| 0
|
python,file
|
That is completely dependent on the application in question. Some applications do support a mechanism for specifying a document to open via COM or DDE, some may allow you to invoke a second copy with the file as an argument which will tell the first to open that file, and some may have no provision for this at all. You will need to check the documentation of the application in question to see which, if any, it supports.
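As a rough sketch of the second mechanism, assuming a hypothetical single-instance editor that adopts documents passed on its command line (the paths here are made up):
import subprocess
# launching the app again with the document path; a single-instance app
# will hand the file to the already-running copy instead of starting anew
subprocess.call([r'C:\Program Files\SomeEditor\editor.exe', r'C:\docs\newnotepad.txt'])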
| 0
| 1
| 0
| 0
|
2013-07-31T03:03:00.000
| 1
| 1.2
| true
| 17,960,882
| 1
| 0
| 0
| 1
|
Is it possible to use Python to open a file from an existing running application? For example, I have a notepad application open. If I run os.startfile(newnotepad.txt) it opened up a new notepad application. I would like it to open in the existing one.
|
Reading the app engine logs.db file?
| 18,027,540
| 0
| 0
| 67
| 0
|
python,google-app-engine
|
It's an sqlite file; you can just read it with the sqlite3 module.
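A minimal sketch with the standard library; since the schema of logs.db isn't documented, list the tables first:
import sqlite3

conn = sqlite3.connect('logs.db')
# discover the schema before reading any rows
for (name,) in conn.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    print(name)
conn.close()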
| 0
| 1
| 0
| 0
|
2013-07-31T23:55:00.000
| 1
| 1.2
| true
| 17,982,992
| 0
| 0
| 1
| 1
|
I want to read the logs.db file with an external script, but I don't know what format it's in (it's binary) to know what to read it with. It's a massive rabbit hole to try to figure out the log_service module, and I'm hoping I can shortcut that and just open it into readable text some other way.
Any ideas?
|
Subclipse - checkout directory to existing project
| 17,991,013
| 0
| 0
| 142
| 0
|
python,eclipse,pydev,subclipse
|
You must not "check out" the directory, you have to "export" it. You can export anything from svn into any directory.
Also you could simply copy the other directory (from another check out) and delete the hidden .svn directory below it. If the directory contains subdirectories, every .svn directory must be deleted in the subdirectories as well.
| 0
| 1
| 0
| 0
|
2013-08-01T09:42:00.000
| 1
| 0
| false
| 17,990,496
| 0
| 0
| 0
| 1
|
Is it possible to check out a directory from an SVN repo into an existing project? My motivation: I am using PyDev and have a directory with a Python package that I want to check out. The problem is that the subpackages don't see the root Python package, and I can't add a directory outside the project to the PYTHONPATH.
What I need is to create a directory with a project and check out the directory containing my Python package into the project directory. But I can't do that with Subclipse, because it checks the Python package out directly into the project directory.
|
What does this php return code (-11) mean (subprocess.call'ed from python)?
| 18,003,558
| 0
| 0
| 287
| 0
|
php,python,call,command-line-interface
|
This seems to have been caused by too much coverage data exhausting PHP_CodeCoverage_Report_HTML. I have no idea why the PHP script's output was suppressed, which made me believe the script never started running.
After asking for more memory using ini_set('memory_limit', '2048M'); at the start of the PHP script, the success rate went up dramatically (5/6 successful builds so far).
I guess I'll need to play around with memory management in PHP/Zend to handle this properly.
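For reference, subprocess.call returns -N when the child is killed by signal N (on Unix), so -11 is signal 11, i.e. SIGSEGV; a small check along those lines:
import signal
import subprocess

rc = subprocess.call(['php', '/path/somescript.php'])
if rc < 0:
    # a negative return code means the child died from a signal;
    # signal 11 is SIGSEGV, consistent with the heap corruption seen
    print('php died from signal %d (SIGSEGV == %d)' % (-rc, signal.SIGSEGV))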
| 0
| 1
| 0
| 1
|
2013-08-01T19:25:00.000
| 1
| 0
| false
| 18,002,824
| 0
| 0
| 0
| 1
|
The short version
I want to, in python, subprocess.call(['php', '/path/somescript.php']), the first line of the php script is basically "echo 'Here!';". But the subprocess.call returns an error code of -11, and the php script does not get to execute its first line and echo anything to the output. This is all happening on an Ubuntu Server 12.04.2 and a Ubuntu Desktop 12.04.2.
Can anybody point me in the direction of what the -11 return code might mean? (Is it coming from Python, the system, or the php command?)
A couple of times, I've seen it run deep into the php script and then fail by printing "zend_mm_heap corrupted" and returning 1.
The more descriptive version of the question:
I have a python script that, after running some phpunit tests using subprocess.call(['phpunit', ...]), wants to run another php script to collect the code coverage data gathered while running the tests, by doing subprocess.call(['php', '/path/coverage_collector.php']).
For months, the script worked fine, but today, after adding a couple more files & tests, it started failing (not 100% of the time; it still works about 5-10% of the time).
When it fails, subprocess.call returns -11, and the first line of coverage_collector.php has not managed to echo its message to stdout. A couple of times it ran deeper into the php script, and failed with error code 1 and printed "zend_mm_heap corrupted".
I have a directory structure where each folder may contain subfolders, each folder gets its unit tests executed, and then coverage data is collected for that folder + its subfolders.
The script works fine on all the folders and their subfolders (executing all the tests & collecting all of the coverage), and used to work fine on the root level folder too (and is currently working fine for a lot of smaller projects with the same exact structure and scripts) - until today, after it started failing after an innocent enough code checkin, that added some files and tests to one of the php projects using the script.
The weird thing is that it's failing in this weird spot - while trying to call a php command, without even getting to execute the first line of the php script, and this happens just seconds after the same php script has been executed for a number of other folders and worked fine.
I'm suspecting it might be due to the fact that the root level script simply has more data to process - combining its own coverage with that of all of the subfolders (which might explain the Zend heap corruption, when that occurs) - but that still does not explain why the majority of times the call fails with -11 and does not let the php script even start working on collecting the coverage data.
Any ideas?
|
Python bottle vs uwsgi/bottle vs nginx/uwsgi/bottle
| 49,163,067
| 2
| 11
| 7,565
| 0
|
python,nginx,uwsgi,bottle
|
I also suggest you look at running Bottle via the gevent.pywsgi server. It's awesome, super simple to set up, asynchronous, and very fast.
Plus bottle has an adapter built for it already, so even easier.
I love Bottle, and the idea that it is not meant for large projects is ridiculous. It's one of the most efficient and well-written frameworks, and can be easily molded without a lot of hand-wringing.
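A minimal sketch of that setup, assuming bottle and gevent are installed ('gevent' is one of Bottle's built-in server adapter names):
from gevent import monkey; monkey.patch_all()  # must run before other imports
import bottle

@bottle.route('/ping')
def ping():
    return {'pong': True}

bottle.run(server='gevent', host='0.0.0.0', port=8080)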
| 0
| 1
| 0
| 1
|
2013-08-01T22:53:00.000
| 3
| 0.132549
| false
| 18,006,014
| 0
| 0
| 1
| 2
|
I am developing a Python based application (HTTP -- REST or jsonrpc interface) that will be used in a production automated testing environment. This will connect to a Java client that runs all the test scripts. I.e., no need for human access (except for testing the app itself).
We hope to deploy this on Raspberry Pi's, so I want it to be relatively fast and have a small footprint. It probably won't get an enormous number of requests (at max load, maybe a few per second), but it should be able to run and remain stable over a long time period.
I've settled on Bottle as a framework due to its simplicity (one file). This was a tossup vs Flask. Anybody who thinks Flask might be better, let me know why.
I have been a bit unsure about the stability of Bottle's built-in HTTP server, so I'm evaluating these three options:
Use Bottle only -- As http server + App
Use Bottle on top of uwsgi -- Use uwsgi as the HTTP server
Use Bottle with nginx/uwsgi
Questions:
If I am not doing anything but Python/uwsgi, is there any reason to add nginx to the mix?
Would the uwsgi/bottle (or Flask) combination be considered production-ready?
Is it likely that I will gain anything by using a separate HTTP server from Bottle's built-in one?
|
Python bottle vs uwsgi/bottle vs nginx/uwsgi/bottle
| 18,006,120
| 15
| 11
| 7,565
| 0
|
python,nginx,uwsgi,bottle
|
Flask vs Bottle comes down to a couple of things for me.
How simple is the app? If it is very simple, then Bottle is my choice. If not, then I go with Flask. The fact that bottle is a single file makes it incredibly simple to deploy, by just including the file in our source. But the fact that bottle is a single file should be a pretty good indication that it does not implement the full wsgi spec and all of its edge cases.
What does the app do. If it is going to have to render anything other than Python->JSON then I go with Flask for its built in support of Jinja2. If I need to do authentication and/or authorization then Flask has some pretty good extensions already for handling those requirements. If I need to do caching, again, Flask-Cache exists and does a pretty good job with minimal setup. I am not entirely sure what is available for bottle extension-wise, so that may still be worth a look.
The problem with using bottle's built-in server is that it is single process / single thread, which means you can only process one request at a time.
To deal with that limitation you can do any of the following in no particular order.
Eventlet's wsgi wrapping the bottle.app (single threaded, non-blocking I/O, single process)
uwsgi or gunicorn (the latter being simpler), which is most often set up as single threaded, multi-process (workers)
nginx in front of uwsgi.
3 is most important if you have static assets you want to serve up as you can serve those with nginx directly.
2 is really easy to get going (esp. gunicorn) - though I use uwsgi most of the time because it has more configurability to handle some things that I want.
1 is really simple and performs well... plus there is no external configuration or command line flags to remember.
| 0
| 1
| 0
| 1
|
2013-08-01T22:53:00.000
| 3
| 1.2
| true
| 18,006,014
| 0
| 0
| 1
| 2
|
I am developing a Python based application (HTTP -- REST or jsonrpc interface) that will be used in a production automated testing environment. This will connect to a Java client that runs all the test scripts. I.e., no need for human access (except for testing the app itself).
We hope to deploy this on Raspberry Pi's, so I want it to be relatively fast and have a small footprint. It probably won't get an enormous number of requests (at max load, maybe a few per second), but it should be able to run and remain stable over a long time period.
I've settled on Bottle as a framework due to its simplicity (one file). This was a tossup vs Flask. Anybody who thinks Flask might be better, let me know why.
I have been a bit unsure about the stability of Bottle's built-in HTTP server, so I'm evaluating these three options:
Use Bottle only -- As http server + App
Use Bottle on top of uwsgi -- Use uwsgi as the HTTP server
Use Bottle with nginx/uwsgi
Questions:
If I am not doing anything but Python/uwsgi, is there any reason to add nginx to the mix?
Would the uwsgi/bottle (or Flask) combination be considered production-ready?
Is it likely that I will gain anything by using a separate HTTP server from Bottle's built-in one?
|
How to get python interpreter path in uwsgi process
| 18,021,303
| 3
| 3
| 1,200
| 0
|
python,path,environment-variables,interpreter,uwsgi
|
uWSGI is not a Python application (it only calls libpython functions), so the effective executable is the uwsgi binary. If you use virtualenvs, you can assume the interpreter is at venv/bin/python.
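A minimal sketch of that assumption from inside the app; sys.prefix still points at the (virtual)env even though sys.executable points at the uwsgi binary:
import os
import sys

# plausible interpreter location under a virtualenv; verify it exists
interpreter = os.path.join(sys.prefix, 'bin', 'python')
print(interpreter, os.path.exists(interpreter))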
| 0
| 1
| 0
| 1
|
2013-08-02T10:00:00.000
| 1
| 1.2
| true
| 18,014,122
| 0
| 0
| 0
| 1
|
How can I get the Python interpreter path in a uwsgi process (if I started it with the -h parameter)? I tried to use the VIRTUAL_ENV and UWSGI_PYHOME environment variables, but they are empty; I do not know why. I also tried sys.executable, but it points to the uwsgi process path.
|
best practice for graph-like entities on appengine ndb
| 18,035,092
| 1
| 1
| 478
| 1
|
python,google-app-engine,app-engine-ndb,graph-databases
|
There's two ways to implement one-to-many relationships in App Engine.
Inside entity A, store a list of keys to entities B1, B2, B3. In the old db module, you'd use a ListProperty of db.Key. In ndb you'd use a KeyProperty with repeated=True.
Inside entity B1, B2, B3, store a KeyProperty to entity A.
If you use 1:
When you have Entity A, you can fetch B1, B2, B3 by id. This can be potentially more consistent than the results of a query.
It could be slightly less expensive since you save 1 read operation over a query (assuming you don't count the cost of fetching entity A). Writing B instances is slightly cheaper since it's one less index to update.
You're limited in the number of B instances you can store by the maximum entity size and number of indexed properties on A. This makes sense for things like conference tracks since there's generally a limited number of tracks that doesn't go into the thousands.
If you need to sort the order of B1, B2, B3 arbitrarily, it's easier to store them in order in a list than to sort them using some sorted indexed property.
If you use 2:
You only need entity A's Key in order to query for B1, B2, B3. You don't actually need to fetch entity A to get the list.
You can have pretty much unlimited # of B entities.
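A minimal sketch of both options above in ndb; Track and Paper are hypothetical kinds standing in for A and B:
from google.appengine.ext import ndb

class Track(ndb.Model):
    # option 1: the parent stores a repeated KeyProperty
    paper_keys = ndb.KeyProperty(kind='Paper', repeated=True)

class Paper(ndb.Model):
    # option 2: each child stores a key back to its parent
    track_key = ndb.KeyProperty(kind='Track')

# option 1 fetch (by id, no query): ndb.get_multi(track.paper_keys)
# option 2 fetch (by query): Paper.query(Paper.track_key == track.key).fetch()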
| 0
| 1
| 0
| 0
|
2013-08-02T12:42:00.000
| 2
| 0.099668
| false
| 18,017,150
| 0
| 0
| 1
| 1
|
I'm designing a G+ application for a big international brand. The entities I need to create are pretty much in the form of a graph, hence a lot of many-to-many relations (arcs) connecting nodes that can be traversed in both directions. I'm reading all the readable docs online, but so far I haven't found anything specific to ndb design best practices and guidelines. Unfortunately I am under NDA and cannot reveal details of the app, but it matches almost one-to-one the context of scientific conferences with proceedings, authors, papers and topics.
Below is the list of entities envisioned so far (with the context shifted to match the topics mentioned):
organization (e.g. acm)
conference (e.g. acm multimedia)
conference issue (e.g. acm multimedia 13)
conference track (e.g. nosql, machine learning, computer vision, etc.)
author (e.g. myself)
paper (e.g. "designing graph like db for ndb")
as you can see, I can visit and traverse the graph through any direction (or facet, from a frontend point of view):
author with co-authors
author to conference tracks
conference tracks to papers
...
and so on, you fill the list.
I want to make it straight and solid because it will launch with a lot of p.r. and will need to scale consistently over time, both in content and number of users. I would like to code it from scratch, hence designing my own models and a restful API to read/write this data, avoiding non-rel django and keeping the presentation layer to a minimum template mechanism. I need to check with the company where I work, but we might be able to release part of the code under a decent open source license (ideally, a restful service for ndb models).
if anyone could point me towards the right direction, that would be awesome.
thanks!
thomas
[edit: corrected typo related to many-to-many relations]
|
Clearing memory between memory-intensive procedures in Python
| 18,025,982
| 2
| 1
| 538
| 0
|
python,memory-management,garbage-collection
|
Usually in this kind of situation, refactoring is the only way out.
You mentioned you're storing a lot in memory, perhaps in a dict or a set, and then writing the output to only one file.
Maybe you can append to the output file after processing each input, then do a quick clean-up before processing the next input file. That way, RAM usage can be reduced.
Appending can even be done line by line from the input, so that no storage is needed.
Since I don't know the specific algorithm you're using (you mentioned no sharing between files is needed), this may help. Remember to flush the output too :P
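A minimal sketch of the append-per-input idea; input_paths and transform() are hypothetical stand-ins for your inputs and per-line processing:
with open('combined.out', 'w') as out:
    for path in input_paths:
        with open(path) as src:
            for line in src:
                out.write(transform(line))  # no per-file state accumulates
        out.flush()  # push this cycle's output before the next input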
| 0
| 1
| 0
| 0
|
2013-08-02T20:40:00.000
| 2
| 0.197375
| false
| 18,025,763
| 1
| 0
| 0
| 1
|
I need to sequentially read a large text files, storing a lot of data in memory, and then use them to write a large file. These read/write cycles are done one at a time, and there is no common data, so I don't need any of the memory to be shared between them.
I tried putting these procedures in a single script, hoping that the garbage collector would delete the old, no-longer-needed objects when the RAM got full. However, this was not the case. Even when I explicitly deleted the objects between cycles it would take far longer than running the procedures separately.
Specifically, the process would hang, using all available RAM but almost no CPU. It also hung when gc.collect() was called. So, I decided to split each read/write procedure into separate scripts and call them from a central script using execfile(). This didn't fix anything, sadly; the memory still piled up.
I've used the simple, obvious solution, which is to simply call the subscripts from a shell script rather than using execfile(). However, I would like to know if there is a way to make this work. Any input?
|
Python 2.7 - Codenvy - Debugging issues
| 23,685,697
| 1
| 2
| 427
| 0
|
google-app-engine,python-2.7,codenvy
|
I discovered that if you go to the Google App Engine Developer Console, there is a menu on the left. Click on App Engine, then click on Logs. There you can see the internal server error (error 500) log, which pretty much tells you what went wrong.
| 0
| 1
| 0
| 0
|
2013-08-03T02:58:00.000
| 2
| 0.099668
| false
| 18,028,841
| 0
| 0
| 1
| 1
|
I'm working with Codenvy writing a Google App Engine app, and I have found it to be INSANELY difficult to debug. If there's a syntax error I have to find it manually, as the web page that loads when testing gives me an error 500. Also, I often want to print, but Codenvy doesn't support printing for Python (either that or I don't understand the correct method). Has anyone else experienced this and is able to help? Perhaps developing in the cloud isn't as easy as I was hoping...
|
Is there any way to make a Python program that has the user use the terminal (NO GUI) into a stand-alone for Mac?
| 18,039,652
| 1
| 3
| 371
| 0
|
python,macos,compiler-construction,pyinstaller,py2app
|
I know that on a Mac you can change the extension of the file to .command, which makes it so you can just click on it and it will run in the Terminal, if that's what it is specified to do. However, I'm not sure it will work if they do not actually have Python installed.
| 0
| 1
| 0
| 0
|
2013-08-04T04:07:00.000
| 2
| 1.2
| true
| 18,039,614
| 1
| 0
| 0
| 1
|
I finally got py2app to work, and my program was made. However, it won't open because it relies on the terminal and raw_input. I just found out py2app is more for GUI interfaces.
All I want, is to turn the program into an application my users can click on, and it'll open in Terminal. Without them having to either install Python, or go to the terminal and type python "filename" (also, don't they have to set up the paths and everything to do that?).
Please help; I've been pulling my hair out all day looking for the answer. If this isn't possible, I'm just going to give them the .py file and instruct them to start it with python in the terminal and hope it's already set up so they can do that.
|
How to add custom events to tornado
| 19,019,586
| 1
| 1
| 625
| 0
|
python,tornado
|
If I'm understanding your question correctly, all you need to do is call IOLoop.add_callback from the thread that is reading from the queue. This will run your callback in the IOLoop's thread so you can write your message out on the client websocket connections.
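A minimal sketch, assuming a blocking queue named q and a set named clients holding the open websocket handlers (both hypothetical names):
import functools
import threading
from tornado.ioloop import IOLoop

def drain_queue():
    while True:
        item = q.get()  # blocks in this worker thread, not the IOLoop
        # add_callback is the thread-safe way to hop onto the IOLoop thread
        IOLoop.instance().add_callback(functools.partial(broadcast, item))

def broadcast(item):
    for ws in clients:  # runs on the IOLoop thread; safe to write here
        ws.write_message(item)

threading.Thread(target=drain_queue).start()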
| 0
| 1
| 0
| 0
|
2013-08-04T22:45:00.000
| 1
| 0.197375
| false
| 18,048,357
| 0
| 0
| 0
| 1
|
I have a tornado application that will serve data via websocket.
I have a separate blocking thread which is reading input from another application and pushing an object into a Queue and another thread which has a blocking listener to that Queue.
What I would like is for the reader thread to somehow send a message to tornado whenever it sees a new item in the Queue and then tornado can relay that via websocket to listening clients.
The only way I can think to do this is to have a websocket client in the reader thread and push the information to tornado via websocket. However it seems that I should be able to do this without using websocket and somehow have tornado listen for non websocket async events and then call a callback.
But I can't find anything describing how to do this.
|
How to switch between python 2.7 to python 3 from command line?
| 66,637,678
| 1
| 63
| 192,684
| 0
|
python,windows,command-line,cmd
|
Simply add both to the PATH environment variable, then move the version you want to use to the top.
| 0
| 1
| 0
| 0
|
2013-08-05T12:39:00.000
| 8
| 0.024995
| false
| 18,058,389
| 1
| 0
| 0
| 4
|
I'm trying to find the best way to switch between the two Python interpreters, 2.7 and 3.3.
I ran the python script from the cmd like this:
python ex1.py
Where do I set the "python" environment in the window's environment variable to point to either python 3.3 or 2.7?
I am wondering if there is an easy way to switch between the two versions from the cmd line?
|
How to switch between python 2.7 to python 3 from command line?
| 70,420,111
| 0
| 63
| 192,684
| 0
|
python,windows,command-line,cmd
|
Are you using Python version 3+?
Go to your project path.
Run py -[version_number_here] and hit Enter.
-> This will open that Python version's interactive interpreter.
Happy coding.
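For example, with 2.7 and 3.3 both installed, the launcher can start either one (the project path shown is illustrative):
C:\project>py -2 ex1.py
C:\project>py -3 ex1.py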
| 0
| 1
| 0
| 0
|
2013-08-05T12:39:00.000
| 8
| 0
| false
| 18,058,389
| 1
| 0
| 0
| 4
|
I'm trying to find the best way to switch between the two Python interpreters, 2.7 and 3.3.
I ran the python script from the cmd like this:
python ex1.py
Where do I set the "python" environment in the window's environment variable to point to either python 3.3 or 2.7?
I am wondering if there is an easy way to switch between the two versions from the cmd line?
|
How to switch between python 2.7 to python 3 from command line?
| 41,410,363
| -5
| 63
| 192,684
| 0
|
python,windows,command-line,cmd
|
You can try renaming the python executable in the Python 3 folder to python3 (that is, if it was named python formerly)... it worked for me.
| 0
| 1
| 0
| 0
|
2013-08-05T12:39:00.000
| 8
| -1
| false
| 18,058,389
| 1
| 0
| 0
| 4
|
I'm trying to find the best way to switch between the two Python interpreters, 2.7 and 3.3.
I ran the python script from the cmd like this:
python ex1.py
Where do I set the "python" environment in the window's environment variable to point to either python 3.3 or 2.7?
I am wondering if there is an easy way to switch between the two versions from the cmd line?
|
How to switch between python 2.7 to python 3 from command line?
| 55,208,194
| 3
| 63
| 192,684
| 0
|
python,windows,command-line,cmd
|
In case you have both Python 2 and 3 in your PATH, you can move the Python27 folder up in your PATH, so the shell finds and executes Python 2 first.
| 0
| 1
| 0
| 0
|
2013-08-05T12:39:00.000
| 8
| 0.07486
| false
| 18,058,389
| 1
| 0
| 0
| 4
|
I'm trying to find the best way to switch between the two Python interpreters, 2.7 and 3.3.
I ran the python script from the cmd like this:
python ex1.py
Where do I set the "python" environment in the window's environment variable to point to either python 3.3 or 2.7?
I am wondering if there is an easy way to switch between the two versions from the cmd line?
|
Tornado timeouts and server failured
| 18,075,866
| 1
| 0
| 95
| 0
|
python,socket.io,twisted,tornado,sockjs
|
You should send all important data through a queue with delivery confirmation. That way, if your server crashes, the data will come back to it from the queue. Try using RabbitMQ.
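A minimal sketch of an acknowledged consumer, assuming the pika client (1.x API) and a local RabbitMQ; the queue name and update_storage() are hypothetical:
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
ch = conn.channel()
ch.queue_declare(queue='game_events', durable=True)  # survives broker restarts

def handle(channel, method, properties, body):
    update_storage(body)  # do the storage work first
    channel.basic_ack(delivery_tag=method.delivery_tag)  # confirm only on success

ch.basic_consume(queue='game_events', on_message_callback=handle)
ch.start_consuming()  # unacked messages are redelivered if this process dies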
| 0
| 1
| 0
| 0
|
2013-08-06T07:11:00.000
| 1
| 0.197375
| false
| 18,073,826
| 0
| 0
| 0
| 1
|
I am working now on real-time game based on tornado, tornado-sockjs.
There are a lot of different timeout strategies in our game application:
TIMEOUT_GAME_IF_NOBODY, TIMEOUT_GAME_IF_SERVER_OFF. These timeouts have callbacks
that can work with storage directly (update, insert, and so on). The question is: what is
the right way to organize a timeout strategy into a module? How can we re-execute callbacks
in case of server failure? Imagine that three timeouts are pending, and suddenly the server that handles these timeouts crashes. That means some information was not updated.
|
Windows named pipes in practice
| 24,032,255
| 1
| 6
| 5,946
| 0
|
python,windows,winapi,named-pipes
|
I have managed to achieve what I wanted. I call CreateNamedPipe and CloseHandle exactly once per session, and I call DisconnectNamedPipe when my write fails, followed by another ConnectNamedPipe.
The trick is to only call DisconnectNamedPipe when the pipe was actually connected. I called it every time I tried to connect "just to be sure" and it gave me strange errors.
See also djgandy's answer for more information about pipes.
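A minimal sketch of that lifecycle with pywin32; the pipe name and payload are hypothetical:
import win32pipe, win32file, pywintypes

pipe = win32pipe.CreateNamedPipe(            # called once per session
    r'\\.\pipe\myserver', win32pipe.PIPE_ACCESS_DUPLEX,
    win32pipe.PIPE_TYPE_BYTE | win32pipe.PIPE_WAIT,
    1, 65536, 65536, 0, None)

while True:
    win32pipe.ConnectNamedPipe(pipe, None)   # wait for the next client
    try:
        while True:
            win32file.WriteFile(pipe, 'data')
    except pywintypes.error:                 # write failed: client went away
        win32pipe.DisconnectNamedPipe(pipe)  # pipe really was connected here
# at shutdown, once per session: win32file.CloseHandle(pipe)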
| 0
| 1
| 0
| 0
|
2013-08-06T10:07:00.000
| 2
| 1.2
| true
| 18,077,145
| 0
| 0
| 0
| 1
|
With Windows named pipes, what is the proper way to use the CreateNamedPipe, ConnectNamedPipe, DisconnectNamedPipe, and CloseHandle calls?
I am making a server app which is connecting to a client app which connects and disconnects to the pipe multiple times across a session.
When my writes fail because the client disconnected, should I call DisconnectNamedPipe, CloseHandle, or nothing on my handle.
Then, to accept a new connection, should I call CreateNamedPipe and then ConnectNamedPipe, or just ConnectNamedPipe?
I would very much like an explanation of the different states my pipe can be in as a result of these calls, because I have not found this elsewhere.
Additional info:
Language: Python using the win32pipe,win32file and win32api libraries.
Pipe settings: WAIT, no overlap, bytestream.
|
Increase Memory Usage for Python process on Windows
| 18,344,344
| 1
| 0
| 3,740
| 0
|
windows,python-2.7,out-of-memory
|
There is no way to increase the memory limit for a process like this; the problem was with the Python module I was using. After updating to a newer version of the module I was no longer limited to 1 GB of RAM. (Note that a 32-bit Python build on Windows is in any case capped at 2 GB of address space, no matter how much RAM is installed.)
| 0
| 1
| 0
| 0
|
2013-08-06T18:29:00.000
| 2
| 1.2
| true
| 18,087,793
| 1
| 0
| 0
| 1
|
I have a python script that loads mp3 music files into memory using NumPy, manipulates certain parts of each song, and renders the multiple music files into one single mp3 file. It can be very RAM intensive depending on how many mp3 files the user specifies.
My problem is that the script throws "Memory Error" when I attempt to provide 8 or more mp3 songs (each around 5MB in size).
I am running:
Windows Server 2008 R2 64 bit with 64 GB of RAM and 4 core processors
32 bit version of Python
When I run Task Manager to view the python.exe process I notice that it crashes when it exceeds 1GB of RAM.
Is there a way I can increase the 1GB limit so that python.exe can use more RAM and not crash?
|
How to install twistedweb python library on a web server
| 18,104,107
| 0
| 0
| 817
| 0
|
python,sockets,webserver,twisted.web
|
I have contacted Fatcow.com support. They do not support SSH connections, nor Python 2.7 with the Twisted library, and especially not a Python socket application running as a server. So it is a dead end.
Question resolved.
| 0
| 1
| 0
| 0
|
2013-08-06T20:40:00.000
| 2
| 1.2
| true
| 18,090,039
| 0
| 0
| 0
| 1
|
I have a small application written in python using TwistedWeb. It is a chat server.
Everything is configured right now as for localhost.
All I want is to run this python script on a server(shared server provided by Fatcow.com).
I mean that the script will be working all the time and some clients will connect/disconnect to it. Fatcow gives me python 2.5 without any custom libraries. Is there a way and a tutorial how to make it work with TwistedWeb?
Thanks in advance.
|
Google App Engine shows notice to Upgrade Python to 2.7
| 18,107,791
| 3
| 0
| 427
| 0
|
python,google-app-engine
|
Here's the deal.
You have to migrate. They have announced the deprecation of the Python 2.5 runtime and will continue to support it in accordance with the Google App Engine Terms of Service. Here is the section you should concern yourself with...
7.3 Deprecation Policy.
Google will announce if we intend to discontinue or make backwards
incompatible changes to this API or Service. We will use commercially
reasonable efforts to continue to operate that Service without these
changes until the later of: (i) one year after the announcement or
(ii) April 20, 2015, unless (as Google determines in its reasonable
good faith judgment):
required by law or third party relationship (including if there is a
change in applicable law or relationship), or doing so could create a
security risk or substantial economic or material technical burden
(the above policy, the "Deprecation Policy"). This Deprecation Policy
doesn't apply to versions, features, and functionality labeled as
"experimental."
By my reckoning you have until some time in 2014 before they flat out drop support.
In the meantime...
Fork your application. Update the app.yaml to specify the Python27 runtime and, for good measure, turn thread safety on (you might save some money); a minimal fragment follows this list.
Test your application.
Move on with your life.
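For reference, a minimal app.yaml fragment for the 2.7 runtime:
runtime: python27
api_version: 1
threadsafe: true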
| 0
| 1
| 0
| 0
|
2013-08-07T04:31:00.000
| 1
| 1.2
| true
| 18,094,718
| 0
| 0
| 1
| 1
|
I hope this posting is in the right location.
I am very new to Google App Engine, in fact its part of a iOS Application that I purchased from another developer, so bear with me please.
The iOS Application currently has 20,000 active users. There is no way I can break the system and their Application... so my question is: should I migrate to Python 2.7, since the message says 2.5 will soon be deprecated? Does that mean my users will drop off if I don't migrate?
If I do migrate, is there a chance that something might break and completely destroy the userbase and their use of the Application? What can go wrong if I migrate?
This is the message at the top of my Dashboard on Google App Engine
A version of this application is using the Python 2.5 runtime, which is deprecated!
The application should be updated to the Python 2.7 runtime as soon as possible, which offers performance improvements and many new features. Learn how simple it is to migrate your application to Python 2.7.
Thanks everyone..
DC
|
Does Tornado close connection at the end of request?
| 18,116,544
| 1
| 1
| 579
| 0
|
python,python-2.7,tornado
|
Yes, unless you use the @asynchronous decorator.
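A minimal sketch of the contrast; with the decorator you must end the request yourself by calling finish():
from tornado import web

class SyncHandler(web.RequestHandler):
    def get(self):
        self.write('done')      # request finishes when get() returns

class AsyncHandler(web.RequestHandler):
    @web.asynchronous
    def get(self):
        self.write('later')     # connection stays open...
        self.finish()           # ...until finish() is called explicitly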
| 0
| 1
| 0
| 0
|
2013-08-07T22:02:00.000
| 1
| 0.197375
| false
| 18,114,584
| 0
| 0
| 0
| 1
|
When I am working with Tornado and at the end of a get/post request I have a return statement, or no return at all (not even a self.write), does it close the connection?
(When I type netstat -tanp | wc -l into the command line I see a lot of connections that seem not alive, just existing.) Does Tornado close the connection at the end of the request?
|
Running python cron script as non-root user
| 18,131,872
| 0
| 1
| 1,721
| 0
|
python,cron,cron-task
|
Cron jobs run with the permissions of the user that the cron job was setup under.
I.E. Whatever is in the cron table of the reports user, will be run as the reports user.
If you're having to use sudo to get the script to run when logged in as reports, then the script likely won't run as a cron job either. Can you run this script when logged in as reports without sudo? If not, then the cron job can't either. Make sense?
Check your logs - are you getting permissions errors?
There are a myriad of reasons why your script might need certain privileges, but an easy way to fix this is to set the cron job up under root instead of reports. The longer way is to see what exactly requires elevated permissions and fix that. Is it file permissions? A protected command? Maybe adding reports to certain groups would allow you to run it as reports instead of root.
*Be ULTRA careful if/when you set up cron jobs as root.
| 0
| 1
| 0
| 1
|
2013-08-08T16:18:00.000
| 1
| 0
| false
| 18,131,050
| 0
| 0
| 0
| 1
|
I have a small problem running a python script as a specific user account in my CentOS 6 box.
My cron.d/cronfile looks like this:
5 17 * * * reports /usr/local/bin/report.py > /var/log/report.log 2>&1
The account reports exists and all the files that are to be accessed by that script are chowned and chgrped to reports. The python script is chmod a+r. The python script starts with a #!/usr/bin/env python.
But this is not the problem. The problem is that I see nothing in the logfile. The python script doesn't even start to run! Any ideas why this might be?
If I change the user to root instead of reports in the cronfile, it runs fine. However I cannot run it as root in production servers.
If you have any questions please ask :)
/e:
If I do sudo -u reports python report.py it works fine.
|
If I use my personal machine as a web server am I putting my local data at risk?
| 18,140,216
| 6
| 0
| 105
| 0
|
python,django,macos,security
|
Yes of course it's absolutely possible. Depending on the services you are running, you will always be adding more potential holes for an attacker to find.
| 0
| 1
| 0
| 0
|
2013-08-09T04:25:00.000
| 2
| 1.2
| true
| 18,140,011
| 0
| 0
| 0
| 2
|
I am considering purchasing an imac with the thought of dual-purposing the machine. I'd like to use it as a home computer, but also host a personal website or two using OSX Server.
By using my computer as a server, is there any way that a malicious attack through my website can allow someone access to files that are stored locally on my hard drive? Is it safer to simply use a dedicated machine or service?
NB: I hope that a question regarding website security is appropriate, sorry that this isn't explicitly a coding question.
|
If I use my personal machine as a web server am I putting my local data at risk?
| 18,141,423
| 0
| 0
| 105
| 0
|
python,django,macos,security
|
Yes, plus it won't save you much time or money; proper hosting isn't that expensive. And what about DNS? You're going to point it at your own Internet connection's IP address; can you guarantee it won't change or stop working at any time?
Is it safer to simply use a dedicated machine or service?
Go with a service that handles everything for you, unless you enjoy sysadmin work.
| 0
| 1
| 0
| 0
|
2013-08-09T04:25:00.000
| 2
| 0
| false
| 18,140,011
| 0
| 0
| 0
| 2
|
I am considering purchasing an imac with the thought of dual-purposing the machine. I'd like to use it as a home computer, but also host a personal website or two using OSX Server.
By using my computer as a server, is there any way that a malicious attack through my website can allow someone access to files that are stored locally on my hard drive? Is it safer to simply use a dedicated machine or service?
NB: I hope that a question regarding website security is appropriate, sorry that this isn't explicitly a coding question.
|
Is it possible to duplicate a pipe in Python, so that it has one write end but two read ends?
| 18,152,311
| 0
| 2
| 1,227
| 0
|
python,subprocess,multiprocessing
|
Since what Pipe returns is a file descriptor, I regret to say you can't retroactively duplicate the read end. One idea, though, would be to have either reader process add the data to a multiprocessing.Queue that both can read from and later drop the data.
You can always have a pipe from the writer process to each of the readers as well. There are also other mechanisms, such as shared memory or D-Bus, that you could use to ferry the data around.
Could you describe your problem in more depth?
Depending on the platform, you can also just have the process use multiple streams (e.g. stdout plus a fourth descriptor), but this isn't portable between OSes.
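A minimal sketch of the pipe-per-reader idea; produce() and consume() are hypothetical:
from multiprocessing import Pipe, Process

def consumer(conn):
    while True:
        consume(conn.recv())

if __name__ == '__main__':
    recv1, send1 = Pipe(duplex=False)   # one read end per consumer
    recv2, send2 = Pipe(duplex=False)
    Process(target=consumer, args=(recv1,)).start()
    Process(target=consumer, args=(recv2,)).start()
    for item in produce():
        send1.send(item)                # the writer duplicates each item,
        send2.send(item)                # one copy per independent reader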
| 0
| 1
| 0
| 0
|
2013-08-09T17:12:00.000
| 2
| 0
| false
| 18,152,108
| 1
| 0
| 0
| 1
|
Suppose I have a process that generates some data, and this data is consumed by two different processes which are independent of one another.
One way to solve this problem would be to have the generated data written to a file, and then have the other two processes read from the file. This will work fine if the size of the file is not big, but IO becomes expensive if there is a lot of data.
If I had only one process consuming the data, I can just connect the two processes using os.pipe() and funnel data from the output of one into the input of the other.
However, since I have two consumer processes, I'm not sure if there's a way I can duplicate the read side of the pipe so that both consumers can read from it.
|
Python Tornado Websocket Handler blocks while receiving data
| 18,156,111
| 0
| 1
| 407
| 0
|
python,python-3.x,websocket,tornado
|
Tornado works well with large numbers of short concurrent requests.
It does not split a long request into smaller ones, so the process blocks.
Why are you passing big amounts of data over sockets? The final solution depends on the answer to this question.
If you don't get big requests too often, just put haproxy in front of multiple Tornado instances.
| 0
| 1
| 0
| 0
|
2013-08-09T21:19:00.000
| 1
| 1.2
| true
| 18,155,811
| 0
| 0
| 1
| 1
|
I have two pretty simple Tornado-based websocket handlers running in the same process, each of which functions properly on its own. However, when one is receiving a large amount of data (>8MB) the process blocks, and the other is unable to process messages until all of the data has been received. Is there any way I can get around this and prevent Tornado from blocking here?
|
How do I accept user input on command line for python script instead of prompt
| 18,156,059
| 1
| 0
| 3,737
| 0
|
python,python-2.7,python-3.x
|
You can use sys.argv[1] to get the first command line argument. If you want more arguments, you can reference them with sys.argv[2], etc.
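A minimal sketch replacing the prompt with an argument, with a fallback to the old behaviour (raw_input on Python 2, input on Python 3):
import sys

if len(sys.argv) > 1:
    src = sys.argv[1]  # e.g. python test.py c:\users\desktop\test.py
else:
    src = raw_input('Enter Path to src: ')  # use input(...) on Python 3
print(src)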
| 0
| 1
| 0
| 0
|
2013-08-09T21:37:00.000
| 3
| 0.066568
| false
| 18,156,044
| 0
| 0
| 0
| 1
|
I have Python code which asks for user input (e.g. src = input('Enter Path to src: ')). So when I run the code through the command prompt (e.g. python test.py), the prompt 'Enter path to src:' appears. But I want to type everything in one line (e.g. python test.py c:\users\desktop\test.py). What changes should I make? Thanks in advance.
|
debug script with pydev in eclipse
| 18,157,426
| 0
| 0
| 459
| 0
|
python,eclipse,pydev
|
Yes, there is.
Just start debugging; as far as I know, you have to set a breakpoint, otherwise the program just runs to the end. When stopped at the breakpoint, in the console window click the open-console icon and choose PyDev console -> PyDev Debug Console.
Let me know if it works for you.
| 0
| 1
| 0
| 1
|
2013-08-09T23:14:00.000
| 2
| 0
| false
| 18,157,029
| 0
| 0
| 0
| 1
|
I have used the standard IDE for Python, IDLE, for a long time. It has convenient debugging: I can write a script, press F5, and it is possible to use all objects in the terminal.
Now I want to work with Eclipse and the PyDev plugin. Is there any similar way to debug in Eclipse?
|
python.exe version 3.3.2 64 & 32 crash while creating .exe file on win 7 64 & 32 with cx_Freeze
| 18,216,038
| 2
| 1
| 415
| 0
|
python,windows,windows-7
|
Finally! A few days ago I managed to find a solution!!! The problem was with the icon. I don't know why, but when I removed the icon from my setup file things worked fine. But I needed the icon, so after I created my exe file I packed everything into an SFX rar file and set its icon to what I wanted. So it is solved for me. Still, the error I was facing happens in many other cases, and I have no solution for those.
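For reference, a minimal cx_Freeze setup.py matching that workaround (the names are hypothetical); the point is the absence of an icon= argument on Executable:
from cx_Freeze import setup, Executable

setup(
    name='wordwatcher',
    version='1.0',
    description='detect changes in a set of words',
    executables=[Executable('main.py')],  # no icon= here
)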
| 0
| 1
| 0
| 0
|
2013-08-10T17:07:00.000
| 1
| 0.379949
| false
| 18,164,354
| 1
| 0
| 0
| 1
|
I have created a simple python application to detect changes in a set of words. Now I need an executable file of my script. Since I use python 3.3 the only way I found was using cx_Freeze. I have created my setup file according to the documentation presented by cx_Freeze website, and it seems to work. The thing is while it is creating the files in the bin folder python.exe crashes, there is only a windows error saying python.exe stopped working. In the lines printed to command prompt I can see that the crash has occurred after copying python33.dll. This I can confirm by comparing the copied file and the original file. Still, an exe file is created which also crashes when I run it. Tracing it, I found out that the exe file crashes when it tries to get a zipimporter instance, giving the error "cannot get zipimporter instance". I have a windows 7 64 bit, python 3.3.2 64 bit, and cx_Freeze 4.3.1 64 bit. I also have a windows 7 32 bit on a virtual machine with python 3.3.2 32 bit and cx_Freeze 4.3.1 32 bit. To my knowledge both Linux and windows users have this problem but only Linux users seem to have a solution! Maybe I didn't find the solution to my problem, but I have spent two days looking. I would be really grateful if you can help.
|
Installing Biopython from academic Enthought Canopy 32bit Windows 7
| 19,250,481
| 1
| 1
| 469
| 0
|
installation,enthought,biopython
|
No, I think you do not need to pay for that if you don't wish to. All you need to do is install Biopython independently of Canopy, somewhere on your machine, say "/path/to/biopython",
and in Canopy import sys and sys.path.append('/path/to/biopython') will do the job!
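A minimal sketch of that workaround inside Canopy; the path is hypothetical:
import sys
sys.path.append('/path/to/biopython')

import Bio  # Biopython's top-level package
print(Bio.__version__)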
| 0
| 1
| 0
| 0
|
2013-08-11T12:04:00.000
| 1
| 0.197375
| false
| 18,171,732
| 1
| 0
| 0
| 1
|
When I try to use the Canopy Package Manager and "subscribe" to Biopython I am asked to pay for it. Can I use the package manager w/o paying?
|
Why would running scheduled tasks with Celery be preferable over crontab?
| 18,190,019
| 49
| 47
| 15,773
| 0
|
python,django,celery,django-celery
|
I've been using cron for a production website, and have switched to celery on a current project.
I'm far more into celery than cron, here is why:
Celery + Celerybeat has finer granularity than cron. Cron cannot run more than once a minute, while celery can (I have a task run every 90 seconds which checks an email queue to send messages, and another which cleans the online users list).
A cron line has to call a script or a unique command, with an absolute path and user info. Celery calls Python functions; there is nothing to write beyond your own code.
With celery, to deploy to another machine, you generally just have to pull/copy your code, which is generally in one place. Deploying with cron would need more work (you can automate it but...)
I really find Celery better suited than cron for routine cleaning (cache, database) and, in general, for short tasks. Dumping a database is more a job for cron, however, because you don't want to clutter the event queue with overly long tasks.
Not least, Celery is easily distributed across machines.
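A minimal sketch of a sub-minute schedule in celery 3.x / django-celery style settings; the task path is hypothetical:
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'flush-email-queue': {
        'task': 'myapp.tasks.flush_email_queue',
        'schedule': timedelta(seconds=90),  # cron cannot go below one minute
    },
}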
| 0
| 1
| 0
| 0
|
2013-08-12T13:05:00.000
| 2
| 1.2
| true
| 18,187,751
| 0
| 0
| 1
| 2
|
Considering Celery is already a part of the stack to run task queues (i.e. it is not being added just for running crons; that seems overkill, IMHO):
How can its "periodic tasks" feature be beneficial as a replacement for crontab?
Specifically looking for following points.
Major pros/cons over crontab
Use cases where celery is better choice than crontab
Django-specific use case: Celery vs crontab to run Django-based periodic tasks, when Celery has been included in the stack as django-celery for queuing Django tasks.
|
Why would running scheduled tasks with Celery be preferable over crontab?
| 18,451,537
| 4
| 47
| 15,773
| 0
|
python,django,celery,django-celery
|
Celery is indicated any time you need to coordinate jobs across multiple machines, ensure jobs run even as machines are added or dropped from a workgroup, have the ability to set expiration times for jobs, define multi-step jobs with graph-style rather than linear dependency flow, or have a single repository of scheduling logic that operates the same across multiple operating systems and versions.
| 0
| 1
| 0
| 0
|
2013-08-12T13:05:00.000
| 2
| 0.379949
| false
| 18,187,751
| 0
| 0
| 1
| 2
|
Considering Celery is already a part of the stack to run task queues (i.e. it is not being added just for running crons; that seems overkill, IMHO):
How can its "periodic tasks" feature be beneficial as a replacement for crontab?
Specifically looking for following points.
Major pros/cons over crontab
Use cases where celery is better choice than crontab
Django-specific use case: Celery vs crontab to run Django-based periodic tasks, when Celery has been included in the stack as django-celery for queuing Django tasks.
|
Detecting multiple sessions from same user on Google App Engine
| 18,197,132
| 0
| 0
| 84
| 0
|
python,google-app-engine,user-management
|
Yes you can, but you'll have to build the session tracking functionality yourself.
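A minimal sketch of such tracking with ndb; the model and helper names are hypothetical:
import uuid
from google.appengine.ext import ndb

class ActiveSession(ndb.Model):  # keyed by the Google user_id
    token = ndb.StringProperty()

def start_session(user):
    token = uuid.uuid4().hex
    # overwriting the entity displaces any session opened elsewhere
    ActiveSession(id=user.user_id(), token=token).put()
    return token  # store this in the user's session cookie

def is_current(user, token):
    sess = ActiveSession.get_by_id(user.user_id())
    return sess is not None and sess.token == token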
| 0
| 1
| 0
| 0
|
2013-08-12T18:23:00.000
| 1
| 0
| false
| 18,193,967
| 0
| 0
| 1
| 1
|
I am writing an application which uses the Google user API, and anyone with a Google account can log in. I want to prevent multiple users from using the same Google account to log in simultaneously; basically, I would like to allow only one user per account to use my application. As I am running a subscription service, I need to stop users from sharing accounts and logging in simultaneously.
Can I accomplish this somehow in App Engine using Users module? If not, can someone please suggest an alternate mechanism?
I am using Python on App Engine.
|
Python 2.6.6 doesn't work properly
| 18,205,127
| 0
| 0
| 194
| 0
|
python
|
Install readline-devel from yum and then recompile Python. The command-line editing features require this library.
| 0
| 1
| 0
| 0
|
2013-08-13T08:06:00.000
| 2
| 0
| false
| 18,203,781
| 1
| 0
| 0
| 1
|
I am working with RedHat Linux 5.6 (in case that matters).
My team is working with python 2.6.6. I installed it from source (configure, make, make install) from the official Python site. It seems to not work properly:
When I type python in the terminal to enter the Python CLI, for some reason I can't delete what I type (backspace prints character marks to screen)
Modules like psutils are missing (this should be a standard part of Python, no?)
Python 2.4, which was previously installed, works fine.
Any ideas?
|
I want to share my python app as dmg or package?
| 18,209,869
| 0
| 1
| 1,590
| 0
|
python,macos,package,dmg
|
If you create the .dmg, you can set up a background image that tells users to move your application to the /Applications folder. If your application needs no extra setup this is preferred, or a (Mac OS X-created) .zip file containing it.
The package option is better if some additional setup, or scripts checking for Python dependencies, are required.
| 1
| 1
| 0
| 0
|
2013-08-13T12:05:00.000
| 1
| 0
| false
| 18,208,650
| 1
| 0
| 0
| 1
|
I have developed my first .app for mac, written in python and would like to share this .app with some friends.
I have converted the Python scripts via py2app. Then I have one .app and compress it into a .dmg file.
I share this .dmg file with the guys, and for one of them it works fine (he already has Python installed).
The other people can't open the .app file; they get error messages. After an intensive search I got it: they have no Python installed.
Now my question: how can I include a "one click Python installation" in my .dmg file (or as a package?!)
|
Python logrotate options
| 18,210,985
| 0
| 0
| 498
| 0
|
python,logging,python-2.4,logrotate
|
I would advise that you copy the source of WatchedFileHandler from a later version and adapt it, if needed, so that it works on 2.4.
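Alternatively, a minimal sketch of the HUP approach against the stock FileHandler (logrotate would send the HUP from a postrotate script; the log path is hypothetical):
import logging
import signal

handler = logging.FileHandler('/var/log/myapp.log')
logging.getLogger().addHandler(handler)

def reopen_log(signum, frame):
    handler.acquire()  # serialize against in-flight emits
    try:
        handler.stream.close()
        handler.stream = open(handler.baseFilename, handler.mode)
    finally:
        handler.release()

signal.signal(signal.SIGHUP, reopen_log)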
| 0
| 1
| 0
| 0
|
2013-08-13T12:36:00.000
| 2
| 0
| false
| 18,209,264
| 0
| 0
| 0
| 2
|
What is the correct process for having logrotate manage files written to by the Python logging module? Usually I would use a WatchedFileHandler, but I need to target 2.4, which does not have this class. Is there a function in the logging module which I can attach to a HUP handler, perhaps, to have it reopen the logfile?
|
Python logrotate options
| 20,074,842
| 0
| 0
| 498
| 0
|
python,logging,python-2.4,logrotate
|
The logrotate utility needs to be told which files to rotate, and with what options. You might want to override the standard WatchedFileHandler class to create the required entries in /etc/logrotate.d as part of your module load sequence, before logging begins.
| 0
| 1
| 0
| 0
|
2013-08-13T12:36:00.000
| 2
| 0
| false
| 18,209,264
| 0
| 0
| 0
| 2
|
What is the correct process for having logrotate manage files written to by the Python logging module? Usually I would use a WatchedFileHandler, but I need to target 2.4, which does not have this class. Is there a function in the logging module which I can attach to a HUP handler, perhaps, to have it reopen the logfile?
|
Go Pro Hero 3 - Streaming video over wifi
| 20,150,294
| 2
| 2
| 9,059
| 0
|
python,video-streaming,wifi,gopro
|
I've been working on creating a GoPro API for Node.js recently and found the device very glitchy too. It's much more stable after installing the latest GoPro firmware (3.0.0).
As for streaming, I couldn't get around the wifi latency and went for a record-and-copy approach.
| 0
| 1
| 1
| 0
|
2013-08-14T09:22:00.000
| 1
| 0.379949
| false
| 18,227,789
| 0
| 0
| 0
| 1
|
I recently acquired a GoPro Hero 3. It's working fine, but when I attempt to stream live video/audio it glitches every now and then.
Initially I just used VLC to open the m3u8 file; however, when that was glitchy I downloaded the Android app and attempted to stream over that.
It was a little better on the app.
I used Wireshark, and I think the cause is that it's simply not transferring/buffering fast enough. I tried just getting everything with wget in a loop; it got through 3 loops before it either caught up (possible, but I don't think so... though I may double-check that) or fell behind and hence timed out/hung.
There is also delay in the image, but I can live with that.
I have tried lowering the resolution/frame rate, but I'm not sure it is actually doing anything, as I can't tell any difference. I think it may just be the settings for recording on the GoPro. Either way, it didn't work.
Essentially I am looking for any possible methods of removing this 'glitchiness'.
My current plan is to attempt writing something in Python to get the files over UDP (no TCP overhead).
I'll just add a few more details/symptoms:
The GoPro is using the Apple m3u8 streaming format.
At any one time there are 16 .ts files in the folder (26 KB each).
These get overwritten in a loop (circular buffer).
When I stream on VLC:
Approx 1s delay; streams fine for ~0.5s, stops for a little less than that, then repeats.
What I think is happening is that the file it's trying to transfer gets overwritten, which causes it to time out.
Over the Android app:
Less delay and shorter 'timeouts', but still there.
I want to write a Python script to try to get a continuous image. The files are small enough that they should fit in a single UDP packet (I think... 65KB-ish, right?).
Is there anything I could change in terms of wifi settings on my laptop to improve it too?
I.e., somehow dedicate it to that?
Thanks,
Stephen
|
twistd using usage.options in a *.tac file
| 18,245,245
| 4
| 4
| 511
| 0
|
python,command,arguments,twisted
|
A tac file is configuration. It doesn't accept configuration.
If you want to pass command line arguments, you do need to write a plugin.
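A minimal sketch of the Options half of such a plugin (the file would live under twisted/plugins/; the option names are hypothetical):
from twisted.python import usage

class Options(usage.Options):
    optParameters = [
        ['config', 'c', 'config.yaml', 'Path to the YAML configuration file'],
    ]

# a ServiceMaker in the same file receives the parsed Options in its
# makeService(options) call and builds the application's services there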
| 0
| 1
| 0
| 0
|
2013-08-14T23:31:00.000
| 1
| 1.2
| true
| 18,244,050
| 0
| 0
| 0
| 1
|
I'm writing a server with Twisted that is based on a *.tac file that starts the services and the application. I'd like to accept one additional command line argument that specifies a YAML configuration file. I've tried usage.Options by building a class that inherits from it, but it chokes because the additional twistd command line arguments (-y, for example) are not specified in my Options class.
How can I get one additional argument and still pass the rest to twistd? Do I have to do this using the plugin system?
Thanks in advance for your help!
Doug
|
Popen new process group on linux
| 18,255,933
| 2
| 3
| 3,257
| 0
|
python,linux
|
bash does not handle signals while waiting for your foreground child process to complete. This is why sending it SIGINT does not do anything. This behaviour has nothing to do with process groups.
There are a couple of options to let your child process receive your SIGINT:
When spawning a new process with shell=True, try prepending exec to the front of your command line, so that bash gets replaced with your child process.
When spawning a new process with shell=True, append & wait %- to the command line. This will cause bash to react to signals while waiting for your child process to complete, but it won't forward the signal to your child process.
Use shell=False and specify full paths to your child executables.
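For the original goal (a new process group from Python on POSIX), a minimal sketch; the command line is hypothetical:
import os
import signal
import subprocess

proc = subprocess.Popen('exec my_server --port 8000', shell=True,
                        preexec_fn=os.setpgrp)  # child leads a new group
# later, signal the whole group instead of just bash:
os.killpg(proc.pid, signal.SIGINT)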
| 0
| 1
| 0
| 1
|
2013-08-15T15:10:00.000
| 1
| 0.379949
| false
| 18,255,730
| 0
| 0
| 0
| 1
|
I am spawning some processes with Popen (Python 2.7, with shell=True) and then sending SIGINT to them. It appears that the process group leader is actually the Python process, so sending SIGINT to the PID returned by Popen, which is the PID of bash, doesn't do anything.
So, is there a way to make Popen create a new process group? I can see that there is a flag called subprocess.CREATE_NEW_PROCESS_GROUP, but it is only for Windows.
I'm actually upgrading some legacy scripts which were running on Python 2.6, and it seems that with Python 2.6 the default behavior is what I want (i.e., a new process group when I call Popen).
|
App Engine deserializing records in python: is it really this slow?
| 18,281,029
| 7
| 6
| 264
| 0
|
python,google-app-engine,app-engine-ndb
|
Short answer: yes.
I find deserialization in Python to be very slow, especially where repeated properties are involved. Apparently, GAE-Python deserialization creates boatloads of objects. It's known to be inefficient, but also apparently, no one wants to touch it because it's so far down the stack.
It's unfortunate. We run F4 Front Ends most of the time due to this overhead (i.e., faster CPU == faster deserialization).
| 0
| 1
| 0
| 0
|
2013-08-15T18:58:00.000
| 1
| 1.2
| true
| 18,259,697
| 0
| 0
| 1
| 1
|
In profiling my python2.7 App Engine app, I find that it's taking an average of 7ms per record to deserialize records fetched from ndb into python objects. (In pb_to_query_result, pb_to_entity and their descendants—this does not include the RPC time to query the database and receive the raw records.)
Is this expected? My model has six properties, one of which is a LocalStructuredProperty with 15 properties, which also includes a repeated StructuredProperty with four properties, but the average object should have less than 30 properties all told, I think.
Is it expected to be this slow? I want to fetch a couple of thousand records to do some simple aggregate analysis, and while I can tolerate a certain amount of latency, over 10 seconds is a problem. Is there anything I can do to restructure my models or my schema to make this more viable? (Other than the obvious solution of pre-calculating my aggregate analysis on a regular basis and caching the results.)
If it's unusual for it to be this slow, it would be helpful to know that so I can go and look for what I might be doing that impairs it.
|
Terminating python script through emacs
| 18,262,847
| 3
| 0
| 230
| 0
|
python,emacs,terminate
|
Try the keyboard interrupt that comint sends to the interpreter via C-c C-c.
I generally hold down C-c until the prompt returns.
| 0
| 1
| 0
| 0
|
2013-08-15T22:12:00.000
| 1
| 1.2
| true
| 18,262,760
| 0
| 0
| 0
| 1
|
I am running a python interpreter through emacs. I often find myself running python scripts and wishing I could terminate them without killing the entire buffer. That is because I do not want to import libraries all over again...
Is there a way to tell python to stop executing a script and give me a prompt?
|
raspbian python virtualenv not working on thumb drive
| 18,550,607
| 0
| 0
| 121
| 0
|
python,virtualenv,raspberry-pi
|
I needed to format my drive with a Linux filesystem partition, not a FAT partition; FAT doesn't support the symlinks and permission bits that virtualenv relies on.
| 0
| 1
| 0
| 1
|
2013-08-16T20:58:00.000
| 1
| 0
| false
| 18,282,042
| 0
| 0
| 0
| 1
|
In raspian I can make virtual envs in my home directory but when I try to make a virtual env in a folder on my thumb drive it says the os prevents it ("operation not permitted"). Is this a known issue?
|
Python: running process in the background with ability to kill them
| 18,293,596
| 1
| 0
| 124
| 0
|
python,subprocess,gevent
|
It depends on your application logic. If you just feed the data into the database without any CPU-intensive tasks, then most of your application time will be spent on IO and threads would be sufficient. If you are doing some CPU-intensive stuff then you should use the multiprocessing module so you can use all your CPU cores, which threads won't allow you to do because of the GIL.
Using subprocess would just add the extra task of implementing things that are already implemented in the multiprocessing module, so I would skip that (why reinvent the wheel). And gevent is just an event loop; I don't see how that would be better than using threads. But if I'm wrong please correct me, I've never used gevent.
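If the work really is IO-bound, a minimal threading sketch with a per-feed stop flag (load_feed and feeds_to_load are assumptions, not from the question):
import threading

def feed_worker(feed_url, stop_event):
    while not stop_event.is_set():
        load_feed(feed_url)          # assumed IO-bound fetch, 20-30s each
        stop_event.wait(timeout=5)   # pause between refreshes, wakes early on stop

stops = {}
for url in feeds_to_load:            # the list read from MySQL each hour
    stops[url] = threading.Event()
    threading.Thread(target=feed_worker, args=(url, stops[url])).start()

# when the database says a feed is no longer needed:
# stops[url].set()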
| 0
| 1
| 0
| 0
|
2013-08-17T21:05:00.000
| 1
| 0.197375
| false
| 18,293,345
| 0
| 0
| 1
| 1
|
I need to constantly load a number of data feeds. The data feeds can take 20-30 seconds to load. I know what feeds to load by checking a MySQL database every hour.
I could have up to 20 feeds to load at the same time. It's important that none of the feeds block each other as I need to refresh them constantly.
When I no longer need to load the feeds the database that I'm reading gets updated and I thus need to stop loading the feed which I would like to do from my main program so I don't need multiple connections to the db.
I'm aware that I could probably do this using threading, subprocess or gevent. I wanted to ask which of these would be best.
Thanks
|
Is celery's apply_async thread or process?
| 18,319,581
| 4
| 7
| 3,889
| 0
|
python,celery
|
Can someone tell me whether Celery executes a task in a thread or in a
separate child process?
Neither; the task will be executed in a separate process, possibly on a different machine. It is not a child process of the thread where you call delay. The -c and -P worker options control how the worker process manages its own pool. The worker processes get tasks through a message service, which is also completely independent.
How would you compare celery's async with Twisted's reactor model? Is
celery using reactor model after all?
Twisted is an event queue. It is asynchronous but it's not designed for parallel processing.
| 0
| 1
| 0
| 0
|
2013-08-19T06:09:00.000
| 2
| 0.379949
| false
| 18,307,366
| 1
| 0
| 0
| 1
|
Can someone tell me whether Celery executes a task in a thread or in a separate child process? The documentation doesn't seem to explain it (I read it like 3 times). If it is a thread, how does it get past the GIL (in particular, who notifies an event and how)?
How would you compare celery's async with Twisted's reactor model? Is celery using reactor model after all?
Thanks,
|
Named pipe race condition?
| 18,320,287
| 2
| 3
| 1,407
| 0
|
python,c,named-pipes,fifo,mkfifo
|
A pipe is a stream:
the number of write() calls on the sender's side does not need to correspond to the number of read()s on the receiver's side.
Try to implement some sort of framing protocol.
If you're sending plain text you could, for example, add a newline after each message and make the receiver read until it finds one.
Alternatively you could prefix each message with a fixed-length number giving the amount of data to come; the receiver then parses this format.
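A minimal sketch of the newline-framed reader on the Python side (the FIFO path and handle() are placeholders):
fifo = open("/tmp/myfifo", "r")
while True:
    message = fifo.readline()        # blocks until a full '\n'-terminated line arrives
    if not message:                  # empty string: the writer closed the pipe
        break
    handle(message.rstrip("\n"))     # handle() stands for your parse-and-store step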
| 0
| 1
| 0
| 1
|
2013-08-19T18:03:00.000
| 1
| 0.379949
| false
| 18,320,199
| 0
| 0
| 0
| 1
|
I have two processes one C and one python. The C process spends its time passing data to a named pipe which the python process then reads. Should be pretty simple and it works fine when I'm passing data (currently a time stamp such as "Mon Aug 19 18:30:59 2013") once per second.
Problems occur when I take out the sleep(1); command in the C process. When there's no one second delay the communication quickly gets screwed up. The python process will read more than one message or report that it has read data even though its buffer is empty. At this point the C process usually bombs.
Before I go posting any sample code I'm wondering if I need to implement some sort of synchronisation on both sides. Like maybe telling the C process not to write to the fifo if it's not empty?
The C process opens the named pipe write only and the python process opens as read only.
Both processes are intended to be run as loops. The C process continually reads data as it comes in over a USB port and the python process takes each "message" and parses it before sending it to a SQL Db.
If I'm going to be looking at up to 50 messages per second, will named pipes be able to handle that level of transaction rate? The size of each transaction is relatively small (20 bytes or so) but the frequency makes me wonder if I should be looking at some other form of inter-process communication such as shared memory?
Any advice appreciated. I can post code if necessary but at the moment I'm just wondering if I should be syncing between the two processes somehow.
Thanks!
|
dump from OpenERP Python is harmless?
| 18,322,809
| 2
| 0
| 354
| 0
|
python,openerp
|
This just means that the underlying TCP connection was abruptly dropped. In this case it means that you are trying to write data to a socket that has already been closed on the other side (by the client). It is harmless; it means that while your server was sending an HTTP response to the client (browser), the client aborted the request (closed the browser, for example).
| 0
| 1
| 1
| 0
|
2013-08-19T20:36:00.000
| 1
| 1.2
| true
| 18,322,667
| 0
| 0
| 0
| 1
|
I am getting this dump occasionally from OpenERP, but it seems harmless. The code serves HTTP; is this dump what happens when a connection is dropped?
Exception happened during processing of request from ('10.100.2.71', 42799)
Traceback (most recent call last):
File "/usr/lib/python2.7/SocketServer.py", line 582, in process_request_thread
self.finish_request(request, client_address)
File "/usr/lib/python2.7/SocketServer.py", line 323, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/lib/python2.7/SocketServer.py", line 640, in __init__
self.finish()
File "/usr/lib/python2.7/SocketServer.py", line 693, in finish
self.wfile.flush()
File "/usr/lib/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
|
Celery - how to susbstitute 'None' in logs
| 18,336,614
| 1
| 0
| 281
| 0
|
python,django,celery
|
Are you defining your tasks with ignore_result=True (or did you set CELERY_IGNORE_RESULT to True)? If you did, you should try disabling it.
| 0
| 1
| 0
| 0
|
2013-08-20T12:19:00.000
| 1
| 1.2
| true
| 18,334,877
| 0
| 0
| 1
| 1
|
In Celery's logs there are
Task blabla.bla.bla[arguments] succeeded in 0.757446050644s: None
How can I replace this None with something more meaningful? I tried to set a return value in the tasks but no luck.
|
Calling c++ function, from Python script, on a Mac OSX
| 18,342,743
| 1
| 2
| 3,206
| 0
|
c++,python,c,opencv,compilation
|
How to compile my project so it can be used from python? I've read
that I should create a *.so file but how to do so?
That depends on your compiler. For example, with g++:
g++ -shared -o myLib.so myObject.o
Should it work like a lib, so python calls some specific functions,
chosen in python level?
Yes, it is, in my opinion. It seems to be the "obvious" way, since it's great for modularity and the evolution of the C++ code.
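Once the .so is built (compile the objects with -fPIC, and declare the exported functions extern "C" to avoid name mangling), one way to call it from Python is ctypes; the function and argument here are placeholders:
import ctypes

lib = ctypes.CDLL("./myLib.so")
lib.process_image.argtypes = [ctypes.c_char_p]
lib.process_image.restype = ctypes.c_int
result = lib.process_image(b"/path/to/image.png")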
| 1
| 1
| 0
| 1
|
2013-08-20T18:33:00.000
| 5
| 0.039979
| false
| 18,342,535
| 0
| 0
| 0
| 1
|
I have C++ code on my Mac that uses non-standard libraries (in my case, OpenCV) and I need to compile it so it can be called from other computers (at least other Macs), run from Python. So I have 3 fundamental questions:
How to compile my project so it can be used from python? I've read
that I should create a *.so file but how to do so?
Should it work like a lib, so python calls some specific functions,
chosen in python level?
Or should it contain a main function that is executed from
command line?
Any ideas on how to do so? PS: I'm using the eclipse IDE to compile my c++ project.
Cheers,
|
How to install python package in a specific directory
| 18,362,592
| 3
| 1
| 2,148
| 0
|
python,google-app-engine,twitter,package,twython
|
If you put the module files in a directory, for example external_modules/, and then use sys.path.insert(0, 'external_modules') you can include the module as it would be an internal module.
You would have to call sys.path.insert before the first import of the module.
Example: if you placed a "module.py" in external_modules/ and want to include it with import module, place the sys.path.insert call before that import.
The sys.path.insert() call is app-wide, so you only have to make it once. It is best placed in the main file, before any other imports (except import sys, of course).
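Concretely, in the app's main file (the directory name is assumed):
import sys
sys.path.insert(0, 'external_modules')  # must run before the import below
import twython                          # now resolved from external_modules/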
| 0
| 1
| 0
| 1
|
2013-08-21T13:46:00.000
| 4
| 1.2
| true
| 18,359,184
| 1
| 0
| 0
| 1
|
I'm developing a twitter app on google appengine - for that I want to use the Twython library. I tried installing it using pip - but it either installs it in the main python dir, or doesn't import all dependencies.
I can simply copy all the files of Twython to the appengine root dir, and also manually import all the dependency libraries, but that seems awfully wrong. How do I install a package in a specific folder including all its dependencies?
Thanks
|
Python cron job file access
| 18,375,496
| 2
| 2
| 390
| 0
|
python,linux,file-io,cron
|
Use absolute paths in your script when running it from crontab; cron does not start the job in your script's directory, so relative paths resolve elsewhere.
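A common pattern is to build paths relative to the script itself, so they resolve the same way no matter where cron starts the job (the file name is illustrative):
import os

here = os.path.dirname(os.path.abspath(__file__))
data_path = os.path.join(here, "data.txt")  # always next to the script
with open(data_path, "a") as f:
    f.write("hello\n")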
| 0
| 1
| 0
| 1
|
2013-08-22T08:31:00.000
| 2
| 1.2
| true
| 18,375,308
| 0
| 0
| 0
| 1
|
I have a tiny Python script that needs to read/write to a file. It works when I run it from the command line (since I am root, it will) , but when the cron job runs it cannot access the file.
The file is in the same folder as the script and is (should) be created from the script.
I'm not sure if this is really a programming question...
|
Google App engine change parent of entity that is not stored
| 18,377,368
| 1
| 0
| 35
| 0
|
python,google-app-engine
|
Create a new model instance with the data from the existing one,
or don't create the instance until you have all the facts.
| 0
| 1
| 0
| 0
|
2013-08-22T09:57:00.000
| 2
| 0.099668
| false
| 18,377,222
| 0
| 0
| 1
| 2
|
I know we can't change the parent of an entity that is stored, but can we change the parent of an entity that is not stored? For example, I am declaring a model as
my_model = MyModel(parent = ParentModel1.key)
but after some checks I may have to change the parent of my_model (I have not run my_model.put()) to ParentModel2. How can I do this?
|
Google App engine change parent of entity that is not stored
| 18,377,371
| 1
| 0
| 35
| 0
|
python,google-app-engine
|
You still can't do it. You should probably delay instantiation of the MyModel object until you know its parent. Perhaps you could collect the attributes in a dictionary, then when it comes to instantiation you can do my_instance = MyModel(parent=parent_instance, **kwargs).
| 0
| 1
| 0
| 0
|
2013-08-22T09:57:00.000
| 2
| 1.2
| true
| 18,377,222
| 0
| 0
| 1
| 2
|
I know we can't change the parent of an entity that is stored, but can we change the parent of an entity that is not stored? For example, I am declaring a model as
my_model = MyModel(parent = ParentModel1.key)
but after some checks I may have to change the parent of my_model (I have not run my_model.put()) to ParentModel2. How can I do this?
|
I need a script that searches files for SSI and replaces the include with the actual HTML
| 22,432,278
| 0
| 1
| 417
| 0
|
python,html,ruby,bash,ssi
|
On your dev machine, use your browser to display the web page, and then save the 'result' with an appropriate file name in an output directory.
Thus, if you had mainfile.html which executed various time/last-mod directives and which included fileA.inc and fileB.inc at appropriate places, the resulting display (and savable HTML file) will comprise all four/five components.
=dn
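If you'd rather script it than save from the browser, here is a minimal Python sketch of the inlining step; it assumes each include sits on its own line (as the question's edit states) and that include paths resolve from the working directory:
import re

INCLUDE = re.compile(r'<!--#include file="(.+?)"-->')

def inline(path):
    out = []
    for line in open(path):
        m = INCLUDE.search(line)
        # recurse, so included files may themselves contain includes
        out.append(inline(m.group(1)) if m else line)
    return "".join(out)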
| 0
| 1
| 0
| 0
|
2013-08-22T10:11:00.000
| 2
| 0
| false
| 18,377,549
| 0
| 0
| 1
| 1
|
I am developing the front end code of a website which I will be handing over to some developers for them to integrate it with the backend. The site will be written in .NET but I'm developing the front end code with static HTML files (and a bit of javascript).
Because the header, footer and a few other elements are the same across all pages I am using Server Side Includes in my development environment. However, every time I hand the code to the developers I need to manually replace each SSI with the actual HTML by copying and pasting. This is starting to get tedious.
I have tried writing a bash script to do this but my bash knowledge is extremely limited so I have failed miserably (I'm not really sure where to start).
What I tried to achieve was:
Loop through all the HTML files in my project
Look for an include ( <!--#include file="myfile.html"--> )
If one is found, replace the include with the HTML from the file specified in the include
Keep doing this until there are no more includes and move on to the next file
Does anyone know of a script that can do this, or can point me in the right direction for achieving this myself? I'm happy for it to be in any language as long as I can run it on my Mac.
Thanks.
EDIT
It is safe to assume that all instances of <!--#include file="myfile.html"--> are on their own line.
|
PyDev-Eclipse Python not configured
| 18,382,626
| 4
| 2
| 7,044
| 0
|
python,windows,eclipse
|
Don't know if it is global or local (project-related).
Globally you can set the interpreter via the Window menu → Preferences → PyDev → Interpreter - Python.
Per project, this can be done via right-click on the project → Properties → PyDev - Interpreter/Grammar.
Have a look at both and make sure that both are set to correct values.
| 0
| 1
| 0
| 0
|
2013-08-22T13:46:00.000
| 2
| 0.379949
| false
| 18,382,287
| 1
| 0
| 0
| 2
|
After forced restart due to frozen laptop (windows 7 pro, 32 bit) Eclipse is providing the following message:
It seems that the Python interpreter is not currently configured.
How do you want to proceed?"
Clicking the Auto config option and then ok I get the Python Interpreters window with the right name (Python), Location (c:\Program Files\Python27\python.exe) and system libs.
It all looks ok but clicking OK or Apply doesn't seem to do anything and the whole thing starts from the beginning (the message about Python not currently configured...).
I've checked my .pydevproject permissions and I have full control over the file.
I also have dropbox sync-ing the project files but it has been ok for a while now.
What is wrong, what should I check/do?
|
PyDev-Eclipse Python not configured
| 18,395,343
| 2
| 2
| 7,044
| 0
|
python,windows,eclipse
|
Checking the log files under workspace/.metadata shows an exception with the following message:
!MESSAGE For input string: "0 (xxxx xxxxx's conflicted copy 2013-08-16)"
These are files created by Dropbox due to conflicts. Eclipse was not expecting these sorts of files and raised an exception when trying to read them.
Deleting all such files restored Eclipse's configuration.
| 0
| 1
| 0
| 0
|
2013-08-22T13:46:00.000
| 2
| 1.2
| true
| 18,382,287
| 1
| 0
| 0
| 2
|
After forced restart due to frozen laptop (windows 7 pro, 32 bit) Eclipse is providing the following message:
It seems that the Python interpreter is not currently configured.
How do you want to proceed?"
Clicking the Auto config option and then ok I get the Python Interpreters window with the right name (Python), Location (c:\Program Files\Python27\python.exe) and system libs.
It all looks ok but clicking OK or Apply doesn't seem to do anything and the whole thing starts from the beginning (the message about Python not currently configured...).
I've checked my .pydevproject permissions and I have full control over the file.
I also have dropbox sync-ing the project files but it has been ok for a while now.
What is wrong, what should I check/do?
|
Running Python script from Ubuntu terminal NameError
| 18,387,374
| 2
| 0
| 546
| 0
|
python,ubuntu
|
The PyTables on my Ubuntu system is 2.3.1; open_file is a PyTables 3.x name (the 2.x API called it openFile). I'm not sure where you can pick up the latest package, but you could always install the latest with pip.
| 0
| 1
| 0
| 1
|
2013-08-22T17:30:00.000
| 1
| 1.2
| true
| 18,387,093
| 0
| 0
| 0
| 1
|
I have recently moved from Python on Windows to Python on Ubuntu. In Windows I could just hit F5 in the IDLE editor to run the script. However, in Ubuntu I have to run the script by typing python /path/to/file.py to execute.
The thing is it seems the imports within the file are not working when I run from command line.
It gives me the error:
NameError: global name 'open_file' is not defined
This is the open_file method of Pytables. In the python file I have:
from tables import *
I have made the file executable and all.
Appreciate your help.
|
Is it possible to use both cheaper and emperor with uWSGI
| 18,396,906
| 2
| 1
| 304
| 0
|
python,django,wsgi,uwsgi
|
There are no problems, as each "vassal" can be configured with its own cheaper mode. In this way you can offer QoS per customer.
| 0
| 1
| 0
| 0
|
2013-08-22T21:48:00.000
| 1
| 0.379949
| false
| 18,391,405
| 0
| 0
| 1
| 1
|
I need to host multiple Django sites (quite a lot of sites actually) and currently I am using Apache+mod_wsgi but I want to switch to uWSGI.
One of the nice features of uWSGI is cheaper mode that spawns processes as needed and shuts them down as needed as well. On the other hand, it seems that the way to make it run multiple sites is to use emperor mode.
Can emperor mode be used together with cheaper subsystem? Are there any quirks/problems I should be aware of? Has anyone ever done this?
|
Celery flower's Broker tab is blank
| 25,943,620
| 4
| 9
| 6,554
| 0
|
python,redis,celery
|
For AMQP this is an example.
/usr/bin/celery -A app_name --broker=amqp://user:pw@host//vhost --broker_api=http://user:pw@host:host_port/api flower
The broker_api is the RabbitMQ management web UI endpoint with /api appended.
| 0
| 1
| 0
| 0
|
2013-08-22T22:02:00.000
| 3
| 0.26052
| false
| 18,391,563
| 0
| 0
| 0
| 1
|
I'm running celery and celery flower with redis as a broker. Everything boots up correctly, the worker can find jobs from redis, and the celery worker completes the jobs successfully.
The issue I'm having is the Broker tab in the celery flower web UI doesn't show any of the information from Redis. I know the Redis url is correct, because it's the same URL that celeryd is using. I also know that the celery queue has information in it, because I can manually confirm that via redis-cli.
I'm wondering if celery flower is trying to monitor a different queue in the Broker tab? I don't see any settings in the flower documentation to override or confirm. I'm happy to provide additional information upon request, but I'm not certain what is relevant.
|
Connecting iOS app to Windows/Linux apps
| 18,773,687
| 0
| 4
| 1,265
| 0
|
python,ios,windows,web,bonjour
|
I would definitely suggest a web app. The answers to your questions are given below:
How would I receive and send notifications over a local network?
Use a REST-based web service to communicate with the server.
You have to use polling to receive data :-(
How could I connect to the server using NSURLConnection if it does not have a static IP?
If possible, configure a domain name in your network which points to the server IP (configure the local DHCP server to always hand the server the same IP, based on its MAC address!)
Have an IP range and, when the app starts, try to reach a specific URL and check if it responds.
Or ask the user to enter the server IP every time the app starts!
| 1
| 1
| 0
| 0
|
2013-08-23T14:46:00.000
| 2
| 0
| false
| 18,405,726
| 0
| 0
| 1
| 1
|
Background:
I am just about to start development on an mobile and desktop app. They will both be connected to a local wifi network (no internet connection) and will need to communicate with one another. At the outset we are targeting iOS and Windows as the two platforms, with the intention of adding Linux, OSX, and Android support in that order. The desktop app will largely be a database server/notification center for receiving updates from the iOS apps and sending out the data to other iOS apps. There may be a front end to the desktop app, but we could also incorporate it into the iOS app if needed.
For the moment we just want the iOS app to automatically detect when it is on the same network as the server and then display the data that is sent by that server (bonjour like).
As far as I see it there are two paths we could take to implement this
Create a completely native app for each platform (Windows, Linux, OSX).
Pro: We like the ideas of having native apps for performance and ease of install.
Con: I know absolutely nothing about Windows or Linux development.
Create an app that is built using web technologies (probably python) and create an easy to use installer that will create a local server out of the desktop machine which the mobile apps can communicate with.
Pro: Most of the development would be cross-platform and the installer should be easy enough to port.
Con: If we do want to add a front-end to the server app it will not be platform native and would be using a css+html+javascript GUI.
Question:
My question is how would implement the connection between the iOS app and server app in each circumstance.
How would I receive and send notifications over a local network.
How could I connect to the server using NSURLConnection if it does not have a static ip?
I hope this is clear. If not please ask and I will clarify.
Update 09/06/2013
Hopefully this will clear things up. I need to have a desktop app that will manage a database, this app will connect to iOS devices on a local wireless network that is not connected to the internet. I can do this with either the http protocol (preferably with a flask app) or by using a direct socket connection between the apps and the server. My question is which of the above two choices is best? My preference would be for a web-based app using Python+Flask, but I would have no idea how to connect the iOS app to a flask app running on a local network with out a static ip. Any advice on this would be appreciated.
|
Google App Engine, Change which python version
| 18,955,756
| 1
| 2
| 1,807
| 0
|
python,google-app-engine,path,google-cloud-storage,sys.path
|
In the GAE Launcher, change the Python path via the Preferences settings; set Python Path to match your Python 2.7 path.
| 0
| 1
| 0
| 0
|
2013-08-23T16:05:00.000
| 1
| 0.197375
| false
| 18,407,249
| 0
| 0
| 1
| 1
|
I'm trying to use the GCS client library with my app engine app and I ran into this -
"In order to use the client library in your app, put the /src/cloudstorage directory in your sys.path so Python can find it."
First, does this mean I need to move the directory into my sys.path OR does it need to add the ~/src/cloudstorage/ to my PATH environment variable?
Second, when I print sys.version and sys.path from the App Engine Interactive Console, I see a Python Version of 2.7.2, but when I print from my Terminal (on a Mac), I get the Python I want to use and installed via Homebrew - 2.7.5. The sys.path in the Console shows all App Engine paths and the default Python installation - /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7
On my terminal - /usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/
I need help understanding how to change this.
** UPDATE **
Okay, I figured out part of this answer. "In order to use the client library in your app, put the /src/cloudstorage directory in your sys.path so Python can find it." means moving the actual directory to the App Engine project directory.
The second piece still remains - why is my Mac PATH environment variable not used in APP Engine. How can I change the default version of Python used by the App Engine (from 2.7.2 to 2.7.5)? This is not related to changing the version in the YAML file.
|
Using adb sendevent in python
| 18,408,154
| 0
| 0
| 1,330
| 0
|
android,python,adb
|
sendevent takes 4 parameters: the input device plus the event type, code and value.
The args for Popen should be ['adb', 'shell', 'sendevent /dev/input/eventX type code value']; do not split the remote command.
Timings are important for sendevent sequences, and each adb shell call is itself kind of expensive, so using a shell script on the device works better.
Pay attention to the newline characters in your shell scripts; make sure they are Unix style (a single \n instead of \r\n).
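Putting points 1 and 2 together, a minimal sketch (the device path and event numbers are placeholders):
import subprocess

# keep the remote command as ONE argument; adb hands it to the device shell intact
subprocess.call(['adb', 'shell', 'sendevent /dev/input/event2 3 57 0'])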
| 0
| 1
| 0
| 0
|
2013-08-23T16:17:00.000
| 1
| 0
| false
| 18,407,470
| 0
| 0
| 0
| 1
|
I am running into a strange issue, running adb shell sendevent x x x commands from commandline works fine, but when I use any of the following:
subprocess.Popen(['adb', 'shell', 'sendevent', 'x', 'x','x'])
subprocess.Popen('adb shell sendevent x x x', shell=True)
subprocess.call(['adb', 'shell', 'sendevent', 'x', 'x','x'])
They all fail - the simulated touch even that works in a shell script does not work properly when called through python. Furthermore I tried adb push the shell script to the device, and using adb shell /system/sh /sdcard/script.sh I was able to run it successfully, but when I try to run that commandline through python, the script fails.
What's even stranger, is that he script runs, but for example, it does not seem to execute the command sleep 1 half way through the script, echo commands work, sendevent commands don't seem to work.
Doesn't even seem possible, but there it is. How do I run a set of adb shell sendevent x x x commands through python?
|
How to make Mac OS use the python installed by Homebrew
| 47,548,607
| 0
| 35
| 78,233
| 0
|
python,macos,python-2.7
|
From $ brew info python:
This formula installs a python2 executable to /usr/local/bin.
If you wish to have this formula's python executable in your PATH then add
the following to ~/.bash_profile:
export PATH="/usr/local/opt/python/libexec/bin:$PATH"
Then confirm your python executable corresponds to the correct installation:
$ which python or
$ python --version
| 0
| 1
| 0
| 0
|
2013-08-24T14:16:00.000
| 6
| 0
| false
| 18,419,500
| 0
| 0
| 0
| 1
|
I have searched online for a while for this question, and what I have done so far is
installed python32 in homebrew
changed my .bash_profile and added the following line to it:
export PATH=/usr/local/bin:/usr/local/sbin:~/bin:$PATH
but when I close the terminal and start again, I type 'which python', it still prints:
/usr/bin/python
and type 'python --version' still got:
Python 2.7.2
I also tried the following instruction:
brew link --overwrite python
or try to remove python installed by homebrew by running this instruction:
brew remove python
but both of the above two instructions lead to this error:
Error: No such keg: /usr/local/Cellar/python
can anybody help, thanks
|
Kill python interpreter in linux from the terminal
| 33,800,400
| 10
| 71
| 134,684
| 0
|
python,linux,terminal,command
|
pgrep -f <your process name> | xargs kill -9
This will kill your process.
In my case it is
pgrep -f python | xargs kill -9
| 0
| 1
| 0
| 0
|
2013-08-25T12:01:00.000
| 9
| 1
| false
| 18,428,750
| 1
| 0
| 0
| 3
|
I want to kill the Python interpreter; the intention is that all the Python files running at this moment will stop (without any information about these files).
Obviously the processes should be closed.
Any idea such as deleting files in Python or destroying the interpreter is OK :D (I am working with a virtual machine).
I need it from the terminal because I write C code and I use Linux commands...
Hope for help
|
Kill python interpreter in linux from the terminal
| 31,704,546
| 6
| 71
| 134,684
| 0
|
python,linux,terminal,command
|
pgrep -f yourAppFile.py | xargs kill -9
pgrep returns the PIDs matching the specific file, so only that application is killed.
| 0
| 1
| 0
| 0
|
2013-08-25T12:01:00.000
| 9
| 1
| false
| 18,428,750
| 1
| 0
| 0
| 3
|
I want to kill the Python interpreter; the intention is that all the Python files running at this moment will stop (without any information about these files).
Obviously the processes should be closed.
Any idea such as deleting files in Python or destroying the interpreter is OK :D (I am working with a virtual machine).
I need it from the terminal because I write C code and I use Linux commands...
Hope for help
|
Kill python interpreter in linux from the terminal
| 66,626,681
| 0
| 71
| 134,684
| 0
|
python,linux,terminal,command
|
To kill a Python script on Ubuntu 20.04.2, instead of Ctrl+C try pressing
Ctrl+D (note that this sends end-of-file, which exits an interactive interpreter rather than interrupting a running script)
| 0
| 1
| 0
| 0
|
2013-08-25T12:01:00.000
| 9
| 0
| false
| 18,428,750
| 1
| 0
| 0
| 3
|
I want to kill the Python interpreter; the intention is that all the Python files running at this moment will stop (without any information about these files).
Obviously the processes should be closed.
Any idea such as deleting files in Python or destroying the interpreter is OK :D (I am working with a virtual machine).
I need it from the terminal because I write C code and I use Linux commands...
Hope for help
|
Does Google App Engine's git Push-to-Deploy also update backends?
| 18,455,561
| 2
| 1
| 157
| 0
|
git,google-app-engine,python-2.7
|
No -- It doesn't update backends.
(My cron jobs ran last night and failed because they were running old code.)
Nothin' like good ol' appcfg.py update ./ --backends
| 0
| 1
| 0
| 0
|
2013-08-25T23:15:00.000
| 1
| 1.2
| true
| 18,434,685
| 0
| 0
| 1
| 1
|
When using appcfg.py, I had to specify backends to update them.
What about when I'm using Push-to-Deploy?
I ask because I see two of my Versions don't have the same "deployed" date -- the backend still says "6 days ago". I didn't change backends.yaml, but I did change the code that runs on that backend.
Should I see a new "deployed" date? Is git Push-to-Deploy working?
|
Showing last command in IDLE
| 18,435,371
| 2
| 0
| 853
| 0
|
python
|
Check IDLE's key preferences. On Mac it is Ctrl+P; look for the history-previous key mapping.
| 0
| 1
| 0
| 0
|
2013-08-26T01:09:00.000
| 2
| 0.197375
| false
| 18,435,328
| 1
| 0
| 0
| 1
|
I am using IDLE 3 on Windows. My question is simply: is there any way to get the last thing entered by pressing the up arrow key (like in IPython)?
It is very annoying to copy the last command and paste it again!
|
What is the most stable and Pythonic cross-platform way to separate a string of path,filename,ext into three separate variables?
| 18,446,406
| 1
| 2
| 356
| 0
|
python,string,filenames,filepath,file-extension
|
You want os.path.split + os.path.splitext. Please take some time reading the doc next time, it would have been waaaayyyy faster than posting here.
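A sketch of the two calls combined; note that os.path follows the rules of the platform the script runs on, so to parse a foreign-style path use ntpath or posixpath explicitly:
import os.path

directory, filename = os.path.split("/home/user/readme.txt")
name, ext = os.path.splitext(filename)
# directory == '/home/user', name == 'readme', ext == '.txt'
# on Windows the same calls handle 'C:\\Example\\readme.txt'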
| 0
| 1
| 0
| 0
|
2013-08-26T14:01:00.000
| 3
| 0.066568
| false
| 18,446,004
| 1
| 0
| 0
| 1
|
I'm trying to separate a string into three variables where C:\Example\readme.txt could be read as C:\Example, readme, and .txt for the sake of a script I'm writing. It may be deployed in both Windows and Unix environments and may deal with both Windows or Unix paths, so I need to find a way that complies to both standards; I've read about several functions that achieve similar to this, but I'm looking for some input on how to best handle the single string inside a function.
*Note, I'm running IronPython 2.6 in this environment, and I'm not sure if that varies so greatly with standard Python 2.7 that I would need to adapt my usage.
EDIT: I'm aware of using os.path.splitext to get the extension from the filename, but finding a platform-independent way to get both the path and the filename (which I later use splitext on) is what boggles me.
|
how to kill specific process using %cpu over given time in python on linux?
| 18,460,519
| 0
| 4
| 2,672
| 0
|
python,linux,time,cpu,kill
|
ps aux | grep 'gnome-panel ' | awk '{if ($3>80)print $2}' | xargs kill -9
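If you want to stay in Python instead of shelling out, the third-party psutil package (an assumption; it is not mentioned in the question) can express the same condition:
import time
import psutil

for p in psutil.process_iter():
    try:
        if (p.name() == "gnome-panel"                      # psutil >= 2.0 API
                and p.cpu_percent(interval=1) > 80
                and time.time() - p.create_time() > 60):   # running over a minute
            p.kill()
    except psutil.NoSuchProcess:
        pass  # the process exited while we were inspecting it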
| 0
| 1
| 0
| 0
|
2013-08-27T08:04:00.000
| 4
| 0
| false
| 18,460,147
| 0
| 0
| 0
| 1
|
I have this script in Python on Linux which deploys VNC locally, does some graphical job on this VNC screen, and kills VNC. Sometimes after the job is done a process named gnome-panel hangs and stays at 100% CPU usage. Then I need to log in through PuTTY and kill all those processes manually (sometimes lots of them, actually). I would like to add a few lines to my Python script for when it finishes its job, which will not only kill VNC (it does that already), but also kill gnome-panel if it consumes a certain amount of CPU over a given time period. I can't simply kill all gnome-panels, as some of them are working fine (I'm deploying 4 VNC screens at the same time).
So I need this condition in python:
if process name is gnome-panel and consumes over 80% of cpu and runs over 1 minute, kill process id
thank you!
|
How debug or trace DBus?
| 18,476,891
| 2
| 5
| 1,405
| 0
|
python,c,dbus
|
dbus-monitor "sender=org.freedesktop.Telepathy.Connection.******"
| 0
| 1
| 0
| 0
|
2013-08-27T11:14:00.000
| 1
| 1.2
| true
| 18,463,836
| 0
| 0
| 0
| 1
|
I am writing a D-Bus service implementing some protocol. My service sends the client a message with unexpected data (the library I used has some bugs that I want to work around).
How can I inspect and trace client calls? I want to determine what the client wants and locate the buggy method.
Or how can I trace all calls in the service? I have many logger.debug() calls inserted.
The service is Python; the client is C.
How do I specify a path or service to monitor in dbus-monitor, with sender and receiver?
|
Reducing Google App Engine costs
| 18,468,740
| 3
| 2
| 638
| 0
|
javascript,python,google-app-engine
|
I'm assuming you're paying a lot in instance hours. Reading from the GAE filesystem is rather slow, so the easiest optimization is to read the static file only once at instance startup, keep the js file in memory (i.e. a global variable), and print that.
Secondly, make sure your js is cached by the customers, so that when they reload your page you don't have to serve the js to them again unnecessarily.
Next, serve the js file as a static file if possible. This saves you money if the js file is big and you're consuming CPU cycles just printing it. In this case have the handler that generates the HTML insert the URL of the appropriate js file instead of regenerating the entire js each time. You'll save money because you won't get charged instance hours for files served as static files, plus they can get cached in the edge cache (GAE's CDN), and you won't get billed anything at all for them.
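A sketch of the first point, caching at module level so the file is read once per instance (the handler and file names are assumed):
import webapp2

_JS_CACHE = None  # survives across requests on the same instance

class JsHandler(webapp2.RequestHandler):
    def get(self):
        global _JS_CACHE
        if _JS_CACHE is None:
            _JS_CACHE = open("static/widget.js").read()
        self.response.headers["Content-Type"] = "application/javascript"
        self.response.write(_JS_CACHE)

app = webapp2.WSGIApplication([("/widget.js", JsHandler)])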
| 0
| 1
| 0
| 0
|
2013-08-27T13:43:00.000
| 3
| 0.197375
| false
| 18,467,222
| 0
| 0
| 1
| 1
|
We have a piece of Javascript which is served to millions of browsers daily.
In order to handle the load, we decided to go for Google App Engine.
One particular thing about this piece of Javascript is that it is (very) slightly different per company using our service.
So far we are handling this by serving everything through main.py which basically goes:
- Read the JS static file and print it
- Print custom code
We do this on every load, and costs are starting to really add-up.
Apart from having a static version of the file per customer, is there any other way that you could think about in order to reduce our bill? Would using memcache instead of reading a file reduce the price in any way?
Thanks a lot.
|
How can I determine if the operating system a Python script is running on is Unix-like?
| 18,473,032
| 6
| 5
| 2,875
| 0
|
python,unix
|
The Pythonic way to do it is not to care what platform you are on.
If there are multiple facilities to accomplish something depending on the platform, abstract them behind a function or class that tries one facility and moves on to another if the first is not available on the current platform.
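For example, instead of testing the platform name, probe the facility itself:
import os

def chmod_fd(fd, mode):
    try:
        os.fchmod(fd, mode)   # Unix-only facility
    except AttributeError:
        pass                  # fall back to whatever the platform does offer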
| 0
| 1
| 0
| 1
|
2013-08-27T18:32:00.000
| 2
| 1.2
| true
| 18,472,956
| 0
| 0
| 0
| 1
|
I'm trying to determine if the operating system is Unix-based from a Python script. I can think of two ways to do this but both of them have disadvantages:
Check if platform.system() is in a tuple such as ("Linux", "Darwin"). The problem with this is that I don't want to provide a list of every Unix-like system ever made; in particular, there are many *BSD varieties.
Check if the function os.fchmod exists, as this function is only available on Unix. This doesn't seem like a clean or "Pythonic" way to do it.
|
Python installation missing a lot of things on Mountain Lion
| 18,476,362
| -1
| 0
| 130
| 0
|
python,macos,directory,osx-lion,osx-mountain-lion
|
/usr and several other system paths are hidden from the Finder by default; use Go → Go to Folder (Cmd+Shift+G) to browse there.
| 0
| 1
| 0
| 0
|
2013-08-27T22:13:00.000
| 4
| -0.049958
| false
| 18,476,353
| 1
| 0
| 0
| 1
|
On doing 'which python' it says '/usr/local/bin/python'. But when I go there through Finder there's nothing there. I can see /Library/Python through Finder, and on clicking Library/Python I see 2.3, 2.5, 2.6, 2.7.
The default Python currently is 2.7, which I can see with --version. But all it has is /site-packages. How is this possible? I am not sure if it is the one that came with the OS or if it was installed later by someone. I am so confused.
OSX 10.8.4
|
Forking Django DB connections
| 18,496,589
| 0
| 2
| 941
| 1
|
python,django,postgresql
|
The libpq driver, which is what the psycopg2 driver usually used by Django is built on, does not support forking an active connection. I'm not sure whether some other driver does, but I would assume not: the protocol does not support multiplexing multiple sessions on the same connection.
The proper solution to your problem is to make sure each forked process uses its own database connection. The easiest way is usually to wait to open the connection until after the fork.
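If you cannot delay opening the connection, closing it just before the fork also works, since each side then reconnects lazily; note the exact call depends on the Django version:
import os
from django import db

db.close_connection()  # Django <= 1.7; newer versions use db.connections.close_all()
pid = os.fork()
# parent and child each open a fresh connection on their next query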
| 0
| 1
| 0
| 0
|
2013-08-28T15:42:00.000
| 2
| 0
| false
| 18,492,467
| 0
| 0
| 1
| 2
|
I have an application which receives data over a TCP connection and writes it to a postgres database. I then use a django web front end to provide a gui to this data. Since django provides useful database access methods my TCP receiver also uses the django models to write to the database.
My issue is that I need to use a forked TCP server. Forking results in both child and parent processes sharing handles. I've read that Django does not support forking and indeed the shared database connections are causing problems e.g. these exceptions:
DatabaseError: SSL error: decryption failed or bad record mac
InterfaceError: connection already closed
What is the best solution to make the forked TCP server work?
Can I ensure the forked process uses its own database connection?
Should I be looking at other modules for writing to the postgres database?
|
Forking Django DB connections
| 18,531,322
| 1
| 2
| 941
| 1
|
python,django,postgresql
|
So one solution I found is to create a new thread to spawn from. Django opens a new connection per thread so spawning from a new thread ensures you pass a new connection to the new process.
In retrospect I wish I'd used psycopg2 directly from the beginning rather than Django. Django is great for the web front end but not so great for a standalone app where all I'm using it for is the model layer. Using psycopg2 would have given me greater control over when to close and open connections. Not just because of the forking issue, but also because I found Django doesn't keep persistent Postgres connections, something we should have better control of in 1.6 when released, which for my specific app should give a huge performance gain. Also, in this type of application Django intentionally "leaks" by logging every query when DEBUG is True, which can be fixed by setting DEBUG to False. Then again, I've written the app now :)
| 0
| 1
| 0
| 0
|
2013-08-28T15:42:00.000
| 2
| 0.099668
| false
| 18,492,467
| 0
| 0
| 1
| 2
|
I have an application which receives data over a TCP connection and writes it to a postgres database. I then use a django web front end to provide a gui to this data. Since django provides useful database access methods my TCP receiver also uses the django models to write to the database.
My issue is that I need to use a forked TCP server. Forking results in both child and parent processes sharing handles. I've read that Django does not support forking and indeed the shared database connections are causing problems e.g. these exceptions:
DatabaseError: SSL error: decryption failed or bad record mac
InterfaceError: connection already closed
What is the best solution to make the forked TCP server work?
Can I ensure the forked process uses its own database connection?
Should I be looking at other modules for writing to the postgres database?
|
Python - Application Logic
| 18,525,402
| 0
| 0
| 135
| 0
|
python,class,methods,tkinter,logic
|
You just have to make an object of the class in vntProcessor, if such a class exists, or import the vntProcessor module in the GUI module, so you can use its functions to process the data (the path and subfolder list).
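A sketch of the wiring, with all names hypothetical since the actual classes aren't shown; the point is that the BATCH handler simply passes the collected values on:
import vntProcessor  # module 3

class Gui(object):
    def __init__(self):
        self.path = None        # set by the BROWSE handler
        self.subfolders = []    # filled from the PATH via module 1

    def on_batch(self):
        # hand module 3 everything it needs in one call
        vntProcessor.process(self.path, self.subfolders)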
| 0
| 1
| 0
| 0
|
2013-08-28T17:55:00.000
| 1
| 0
| false
| 18,494,959
| 1
| 0
| 0
| 1
|
i am making an application that has GUI and some classes. I ran into a problem of getting an idea for the logic.
here is a brief description of the program:
Structure:
3 modules
Module 1 - dataPreparation.py -responsible for string processing - made of several classes and methods that receive PATH to directory, collect all files in a LIST, after that for each file based on type of file name it sorts it out to appropriate categories that can be accessed through class instances.
Module 2 - gui.py - Responsible for the GUI. It creates a simple GUI layout that offers a BROWSE button (to get the PATH), a QUIT button to exit the application, a LISTBOX that lists subfolders from the PATH, and a BATCH button that must execute the main processor.
Module 3 - vntProcessor.py - Responsible for processing of collected data. This module is based of an API of another application. It receives the values from the BATCH-button and invokes specific methods based on sorting that was performed using MODULE 1.
So, here is the logic problem that i encountered, and i wanted to ask what is the best way to handle it.
My approach:
I created scene7_vntAssembler.py.
This file imports Module 1(dataSorting), Module 2(GUI), Module 3(Processor)
i create an instance of GUI
and call it to start interface ( have a window open)
in the interface, i browse for specific folder, so my PATH variable is set.
my list box is populated with subfolders.
my next step should be to press the BATCH button and forward all of the values (PATH and ARRAY of SUBFOLDERS) to my Module 3 (processor).
Problem:
I cannot figure out the way to do that. How to pass the PATH and SUBFOLDER-LIST to module 3? and invoke operations on collected data?
|
Using Python to Know When a File Has Completely Been Received From an FTP Source
| 18,500,555
| 4
| 5
| 5,218
| 0
|
python,ftp
|
The best way to solve this would be to have the sending process SFTP to a holding area, and then (presumably using SSH) execute a mv command to move the file from the holding area to the final destination area. Then, once the file appears in the destination area, your script knows that it is completely transferred.
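If you can't control the sender, a common fallback is the os.stat() idea from the question: treat the file as complete once its size stops changing (the interval is a tunable assumption):
import os
import time

def wait_until_stable(path, interval=5):
    last = -1
    while True:
        size = os.stat(path).st_size
        if size == last:       # unchanged for one full interval
            return
        last = size
        time.sleep(interval)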
| 0
| 1
| 0
| 1
|
2013-08-29T00:32:00.000
| 2
| 0.379949
| false
| 18,500,496
| 0
| 0
| 0
| 1
|
I am using Python to develop an application that does the following:
Monitors a particular directory and watches for file to be
transferred to it. Once the file has finished its transfer, run some
external program on the file.
The main issue I have developing this application is knowing when the file has finished transferring. From what I know the file will be transferred via SFTP to a particular directory. How will Python know when the file has finished transferring? I know that I can use the st_size attribute from the object returned by os.stat(fileName) method. Are there more tools that I need to use to accomplish these goals?
|
Running python script on crontab is causing permissions errors but running via terminal is fine
| 18,503,894
| -1
| 1
| 510
| 0
|
python,linux,macos,permissions,crontab
|
@Lucas Ou-Yang @Hyperboreus
As Hyperboreus said, it depends on the privileges of the user who runs it. I think that if you give the /tmp/ dir 777 permissions from the root account it'll be fixed:
chmod -R 777 /tmp/
If the error persists, check that the directory /tmp/ exists!
| 0
| 1
| 0
| 1
|
2013-08-29T06:18:00.000
| 1
| -0.197375
| false
| 18,503,561
| 0
| 0
| 0
| 1
|
I hear that the permission levels via crontab and the terminal are totally different.
More specifically, my python script has a command to write a file into the /tmp/ directory. On a linux machine, everything works, both cron and regular shell.
However on OSX, the terminal runs fine but when this command is set on the crontab, an error appears saying that we don't have permissions to write to the /tmp directory.
How should I handle this?
Thanks.
|
Is there a efficient way to override get() and put() method in an appengine entity to make it use memcache?
| 18,520,162
| 2
| 1
| 148
| 0
|
python,google-app-engine,optimization,memcached,entity
|
If you're not using NDB, use NDB. Your data won't change, just the way you interface with the datastore will. NDB entities are automatically cached so any requests by key are searched for in memcache first and then the datastore if the entity is not found.
NDB is the new standard anyways, so you might as well switch now instead of later.
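With NDB the caching is transparent; a minimal sketch (model and key names are illustrative):
from google.appengine.ext import ndb

class Settings(ndb.Model):
    value = ndb.StringProperty()

# checks the in-context cache, then memcache, then the datastore
entity = ndb.Key(Settings, "site-config").get()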
| 0
| 1
| 0
| 0
|
2013-08-29T14:14:00.000
| 1
| 1.2
| true
| 18,513,411
| 0
| 0
| 1
| 1
|
I have several appengine entities that are frequently read at different places of my applications and not so frequently updated.
I'd like to use memcache to reduce the number of datastore reads of my app, but I don't really want to update my code everywhere.
I was wondering if there is a decent way to override the get() method of my entity to check whether it is stored in memcache before doing a datastore read, and to have put() delete that memcache entry.
Does someone have a good solution for that?
|
Permission denied error installing Pillow (PIL) on mac
| 19,933,131
| 1
| 2
| 1,554
| 0
|
python,macos,python-imaging-library,pillow
|
I'm assuming you've solved the problem by now. If you haven't, you can now install pillow using brew by going
brew install samueljohn/python/pillow.
Why it isn't just brew install pillow is beyond me.
| 0
| 1
| 0
| 0
|
2013-08-29T14:37:00.000
| 3
| 0.066568
| false
| 18,513,932
| 1
| 0
| 0
| 1
|
I'm attempting to install the Pillow fork of PIL, and every method I've tried results in this error:
unable to execute /: Permission denied
error: command '/' failed with exit status 1
This occurs
using pip install Pillow
using easy_install Pillow-master.zip
using python setup.py install
using python setup.py build
The last is the most confusing to me, honestly. I can't even build the module in the same directory it's been extracted to.
I've made sure to install all of the prereqs using homebrew, just as the Pillow readme suggests.
This error has not occurred when installing other python modules.
Edit: I have run all of these commands with and without sudo.
|
File extension validation with python
| 18,541,480
| 0
| 0
| 1,133
| 0
|
python,linux,file-extension,file-type
|
Ultimately, there is no absolute way of knowing, for several reasons:
Some file formats use simple identifiers, but others don't.
For those that don't, the only way is analyzing the behavior of a program able to open the format: if the program can successfully open the file, then it belongs to it.
But if it can't, the file could belong to hundreds of formats you don't have a program to open it with.
I'm afraid you will need to be content with a partial answer like the ones you already have.
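For the handful of formats you actually care about, checking the leading magic bytes yourself is a dependency-free partial answer (the signatures shown are well known; extend the table as needed):
SIGNATURES = {
    b"PK\x03\x04": "zip",     # also docx/jar/apk containers
    b"\x7fELF": "ELF executable",
    b"\x89PNG": "png",
}

def sniff(path):
    head = open(path, "rb").read(4)
    for magic, kind in SIGNATURES.items():
        if head.startswith(magic):
            return kind
    return None  # unknown: fall back to `file`, or give up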
| 0
| 1
| 0
| 0
|
2013-08-30T21:02:00.000
| 2
| 0
| false
| 18,541,427
| 1
| 0
| 0
| 1
|
I want to check if a given file's extension is correct or not. For example, someone gives me a file with a .zip extension, but it may actually be an executable.
Using mimetypes I could not determine a file's real type. As far as I see, mimetypes needs an extension.
I can map the output of unix file command with some extensions. Even if you change the extension, you cannot deceive file command. However, this solution needs a subprocess.
I thought, there may be a more pythonic solution of this problem. Does anyone know?
|
How to run a function as a child process in python?
| 18,543,541
| 1
| 0
| 160
| 0
|
python,python-2.7,python-3.x,subprocess,fork
|
Take a look at multiprocessing.Process
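A minimal sketch, assuming dosomething is the function from the question; only the child ends up in the chroot jail:
from multiprocessing import Process

p = Process(target=dosomething)  # dosomething() calls os.chroot() inside
p.start()
p.join()                         # the parent waits here, outside the jail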
| 0
| 1
| 0
| 0
|
2013-08-31T01:03:00.000
| 1
| 0.197375
| false
| 18,543,447
| 1
| 0
| 0
| 1
|
I have a function dosomething() inside which I am doing an os.chroot(). A process running chroot() cannot get out of the chroot jail, so I want dosomething() to run as a child process of the main program, and I need to wait till the child process finishes. What is the simplest way to do this?
|
How to avoid typing this and set Python 2.7 as default instead of Python 2.6?
| 18,576,513
| 3
| 0
| 63
| 0
|
python,python-2.7
|
Specify Python 2.7's executable in the shebang of your scripts. Don't screw with the system Python.
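For example, at the top of each script (this assumes python2.7 is on the PATH; otherwise put the absolute path to the binary pythonbrew installed):
#!/usr/bin/env python2.7
Make the script executable with chmod +x and run it directly; the shell then picks the right interpreter regardless of what plain python resolves to.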
| 0
| 1
| 0
| 0
|
2013-09-02T15:21:00.000
| 2
| 0.291313
| false
| 18,576,492
| 1
| 0
| 0
| 1
|
I have access to an old Linux server (Debian) and the default Python is 2.6. To start my scripts I need Python 2.7, but when I type python in the console, 2.6 always starts.
(I have installed 2.7, and when I want to run it I use the command pythonbrew use 2.7.2.)
How do I avoid typing this and set Python 2.7 as the default?
|
Unable to run any Python program using code runner
| 21,959,976
| 1
| 1
| 605
| 0
|
python,coderunner
|
CodeRunner is more useful for testing single Python files than larger projects; however, it can run such projects. A larger project should have a setup.py or a main.py, which is what should be run. Also, what errors are you getting?
| 0
| 1
| 0
| 0
|
2013-09-02T23:19:00.000
| 1
| 0.197375
| false
| 18,581,973
| 1
| 0
| 0
| 1
|
I'm brand new to coding. I managed to figure out how to use github and I have been forking projects over to my machine in an attempt to play around with them and learn python. My problem is every single project I fork over, when I run any of the .py files in Coderunner it pops up with errors and doesn't run correctly.
Is this because coderunner is not capable of running these programs? Or do I have to run the programs through terminal to get them functioning correctly?
|
Pip doesn't install packages to activated virtualenv, ignores requirements.txt
| 25,869,799
| 3
| 11
| 6,391
| 0
|
python,django,git,virtualenv,virtualenvwrapper
|
Struggled with some variation of this issue not long ago; it ended up being my cluttered .bash_profile file.
Make sure you don't have anything that might mess up your virtualenv inside your .bash_profile/.bashrc, such as $VIRTUAL_ENV or $PYTHONHOME or $PYTHONPATH environment variables.
| 0
| 1
| 0
| 0
|
2013-09-03T23:57:00.000
| 3
| 0.197375
| false
| 18,603,302
| 1
| 0
| 1
| 2
|
I am attempting to setup a development environment on my new dev machine at home. I have just installed Ubuntu and now I am attempting to clone a remote repo from our web-server and install its dependencies so I can begin work.
So far I have manually installed virtualenv and virtualenvwrapper from pypi and edited my bash.rc appropriately to source my virtualenvs when i start my terminal. I then cloned my repo to ~/projects/project-name/websitename.com. Then I used virtualenvwrapper to mkvirtualenv env-name from ~/projects/project-name/websitename.com. This reflects exactly the file-structure/setup of the web-server I am cloning from. So far so good.
I logged into the dev server, activated the virtualenv there, used pip freeze -l > req.txt to render a dependencies list, and scp'd it to my local machine. I activate the virtualenv on my local machine, navigate to ~/projects/project-name/websitename.com and execute pip install -r path-to-req.txt, and it runs through all of the dependencies as if nothing is wrong. However, when I attempt manage.py syncdb I get an error about not finding core Django packages. What the hell? So I figure somehow Django failed to install; I run pip install Django==1.5.1 and it completes successfully. I go to set up my site again and get another error about no module named django_extensions. Okay, what the hell, I just installed all of these packages with pip?!
So I pip freeze -l > test.txt and cat test.txt; what does it list? Django==1.5.1, the one package I just manually installed. Why isn't pip installing my dependencies from my specified list into my virtualenv? What am I messing up here?
-EDIT-------------
Which pip gives me the path to pip in my virtualenv
I have only 1 virtualenv and it is activated
|
Pip doesn't install packages to activated virtualenv, ignores requirements.txt
| 32,925,897
| 2
| 11
| 6,391
| 0
|
python,django,git,virtualenv,virtualenvwrapper
|
I know this is an old post, but I just encountered a similar problem. In my case the cause was that I was running the pip install command using sudo. This made the command run globally and the packages install in the global python path.
Hope that helps somebody.
| 0
| 1
| 0
| 0
|
2013-09-03T23:57:00.000
| 3
| 0.132549
| false
| 18,603,302
| 1
| 0
| 1
| 2
|
I am attempting to setup a development environment on my new dev machine at home. I have just installed Ubuntu and now I am attempting to clone a remote repo from our web-server and install its dependencies so I can begin work.
So far I have manually installed virtualenv and virtualenvwrapper from pypi and edited my bash.rc appropriately to source my virtualenvs when i start my terminal. I then cloned my repo to ~/projects/project-name/websitename.com. Then I used virtualenvwrapper to mkvirtualenv env-name from ~/projects/project-name/websitename.com. This reflects exactly the file-structure/setup of the web-server I am cloning from. So far so good.
I logged into the dev server, activated the virtualenv there, used pip freeze -l > req.txt to render a dependencies list, and scp'd it to my local machine. I activate the virtualenv on my local machine, navigate to ~/projects/project-name/websitename.com and execute pip install -r path-to-req.txt, and it runs through all of the dependencies as if nothing is wrong. However, when I attempt manage.py syncdb I get an error about not finding core Django packages. What the hell? So I figure somehow Django failed to install; I run pip install Django==1.5.1 and it completes successfully. I go to set up my site again and get another error about no module named django_extensions. Okay, what the hell, I just installed all of these packages with pip?!
So I pip freeze -l > test.txt and cat test.txt; what does it list? Django==1.5.1, the one package I just manually installed. Why isn't pip installing my dependencies from my specified list into my virtualenv? What am I messing up here?
-EDIT-------------
Which pip gives me the path to pip in my virtualenv
I have only 1 virtualenv and it is activated
|
How to deserialize xml created by to_xml() in google appengine
| 20,461,649
| 0
| 4
| 190
| 0
|
python,xml,google-app-engine
|
Just to clarify, I'm going to assume that you're asking about the Model.to_xml() method, and that by efficient you mean a single method you can call that gives you back a model object.
As you noted, there is no such method on the Model class in the datastore API. I think the intention is that to_xml() exists to make a model easily exportable to another application, such as a JavaScript client, or for importing into another database or storage mechanism, similar to using the remote API.
It should be possible to write a function or class method on a specific Model class that generates a new model of that type from the parsed XML. You will then most likely want to perform a get_or_insert() to write the resulting object.
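As a rough illustration (a minimal sketch, assuming a db.Model with only string properties and the <property name="..."> elements that to_xml() emits; the Item class is hypothetical):

from xml.dom import minidom
from google.appengine.ext import db

class Item(db.Model):
    name = db.StringProperty()

    @classmethod
    def from_xml(cls, xml_string):
        # Walk the <property> elements produced by to_xml() and rebuild
        # the keyword arguments for the model constructor.
        doc = minidom.parseString(xml_string)
        props = {}
        for node in doc.getElementsByTagName('property'):
            if node.firstChild is not None:
                props[str(node.getAttribute('name'))] = node.firstChild.data
        return cls(**props)

Non-string properties (dates, keys, references) would need explicit casting per property type, which is probably why no generic from_xml() is provided.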
If you're looking for a native Python to Python serialization method, you could consider pickle.
| 0
| 1
| 0
| 0
|
2013-09-04T17:33:00.000
| 1
| 0
| false
| 18,620,311
| 0
| 0
| 1
| 1
|
In Google App Engine, I can serialize an object by calling its to_xml() method.
There doesn't appear to be an equivalent from_xml() method to deserialize the XML.
Is there an efficient way to deserialize back to an object?
|
GAE Python Server initiating Angular Routes from Responses
| 18,626,669
| 0
| 0
| 298
| 0
|
python,google-app-engine,angularjs,routes,webapp2
|
I'm working on the same stack, and the way we do it is to configure the main pages (index, login, signup) as regular individual pages where we use Angular without routing. Any page that anonymous users will access is such a page, served through server-side routing. But once the user's login is successful, we serve a page that starts serving all other views through client-side routing.
| 0
| 1
| 0
| 0
|
2013-09-04T23:31:00.000
| 2
| 0
| false
| 18,625,446
| 0
| 0
| 1
| 2
|
I'm currently building a web application using AngularJS, Webapp2, and the Python Google App Engine environment. This app is supposed to have all the features of modern social networks (users, posts, likes, comments). I want the page hierarchy to look like this; the main pages come from the server and the sub-pages are supposed to be Angular routes:
Index
Learn More
Sign up
Log in
Feed Page
Popular Feed
Following Feed
Profile
Interactions
Posts
Settings
Profile
Account
The problem is that when a user wants to sign up, I want them to be able to go to /signup and get the index page with the signup route loaded. How can I get the server to preload an Angular route from the response?
|
GAE Python Server initiating Angular Routes from Responses
| 21,590,989
| 0
| 0
| 298
| 0
|
python,google-app-engine,angularjs,routes,webapp2
|
Make both GAE and Angular understand your routes. You will need to define them for one side anyway, so why not both?
You just have to organise your markup and structure so they support both complete page loads and Ajax loads. For example, the initial load on any route is handled by GAE; then Angular can take over, loading each page's "content" as it goes.
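A minimal webapp2 sketch of that idea (the route list and the index.html name are hypothetical): every client-side URL returns the same shell page on a full load, and the Angular router takes over from there.

import webapp2

class IndexHandler(webapp2.RequestHandler):
    def get(self, route=None):
        # Serve the single-page shell for every Angular route; the client
        # router resolves /signup, /login, etc. after the page loads.
        self.response.write(open('index.html').read())

app = webapp2.WSGIApplication([
    (r'/(signup|login|learn-more)?', IndexHandler),
])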
This has the additional advantage of public pages being crawler-friendly while real users get Ajax loading (which should reduce bandwidth once you scale).
You may need to load user state in via the server, and/or force a page reload on log in or out to do so.
I have done the above on a few apps, and it works well.
| 0
| 1
| 0
| 0
|
2013-09-04T23:31:00.000
| 2
| 0
| false
| 18,625,446
| 0
| 0
| 1
| 2
|
I'm currently building a web application using AngularJS, Webapp2, and the Python Google App Engine environment. This app is supposed to have all the features of modern social networks (users, posts, likes, comments). I want the page hierarchy to look like this; the main pages come from the server and the sub-pages are supposed to be Angular routes:
Index
Learn More
Sign up
Log in
Feed Page
Popular Feed
Following Feed
Profile
Interactions
Posts
Settings
Profile
Account
The problem is that when a user wants to sign up, I want them to be able to go to /signup and get the index page with the signup route loaded. How can I get the server to preload an Angular route from the response?
|
Running upgraded version of SQLite (3.8) on Mac when Terminal still defaults to old version 3.6
| 18,629,528
| 0
| 1
| 1,449
| 1
|
python,linux,macos,sqlite
|
To figure out exactly which sqlite3 binaries your system can find, type which -a sqlite3. This lists them in the order they are found according to your PATH variable, which is also the order the system uses to decide which one to run when multiple versions are installed.
Homebrew normally links binaries into /usr/local/bin, but since OS X already ships sqlite3, Homebrew's copy is keg-only: it is installed under /usr/local/Cellar/sqlite and not linked into /usr/local/bin. Since the Cellar path is not in your PATH variable, the shell doesn't know those binaries exist.
Long story short, you can run the Homebrew binary directly with /usr/local/Cellar/sqlite/3.8.0/bin/sqlite3, or put that directory first on your PATH (e.g. export PATH="/usr/local/Cellar/sqlite/3.8.0/bin:$PATH" in your shell profile).
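As a side note, since you're doing this from Python: the sqlite3 module is compiled against its own copy of the library, independent of whichever command-line binary is on your PATH; a quick check:

import sqlite3

# Version of the SQLite library the Python module was built against,
# which can differ from the sqlite3 command-line tool's version.
print(sqlite3.sqlite_version)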
| 0
| 1
| 0
| 0
|
2013-09-05T00:54:00.000
| 2
| 0
| false
| 18,626,114
| 0
| 0
| 0
| 1
|
I have a Mac running OS X 10.6.8, which comes pre-installed with SQLite3 v3.6. I installed v3.8 using Homebrew, but when I type "sqlite3" in my terminal it continues to run the old pre-installed version. Trying to learn SQL as I'm building my first web app.
Not sure if the PATH variable has anything to do with it, but running echo $PATH results in the following: /usr/local/bin:/Library/Frameworks/Python.framework/Versions/2.7/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
And the NEW version of SQLite3 is in the following directory: /usr/local/Cellar/sqlite
I should add that I also downloaded the binary executable to my desktop, and that works if I launch it from my desktop, but it doesn't work from the terminal.
Any help would be greatly appreciated.
|
JavaScript client failed to send WebSocket frame to WebSocket server on Amazon EC2
| 20,952,263
| 1
| 0
| 1,419
| 0
|
python,amazon-web-services,amazon-ec2,websocket
|
This could be caused by a Chrome extension.
| 0
| 1
| 1
| 0
|
2013-09-05T22:46:00.000
| 1
| 0.197375
| false
| 18,647,154
| 0
| 0
| 1
| 1
|
I have a WebSocket server running on an Ubuntu 12.04 EC2 instance. The server is written in Python using Autobahn WebSockets.
I have a JavaScript client that uses WebRTC to capture webcam frames and send them to the WebSocket server.
My web server (where the JavaScript is hosted) is not deployed on EC2. The Python WebSocket server only does video frame processing and listens on TCP port 9000.
My Problem:
The JS client can connect to the WebSocket server, which receives and processes the webcam frames. However, after 5 or 6 minutes the client stops sending the frames and displays the following message:
WebSocket connection to 'ws://x.x.x.x:9000/' failed: Failed to send
WebSocket frame.
When I print the error data I get "undefined".
Of course, this never happens when I run the server on my local testing environment.
|
Python script displays output in command window but nothing shows in Windows
| 18,662,659
| 2
| 0
| 2,336
| 0
|
python,windows,pythonw
|
pythonw suppresses the console window that is normally created when a Python script is executed. It's intended for programs that open their own GUI. Without pythonw, a Python GUI app would have its regular windows plus an extra console floating around. I'm not sure exactly what pythonw does to your stdout, but sys.stdout.isatty() returns False and I assume stdout/stderr are just dumped into oblivion. If you run a DOS command like os.system("pause"), Windows creates a new console window for that command only. That's what you see on the screen.
If you want to see your script's output, you should either run it with python.exe and add an input prompt at the end of the script as suggested by @GaryWalker, or use a GUI toolkit like Tkinter, Qt, or wxPython.
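If you do need to keep pythonw.exe, one workaround (a minimal sketch; the log filename is arbitrary) is to redirect output to a file at the top of the script:

import os
import sys

# Under pythonw.exe there is no console, so capture print output in a
# log file next to the script instead.
if sys.executable.lower().endswith('pythonw.exe'):
    log_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'script.log')
    sys.stdout = sys.stderr = open(log_path, 'w')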
| 0
| 1
| 0
| 0
|
2013-09-06T14:00:00.000
| 2
| 0.197375
| false
| 18,659,656
| 0
| 0
| 0
| 1
|
I've written a script that works great at a command prompt, but it only displays the last line if I double-click the script.py icon on my desktop.
The function in the script runs perfectly, but once it finds a match it's supposed to display the output on the screen. At the end, I have an os.system("pause") call, and THAT displays when the script is done, but nothing else displays on the screen.
I AM executing it using pythonw.exe. What else should I check?
Thank you.
|
How can I use Homebrew to install both Python 2 and 3 on Mac?
| 18,674,756
| 5
| 149
| 81,770
| 0
|
python,homebrew
|
Alternatively, you can probably just enter python3 to run your most recent 3.x version and python or python2 to run the latest installed 2.x version.
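A quick way to confirm which interpreter a given command actually launched (this runs under both 2.x and 3.x):

import sys

# Prints e.g. (2, 7) or (3, 4) depending on which binary ran the script.
print(sys.version_info[:2])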
| 0
| 1
| 0
| 0
|
2013-09-07T08:11:00.000
| 9
| 0.110656
| false
| 18,671,253
| 1
| 0
| 0
| 2
|
I need to be able to switch back and forth between Python 2 and 3. How do I do that using Homebrew, as I don't want to mess with my PATH and get into trouble?
Right now I have 2.7 installed through Homebrew.
|
How can I use Homebrew to install both Python 2 and 3 on Mac?
| 49,878,692
| 1
| 149
| 81,770
| 0
|
python,homebrew
|
I thought I had the same requirement - to move between Python versions - but I achieved all I needed with only Python 3.6 by building from source instead of using Homebrew.
git clone https://git.<theThingYouWantToInstall>
Depending on the repo, check whether there is a Makefile already set up for this option.
| 0
| 1
| 0
| 0
|
2013-09-07T08:11:00.000
| 9
| 0.022219
| false
| 18,671,253
| 1
| 0
| 0
| 2
|
I need to be able to switch back and forth between Python 2 and 3. How do I do that using Homebrew, as I don't want to mess with my PATH and get into trouble?
Right now I have 2.7 installed through Homebrew.
|
Embedded CPython, thread interaction using named pipe
| 22,251,245
| 1
| 0
| 157
| 0
|
python,c,pthreads,multiprocessing,named-pipes
|
Answering my own question:
I've implemented this (a while back) using option 4. It works well and is very stable.
Releasing the GIL wasn't happening in my first attempt because I hadn't initialized threading (PyEval_InitThreads() in the C API).
After that, smooth sailing.
| 0
| 1
| 0
| 0
|
2013-09-08T17:51:00.000
| 1
| 1.2
| true
| 18,686,796
| 1
| 0
| 0
| 1
|
I'd like people's opinion on which direction to choose between different solutions to implement inter-thread named-pipe communication.
I'm working on a solution for the following:
A 3rd party binary on AIX calls a shared object.
I build this shared object using the Python 2.7.5 API, so I have a Python thread (64-bit).
So the stack is:
3rd party binary -> my shared object / DLL 'python-bridge' -> Python 2.7.5 interpreter (persistent)
From custom code inside the 3rd party binary (in a proprietary language), I initialize the Python interpreter through the python-bridge, precompile Python code blocks through the python-bridge, and execute these bits of code using PyEval_EvalCode in the bridge.
The Python interpreter stays alive during the session, and is closed just before the session ends.
Simple sequential Python code works fine, and fast. After the call to the shared object method, Python references are all decreased (inside the method) and no garbage remains. The precompiled Python module stays in memory and works fine. However, I also need to interact with streaming data from the main executable. That executable (of which I don't have the source code) supports FIFO through a named pipe, which I want to use for inter-thread communication.
Since the named pipe is blocking, I need a separate thread.
I came up with 3 or 4 alternatives (feel free to give more suggestions)
Use the multiprocessing module within Python
Make my own C thread, using pthread_create, and use Python in there (carefully; I know about the thread-safety issues)
Make my own C thread, using pthread_create, parse the named pipe from C, and call the Python interpreter's main thread from there
(maybe possible?) Use the simpler threading module of Python (which isn't 'pure' threading) and release the GIL at the end of the API call to the bridge. (I haven't dared to do this; I need someone with insight here. A simple test with threading and sleep shows it working within the Python call, but the named-pipe thread does nothing after returning to the main non-Python process.)
What do you suggest?
I'm trying option 1 at the moment, with some success, but it 'feels' a bit bloated to spawn a new process just for parsing a named pipe.
Thanks for your help, Tijs
|
Installing numpy on Amazon EC2
| 18,743,924
| 8
| 11
| 14,449
| 0
|
python,numpy,amazon-ec2,pip,easy-install
|
I ended up just installing NumPy through yum: sudo yum install numpy. I guess this is the best I can do for now. When I need NumPy inside a virtualenv, I will create the env with access to the system site-packages (virtualenv's --system-site-packages flag) so it can see the yum-installed copy.
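To confirm that a virtualenv created that way really picks up the system copy (a minimal check):

import numpy

# __file__ should point into the system site-packages rather than the
# virtualenv, confirming the env fell through to the yum-installed NumPy.
print(numpy.__file__)
print(numpy.__version__)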
Thanks for the suggestion @Robert.
| 0
| 1
| 0
| 0
|
2013-09-11T03:36:00.000
| 4
| 1.2
| true
| 18,732,250
| 1
| 0
| 0
| 1
|
I am having trouble installing NumPy on an Amazon EC2 server. I have tried using easy_install, pip, pip inside a virtualenv, and pip inside another virtualenv using Python 2.7...
Every time I try, it fails with the error gcc: internal compiler error: Killed (program cc1), and then further down the line I get a bunch of Python errors. With easy_install I get ImportError: No module named numpy.distutils, and with pip I get UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 72: ordinal not in range(128).
The EC2 instance is running kernel 3.4.43-43.43.amzn1.x86_64. Has anybody solved this problem? NumPy has always been hard for me to install, but I can usually figure it out... at this point I don't care whether it is in its own virtualenv, I just want to get it installed.
|