Schema of the flattened table below (dtype and observed range per column; string ranges are character lengths):

| Column | Type | Range |
|---|---|---|
| Title | string | length 15 to 150 |
| A_Id | int64 | 2.98k to 72.4M |
| Users Score | int64 | -17 to 470 |
| Q_Score | int64 | 0 to 5.69k |
| ViewCount | int64 | 18 to 4.06M |
| Database and SQL | int64 | 0 to 1 |
| Tags | string | length 6 to 105 |
| Answer | string | length 11 to 6.38k |
| GUI and Desktop Applications | int64 | 0 to 1 |
| System Administration and DevOps | int64 | 1 to 1 |
| Networking and APIs | int64 | 0 to 1 |
| Other | int64 | 0 to 1 |
| CreationDate | string | length 23 |
| AnswerCount | int64 | 1 to 64 |
| Score | float64 | -1 to 1.2 |
| is_accepted | bool | 2 classes |
| Q_Id | int64 | 1.85k to 44.1M |
| Python Basics and Environment | int64 | 0 to 1 |
| Data Science and Machine Learning | int64 | 0 to 1 |
| Web Development | int64 | 0 to 1 |
| Available Count | int64 | 1 to 17 |
| Question | string | length 41 to 29k |

---
Title: Accessing Google Groups Services on Google Apps Script in Google App Engine?
Tags: python,google-app-engine,google-apps-script | CreationDate: 2012-10-14T11:35:00.000 | Q_Id: 12,881,834 | A_Id: 12,881,937
Q_Score: 1 | ViewCount: 221 | AnswerCount: 2 | Users Score: 2 | Score: 1.2 | is_accepted: true | Available Count: 1 | Topics: System Administration and DevOps, Web Development

Question: I'm trying to work out whether it is possible to use Google Apps Script inside Google App Engine, and whether there is a tutorial on doing this. From reading Google's Apps Script site I get the feeling that you can only use Apps Script inside Google Apps like Drive, Docs, etc. What I would like to do is use the Groups Service that is in GAS inside GAE to create, delete, and show only the groups that a person is in, all inside my GAE app. Thanks.

Answer: No, you can't. Apps Script and App Engine are totally different things and won't work together. What you can do, in outline, is:
- write an Apps Script that does what you need;
- redirect from GAE to the Apps Script URL so the user logs in and grants permissions;
- perform what you needed to do in Apps Script;
- send the user back to GAE with a bunch of parameters if you need to.

---
Title: Get notified when celery workers die
Tags: python,celery | CreationDate: 2012-10-15T12:25:00.000 | Q_Id: 12,895,653 | A_Id: 12,898,127
Q_Score: 1 | ViewCount: 699 | AnswerCount: 1 | Users Score: 1 | Score: 1.2 | is_accepted: true | Available Count: 1 | Topics: System Administration and DevOps

Question: Where's the best place to get notified when celery workers die? I'm aware of the worker_shutdown signal, but does it also get called on sudden death of a worker? I have these PDF-rendering workers that suddenly died, or at least became unresponsive to remote-control commands, so I'm looking for ways to get notified when things like this happen.

Answer: A worker can't trigger signals after it has been killed. The best way would be to write a plugin for Nagios, Munin, or similar that notifies you when the number of workers decreases. See the Monitoring and Management guide in the Celery documentation.

---
Title: How can I run a celery periodic task from the shell manually?
Tags: python,django,celery,django-celery,celery-task | CreationDate: 2012-10-15T16:34:00.000 | Q_Id: 12,900,023 | A_Id: 12,900,160
Q_Score: 89 | ViewCount: 60,194 | AnswerCount: 3 | Users Score: 8 | Score: 1 | is_accepted: false | Available Count: 1 | Topics: System Administration and DevOps, Other, Web Development

Question: I'm using celery and django-celery. I have defined a periodic task that I'd like to test. Is it possible to run the periodic task from the shell manually, so that I can view the console output?

Answer: I think you'll need to open two shells: one for executing tasks from the Python/Django shell, and one for running celery worker (python manage.py celery worker). As the previous answer said, you can then run tasks using apply() or apply_async(). (I've edited the answer so you're not using a deprecated command.)

---
Title: How to Transfer Files from Client to Server Computer by using python script?
Tags: python | CreationDate: 2012-10-16T07:14:00.000 | Q_Id: 12,909,334 | A_Id: 66,757,126
Q_Score: 2 | ViewCount: 9,452 | AnswerCount: 2 | Users Score: -1 | Score: -0.099668 | is_accepted: false | Available Count: 1 | Topics: System Administration and DevOps, Networking and APIs, Other

Question: I am writing a python script to copy python files (say ABC.py) from one directory to another directory with the same folder name (say ABC) as the script name, excluding .py. On the local system it works fine, copying the files from one directory to the other by creating a folder of the same name. But what I actually want is to copy these files from my local system (Windows XP) to a remote system (Linux) located in another country, on which I execute my script. I am getting the error "Destination Path not found", meaning I am not able to connect to the remote host. I use SSH Secure client: I connect to the remote server with an IP address and port number, and then it asks for a user id and password. But I am not able to connect to the remote server from my python script. Can anyone help me out with how to do this?

Answer: I used the same script, but my host failed to respond; my host is on a different network. "[WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond."

---
Title: Aliases for commands with Python cmd module
Tags: python,python-3.x | CreationDate: 2012-10-16T09:16:00.000 | Q_Id: 12,911,327 | A_Id: 12,911,826
Q_Score: 10 | ViewCount: 3,225 | AnswerCount: 3 | Users Score: 1 | Score: 0.066568 | is_accepted: false | Available Count: 1 | Topics: System Administration and DevOps

Question: How can I create an alias for a command in a line-oriented command interpreter implemented using the cmd module? To create a command, I must implement a do_cmd method. But I have commands with long names (like constraint) and I want to provide aliases (in effect, shortcuts) for these commands (like co). How can I do that? One possibility that came to mind is to implement a do_alias method (like do_co) that just calls do_cmd (do_constraint). But this adds new commands to the help of the CLI. Is there any other way to achieve this? Or is there a way to hide commands from the help output?

Answer: The docs mention a default method, which you can override to handle any unknown command. Code it to prefix-scan a list of commands and invoke them, as you suggest doing for do_alias.

---
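The default-method approach from the answer above can be sketched like this (the command and alias names are illustrative):

```python
import cmd


class Shell(cmd.Cmd):
    """Interpreter where short aliases are resolved inside default()."""

    # illustrative alias table: alias -> full command name
    aliases = {"co": "constraint"}

    def do_constraint(self, arg):
        """Add a constraint."""
        self.last = "constraint:" + arg

    def default(self, line):
        # check the alias table before reporting an unknown command
        name, _, arg = line.partition(" ")
        if name in self.aliases:
            return self.onecmd(self.aliases[name] + " " + arg)
        return cmd.Cmd.default(self, line)


sh = Shell()
sh.onecmd("co x>5")  # dispatched to do_constraint via default()
```

Because no do_co method exists, the alias never appears in the help output, which addresses the asker's second concern.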
Title: PWM signal out of serial port with linux
Tags: c++,python,embedded | CreationDate: 2012-10-16T16:50:00.000 | Q_Id: 12,919,644 | A_Id: 12,919,968
Q_Score: 4 | ViewCount: 1,468 | AnswerCount: 2 | Users Score: 1 | Score: 0.099668 | is_accepted: false | Available Count: 1 | Topics: System Administration and DevOps, Other

Question: How do I send out a PWM signal from the serial port under Linux (with Python or C++)? I want to connect a motor directly and change its rotation speed.

Answer: I doubt you can do this: you are using a UART interface. Just get an Arduino or similar and send serial commands to it, which can then put the correct PWM signal out on its pins. That's probably 5 lines of Arduino code and another 5 of Python. All that said, you may be able to find some very difficult and hacky way to output a PWM signal over serial, but you need to think about whether that's really appropriate.

---
Title: GAE: Instance shutdown from source code
Tags: python,google-app-engine | CreationDate: 2012-10-16T18:28:00.000 | Q_Id: 12,921,120 | A_Id: 12,921,291
Q_Score: 0 | ViewCount: 144 | AnswerCount: 2 | Users Score: 1 | Score: 1.2 | is_accepted: true | Available Count: 2 | Topics: System Administration and DevOps, Web Development

Question: In the Google App Engine user interface, under "Instances", I can shut down selected instances by pressing the "Shutdown" button. Can I do this shutdown programmatically, from source code?

Answer: You can disable an entire application for some time (from the Application Settings page) and then re-enable it (or delete it from that point onwards). There is no way to "shutdown" a particular instance. You can have different versions of your application, but at any moment in time only one version can be the active version; you can split traffic between different versions, but that does not change the active version. In terms of performance, you can change the Max Idle Instances value to one so that only one instance is preloaded or active.

---
Title: GAE: Instance shutdown from source code
Tags: python,google-app-engine | CreationDate: 2012-10-16T18:28:00.000 | Q_Id: 12,921,120 | A_Id: 12,944,920
Q_Score: 0 | ViewCount: 144 | AnswerCount: 2 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 2 | Topics: System Administration and DevOps, Web Development

Question: In the Google App Engine user interface, under "Instances", I can shut down selected instances by pressing the "Shutdown" button. Can I do this shutdown programmatically, from source code?

Answer: Actually you can force an instance to shut down from within your code, but it's not pretty: just allocate more memory than your instance has, and it will be shut down for you. I have used this technique in some python2.5 M/S apps where a DeadlineExceeded during startup could cause problems with incomplete imports. If the next handled request gave me an ImportError somewhere, I knew the instance was toast, so I would redirect the user to the site and then create a really big string, exhausting memory, and that instance would be shut down. You could in theory do something similar.

---
Title: CPython and GCC
Tags: python | CreationDate: 2012-10-17T10:46:00.000 | Q_Id: 12,932,610 | A_Id: 12,932,872
Q_Score: 1 | ViewCount: 295 | AnswerCount: 1 | Users Score: 5 | Score: 1.2 | is_accepted: true | Available Count: 1 | Topics: System Administration and DevOps, Python Basics and Environment

Question: If I start python in the console under MacOS X 10.8, it starts with "Python 2.7.2 (default, Jun 20 2012, 16:23:33) [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin". In what way does the implementation of python depend on GCC?

Answer: Python is implemented in C, and records which C compiler was used to compile it (to aid tracking down compiler-specific bugs). The implementation itself does not vary based on the compiler. It can vary based on the platform it is compiled for and the available external libraries, but nothing alters Python behaviour based on the compiler used.

---
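The recorded compiler string can be inspected at runtime; a small sketch:

```python
import platform
import sys

# The compiler string the interpreter reports is recorded at build
# time; it is informational only and does not change language behaviour.
version = sys.version                  # full banner, includes build date and compiler
compiler = platform.python_compiler()  # just the compiler, e.g. "GCC 4.2.1 ..." or "Clang ..."
print(version)
print(compiler)
```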
Title: Can I run py2app on Windows?
Tags: python,windows,py2app | CreationDate: 2012-10-17T15:33:00.000 | Q_Id: 12,937,916 | A_Id: 72,331,122
Q_Score: 10 | ViewCount: 5,722 | AnswerCount: 2 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 1 | Topics: System Administration and DevOps, Python Basics and Environment

Question: I recently discovered that an outdated version of Python was causing my Wx app to run into errors. I can't install Python 2.7.3 on my Mac, and when I tried it in a virtual machine, py2app was still "compiling" the app after running overnight (my Windows/Linux box has an ≈1 GHz processor). Is there a version of py2app that runs on Windows?

Answer: No, there's no way on earth you can run py2app on Windows; py2app cannot create a Mac app bundle on Windows because it's impossible to do so. Either hackintosh your PC or emulate a Mac using QEMU, VMware, or VirtualBox!

---
Title: Close TCP port 80 and 443 after forking in Django
Tags: python,django,linux,apache2 | CreationDate: 2012-10-18T03:38:00.000 | Q_Id: 12,946,708 | A_Id: 12,947,246
Q_Score: 5 | ViewCount: 256 | AnswerCount: 1 | Users Score: 2 | Score: 1.2 | is_accepted: true | Available Count: 1 | Topics: System Administration and DevOps, Web Development

Question: I am trying to fork() and exec() a new python script process from within a Django app that is running under apache2/WSGI. The new python process is daemonized so that it doesn't hold any association with apache2, but I know the HTTP ports are still open. The new process kills apache2, but as a result it now holds ports 80 and 443 open, and I don't want this. How do I close ports 80 and 443 from within the new python process? Is there a way to gain access to the socket descriptors so they can be closed?

Answer: If you use the subprocess module to execute the script, the close_fds argument to the Popen constructor will probably do what you want: if close_fds is true, all file descriptors except 0, 1 and 2 will be closed before the child process is executed. Assuming they weren't simply closed, the first three file descriptors are traditionally stdin, stdout and stderr, so the listening sockets in the Django application will be among those closed in the child process.

---
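A minimal sketch of the close_fds behaviour the answer describes; the child command here is just a stand-in for the daemonized script:

```python
import subprocess
import sys

# With close_fds=True every inherited descriptor above 0/1/2 is closed
# before exec, so sockets the parent (e.g. Django under Apache) holds
# open are not leaked into the child process.
child = subprocess.Popen(
    [sys.executable, "-c", "print('child has no extra fds')"],
    stdout=subprocess.PIPE,
    close_fds=True,
)
out, _ = child.communicate()
```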
Title: iPython filepath autocompletion
Tags: python,ipython | CreationDate: 2012-10-18T10:57:00.000 | Q_Id: 12,953,040 | A_Id: 15,759,634
Q_Score: 4 | ViewCount: 910 | AnswerCount: 1 | Users Score: 2 | Score: 1.2 | is_accepted: true | Available Count: 1 | Topics: System Administration and DevOps, Python Basics and Environment

Question: I am running IPython v0.13 on Windows 7 64-bit (qtconsole --pylab=inline). When I type %cd 'C:/Users/My Name/Downloads' it takes me to the desired location. But when I tab-autocomplete between directories, the completion fails if a directory has a space in its name (as in the example). Is there a reason for this, and a solution to overcome it (besides migrating to Linux or using underscores as filename/directory-name separators)? Thanks.

Answer: I just tried this with IPython on my Mac, and auto-completion works well with directories containing spaces. The IPython version I used is 0.13.1. Perhaps simply upgrading your IPython will solve it.

---
Title: Merge (append) GAE datastores from different apps
Tags: python,google-app-engine,google-cloud-datastore | CreationDate: 2012-10-18T14:06:00.000 | Q_Id: 12,956,582 | A_Id: 12,958,199
Q_Score: 1 | ViewCount: 52 | AnswerCount: 2 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 1 | Topics: System Administration and DevOps, Web Development

Question: I have 2 different apps that are basically identical, but I had to set up a 2nd app because of GAE billing issues. I want to merge the datastore data from the 1st app into the 2nd app's datastore; by merge, I simply mean appending the two datastores. To visualise:

App1: SomeModel, AnotherModel
App2: SomeModel, AnotherModel

I want app 2's datastore to be the sum of app2 and app1. The only way I see to transfer data from one app to another on the App Engine administration page will overwrite the target destination data, and I don't want to overwrite. Thanks for any help.

Answer: You can use UrlFetch in App2 to request all the data you need from App1 and process it to create your merged result. It is quite easy to serve and serialize the entities (with a cursor) using JSON for the data exchange in App1.

---
Title: Preserving bash redirection in a python subprocess
Tags: python,redirect,subprocess | CreationDate: 2012-10-18T17:23:00.000 | Q_Id: 12,960,276 | A_Id: 12,960,474
Q_Score: 0 | ViewCount: 1,449 | AnswerCount: 3 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 2 | Topics: System Administration and DevOps, Other

Question: To begin with, I am only allowed to use python 2.4.4. I need to write a process controller in python which launches various subprocesses and monitors how they affect the environment. Each of these subprocesses is itself a python script. When executed from the unix shell, the command lines look something like this:

python myscript arg1 arg2 arg3 >output.log 2>err.log &

I am not interested in the input or the output; python does not need to process them. The python program only needs to know (1) the pid of each process and (2) whether each process is running. The processes run continuously. I have tried reading in the output and just writing it out to a file again, but then I run into issues with readline not being asynchronous, for which there are several answers, many of them very complex. How can I formulate a python subprocess call that preserves the bash redirection operations? Thanks.

Answer: You can use existing file descriptors as the stdout/stderr arguments to subprocess.Popen. This should be equivalent to running with redirection from bash: that redirection is implemented with dup2(2) after fork, and the output never touches your program. You can probably also pass open('/dev/null') as a file object. Alternatively, you can redirect the stdout/stderr of your controller program and pass None as stdout/stderr; children then print to your controller's stdout/stderr without passing through python itself. This works because the children inherit the stdout/stderr descriptors of the controller, which were redirected by bash at launch time.

---
Title: Preserving bash redirection in a python subprocess
Tags: python,redirect,subprocess | CreationDate: 2012-10-18T17:23:00.000 | Q_Id: 12,960,276 | A_Id: 12,961,812
Q_Score: 0 | ViewCount: 1,449 | AnswerCount: 3 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 2 | Topics: System Administration and DevOps, Other

Question: To begin with, I am only allowed to use python 2.4.4. I need to write a process controller in python which launches various subprocesses and monitors how they affect the environment. Each of these subprocesses is itself a python script. When executed from the unix shell, the command lines look something like this:

python myscript arg1 arg2 arg3 >output.log 2>err.log &

I am not interested in the input or the output; python does not need to process them. The python program only needs to know (1) the pid of each process and (2) whether each process is running. The processes run continuously. I have tried reading in the output and just writing it out to a file again, but then I run into issues with readline not being asynchronous, for which there are several answers, many of them very complex. How can I formulate a python subprocess call that preserves the bash redirection operations? Thanks.

Answer: The subprocess module is good. On *ix you can also do this with os.fork() and a periodic os.wait() with WNOHANG.

---
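The shell line `python myscript ... >output.log 2>err.log &` maps onto subprocess by handing the open log files to Popen, as the first answer suggests. A sketch (written for a modern Python, but the Popen arguments used here date back to 2.4; the child command stands in for myscript):

```python
import subprocess
import sys

# Equivalent of: python myscript ... >output.log 2>err.log &
# The descriptors are duplicated into the child, so its output never
# passes through the controller and readline asynchrony never matters.
out = open("output.log", "wb")
err = open("err.log", "wb")
proc = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; print('to stdout'); sys.stderr.write('to stderr\\n')"],
    stdout=out, stderr=err)

pid = proc.pid                 # 1) the pid of the process
proc.wait()                    # (a real controller would poll instead)
running = proc.poll() is None  # 2) whether it is still running
out.close()
err.close()
```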
Title: Installing Chrome Native Client SDK
Tags: python,google-nativeclient | CreationDate: 2012-10-18T22:21:00.000 | Q_Id: 12,964,666 | A_Id: 13,452,816
Q_Score: 1 | ViewCount: 1,326 | AnswerCount: 2 | Users Score: 0 | Score: 1.2 | is_accepted: true | Available Count: 1 | Topics: System Administration and DevOps, Networking and APIs

Question: For the last few days I have been trying to install the Native Client SDK for Chrome on Windows and/or Ubuntu. I'm behind a corporate network, and the only internet access is through an HTTP proxy with authentication involved. When I run "naclsdk update" on Ubuntu, it shows "urlopen error Tunnel connection failed: 407 Proxy Authentication Required". Can anyone please help?

Answer: I found a solution, though not a direct one: I managed to use a program to redirect the HTTPS traffic through the HTTP proxy. The program is called "Proxifier". Works great.

---
Title: Copy file from UNIX to Windows using script
Tags: python,windows,shell,unix | CreationDate: 2012-10-19T10:38:00.000 | Q_Id: 12,972,530 | A_Id: 12,973,389
Q_Score: 2 | ViewCount: 1,745 | AnswerCount: 2 | Users Score: 1 | Score: 0.099668 | is_accepted: false | Available Count: 2 | Topics: System Administration and DevOps

Question: I would like to write a script to automate a task which I do manually every day. This task requires me to download some files from a UNIX server (Solaris) to my desktop (Windows XP) using WinSCP. Is there any way to copy/move the files from a path on the UNIX server to a path on my Windows XP PC using Python or a shell script?

Answer: Have you considered using Cygwin together with rsync? You could write a small bash script that uses rsync to fetch the files you need, and run it as a daily cron job.

---
Title: Copy file from UNIX to Windows using script
Tags: python,windows,shell,unix | CreationDate: 2012-10-19T10:38:00.000 | Q_Id: 12,972,530 | A_Id: 12,972,627
Q_Score: 2 | ViewCount: 1,745 | AnswerCount: 2 | Users Score: 2 | Score: 0.197375 | is_accepted: false | Available Count: 2 | Topics: System Administration and DevOps

Question: I would like to write a script to automate a task which I do manually every day. This task requires me to download some files from a UNIX server (Solaris) to my desktop (Windows XP) using WinSCP. Is there any way to copy/move the files from a path on the UNIX server to a path on my Windows XP PC using Python or a shell script?

Answer: If you are planning to use python, you can use the paramiko library; it has SFTP support. Once you have the file on Windows, use the shutil library to move it to your path on Windows.

---
Title: Efficient way to do large IN query in Google App Engine?
Tags: python,google-app-engine | CreationDate: 2012-10-19T14:43:00.000 | Q_Id: 12,976,652 | A_Id: 12,980,347
Q_Score: 3 | ViewCount: 176 | AnswerCount: 3 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 1 | Topics: Database and SQL, System Administration and DevOps, Web Development

Question: A user accesses his contacts on his mobile device. I want to send all the phone numbers (say 250) back to the server, and then query for any User entities that have matching phone numbers. A user has a phone field which is indexed, so I do User.query(User.phone.IN(phone_list)), but I just looked at AppStats and this is damn expensive: it cost me 250 reads for this one operation, and it's something I expect a user to do often. What are some alternatives? I suppose I could set the User entity's id value to be his phone number (i.e. when creating a user I'd do user = User(id = phone_number)) and then get directly by keys via ndb.get_multi(phones), but I also want to perform this same query with emails. Any ideas?

Answer: I misunderstood part of your problem; I thought you were issuing a query that was giving you 250 entities. I see what the problem is now: you're issuing an IN query with a list of 250 phone numbers, and behind the scenes the datastore is actually doing 250 individual queries, which is why you're getting 250 read ops. I can't think of a way to avoid this. I'd recommend avoiding searches on long lists of phone numbers. This seems like something you'd need to do only once, the first time the user logs in with that phone; try to find some way to store the results and avoid repeating the query.

---
Title: Modify a Google App Engine entity id?
Tags: python,google-app-engine | CreationDate: 2012-10-19T15:05:00.000 | Q_Id: 12,977,110 | A_Id: 13,069,255
Q_Score: 2 | ViewCount: 1,404 | AnswerCount: 1 | Users Score: 3 | Score: 1.2 | is_accepted: true | Available Count: 1 | Topics: System Administration and DevOps, Web Development

Question: I'm using Google App Engine NDB. Sometimes I will want to get all users with a phone number in a specified list. Using queries is extremely expensive for this, so I thought I'd make the id value of the User entity the user's phone number, so I can fetch directly by ids. The problem is that the phone number field is optional, so initially a User entity is created without a phone number and thus without that id value: it would be created as user = User() as opposed to user = User(id = phone_number). So when a user later decides to add a phone number to his account, is there any way to modify that User entity's id value to the new phone number?

Answer: The entity ID forms part of the primary key for the entity, so there's no way to change it. Changing it is identical to creating a new entity with the new key and deleting the old one, which is one thing you can do if you want. A better solution would be to create a PhoneNumber kind that holds a reference to the associated User, allowing you to do lookups with get operations while not requiring every user to have exactly one phone number.

---
Title: How can I defer the execution of Celery tasks?
Tags: python,django,celery,django-celery | CreationDate: 2012-10-22T06:42:00.000 | Q_Id: 13,006,151 | A_Id: 36,787,909
Q_Score: 18 | ViewCount: 25,567 | AnswerCount: 3 | Users Score: 2 | Score: 0.132549 | is_accepted: false | Available Count: 1 | Topics: System Administration and DevOps, Web Development

Question: I have a small script that enqueues tasks for processing. This script makes a whole lot of database queries to get the items that should be enqueued. The issue I'm facing is that the celery workers begin picking up the tasks as soon as they are enqueued by the script. This is correct, and it is the way celery is supposed to work, but it often leads to deadlocks between my script and the celery workers. Is there a way I could enqueue all my tasks from the script but delay execution until the script has completed, or until a fixed time delay? I couldn't find this in the documentation of celery or django-celery. Is this possible? Currently, as a quick fix, I've thought of adding all the items to be processed to a list and, when my script is done executing all the queries, simply iterating over the list and enqueuing the tasks. Maybe this would resolve the issue, but when you have thousands of items to enqueue it might be a bad idea.

Answer: I think you are trying to avoid a race condition between your own scripts, not asking for a method to delay a task run. In that case you can create a wrapper task and, inside it, call each of your tasks with .apply() rather than .apply_async() or .delay(), so that the tasks run sequentially.

---
Title: Writing raw IP data to an interface (linux)
Tags: python,linux,networking,packet | CreationDate: 2012-10-23T23:25:00.000 | Q_Id: 13,040,834 | A_Id: 13,040,908
Q_Score: 3 | ViewCount: 2,540 | AnswerCount: 2 | Users Score: 2 | Score: 0.197375 | is_accepted: false | Available Count: 1 | Topics: System Administration and DevOps, Networking and APIs

Question: I have a file which contains raw IP packets in binary form. The data in the file contains a full IP header, TCP/UDP header, and data. I would like to use any language (preferably python) to read this file and dump the data onto the line. In Linux I know you can write to some devices directly (echo "DATA" > /dev/device_handle). Would using python to do an open on /dev/eth1 achieve the same effect (i.e. could I do echo "DATA" > /dev/eth1)?

Answer: No; there is no /dev/eth1 device node. Network devices are in a different namespace from character/block devices like terminals and hard drives. You must create an AF_PACKET socket to send raw IP packets.

---
Title: Is using Celery for task management in cluster good idea?
Tags: python,cloud,celery,distributed-computing | CreationDate: 2012-10-25T12:12:00.000 | Q_Id: 13,068,300 | A_Id: 13,068,496
Q_Score: 1 | ViewCount: 858 | AnswerCount: 1 | Users Score: 4 | Score: 1.2 | is_accepted: true | Available Count: 1 | Topics: System Administration and DevOps

Question: I'm going to use Celery to manage tasks in a cluster. There will be one master server and some worker servers; the master sends tasks to the worker servers (any number) and gets the results back. Task state should be trackable. The backend is RabbitMQ. Is using Celery a good idea in this case, or are there better solutions?

Answer: IMHO it's a very good idea. I have used it a few times on Amazon EC2 in this manner and it was great each time. One of the big advantages is that it can handle failure of worker servers, so the dynamic nature of the infrastructure is not a problem and you still get things done. I'm sorry this answer is so brief, but I believe it answers the OP's question; there's not much more to it. Celery is great, does the job, and has good docs. Go with it :)

---
Title: Start Another Program From Python >Separately<
Tags: python,subprocess | CreationDate: 2012-10-25T22:08:00.000 | Q_Id: 13,078,071 | A_Id: 13,078,126
Q_Score: 8 | ViewCount: 13,835 | AnswerCount: 2 | Users Score: 2 | Score: 0.197375 | is_accepted: false | Available Count: 1 | Topics: System Administration and DevOps, Python Basics and Environment

Question: I'm trying to run an external, separate program from Python. It wouldn't normally be a problem, but the program is a game with a Python interpreter built into it. When I use subprocess.Popen, it starts the separate program, but does so under the original program's Python instance, so that they share the first Python console. I can end the first program fine, but I would rather have separate consoles (mainly because I have the console start off hidden, but it gets shown when I start the program from Python with subprocess.Popen). I would like to start the second program wholly on its own, as though I had just double-clicked on it. Also, os.system won't work because I'm aiming for cross-platform compatibility, and that's only available on Windows.

Answer:

> When I use subprocess.Popen, it starts the separate program, but does so under the original program's Python instance...

Incorrect.

> ... so that they share the first Python console.

This is the crux of your problem. If you want it to run in another console, then you must run another console and tell it to run your program instead.

> ... I'm aiming for cross-platform compatibility ...

Sorry, there's no cross-platform way to do it. You'll need to run the console/terminal appropriate for the platform.

---
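Consistent with the answer's point that the mechanism differs per platform, one common per-platform sketch (the branch taken depends on the OS; the child command here is a placeholder):

```python
import os
import subprocess
import sys


def launch_separately(argv):
    """Start argv detached from this process's console/session."""
    if os.name == "nt":
        # Windows: give the child its own console window
        return subprocess.Popen(
            argv, creationflags=subprocess.CREATE_NEW_CONSOLE)
    # POSIX: start a new session so the child ignores our terminal
    return subprocess.Popen(argv, start_new_session=True)


proc = launch_separately([sys.executable, "-c", "pass"])
proc.wait()
```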
Title: What is a good storage candidate for soft-realtime data acquisition under Linux?
Tags: python,linux,storage,hdf5,data-acquisition | CreationDate: 2012-10-26T09:55:00.000 | Q_Id: 13,084,686 | A_Id: 13,102,233
Q_Score: 7 | ViewCount: 877 | AnswerCount: 3 | Users Score: -1 | Score: -0.066568 | is_accepted: false | Available Count: 2 | Topics: System Administration and DevOps

Question: I'm building a system for data acquisition. Acquired data typically consists of 15 signals, each sampled at (say) 500 Hz. That is, each second approx 15 x 500 x 4 bytes (signed float) will arrive and have to be persisted. The previous version was built on .NET (C#) using a DB4O db for data storage; this was fairly efficient and performed well. The new version will be Linux-based, using Python (or maybe Erlang), and ... yes! What is a suitable storage candidate? I'm thinking MongoDB, storing each sample (or actually a bunch of them) as a BSON object. Each sample (block) will have a sample counter as a key (indexed) field, as well as a signal source identification. The catch is that I have to be able to retrieve samples pretty quickly: when requested, up to 30 seconds of data have to be retrieved in much less than a second, using a sample counter range and requested signal sources. The current (C#/DB4O) version manages this OK, retrieving data in much less than 100 ms. I know that Python might not be ideal performance-wise, but we'll see about that later on. The system ("server") will have multiple acquisition clients connected, so the architecture must scale well.
Edit: After further research I will probably go with HDF5 for sample data and either Couch or Mongo for more document-like information. I'll keep you posted.
Edit: The final solution was based on HDF5 and CouchDB. It performed just fine, implemented in Python, running on a Raspberry Pi.

Answer: In your case, you could just create 15 files and save each sample sequentially into the corresponding file. This ensures the requested samples are stored contiguously on disk and hence reduces the number of disk seeks while reading.

---
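The 15-files idea can be sketched with fixed-size records, so a sample-counter range becomes a single seek plus one contiguous read (the record layout and names are illustrative, not the asker's actual format):

```python
import io
import struct

# one flat file per signal; each record is (sample_counter, value)
REC = struct.Struct("<If")  # uint32 counter, float32 sample


def append_sample(fh, counter, value):
    fh.write(REC.pack(counter, value))


def read_range(fh, first_index, count):
    # records are written sequentially, so a counter range is one
    # seek followed by one contiguous read
    fh.seek(first_index * REC.size)
    data = fh.read(count * REC.size)
    return [REC.unpack_from(data, i * REC.size)
            for i in range(len(data) // REC.size)]


# demo with an in-memory file standing in for one signal's file
buf = io.BytesIO()
for i in range(10):
    append_sample(buf, i, float(i))
window = read_range(buf, 2, 3)  # records 2, 3, 4
```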
Title: What is a good storage candidate for soft-realtime data acquisition under Linux?
Tags: python,linux,storage,hdf5,data-acquisition | CreationDate: 2012-10-26T09:55:00.000 | Q_Id: 13,084,686 | A_Id: 13,143,593
Q_Score: 7 | ViewCount: 877 | AnswerCount: 3 | Users Score: 2 | Score: 0.132549 | is_accepted: false | Available Count: 2 | Topics: System Administration and DevOps

Question: I'm building a system for data acquisition. Acquired data typically consists of 15 signals, each sampled at (say) 500 Hz. That is, each second approx 15 x 500 x 4 bytes (signed float) will arrive and have to be persisted. The previous version was built on .NET (C#) using a DB4O db for data storage; this was fairly efficient and performed well. The new version will be Linux-based, using Python (or maybe Erlang), and ... yes! What is a suitable storage candidate? I'm thinking MongoDB, storing each sample (or actually a bunch of them) as a BSON object. Each sample (block) will have a sample counter as a key (indexed) field, as well as a signal source identification. The catch is that I have to be able to retrieve samples pretty quickly: when requested, up to 30 seconds of data have to be retrieved in much less than a second, using a sample counter range and requested signal sources. The current (C#/DB4O) version manages this OK, retrieving data in much less than 100 ms. I know that Python might not be ideal performance-wise, but we'll see about that later on. The system ("server") will have multiple acquisition clients connected, so the architecture must scale well.
Edit: After further research I will probably go with HDF5 for sample data and either Couch or Mongo for more document-like information. I'll keep you posted.
Edit: The final solution was based on HDF5 and CouchDB. It performed just fine, implemented in Python, running on a Raspberry Pi.

Answer: Using the keys you described, you should be able to scale via sharding if necessary. 120 kB / 30 s is not that much, so I think you do not need to shard too early. Compared to just using files, you get more sophisticated queries and built-in replication for high availability, DS, or offline processing (MapReduce etc.).

---
Title: Running a Python Application at Startup
Tags: python,linux | CreationDate: 2012-10-27T00:11:00.000 | Q_Id: 13,095,994 | A_Id: 13,097,273
Q_Score: 1 | ViewCount: 791 | AnswerCount: 2 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 1 | Topics: System Administration and DevOps

Question: I am trying to write a custom login program for a Linux system using Python 3. What is the best way to have an application automatically run at startup?

Answer: If it is the login program for X11, you can put it into ~/.xinitrc, the X session startup script.

---
Title: Can I deploy (update) Single Python file to existing Google App Engine application?
Tags: python,google-app-engine,deployment | CreationDate: 2012-10-27T06:52:00.000 | Q_Id: 13,097,975 | A_Id: 13,102,795
Q_Score: 4 | ViewCount: 1,137 | AnswerCount: 1 | Users Score: 5 | Score: 1.2 | is_accepted: true | Available Count: 1 | Topics: System Administration and DevOps, Web Development

Question: Is it possible to update a single .py file in an existing GAE app, something like the way we update cron.yaml using appcfg.py update_cron? Is there any way to update a single .py file? Regards.

Answer: No, there isn't. If you change one file, you need to package and upload the whole application.

---
Title: Fastest way to write to a log in python
Tags: python,logging,uwsgi,gevent | CreationDate: 2012-10-27T09:50:00.000 | Q_Id: 13,099,032 | A_Id: 13,099,158
Q_Score: 1 | ViewCount: 2,514 | AnswerCount: 2 | Users Score: 2 | Score: 1.2 | is_accepted: true | Available Count: 1 | Topics: System Administration and DevOps, Web Development

Question: I am using the gevent loop in uWSGI and I write to a redis queue. I get about 3.5 qps. On occasion there will be an issue with the redis connection, so on failure I write to a file instead, and a separate process will do cleanup later. Because my app is very latency-aware, what is the fastest way to dump to disk in python? Will python logging suffice?

Answer: If latency is a crucial factor for your app, writing to disk could make things really bad. If you want to survive a reboot of your server while redis is still down, I see no solution other than writing to disk; otherwise you may want to try a ramdisk. Are you sure having a second server with a second instance of redis would not be a better choice? Regarding logging, I would simply use low-level I/O functions, as they have less overhead (even if we are talking about very few machine cycles).

---
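The low-level I/O suggestion can be sketched with os.open/os.write; the filename and payloads are illustrative. O_APPEND also keeps concurrent writers (e.g. several gevent workers) from clobbering each other's file offsets:

```python
import os

# unbuffered append-only descriptor: each os.write is a single syscall,
# with none of the formatting/handler overhead of the logging module
fd = os.open("redis_fallback.log",
             os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)


def log_line(payload):
    # one write per line; with O_APPEND the offset update is atomic
    os.write(fd, payload.encode("utf-8") + b"\n")


log_line("queue-item 1")
log_line("queue-item 2")
os.close(fd)
```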
Can someone help walk me through installing PyOpenCL using Cygwin?
| 13,387,079
| 0
| 0
| 1,353
| 0
|
python,python-2.7,cygwin,opencl,pyopencl
|
Did you install Python into Cygwin?
If not, launch setup.exe, get to the packages screen, and do a search for python.
You can install 2.6.8 and 3 side by side if you want.
After that it's like using python anywhere else. You can do a $ python my.py to run my.py. You can install easy_install or pip, etc. if they'll help. Otherwise, follow the directions for PyOpenCL and you should be good!
| 0
| 1
| 0
| 0
|
2012-10-27T21:19:00.000
| 2
| 0
| false
| 13,104,279
| 1
| 0
| 0
| 1
|
I cannot figure out how to install PyOpenCL with Cygwin. I have never used Cygwin before, so I am very lost as to how to launch Python and use it to run my .py setup files.
|
What is good software design practice for taking multiple pairs of files on the command line?
| 13,114,163
| 1
| 0
| 65
| 0
|
python,software-design
|
If files logically belong together in pairs, the least error-prone method is probably to require them to be entered together, e.g.
mycommand -Pair FileA1,FileA2 -Pair FileB1,FileB2
That way, you can enforce the contract that files must be entered in pairs (any -Pair argument without two input files can generate an error), and it is obvious to the user that the files must be entered together.
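A hedged sketch of that contract with argparse (the option name, help text, and script are invented for illustration):

```python
import argparse

def parse_pairs(argv=None):
    # Each --pair option consumes exactly two file names, so argparse
    # itself rejects unpaired input and keeps each pair together.
    parser = argparse.ArgumentParser(description="process file pairs")
    parser.add_argument("--pair", dest="pairs", nargs=2, action="append",
                        required=True, metavar="FILE",
                        help="one pair of input files, e.g. --pair U1 M1")
    return parser.parse_args(argv).pairs
```

`parse_pairs(['--pair', 'U1', 'M1', '--pair', 'U2', 'M2'])` returns `[['U1', 'M1'], ['U2', 'M2']]`, while a lone file after `--pair` is reported as a usage error before the script runs.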
| 0
| 1
| 0
| 0
|
2012-10-28T23:30:00.000
| 1
| 1.2
| true
| 13,114,116
| 1
| 0
| 0
| 1
|
I'm writing a Python script that takes five pairs of files as arguments. I would like to allow the user to input these files as command-line arguments, but I'm worried he will put the files in the wrong order or not put a file right after the file it's paired with. How can I design my command-line arguments to avoid this problem in the least clunky way possible?
For example, if the files are "U1", "M1", "U2", "M2", "U3", "M3", "U4", "M4", "U5", "M5", I'm afraid the person might put the files in the order "U1 U2 U3 U4 U5 M1 M2 M3 M4 M5", or "U1 M2 U3 M4 M5 ..."
|
Twisted Conch - Flow control
| 13,274,700
| 3
| 5
| 485
| 0
|
python,ssh,twisted
|
I haven't used Twisted and don't know Conch at all, but with nobody else answering, I'll give it a shot.
As a general principle, you probably want to buffer very little if any in the middle of the network. (Jim Gettys' notes on "buffer bloat" are enlightening.) So it's clear that you're asking a sensible question.
I assume Conch calls a function in your code when data arrives from the client. Does it suffice to simply not return from that call until you can deliver the data to the backend server? The kernel is still going to buffer data in both the inbound and outbound sockets, so the condition won't be signalled to the downstream client immediately, but I'd expect it to settle into a steady state.
As an alternative, of course, you could tunnel across this router at a different layer than SSH. If you tunnel at a lower layer so you have one end-to-end TCP connection, then the TCP stack should figure out a good window size.
If you tunnel at a higher layer, by doing a git push to the intermediate server and then using a post-receive hook to push the objects the rest of the way, then you get maximal buffering (it's all spooled to disk) and faster response time to the client, though longer total latency. It has the distinct advantage of being much simpler to implement.
| 0
| 1
| 0
| 0
|
2012-10-29T07:28:00.000
| 1
| 0.53705
| false
| 13,117,502
| 0
| 0
| 0
| 1
|
I have a Twisted Conch SSH server and the typical scenario is this:
git via OpenSSH client >>--- WAN1 --->> Twisted conch svr >>--- WAN2 -->> Git server
There will be occasions when the 'git push' is sending data faster over WAN1 than I can proxy it over WAN2, so I need to tell the client to slow down (well before any TCP packet loss causes adjustments to the TCP window size) to avoid buffering too much on the Twisted server. Reading the RFC for SSH, this is accomplished by not acknowledging via the adjust-window message; this will then cause the git push to block on a syscall write to the pipe backed by OpenSSH.
Looking at conch/ssh/connection.py:L216 in the method def ssh_CHANNEL_DATA(self, packet):
I can accomplish this by setting localWindowSize to 0, and in-flight data will still land as the predicate on line 230 should still pass (given localWindowLeft). I am wondering if this is the correct approach, or am I missing something blindingly obvious with regards to flow control with Twisted SSH Conch?
Note: I acknowledge there are placeholder methods stopWriting and startWriting on (channel) that I can override, so I have hooks to control the other side of the transmission ('git pull'), but I'm interested in the other side. Also IPush/IPull producer don't seem applicable at this level and I can't see how I can tie in these higher abstractions without butchering Conch.
|
Python - get file path programmatically?
| 13,140,093
| 1
| 1
| 598
| 0
|
python,windows,path,os.system
|
I think you can add the location of the files to the PATH environment variable. Follow these steps: go to My Computer -> right click -> Properties -> Advanced System Settings -> click Environment Variables. Now click PATH and then click EDIT. In the variable value field, go to the end, append ';' (without quotes), and then add the absolute path of the folder containing the .exe file you want to run from your program.
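If the lookup should happen from Python itself rather than by editing PATH by hand, `shutil.which` (available since Python 3.3) performs the same PATH search the shell does; this wrapper is just an illustrative sketch:

```python
import shutil

def resolve_program(name, search_path=None):
    # Returns the full path to `name` if it can be found on PATH
    # (or on the explicit `search_path`), else None. On Windows the
    # search also honours PATHEXT, so "notepad" and "notepad.exe"
    # are treated alike.
    return shutil.which(name, path=search_path)
```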
| 0
| 1
| 0
| 0
|
2012-10-30T00:41:00.000
| 1
| 1.2
| true
| 13,131,699
| 1
| 0
| 0
| 1
|
I am trying to create a Python program that uses the os.system() function to create a new process (application) based on user input... However, this only works when the user inputs "notepad.exe". It does not work, for instance, when a user inputs "firefox.exe". I know this is a path issue because the error says that the file does not exist. I assume then that Windows has some default path setup for notepad that does allow notepad to run when I ask it to? So this leads to my question: is there any way to programmatically find the path to any application a user inputs, assuming it does in fact exist? I find it hard to believe the only way to open a file is by defining the entire path at some point. Or maybe there's a way that Windows does this for me that I do not know how to access? Any help would be great, thanks!
|
How to run a command on an app and get the exit code on cloudfoundry
| 13,139,010
| 1
| 2
| 127
| 0
|
python,ssh,cloud-foundry
|
I assume you mean get the result from outside of CloudFoundry (i.e. not one app launching another app and getting result, stdout and stderr).
You can only access CloudFoundry apps over http(s), so you would have to find a way to wrap your invocation into something that exposes everything you need as http.
| 0
| 1
| 0
| 0
|
2012-10-30T10:18:00.000
| 1
| 0.197375
| false
| 13,136,885
| 0
| 0
| 1
| 1
|
We need to run arbitrary commands on cloudfoundry. (The deployed apps are Python/Django, but the language for this solution does not matter). Ideally over ssh, but the protocol does not matter.
We need a reliable way to get the exit code of the command that was run, as well as its stderr and stdout. If possible, the command running should be synchronous (as in, blocks the client until the command finished on the cloudfoundry app).
Is there a solution out there that allows us to do this, or what would be a good way to approach this issue?
|
Why do pythonics prefer pip over their OS's package managers?
| 13,138,198
| 7
| 5
| 207
| 0
|
python,pip,package-managers
|
There are two main reasons that Pythonistas generally recommend pip. The first is that it is the one package manager that is pretty much guaranteed to come with a Python installation, and therefore, independent of the OS on which you are using it. This makes it easier to provide instructions that work on Windows and OS X, as well as your favourite Linux.
Perhaps more importantly, pip works very nicely with virtualenv, which allows you to easily have multiple conflicting package configurations and test them without breaking the global Python installation. If I remember correctly, this is because pip is itself a Python program and it automatically runs within the current virtualenv sandbox. OS-level package managers obviously do not do this, since it isn't their job.
Also, as @Henry points out below, it's easier to just list your package in a single place (PyPI) rather than depending on the Debian/Ubuntu/Fedora maintainers to include it in their package list. Less popular packages will almost never make it into a distribution's package list.
That said, I often find that installing global libraries (numpy, for example) is easier and less painful with apt-get or your favourite alternative.
| 0
| 1
| 0
| 0
|
2012-10-30T11:30:00.000
| 1
| 1.2
| true
| 13,138,077
| 1
| 0
| 0
| 1
|
When reading tutorials and readmes I often see people advertising installing Python packages with pip, even when they are using an operating system which has a nice package manager like apt. Yet in real life I have only met people who would only install things with their OS's package manager, reasoning that this package manager will treat all packages the same, no matter whether they are Python or not.
|
Appengine SDK 1.7.3 not detecting updated files
| 13,638,216
| 3
| 3
| 294
| 0
|
python,google-app-engine
|
A similar issue happens with appcfg.py in SDK 1.7.3, where it sometimes skips uploading some files. It looks like this only happens if appcfg.py is run under Python 2.7.
The workaround is to simply run appcfg.py under python 2.5. Then the upload works reliably.
The code uploaded can still be 2.7 specific - it is only necessary to revert 2.5 in the step of running the uploader function in appcfg.py.
| 0
| 1
| 0
| 0
|
2012-10-30T22:29:00.000
| 2
| 0.291313
| false
| 13,148,512
| 0
| 0
| 1
| 1
|
I just updated to SDK 1.7.3 running on Linux. At the same time I switched to the SQLite datastore stub, as suggested by the deprecation message.
After this, edits to source files are not always detected, and I have to stop and restart the SDK after updating, probably one time in ten. Is anyone else seeing this? Any ideas on how to prevent it?
UPDATE: Changes to python source files are not being detected. I haven't made any modifications to yaml files, and I believe that jinja2 template file modifications are being detected properly.
UPDATE: I added some logging to the dev appserver and found that the file I'm editing is not being monitored. Continuing to trace what is happening.
|
Efficient way to store comments in Google App Engine?
| 13,151,123
| 2
| 2
| 171
| 0
|
python,google-app-engine
|
If comments are threaded, storing them as separate entities might make sense.
If comments can be the target of voting, storing them as separate entities makes sense.
If comments can be edited, storing them as separate entities reduces contention, and avoids having to either do pessimistic locking on all comments, or risk situations where the last edit overwrites prior edits.
If you can page through comments, storing them as separate entities makes sense for multiple reasons, indexing being one.
| 0
| 1
| 0
| 0
|
2012-10-31T00:41:00.000
| 2
| 0.197375
| false
| 13,149,663
| 0
| 0
| 1
| 1
|
With Google App Engine, an entity is limited to 1 MB in size. Say I have a blog system, and expect thousands of comments on each article, some paragraphs in lengths. Typically, without a limit, you'd just store all the comments in the same entity as the blog post. But here, there would be concerns about reaching the 1 MB limit.
The other possible way, though far less efficient, is to store each comment as a separate entity, but that would require many reads to get all comments instead of just one read to get the blog post and its comments (if they were in the same entity).
What's an efficient way to handle a case like this?
|
async db access between requests on GAE/Python
| 13,181,950
| 0
| 0
| 82
| 0
|
python,google-app-engine,asynchronous,google-cloud-datastore
|
You cannot start an async API call in one request and get its result in another. The HTTP serving infrastructure will wait for all API calls started in a request to complete before the HTTP response is sent back; the data structure representing the async API call will be useless in the second request (even if it hits the same instance).
You might try Appstats to figure out what API calls your request is making and see if you can avoid some, use memcache for some, or parallelize.
You might also use NDB which integrates memcache in the datastore API.
| 0
| 1
| 0
| 0
|
2012-10-31T13:25:00.000
| 1
| 0
| false
| 13,159,051
| 0
| 0
| 1
| 1
|
I'm trying to optimize my GAE webapp for latency.
The app has two requests which usually come one after another.
Is it safe to start an async db/memcache request during the first request and then use its results inside the following request?
(I'm aware that the second request might hit another instance. It would be handled as a cache miss)
|
How can i call robocopy within a python script to bulk copy multiple folders?
| 26,222,233
| 4
| 1
| 16,661
| 0
|
python,windows-7,command-line,copy,robocopy
|
Like halfs13 said, use subprocess, but you might need to format the call like so:
from subprocess import call
call(["robocopy", "fromdir", "todir", "/S"])
Otherwise it may read everything as the source argument.
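A slightly fuller sketch that also sends robocopy's report to an external log file, as the question asks (the paths and the choice of /E are illustrative; robocopy exit codes below 8 indicate success):

```python
import subprocess

def build_robocopy_cmd(src, dst, log_path):
    # /E copies subdirectories including empty ones; /LOG: writes the
    # transfer report to an external file. Each list element is a
    # separate argv entry, so paths with spaces need no extra quoting.
    return ["robocopy", src, dst, "/E", "/LOG:" + log_path]

def robocopy_tree(src, dst, log_path):
    rc = subprocess.call(build_robocopy_cmd(src, dst, log_path))
    # Exit codes 0-7 mean varying degrees of success; 8 and above
    # indicate real failures, so don't use check_call here.
    return rc < 8
```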
| 0
| 1
| 0
| 0
|
2012-10-31T15:41:00.000
| 5
| 0.158649
| false
| 13,161,659
| 0
| 0
| 0
| 1
|
I am trying to move multiple large folders (> 10 GB, > 100 subfolders, > 2000 files) between network drives. I have tried using the shutil.copytree command in Python, which works fine except that it fails to copy a small percentage (< 1%) of files for different reasons.
I believe robocopy is the best option for me as I can create a log file documenting the transfer process. However, as I need to copy > 1000 folders, manual work is out of the question.
So my question is essentially: how can I call robocopy (i.e. the command line) from within a Python script, making sure that the log is written to an external file?
I am working in a Windows 7 environment and Linux/Unix is out of the question due to organizational restrictions. If someone has any other suggestions for bulk copying so many folders with a lot of flexibility, they are welcome.
|
Why use Tornado and Flask together?
| 13,219,183
| 2
| 43
| 46,076
| 0
|
python,web,webserver,flask,tornado
|
Instead of using Apache as your server, you'll use Tornado (of course as a blocking server, due to the synchronous nature of WSGI).
| 0
| 1
| 0
| 0
|
2012-10-31T17:59:00.000
| 4
| 0.099668
| false
| 13,163,990
| 0
| 0
| 1
| 1
|
As far as I can tell Tornado is a server and a framework in one. It seems to me that using Flask and Tornado together is like adding another abstraction layer (more overhead). Why do people use Flask and Tornado together, what are the advantages?
|
Python to .bat conversion
| 13,179,540
| 2
| 5
| 19,619
| 0
|
python,windows,batch-file
|
No, I don't think you can reasonably expect to do this.
Batch files are executed by the Windows command interpreter, which is far more primitive.
Python is a full-blown programming language with a rich and powerful library of standard modules for all sorts of tasks. All the Windows command interpreter can do is act like a broken shell.
On the other hand, Python is available on Windows, so just tell the user to install it and run your program directly.
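If a double-clickable entry point is still wanted, a common compromise is a one-line .bat launcher that hands the script to the installed interpreter; the helper below is a hypothetical sketch, not a conversion:

```python
import os

def write_bat_launcher(script_path, bat_path):
    # %* forwards any command-line arguments on to the script. Python
    # must still be installed (and on PATH) on the target machine.
    line = '@python "%s" %%*\r\n' % os.path.abspath(script_path)
    with open(bat_path, "wb") as f:
        f.write(line.encode("ascii"))  # .bat files expect CRLF endings
    return line
```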
| 0
| 1
| 0
| 0
|
2012-11-01T14:46:00.000
| 4
| 0.099668
| false
| 13,179,515
| 1
| 0
| 0
| 1
|
I would like a user on Windows to be able to run my Python program, so I want to convert it to a .bat file. Is there a way to convert it? I've tried searching, but didn't find anything.
|
Python not running from cmd in Windows 8 after upgrade
| 13,200,981
| 2
| 0
| 3,542
| 0
|
python,windows,cmd
|
The reason it works from the menu item but not from the command prompt is that the menu item specifies the "Start in" directory where the Python executable can be found.
Chances are the Win 7 -> Win 8 upgrade failed to preserve the PATH environment variable, where the path to Python was previously specified, allowing you to invoke Python from any command prompt console.
| 0
| 1
| 0
| 0
|
2012-11-02T18:17:00.000
| 3
| 0.132549
| false
| 13,200,900
| 1
| 0
| 0
| 2
|
I can run the command line version of Python but I cannot seem to run it from the command prompt. I have recently upgraded from Windows 7 to Windows 8 and it worked fine with Windows 7. Now Windows 8 will not recognize Python. Thanks, William
|
Python not running from cmd in Windows 8 after upgrade
| 15,230,805
| 0
| 0
| 3,542
| 0
|
python,windows,cmd
|
Go to C:\python33 or wherever you installed it.
Right click on "pythonw" and pin it to the taskbar.
Run it from the taskbar.
| 0
| 1
| 0
| 0
|
2012-11-02T18:17:00.000
| 3
| 0
| false
| 13,200,900
| 1
| 0
| 0
| 2
|
I can run the command line version of Python but I cannot seem to run it from the command prompt. I have recently upgraded from Windows 7 to Windows 8 and it worked fine with Windows 7. Now Windows 8 will not recognize Python. Thanks, William
|
Google App Engine Update Issue
| 13,236,236
| 1
| 0
| 937
| 0
|
python,google-app-engine
|
After 3 days of endless searching, I have figured out the problem. If you are facing this issue, the first thing you have to check is your system time; mine was incorrect due to daylight saving changes.
Thanks
| 0
| 1
| 0
| 0
|
2012-11-03T05:53:00.000
| 2
| 1.2
| true
| 13,206,438
| 0
| 0
| 1
| 1
|
I am working on an application (Python based) which is deployed on GAE. It was working fine until yesterday, but I can't seem to update any code on App Engine since this morning; it is complaining about some sort of issue with the password. I have double checked, and the email id and password are correct.
here is the stack trace which I receive:
10:47 PM Cloning 706 static files.
2012-11-03 22:47:07,913 WARNING appengine_rpc.py:542 ssl module not found.
Without the ssl module, the identity of the remote host cannot be verified, and
connections may NOT be secure. To fix this, please install the ssl module from
http://pypi.python.org/pypi/ssl .
To learn more, see https://developers.google.com/appengine/kb/general#rpcssl
Password for user@gmail.com: 2012-11-03 22:47:07,913 ERROR appcfg.py:2266 An unexpected error occurred. Aborting.
Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2208, in DoUpload
missing_files = self.Begin()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 1934, in Begin
CloneFiles('/api/appversion/cloneblobs', blobs_to_clone, 'static')
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 1929, in CloneFiles
result = self.Send(url, payload=BuildClonePostBody(chunk))
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 1841, in Send
return self.rpcserver.Send(url, payload=payload, **self.params)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 403, in Send
self._Authenticate()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 543, in _Authenticate
super(HttpRpcServer, self)._Authenticate()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 293, in _Authenticate
credentials = self.auth_function()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2758, in GetUserCredentials
password = self.raw_input_fn(password_prompt)
EOFError: EOF when reading a line
10:47 PM Rolling back the update.
2012-11-03 22:47:08,818 WARNING appengine_rpc.py:542 ssl module not found.
Without the ssl module, the identity of the remote host cannot be verified, and
connections may NOT be secure. To fix this, please install the ssl module from
http://pypi.python.org/pypi/ssl .
To learn more, see https://developers.google.com/appengine/kb/general#rpcssl
Password for user@gmail.com: Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 171, in
run_file(__file__, globals())
File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 167, in run_file
execfile(script_path, globals_)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4322, in
main(sys.argv)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4313, in main
result = AppCfgApp(argv).Run()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2599, in Run
self.action(self)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4048, in __call__
return method()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3065, in Update
self.UpdateVersion(rpcserver, self.basepath, appyaml)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3047, in UpdateVersion
lambda path: self.opener(os.path.join(basepath, path), 'rb'))
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2267, in DoUpload
self.Rollback()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2150, in Rollback
self.Send('/api/appversion/rollback')
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 1841, in Send
return self.rpcserver.Send(url, payload=payload, **self.params)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 403, in Send
self._Authenticate()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 543, in _Authenticate
super(HttpRpcServer, self)._Authenticate()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 293, in _Authenticate
credentials = self.auth_function()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2758, in GetUserCredentials
password = self.raw_input_fn(password_prompt)
EOFError: EOF when reading a line
2012-11-03 22:47:09 (Process exited with code 1)
You can close this window now.
Any help will be appreciated.
P.S. I have tried from the command line as well as the Google App Engine Launcher.
|
How small is *too small* for an opensource project?
| 13,214,376
| 1
| 2
| 119
| 0
|
python,open-source,driver,libraries
|
The smaller the better.
A 10-line function to convert HSV to RGB, or find the closest point to a triangle, or something like a CAN/GPIB driver, is far more likely to be read and used than a massive, complicated, poorly documented framework
| 0
| 1
| 0
| 0
|
2012-11-03T22:58:00.000
| 3
| 1.2
| true
| 13,214,349
| 0
| 0
| 0
| 3
|
I have a fair number of smaller projects / libraries that I have been using over the past 2 years. I am thinking about moving them to Google Code to make it easier to share with co-workers and easier to import them into new projects in my own environments. They are things like simple FSMs, CAN (Controller Area Network) drivers, and GPIB drivers. Most of them are small (less than 500 lines), so it makes me wonder: are these types of things too small for a stand-alone open-source project?
Note that I would like to make them open source because it does not give me, or my company, any real advantage.
|
How small is *too small* for an opensource project?
| 13,214,378
| 0
| 2
| 119
| 0
|
python,open-source,driver,libraries
|
500 lines? That's a lot in my opinion.
It sounds just fine to publish them as a project. I mean, how many blog posts have you read with just some code that saved you hours?
Now imagine that, but with 500 lines of code and a permanent host designed for the purpose.
| 0
| 1
| 0
| 0
|
2012-11-03T22:58:00.000
| 3
| 0
| false
| 13,214,349
| 0
| 0
| 0
| 3
|
I have a fair number of smaller projects / libraries that I have been using over the past 2 years. I am thinking about moving them to Google Code to make it easier to share with co-workers and easier to import them into new projects in my own environments. They are things like simple FSMs, CAN (Controller Area Network) drivers, and GPIB drivers. Most of them are small (less than 500 lines), so it makes me wonder: are these types of things too small for a stand-alone open-source project?
Note that I would like to make them open source because it does not give me, or my company, any real advantage.
|
How small is *too small* for an opensource project?
| 13,214,412
| 1
| 2
| 119
| 0
|
python,open-source,driver,libraries
|
Do not think about number of lines of code, think about utility of your code. If your code is useful for somebody, upload your code to a repository or repositories, write wiki, examples, etc. I saw a useful Python library that was less than 100 lines.
| 0
| 1
| 0
| 0
|
2012-11-03T22:58:00.000
| 3
| 0.066568
| false
| 13,214,349
| 0
| 0
| 0
| 3
|
I have a fair number of smaller projects / libraries that I have been using over the past 2 years. I am thinking about moving them to Google Code to make it easier to share with co-workers and easier to import them into new projects in my own environments. They are things like simple FSMs, CAN (Controller Area Network) drivers, and GPIB drivers. Most of them are small (less than 500 lines), so it makes me wonder: are these types of things too small for a stand-alone open-source project?
Note that I would like to make them open source because it does not give me, or my company, any real advantage.
|
Is there a way to atomically run a function as a transaction within google app engine when the function modifies more than five entity groups?
| 13,228,117
| 0
| 1
| 66
| 0
|
python,google-app-engine
|
XG transactions are limited to a few entity groups for performance reasons. Running an XG transaction across hundreds of entity groups would be incredibly slow.
Can you break your function up into many sub-functions, one for each entity group? If so, you should have no trouble running them individually, or on the task queue.
| 0
| 1
| 0
| 0
|
2012-11-05T03:25:00.000
| 2
| 0
| false
| 13,225,540
| 0
| 0
| 1
| 1
|
I'm developing on Google App Engine. The focus of this question is a Python function that modifies hundreds of entity groups. The function takes one string argument. I want to execute this function as a transaction because there are instances right now when the same function with the same string argument is run simultaneously, resulting in unexpected results. I want the function to execute in parallel if the string arguments are different; but if the string arguments are the same, the calls should run serially.
Is there a way to run a transaction on a function that modifies so many entity groups? So far, the only solution I can think of is flipping a database flag for each unique string parameter, and checking for the flag (deferring execution if the flag is set as True). Is there a more elegant solution?
|
Run python script from CruiseControl.NET
| 13,293,387
| 2
| 0
| 386
| 0
|
python,cruisecontrol.net,nant
|
I don't think there is a built-in Python task, but you should be able to execute it by crafting an exec task.
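For example, an exec task in the project's ccnet.config might look roughly like this (the interpreter path, working directory, script name, and timeout are all placeholders):

```xml
<tasks>
  <exec>
    <executable>C:\Python27\python.exe</executable>
    <baseDirectory>C:\build\myproject</baseDirectory>
    <buildArgs>run_build.py --verbose</buildArgs>
    <buildTimeoutSeconds>600</buildTimeoutSeconds>
  </exec>
</tasks>
```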
| 0
| 1
| 0
| 1
|
2012-11-05T08:50:00.000
| 1
| 1.2
| true
| 13,228,677
| 0
| 0
| 0
| 1
|
Is it possible to run a Python script from CruiseControl.NET? Is there a CCNet task or a NAnt task that can be used?
|
Why does calling get() on memcache increase item count in Google App Engine?
| 13,389,191
| 1
| 4
| 195
| 0
|
python,google-app-engine,memcached
|
memcache.get should not increase the item count in the memcache statistics, and I'm not able to reproduce that behavior in production.
The memcache statistic page is global, so if you happen to have other requests (live, or through task queue) going to your application at the same time you're using the remote api, that could increase that count.
| 0
| 1
| 0
| 0
|
2012-11-05T14:26:00.000
| 1
| 0.197375
| false
| 13,234,094
| 0
| 0
| 1
| 1
|
I'm looking at the Memcache Viewer in my admin console on my deployed Google App Engine NDB app. For testing, I'm using the remote API. I'm doing something very simple: memcache.get('somekey'). For some reason, every time I call this line and hit refresh on my statistics, the item count goes up by 2. This happens whether or not the key exists in memcache.
Any ideas why this could happen? Is this normal?
|
Python Argparse without command keyword
| 13,238,122
| -1
| 0
| 813
| 0
|
python,python-3.x,argparse
|
Try looking into using sys.argv.
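A minimal sketch of the no-keyword form with sys.argv (the helper name is invented): the first positional argument, if present, is taken as the directory to list.

```python
import os
import sys

def ls_entries(argv):
    # "ls test1" -> argv == ["ls", "test1"]: list test1;
    # plain "ls" -> argv == ["ls"]: list the current directory.
    target = argv[1] if len(argv) > 1 else "."
    return sorted(os.listdir(target))

if __name__ == "__main__":
    for name in ls_entries(sys.argv):
        print(name)
```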
| 0
| 1
| 0
| 0
|
2012-11-05T18:16:00.000
| 2
| -0.099668
| false
| 13,237,955
| 1
| 0
| 0
| 1
|
So, what I am trying to do is an ls mimic in Python code.
I have completed most of the commands, but there is one thing for which I can't find the solution.
So, if the ls command gets this command line,
ls test1
it should find the test1 directory and then do the ls in that directory.
However, I can only find ways of creating arguments that need a keyword before the
actual value, so I can't find a way of doing this.
It can't be something like this:
ls -move_dir test1
It is fine for the program to treat a bare argument the way described above
(it will find that directory and run ls).
Please help me!
|
Which GAE Database Property Fits a Tag Property?
| 13,250,634
| 1
| 0
| 72
| 0
|
python,google-app-engine,google-cloud-datastore
|
A repeated string property is your best option.
| 0
| 1
| 0
| 0
|
2012-11-05T22:34:00.000
| 2
| 0.099668
| false
| 13,241,503
| 0
| 0
| 1
| 1
|
I want to have a property on a database model of mine in Google App Engine, and I am not sure which type works best. I need it to be a tag cloud similar to the tags on SO. Would a text property be best, or should I use a string property and make it repeated=True?
The second seems best to me, and then I can just divide the tags up with a comma as a delimiter. My goal is to be able to search through these tags and count the total number of each type of tag.
Does this seem like a reasonable solution?
|
Python/Google Apps Script integration
| 13,249,247
| 0
| 0
| 164
| 0
|
python,google-apps-script
|
Yes. You would need to authorize it the first time and implement OAuth from the script, though. I strongly suggest that you switch to the Google Drive API.
| 0
| 1
| 0
| 0
|
2012-11-06T10:34:00.000
| 1
| 0
| false
| 13,249,147
| 0
| 0
| 0
| 1
|
Would it be possible for some type of Python script to check services running on a Linux box, and integrate with a Google Apps Script, so it would then populate a Google Docs spreadsheet stating whether a service is running or not?
|
what does the cursor() method of GAE Query class return if the last result was already retrieved?
| 13,252,901
| 3
| 2
| 119
| 0
|
python,google-app-engine
|
There's still a cursor, even if the last result is retrieved. The query class doesn't know that, in any case: it knows what you've had already, but it doesn't know what else is still to come. The cursor doesn't represent any actual result, it's simply a way of resuming the query later. In fact, it's possible to use a cursor even in the case where you reach the end of the data set on your initial query, but later updates mean that new items are now found on a subsequent request: for example, if you're ordering by last update time.
(Good username, btw: gotta love some PKD.)
| 0
| 1
| 0
| 0
|
2012-11-06T14:03:00.000
| 2
| 0.291313
| false
| 13,252,683
| 0
| 0
| 1
| 1
|
From the Google App Engine documentation:
"cursor() returns a base64-encoded cursor string denoting the position in the query's result set following the last result retrieved."
What does it return if the last result retrieved IS the last result in the query set? Wouldn't this mean that there is no position that can 'follow' the last result retrieved? Therefore, is 'None' returned?
|
Using VIM for Python IDE in Windows?
| 19,894,858
| 0
| 1
| 4,165
| 0
|
python,django,windows,vim,ide
|
One possible compromise is to use your favorite IDE with a vim emulator plugin. For example, in Eclipse you can use Vrapper, PyCharm has IdeaVim and so forth. Lighttable also has vim key-bindings. The plug-ins (or key-binding options) give you some of the benefits of editing in Vim while still having the powerful debugging / navigation features, etc. of a full-blown IDE. BTW, Vrapper works with PyDev.
Using an emulator in an IDE allows you to gain the "muscle-memory" necessary for effective vim editing, without getting bogged down in "configuration hell" associated with turning an editor into an IDE (which auto-complete plugin do I use?..etc.?). Once you have mastered the vim keystrokes for normal and visual mode, used along with insert mode, you may decide to continue on into pure Vim and face those issues.
| 0
| 1
| 0
| 0
|
2012-11-06T14:50:00.000
| 5
| 0
| false
| 13,253,510
| 1
| 0
| 0
| 1
|
I am coming to Python from the .NET world, where Visual Studio was a great tool I used.
In the Python world we have the basic IDLE, and another option is Vim. I have seen that a lot of developers have configured Vim into a great IDE. Basic Vim on Windows 7 is much less useful out of the box.
So I want to customize my Vim to a level where it has a file explorer, syntax highlighting, search, error highlighting, etc., so that it feels like Visual Studio and is more productive.
But most hacks/tips available are for Linux/Ubuntu users, which I may use later, but for now I need to make my Vim on Windows more productive and visual.
Please Suggest some Tips/Hacks/Resources to look around for VIM configuration?
Thanks
|
Python deployment for distributed application
| 13,296,458
| 2
| 4
| 821
| 0
|
python,web-applications,deployment
|
It all depends on your application.
You can:
use Puppet to deploy servers,
use Fabric to remotely connect to the servers and execute specific tasks,
use pip for distributing Python modules (even non-public ones) and install dependencies,
use other tools for specific tasks (such as use boto to work with Amazon Web Services APIs, eg. to start new instance),
It is not always that simple and you will most likely need something customized. Just take a look at your system: it is not so "standard", so do not expect it to be handled in a "standard" way.
| 0
| 1
| 0
| 0
|
2012-11-08T19:29:00.000
| 1
| 1.2
| true
| 13,296,320
| 0
| 0
| 0
| 1
|
We are developing a distributed application in Python. Right now, we are about to re-organize some of our system components and deploy them on separate servers, so I'm looking to understand more about deployment for an application such as this. We will have several back-end code servers, several database servers (of different types) and possibly several front-end servers.
My question is this: what / which are good deployment patterns for distributed applications (in Python or in general)? How can I manage pushing code to several servers (whose IP's should be parameterized in the deployment system), static files to several front ends, starting / stopping processes in the servers, etc.? We are looking for possibly an easy-to-use solution, but mostly, something that once set-up will get out of our way and let us deploy as painlessly as possible.
To clarify: we are aware that there is no one standard solution for this particular application, but this question is rather more geared towards a guide of best practices for different types / parts of deployment than a single, unified solution.
Thanks so much! Any suggestions regarding this or other deployment / architecture pointers will be very appreciated.
|
Apache/PHP to Nginx/Tornado/Python
| 13,304,821
| 6
| 5
| 2,544
| 0
|
php,python,django,nginx,tornado
|
I'll go point by point:
Yes. It's ok to run tornado and nginx on one server. You can use nginx as reverse proxy for tornado also.
Haproxy will give you benefit, if you have more than one server instances. Also it will allow you to proxy websockets directly to tornado.
Actually, nginx can be used for redirects, with no problems. I haven't heard about using redis for redirects - it's key/value storage... may be you mean something else?
Again, you can write blocking part in django and non-blocking part in tornado. Also tornado has some non-blocking libs for db queries. Not sure that you need powers of django here.
Yes, it's ok to run apache behind nginx. A lot of projects use nginx in front of apache for serving static files.
Actually question is very basic - answer also. I can be more detailed on any of the point if you wish.
| 0
| 1
| 0
| 1
|
2012-11-08T22:31:00.000
| 1
| 1.2
| true
| 13,299,023
| 0
| 0
| 1
| 1
|
Our website has developed a need for real-time updates, and we are considering various comet/long-polling solutions. After researching, we have settled on nginx as a reverse proxy to 4 tornado instances (hosted on Amazon EC2). We are currently using the traditional LAMP stack and have written a substantial amount of code in PHP. We are willing to convert our PHP code to Python to better support this solution. Here are my questions:
Assuming a quad-core processor, is it ok for nginx to be running on the same server as the 4 tornado instances, or is it recommended to run two separate servers: one for nginx and one for the 4 tornado processes?
Is there a benefit to using HAProxy in front of Nginx? Doesn't Nginx handle load-balancing very well by itself?
From my research, Nginx doesn't appear to have a great URL redirecting module. Is it preferred to use Redis for redirects? If so, should Redis be in front of Nginx, or behind?
A large portion of our application code will not be involved in real-time updates. This code contains several database queries and filesystem reads, so it clearly isn't suitable for a non-blocking app server. From my research, I've read that the blocking issue is mitigated simply by having multiple Tornado instances, while others suggest using a separate app server (ex. Gunicorn/Django/Flask) for blocking calls. What is the best way to handle blocking calls when using a non-blocking server?
Converting our code from PHP to Python will be a lengthy process. Is it acceptable to simultaneously run Apache/PHP and Tornado behind Nginx, or should we just stick to on language (either tornado with gunicorn/django/flask or tornado by itself)?
|
python-twisted and SIGKILL
| 13,306,625
| 4
| 5
| 574
| 0
|
python,linux,twisted,sigkill
|
From the signal(2) man page:
The signals SIGKILL and SIGSTOP cannot be caught or ignored.
So there is no way the process can run any cleanup code in response to that signal. Usually you only use SIGKILL to terminate a process that doesn't exit in response to SIGTERM (which can be caught).
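A short sketch of the difference on a Unix system (the cleanup body is a placeholder for removing the twistd pidfile):

```python
import signal

def cleanup(signum, frame):
    # the real handler would remove the twistd pidfile here, then exit
    raise SystemExit(0)

# SIGTERM can be caught, so cleanup code gets a chance to run
signal.signal(signal.SIGTERM, cleanup)

# SIGKILL cannot be caught; even installing a handler for it fails
try:
    signal.signal(signal.SIGKILL, cleanup)
    sigkill_trappable = True
except (OSError, RuntimeError, ValueError):
    sigkill_trappable = False
```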
| 0
| 1
| 0
| 0
|
2012-11-09T10:32:00.000
| 2
| 0.379949
| false
| 13,306,359
| 0
| 0
| 0
| 2
|
I have a python application that uses twisted framework.
I make use of the value stored in the pidfile generated by twistd. A launcher script checks for its presence and will not spawn a daemon process if the pidfile already exists.
However, twistd does not remove the .pidfile when it gets SIGKILL signal. That makes the launcher script think that the daemon is already running.
I realize the proper way to stop the daemon would be to use SIGTERM signal, but the problem is that when user who started the daemon logs out, the daemon never gets a SIGTERM signal, so apparently it's killed with SIGKILL. That means once a user logs out, he will never be able to start the daemon again, because the pidfile still exists.
Is there any way I could make that file disappear in such situations?
|
python-twisted and SIGKILL
| 13,310,880
| 0
| 5
| 574
| 0
|
python,linux,twisted,sigkill
|
You could change your launcher (or wrap it up in another launcher) and remove the pid file before trying to restart twistd.
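One way to make the wrapper safe is to treat the pidfile as stale when its process no longer exists, rather than deleting it unconditionally. A sketch, assuming a Unix system:

```python
import errno
import os

def pidfile_is_stale(path):
    """True if the pidfile exists but no process with that pid is running."""
    try:
        with open(path) as f:
            pid = int(f.read().strip())
    except (IOError, OSError, ValueError):
        return False                     # missing or unreadable pidfile
    try:
        os.kill(pid, 0)                  # signal 0: existence check only
    except OSError as e:
        return e.errno == errno.ESRCH    # no such process -> stale
    return False                         # process exists (or is not ours)
```

The launcher would call this, `os.remove()` the file if it is stale, and only then refuse to start when a live pidfile remains.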
| 0
| 1
| 0
| 0
|
2012-11-09T10:32:00.000
| 2
| 0
| false
| 13,306,359
| 0
| 0
| 0
| 2
|
I have a python application that uses twisted framework.
I make use of the value stored in the pidfile generated by twistd. A launcher script checks for its presence and will not spawn a daemon process if the pidfile already exists.
However, twistd does not remove the .pidfile when it gets SIGKILL signal. That makes the launcher script think that the daemon is already running.
I realize the proper way to stop the daemon would be to use SIGTERM signal, but the problem is that when user who started the daemon logs out, the daemon never gets a SIGTERM signal, so apparently it's killed with SIGKILL. That means once a user logs out, he will never be able to start the daemon again, because the pidfile still exists.
Is there any way I could make that file disappear in such situations?
|
code analysis incomplete for filename with a dash
| 13,312,787
| 2
| 3
| 566
| 0
|
python,eclipse,pydev
|
Python does not allow dashes in identifiers. Module names need to be valid identifiers, so any module file or package directory name with a dash in it is not importable.
On the other hand, script files (python files executed directly by Python, not imported) have no such restrictions. I'd say what you encountered is a bug in PyDev and you should report it as such.
| 0
| 1
| 0
| 0
|
2012-11-09T17:01:00.000
| 1
| 0.379949
| false
| 13,312,588
| 1
| 0
| 0
| 1
|
I had this problem for a while, and finally understanding what caused it was a good relief.
So basically, python files with a dash ('-') in their name are not fully analyzed by PyDev. I only get the errors but not the warnings... (ie: unused variables, unused imports etc...)
Is this a feature? a known bug? Is there any work around?
I know that the dash is not allowed in python folder names, but does this apply to python files? (in my case, these are python scripts, without the .py extension for convenience).
For instance, in my bin project subfolder:
commit or release script files are analysed A-OK
add-input, select-files: warning are not reported.
Thanks for any hint on that.
|
Appengine Search API - Globally Consistent
| 13,315,587
| 0
| 0
| 174
| 0
|
python,google-app-engine,full-text-search,gae-search
|
This depends on whether or not you have any globally consistent indexes. If you do, then you should migrate all of your data from those indexes to new, per-document-consistent (which is the default) indexes. To do this:
Loop through the documents you have stored in the global index and reindex them in the new index.
Change references from the global index to the new per-document index.
Ensure everything works, then delete the documents from your global index (not necessary to complete the migration, but still a good idea).
You then should remove any mention of consistency from your code; the default is per-document consistent, and eventually we will remove the ability to specify a consistency at all.
If you don't have any data in a globally consistent index, you're probably getting the warning because you're specifying a consistency. If you stop specifying the consistency it should go away.
Note that there is a known issue with the Python API that causes a lot of erroneous deprecation warnings about consistency, so you could be seeing that as well. That issue will be fixed in the next release.
| 0
| 1
| 0
| 0
|
2012-11-09T17:37:00.000
| 1
| 0
| false
| 13,313,118
| 0
| 0
| 1
| 1
|
I've been using the appengine python experimental searchAPI. It works great. With release 1.7.3 I updated all of the deprecated methods. However, I am now getting this warning:
DeprecationWarning: consistency is deprecated. GLOBALLY_CONSIST
However, I'm not sure how to address it in my code. Can anyone point me in the right direction?
|
How to use `subprocess` command with pipes
| 13,359,172
| 4
| 326
| 305,963
| 0
|
python,linux,subprocess,pipe
|
Also, try using the pgrep command instead of ps -A | grep 'process_name'.
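The usual pure-subprocess pattern chains two Popen objects, connecting the first one's stdout to the second one's stdin. A sketch:

```python
import subprocess

def piped_output(cmd1, cmd2):
    """Run cmd1 | cmd2 and return cmd2's stdout as bytes."""
    p1 = subprocess.Popen(cmd1, stdout=subprocess.PIPE)
    p2 = subprocess.Popen(cmd2, stdin=p1.stdout, stdout=subprocess.PIPE)
    p1.stdout.close()       # let p1 receive SIGPIPE if p2 exits first
    out, _ = p2.communicate()
    p1.wait()
    return out

# e.g. piped_output(["ps", "-A"], ["grep", "process_name"])
```

Note that `grep` exits with a non-zero status when nothing matches; this sketch ignores return codes and simply yields empty output in that case.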
| 0
| 1
| 0
| 0
|
2012-11-11T14:55:00.000
| 9
| 0.088656
| false
| 13,332,268
| 0
| 0
| 0
| 1
|
I want to use subprocess.check_output() with ps -A | grep 'process_name'.
I tried various solutions but so far nothing worked. Can someone guide me how to do it?
|
Setting python3.2 as default instead of python2.7 on Mac OSX Lion 10.7.5
| 53,291,437
| 3
| 8
| 16,097
| 0
|
python,macos
|
If you have python 2 and 3 on brew. Following worked for me.
brew unlink python@2
brew link python@3 (if not yet linked)
| 0
| 1
| 0
| 0
|
2012-11-12T00:04:00.000
| 3
| 0.197375
| false
| 13,336,852
| 1
| 0
| 0
| 2
|
Currently running Mac OS X Lion 10.7.5, which has python2.7 as the default. In the terminal, I type 'python' and it automatically pulls up python2.7. I don't want that.
From the terminal I instead have to type 'python3.2' if I want to use python3.2.
How do i change that?
|
Setting python3.2 as default instead of python2.7 on Mac OSX Lion 10.7.5
| 13,336,983
| 4
| 8
| 16,097
| 0
|
python,macos
|
You could edit the default python path and point it to python3.2
Open up ~/.bash_profile in an editor and edit it so it looks like
PATH="/Library/Frameworks/Python.framework/Versions/3.2/bin:${PATH}"
export PATH
| 0
| 1
| 0
| 0
|
2012-11-12T00:04:00.000
| 3
| 0.26052
| false
| 13,336,852
| 1
| 0
| 0
| 2
|
Currently running Mac OS X Lion 10.7.5, which has python2.7 as the default. In the terminal, I type 'python' and it automatically pulls up python2.7. I don't want that.
From the terminal I instead have to type 'python3.2' if I want to use python3.2.
How do i change that?
|
Python fabric.api backslash hell
| 13,338,597
| 4
| 7
| 676
| 0
|
python,fabric,backslash
|
OK, finally worked this out. RocketDonkey was correct: the string needed the r prefix (a raw string), but I also needed to set shell=False. This allowed whatever worked directly in the bash terminal to also work when called from fabric.api.
Thanks RocketDonkey!!
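The r prefix matters because in an ordinary string literal Python interprets backslash escapes before Fabric ever sees the command. A quick demonstration of the difference:

```python
plain = 'textwith\backslash'   # '\b' is interpreted as a backspace character
raw = r'textwith\backslash'    # raw string: the backslash is kept literally

contains_backslash_plain = '\\' in plain   # the backslash is gone
contains_backslash_raw = '\\' in raw       # it survived
```

So the working call looks roughly like sudo(r'sed -i "/sometext/a textwith\backslash" /home/me/somefile.txt', shell=False), assuming Fabric 1.x's sudo signature.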
| 0
| 1
| 0
| 0
|
2012-11-12T03:06:00.000
| 2
| 0.379949
| false
| 13,337,870
| 0
| 0
| 0
| 1
|
I'm new to python and fabric api. I'm trying to use the sudo functionality to run a sed command in bash terminal which insert some text after a particular line of text is found. Some of the text I'm trying to insert into the file I'm modifying contains backslashes which seem to either be ignored by fabric or cause syntax errors. I've tried "shell=true" and "shell=false" options but still no luck. How can I escape the backslash? It seems "shell=true" only escapes $ and ". My Code below.
sudo (' sed -i "/sometext/a textwith\backslash" /home/me/somefile.txt',shell=True)
|
Python usbmount checking for device before writing
| 13,345,336
| 1
| 1
| 905
| 0
|
python,usb,debian
|
cat /etc/mtab | awk '{ print $2 }'
Will give you a list of mount points. You can also read /etc/mtab yourself and just check whether anything is mounted under /media/usb0 (file format: whitespace-separated, most likely single spaces). The second column is the mount destination; the first is the source.
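In pure Python you can skip parsing /etc/mtab entirely: os.path.ismount is True only when a filesystem is actually mounted at the path, not merely when the empty mountpoint directory exists, which is exactly the distinction needed here. A sketch, assuming the usbmount layout from the question:

```python
import os

def usb_ready(mountpoint="/media/usb0"):
    """True only if a filesystem is mounted at mountpoint."""
    return os.path.ismount(mountpoint)

def ensure_test_folder(mountpoint="/media/usb0"):
    """Create Test_Folder on the stick, but only if a stick is mounted."""
    if not usb_ready(mountpoint):
        return False
    target = os.path.join(mountpoint, "Test_Folder")
    if not os.path.isdir(target):
        os.mkdir(target)
    return True
```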
| 0
| 1
| 0
| 1
|
2012-11-12T14:11:00.000
| 2
| 1.2
| true
| 13,345,239
| 0
| 0
| 0
| 1
|
I'm using debian with usbmount. I want to check if a USB memory stick is available to write to.
Currently I check if a specific dir exists on the USB drive. If this is True I can then write the rest of my files - os.path.isdir('/media/usb0/Test_Folder')
I would like to create Test_Folder if it doesn't exist. However, /media/usb0/ exists even if no USB device is there, so I can't just os.mkdir('/media/usb0/Test_Folder'), as that would create the folder on the local disk.
I need a check that there is a usb drive available on /media/usb0/ to write to before creating the file. Is there a quick way of doing this?
|
Installing Python modules for OpenERP 6.1 in Windows
| 13,358,175
| 1
| 6
| 3,249
| 0
|
python,openerp
|
Good question.
OpenERP on Windows uses a DLL for Python (python26.dll in /Server/server of the OpenERP folder in Program Files). It looks like all the extra libraries are in the same folder, so you should be able to download the extra libraries to that folder and restart the service. (I usually stop the service and run it manually from the command line; it's easier to see any errors while debugging.)
| 0
| 1
| 0
| 0
|
2012-11-12T15:38:00.000
| 2
| 0.099668
| false
| 13,346,698
| 0
| 0
| 1
| 1
|
I installed OpenERP 6.1 on windows using the AllInOne package. I did NOT install Python separately. Apparently OpenERP folders already contain the required python executables.
Now when I try to install certain addons, I usually come across requirements to install certain python modules. E.g. to install Jasper_Server, I need to install http2, pypdf and python-dime.
As there is no separate Python installation, there is no C:\Python or anything like that. Where and how do I install these python packages so that I am able to install the addon?
Thanks
|
How to FTP into a virtual machine?
| 13,353,762
| 1
| 0
| 5,276
| 0
|
python,django,ftp,virtualenv,virtualbox
|
The reason the client reported "Connection refused by server" is that the server returned a TCP packet with the reset bit set, in response to an attempt to connect to a port that no application is listening on, or that a firewall blocks.
I think the FTP service is not running, or is running on an alternate port. Take a look at the output of netstat -nltp (on Linux) or netstat -ntlb (on Windows). You should see a program waiting for requests on TCP port 21. If you don't see the program listed at all, or it is not on the port your client will try to connect to, then modify the FTP server's configuration file.
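The same check can also be done from the client side with a plain socket: if nothing is listening, the connect attempt is refused. A hedged sketch:

```python
import socket

def port_open(host, port, timeout=2.0):
    """True if a TCP connection to host:port succeeds (something is listening)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("192.168.56.101", 21) to probe the VM's FTP port
```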
| 0
| 1
| 0
| 0
|
2012-11-12T23:08:00.000
| 1
| 1.2
| true
| 13,353,113
| 0
| 0
| 1
| 1
|
I've recently started learning Django and have set up a virtual machine running a Django server on VirtualEnv. I can use the runserver command to run the basic Django installation server and view it on another computer with the local IP address.
However, I can't figure out how to connect to my virtual machine with my FTP client so that I can edit files on my host machine (Windows). I've tried using the IP address of the virtual machine with an FTP client but it says "Connection refused by server".
Any help would be appreciated, thanks!
|
Creating a debian package for my python application from a system running fedora
| 13,425,528
| 1
| 1
| 110
| 0
|
python,debian,fedora
|
Create an rpm package, give it to your Debian users and tell them to convert the rpm to a Debian package using alien on their Debian box.
| 0
| 1
| 0
| 0
|
2012-11-13T00:37:00.000
| 2
| 0.099668
| false
| 13,353,978
| 1
| 0
| 0
| 1
|
I have created a small python application to be used internally in my organization. I wrote the code on my primary development machine running Fedora 17 and I would like to create a .deb in order to make it easy for my colleagues to install my program.
Is it possible to create debian packages for python application from a system running fedora? If yes, how?
|
what is the difference between "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/" and "/Library/Python/2.7/"
| 16,665,606
| 8
| 4
| 2,452
| 0
|
python-2.7
|
python.org
The installer from python.org installs to /Library/Frameworks/Python.framework/, and only that python executable looks in the contained site-package dir for packages.
/Library/Python
In contrast, the dir /Library/Python/2.7/site-packages/ is a global place where you can put Python packages that every Python 2.7 interpreter will look in (for example the Python 2.7 that comes with OS X).
~/Library/Python
The dir ~/Library/Python/2.7/site-packages, if it exists, is also used but for your user only.
sys.path
From within python, you can check, which directories are currently used by import sys; print(sys.path)
homebrew
Note, a python installed via homebrew will put its site-packages in $(brew --prefix)/lib/python2.7/site-packages, but will also be able to import packages from /Library/Python/2.7/site-packages and ~/Library/Python/2.7/site-packages.
| 0
| 1
| 0
| 0
|
2012-11-13T03:55:00.000
| 1
| 1
| false
| 13,355,370
| 1
| 0
| 0
| 1
|
I am working on a mac, a quick question, could someone told me the difference of these two directories?
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/
/Library/Python/2.7/site-packages/
|
Running Python from PHP
| 13,356,068
| 0
| 0
| 189
| 0
|
php,python,caching,egg
|
Make sure whatever user PHP is running under has appropriate permissions. You can try opening a pipe and changing users, or just use Apache's suexec.
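Independent of which user PHP runs as, the traceback itself suggests the other common fix: give the daemon a writable egg cache before MySQLdb is imported. A sketch; the path is an assumption, pick any directory the web-server user can write to:

```python
import os

# Must run before importing MySQLdb (or anything else shipped as a zipped egg),
# so pkg_resources extracts eggs somewhere writable instead of //.python-eggs
os.environ.setdefault("PYTHON_EGG_CACHE", "/tmp/python-eggs")  # assumed path
```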
| 0
| 1
| 0
| 1
|
2012-11-13T05:33:00.000
| 1
| 0
| false
| 13,356,024
| 0
| 0
| 0
| 1
|
I have a python script that runs as a daemon process. I want to be able to stop and start the process via a web page. I made a PHP script that runs exec() on the python daemon. Any idea?
Traceback (most recent call last):
  File "/home/app/public_html/daemon/daemon.py", line 6, in <module>
    from socketServer import ExternalSocketServer, InternalSocketServer
  File "/home/app/public_html/daemon/socketServer.py", line 3, in <module>
    import json, asyncore, socket, MySQLdb, hashlib, urllib, urllib2, logging, traceback, sys
  File "build/bdist.linux-x86_64/egg/MySQLdb/__init__.py", line 19, in <module>
  File "build/bdist.linux-x86_64/egg/_mysql.py", line 7, in <module>
  File "build/bdist.linux-x86_64/egg/_mysql.py", line 4, in __bootstrap__
  File "build/bdist.linux-i686/egg/pkg_resources.py", line 882, in resource_filename
  File "build/bdist.linux-i686/egg/pkg_resources.py", line 1351, in get_resource_filename
  File "build/bdist.linux-i686/egg/pkg_resources.py", line 1373, in _extract_resource
  File "build/bdist.linux-i686/egg/pkg_resources.py", line 962, in get_cache_path
  File "build/bdist.linux-i686/egg/pkg_resources.py", line 928, in extraction_error
pkg_resources.ExtractionError: Can't extract file(s) to egg cache
The following error occurred while trying to extract file(s) to the Python egg cache:
  [Errno 13] Permission denied: '//.python-eggs'
The Python egg cache directory is currently set to:
  //.python-eggs
Perhaps your account does not have write access to this directory? You can change the cache directory by setting the PYTHON_EGG_CACHE environment variable to point to an accessible directory.
|
Asynchronous replacement for Celery
| 13,429,864
| 1
| 4
| 1,476
| 0
|
python,django,asynchronous,celery,gevent
|
Have you tried Celery + eventlet? It works well in our project.
| 0
| 1
| 0
| 0
|
2012-11-13T11:42:00.000
| 2
| 0.099668
| false
| 13,360,145
| 0
| 0
| 1
| 1
|
We're using Celery for background tasks in our Django project.
Unfortunately, we have many blocking sockets in tasks, that can be established for a long time. So Celery becomes fully loaded and does not respond.
Gevent can help me with sockets, but Celery has only experimental support of gevent (and as I found in practice, it doesn't work well).
So I considered to switch to another task queue system.
I can choose between two different ways:
Write my own task system. This is a least preferred choice, because it requires much time.
Find good and well-tried replacement for Celery that will work after monkey patching.
Is there any analogue of Celery, that will guarantee me execution of my tasks even after sudden exit?
|
Live output from a batch file with Python
| 13,370,997
| 1
| 1
| 148
| 0
|
python,batch-file,python-3.x
|
Can't you just use os.system?
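os.system shows the batch file's console live but returns only the exit code. If the output also has to end up in a string while it streams, a Popen read loop (a sketch) does both at once:

```python
import subprocess
import sys

def run_live(cmd):
    """Run cmd, echoing each output line as it arrives; returns the full text."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT,
                            universal_newlines=True)
    collected = []
    for line in proc.stdout:
        sys.stdout.write(line)      # live echo to our own console
        collected.append(line)      # ...while also accumulating the string
    proc.wait()
    return "".join(collected)

# e.g. run_live(["cmd", "/c", "build.bat"]) on Windows
```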
| 0
| 1
| 0
| 0
|
2012-11-13T23:48:00.000
| 1
| 0.197375
| false
| 13,370,877
| 1
| 0
| 0
| 1
|
I'm trying to get the output from a batch file LIVE as it runs. I would prefer the console of the batch file to also show as it runs. I have tried using os.popen and subprocess.Popen but the problem is that it does not run the program LIVE in the background and constantly show what is being printed to the console.
Exactly what I want, is to have a string that is constantly updated with the data from the console of the running batch file.
|
Python : Check file is locked
| 13,371,542
| 7
| 14
| 23,396
| 0
|
python,file-io
|
You can use os.access for checking your access permission. If access permissions are good, then it has to be the second case.
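Combining the two checks, a sketch (on Windows a sharing violation also surfaces as EACCES, which is what makes the inference work; probing with 'a+b' avoids truncating the file the way 'wb' would):

```python
import errno
import os

def diagnose_write_failure(path):
    """Distinguish 'no permission' from 'locked by another process'."""
    if not os.access(path, os.W_OK):
        return "permission"
    try:
        f = open(path, "a+b")      # probe writability without truncating
    except (IOError, OSError) as e:
        if e.errno == errno.EACCES:
            return "locked"        # permission was fine, so it must be a lock
        raise
    f.close()
    return "writable"
```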
| 0
| 1
| 0
| 0
|
2012-11-14T00:48:00.000
| 3
| 1.2
| true
| 13,371,444
| 1
| 0
| 0
| 1
|
My goal is to know if a file is locked by another process or not, even if I don't have access to that file!
So to be more clear, let's say I'm opening the file using python's built-in open() with 'wb' switch (for writing). open() will throw IOError with errno 13 (EACCES) if:
the user does not have permission to the file or
the file is locked by another process
How can I detect case (2) here?
(My target platform is Windows)
|
Convert an EXE and its dependencies into one stand-alone EXE
| 13,967,222
| 0
| 3
| 2,963
| 0
|
python,compilation,exe,cx-freeze
|
Have you tried innosetup? It can create installer files from the output of cxfreeze. There might be an option somewhere to bundle everything into one file.
| 0
| 1
| 0
| 0
|
2012-11-15T00:22:00.000
| 3
| 0
| false
| 13,389,724
| 1
| 0
| 0
| 1
|
I'm using cx_Freeze to compile Python programs into executables and it works just fine, but the problem is that it doesn't compile the program into one EXE, it converts them into a .exe file AND a whole bunch of .dll files including python32.dll that are necessary for the program to run.
Does anyone know how I can package all of these files into one .exe file? I would rather it be a plain EXE file and not just a file that copies the DLLs into a temporary directory in order to launch the program.
EDIT: This is in reference to Python 3
|
how to switch the action of execution of script between the terminals in the python script?
| 13,391,941
| 0
| 2
| 101
| 0
|
python,unix,vi
|
Have your script myscript.py take an optional argument, e.g. other_term, and spawn xterm -e myscript.py other_term.
When the second instance of myscript.py starts, check for the optional argument other_term; if present, perform the second set of commands.
Use environment variables, files, command-line arguments or pipes to transfer any required state between the first (initial) and second (other_term) instances.
When the other_term instance finishes, the second xterm closes automatically and returns control to the first (initial) instance, which can then proceed with its commands.
If you do NOT want the second xterm to close, spawn xterm -e asynchronously and do NOT let your other_term script exit; have it signal instead (e.g. via a semaphore file or, if you know how to do it, via a pipe; xterm does not close filehandles) to the first instance that it can resume, and e.g. wait for user confirmation before exiting in both scripts.
| 0
| 1
| 0
| 0
|
2012-11-15T05:15:00.000
| 1
| 0
| false
| 13,391,901
| 0
| 0
| 0
| 1
|
I want to run a script that contains some commands to execute, e.g. pwd, xterm home, date, time. I want to run the script so that it executes pwd in the first terminal and creates an xterm home; then in the xterm home terminal I want to run the date and time commands, and then run pwd once again in the main terminal.
"How do I switch between the terminals in a python script?"
Thanks and regards,
Vasantkumar.R.Nagoor
|
System process listener in python
| 13,399,024
| 0
| 0
| 524
| 0
|
python
|
You can launch the ps command, parse its output, and get info about running processes. I haven't found a better way yet. This assumes you're on Unix, of course.
| 0
| 1
| 0
| 0
|
2012-11-15T13:12:00.000
| 2
| 0
| false
| 13,398,191
| 0
| 0
| 0
| 1
|
Is there any way to be notified when a system process has started, terminated or completed?
I am looking for some kind of listener that listens for system events, such as process start, stop, etc.
|
Python redirect stdin
| 13,405,792
| 1
| 1
| 1,168
| 0
|
python,usb,barcode,stdin
|
The normal way of intercepting standard input in Unix is pipes and multiple processes. If you have a multi-process application, then one process can receive the "raw" standard input, capture barcode input, and pass on the rest to its standard output. That output would then be the standard input of your UI process which would only receive non-barcode data. To set this up initially, have a single launch process that sets up the pipes, starts the other two processes, and exits.
If you're new to these concepts, you have a long and interesting learning process ahead of you :-)
All this assumes that you really are receiving "keyboard" data through standard input, and not through X11 events as you seem to imply. If you are developing within X11 (or GTK, etc.) then what I have described will almost certainly not work.
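The filtering process in the middle of that pipe can be very small. A sketch of its core routing logic (the 4-character prefix and the 16-character total length are hypothetical values standing in for the question's):

```python
PREFIX = "SCAN"      # hypothetical 4-character barcode prefix
BARCODE_LEN = 16     # prefix + 12 data characters

def route(line, on_barcode, on_passthrough):
    """Send barcode scans to one handler, everything else to the other."""
    if line.startswith(PREFIX) and len(line) >= BARCODE_LEN:
        on_barcode(line[len(PREFIX):BARCODE_LEN])
    else:
        on_passthrough(line)
```

In the real filter process, on_passthrough would write to sys.stdout (feeding the UI process) and on_barcode would hand data to the scanner object, with a driver loop like `for line in sys.stdin: route(line.rstrip("\n"), ...)`.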
| 0
| 1
| 0
| 0
|
2012-11-15T20:05:00.000
| 1
| 1.2
| true
| 13,405,277
| 0
| 0
| 0
| 1
|
I'm trying to set up a barcode scanner object that will capture anything input from the scanner itself. The barcode scanner is recognized as a standard input (stdin) and therefore whenever I scan a barcode, I get standard input text. There will also be a keyboard attached to the system, which is another standard input. In order to differentiate between a barcode scan input and keyboard input, I will be using a prefix for any barcode information. In other words, if my barcodes will be 16 characters in length total, the first 4 would a predetermined character string/key to indicate that the following 12 characters are barcode inputs. This is pretty standard from what I've read.
Now most examples I've seen will recognize the barcode input by capturing the character input event in a GUI application. This event callback method then builds a buffer to check for the 4 character prefix and redirects barcode input as necessary. The event callback method also will skip any character input events that are not barcode related and allow them to interact with the GUI as a standard input normally would (type into a text box or what have you).
I want to do this same thing except without using a GUI application. I want my barcode scanner object to be independent of the GUI application. Ideally I would have a callback method, within the barcode scanner object, that stdin would call every time a character is input. From there I would grab any barcode input by checking for the 4 character prefix and would pass along any characters not apart of the barcode input. So in other words, I'd like stdin to pipe through my barcode scanner callback method, and then have my barcode scanner call back method be able to pipe non barcode characters back out as a standard input as though nothing had happened (still standard input that would go to a text box or something).
Is this possible without a while loop constantly monitoring stdin? Even if I had a while loop monitoring stdin, how would I pump characters back out as stdin if they weren't barcode input? I looked into using pyusb to take over the barcode scanner's USB interface, but this requires root privileges to interact with the hardware (not an option for my project). Any help would be greatly appreciated. I have not been able to find an example of this yet.
Edit: This project will be run in CentOS or some flavor of Linux.
|
ZeroMQ 2nd connection fail with einval
| 13,487,196
| 0
| 0
| 411
| 0
|
python,c,zeromq,pyzmq
|
It's not a race condition in zmq, and not a problem with zmq_connect. That extra 0x01 byte is presumably at fault. If you are passing that to zmq_connect, what result do you expect except EINVAL?
So where does that extra byte come from? Do you get it on all messages sent between two peers? What are you doing different in this program?
Since you haven't provided source code it's hard to offer any more detailed advice than this.
| 0
| 1
| 0
| 0
|
2012-11-17T00:25:00.000
| 1
| 0
| false
| 13,426,374
| 0
| 0
| 0
| 1
|
I have a C ZMQ client that receives two random ports (from a pyzmq server) and then connects to them.
Usually everything works, but sometimes the 2nd connect fails with errno set to EINVAL. (Even when I switched the order of the connect calls, the 2nd one still failed.)
The port number is fine and it looks like some kind of race condition in ZeroMQ.
Anyone know how can I solve this problem?
[EDIT]:
The server sends the ports in this structure "port1:port2" for example "1234:1235"
the hexdump of the packet on the server is 31 32 33 34 3a 31 32 33 35
and on the client is 31 32 33 34 3a 31 32 33 35 01
and because the extra byte the 2nd connect fails...
Maybe this is some kind of compatibility bug between pyzmq and zmq
I'm using zmq ver 2.2.0
|
Pass data in google app engine using POST
| 13,427,499
| 1
| 1
| 297
| 0
|
python,google-app-engine,http-post,http-get
|
Links inherently generate GET requests. If you want to generate a POST request, you'd need to either:
Use a form with method="POST" and submit it, or
Use AJAX to load the new page.
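Concretely, each "link" can be rendered as a tiny form whose hidden field carries the group name, so the request body, not the URL, holds the data. A sketch (the handler path and field name are assumptions):

```python
FORM_TEMPLATE = (
    '<form action="/viewTaskGroup.html" method="POST">'
    '<input type="hidden" name="group" value="{group}">'
    '<button type="submit">{group}</button>'
    '</form>'
)

def group_link(group):
    """Render one POST-submitting 'link' for a task group."""
    return FORM_TEMPLATE.format(group=group)
```

On the server side, a webapp handler would then read the value in its post() method, with something like self.request.get('group') in webapp/webapp2.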
| 0
| 1
| 0
| 0
|
2012-11-17T03:45:00.000
| 1
| 1.2
| true
| 13,427,477
| 0
| 0
| 1
| 1
|
I'm trying to pass a variable from one page to another using google app engine, I know how to pass it using GET put putting it in the URL. But I would like to keep the URL clean, and I might need to pass a larger amount of data, so how can pass info using post.
To illustrate, I have a page with a series of links, each goes to /viewTaskGroup.html, and I want to pass the name of the group I want to view based on which link they click (so I can search and get it back and display it), but I'd rather not use GET if possible.
I didn't think any code is required, but if you need any I'm happy to provide any needed.
|
Streaming audio and video
| 13,435,380
| 5
| 2
| 1,745
| 0
|
python,linux,streaming,video-streaming,audio-streaming
|
A good start for trying different options is to use vlc (http://www.videolan.org) Its file->transmit menu command opens a wizard with which you can play.
Another good one is gstreamer, (http://www.gstreamer.net), the gst-launch program in particular, which allows you to build pipelines from the command line.
| 0
| 1
| 0
| 0
|
2012-11-17T18:42:00.000
| 2
| 1.2
| true
| 13,433,597
| 0
| 0
| 1
| 1
|
I've been trying for a while but struggling. I have two projects:
Stream audio to server for distribution over the web
Stream audio and video from a webcam to a server for distribution over the web.
I have thus far tried ffmpeg and ffserver, PulseAudio, mjpegstreamer (I got this working, but with no audio) and IceCast, all with little luck. While I'm sure this is likely my fault, I was wondering if there are any more options?
I've spent a while experimenting with Linux options and was also wondering if there were options with Python, having recently played with OpenCV.
If anyone can suggest more options to look into, Python or Linux based, it would be much appreciated, or point me at some good tutorials or explanations of what I've already used.
|
How to elegantly compare zip folder contents to unzipped folder contents
| 13,451,351
| 1
| 7
| 5,546
| 0
|
python,zip,backup,unzip
|
Rsync will automatically detect and only copy modified files, but seeing as you want to bzip the results, you still need to detect if anything has changed.
How about you output the directory listing (including time stamps) to a text file alongside your archive. The next time, diff the current directory structure against this stored text file. You can grep the differences out and pipe that file list to rsync to include those changed files.
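A rough Python sketch of that manifest idea (the function names are made up, and the bzip2/rsync steps are omitted): record each file's size and mtime, store the listing next to the archive, and only back up when it differs:

```python
import json
import os

def manifest(root):
    """Map each file's path (relative to root) to its [size, mtime]."""
    entries = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            st = os.stat(full)
            entries[os.path.relpath(full, root)] = [st.st_size, int(st.st_mtime)]
    return entries

def needs_backup(root, manifest_path):
    """True when no manifest exists yet or the folder changed since it was written."""
    current = manifest(root)
    if not os.path.exists(manifest_path):
        return True
    with open(manifest_path) as f:
        stored = json.load(f)
    return stored != current
```

After a successful backup, dump manifest(root) to manifest_path with json.dump so the next run has something to compare against. Note that mtime alone can miss edits that preserve timestamps; hashing file contents is the stricter (slower) alternative.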
| 0
| 1
| 0
| 1
|
2012-11-19T09:48:00.000
| 4
| 0.049958
| false
| 13,451,235
| 0
| 0
| 0
| 1
|
This is the scenario. I want to be able to backup the contents of a folder using a python script. However, I want my backups to be stored in a zipped format, possibly bz2.
The problem comes from the fact that I don’t want to bother backing up the folder if the contents in the “current” folder are exactly the same as what is in my most recent backup.
My process will be like this:
Initiate backup
Check contents of “current” folder against what is stored in the most recent zipped backup
If same – then “complete”
If different, then run backup, then “complete”
Can anyone recommend the most reliable and simple way of completing step 2? Do I have to unzip the contents of the backup and store them in a temp directory to do a comparison, or is there a more elegant way of doing this? Possibly to do with the modified date?
|
re-arrange pythonpath in pydev-eclipse?
| 14,926,237
| 1
| 3
| 464
| 0
|
python,eclipse,pydev
|
if you are using setuptools, you can try running sudo python setup.py develop on the egg as well as adding project dependencies between the two in Eclipse
| 0
| 1
| 0
| 1
|
2012-11-19T18:13:00.000
| 1
| 0.197375
| false
| 13,459,647
| 1
| 0
| 0
| 1
|
I'm using pydev in eclipse.
I was hoping that PyDev would first use the Python classes I develop in my source dir, but since I also install the built egg into the system dir, PyDev also picks up the classes from the system dir.
The problem is that PyDev uses the system dir first in its Python path. So after I install a buggy version, debug through PyDev, and make the necessary changes in the local source code, they do not take effect, since the installed egg is not changed. Or, in the reverse case, as I debug, PyDev takes me to the egg files and I modify those egg files, so the real source code is not changed.
So how could I let PyDev rearrange the PYTHONPATH order (just like Eclipse does for the Java build classpath)?
thanks
yang
|
python library access in ubuntu 12.04
| 13,464,518
| 2
| 0
| 409
| 0
|
python,installation,ubuntu-12.04
|
As an absolute beginner, don't worry right now about where to install libraries. Simple example scripts that you're trying out for learning purposes don't belong in any lib directory such as /usr/lib/python2.7.
On Linux you want to do most work in your home directory, so just cd ~ to make sure you're there and create files there with an editor of your choice. You might want to organize your files hierarchically too. For example, create a directory called src/ using the mkdir command in your home directory, and then mkdir src/lpthw, for example, as a place to store all your samples from "Learn Python the Hard Way". Then simply run python <path/to/py/file> to execute the script. Or you can cd ~/src/lpthw and run your scripts by filename only.
| 0
| 1
| 0
| 1
|
2012-11-19T23:46:00.000
| 1
| 1.2
| true
| 13,464,456
| 0
| 0
| 0
| 1
|
I am learning python from learnpythonthehardway. in the windows I had no issues with going through a lots of exercises because the setup was easier but I want to learn linux as well and ubuntu seemed to me the nicest choice.
now I am having trouble with setting up. I can get access to the terminal and then /usr/lib/python2.7, but I don't know whether to save the script in this directory. If I try to make a directory inside this through mkdir I can't, as permission is denied. I also tried to do chmod but didn't know how or whether to do it.
any help regarding how to save my script, and in what library? how to do that? and how to run it in the terminal as: user@user$ python sampleexercise.py
using ubuntu 12.04 lts
skill = newbie
thanks in advance.
|
Running Python in CMD not working
| 13,482,036
| 0
| 2
| 7,684
| 0
|
python,windows-8,cmd,environment-variables
|
Unless the project's folder is in the PATH, you cannot call the file unless you are inside the project's folder. Don't create PATHs for projects, unless they are needed; it's unnecessary.
Just transverse to the file's directory and run the command inside the directory. That will work.
If the project will be used by other projects/files, you can use PYTHONPATH to set the directory, so the other projects can successfully access it.
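As a quick sketch of the PYTHONPATH idea (the directory is just the one from the question; prepending to sys.path at runtime is the in-process equivalent of setting the variable):

```python
import os
import sys

# Hypothetical project directory, borrowed from the question.
project_dir = os.path.abspath("C:/Folder/Folder2/My_Python_Files")

# Runtime equivalent of `set PYTHONPATH=C:\Folder\Folder2\My_Python_Files`.
if project_dir not in sys.path:
    sys.path.insert(0, project_dir)

print(project_dir in sys.path)  # True
```

Modules in that directory then become importable from any working directory, without putting project folders on the OS-level PATH.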
Hope that helps.
| 0
| 1
| 0
| 0
|
2012-11-20T20:50:00.000
| 3
| 0
| false
| 13,481,893
| 1
| 0
| 0
| 1
|
I am a Python beginner and I am having trouble running Python from CMD. I have added the Python installation directory as a PATH variable (;C:\Python27). I am able to run the Python Interpreter from CMD, however when I issue a command like "python file.py command" from CMD, it returns "Error2, Python can't open, no such file/directory".
So what I do is go to "cd C:\Folder\Folder2\My_Python_Files", then type the "file.py command" each and every time. Is there a faster or more efficient way of doing this? I am currently running Python 2.7 on Windows 8.
|
"No user interaction allowed" When running AppleScript in python
| 53,218,057
| 0
| 5
| 5,247
| 0
|
python,applescript
|
My issue was an app with LSBackgroundOnly = YES set attempting to run an AppleScript that displays UI, such as display dialog ...
Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"
AppleScript.scpt: execution error: No user interaction allowed. (-1713)
Using tell application "Finder" ... or etc. works, as shown in the other answer.
Or, remove the LSBackgroundOnly key to enable UI AppleScripts without telling a different Application.
LSUIElement presents a similar mode - no dock icon, no menu bar, etc. - but DOES allow UI AppleScripts to be launched.
| 0
| 1
| 0
| 0
|
2012-11-21T00:23:00.000
| 2
| 0
| false
| 13,484,482
| 0
| 0
| 0
| 1
|
I have an AppleScript which displays a menu list and allows the user to select menu items, etc.
It runs fine by itself, but when I try to run it from Python, I get the "No user interaction allowed. (-1713)" error.
I looked online. I tried the following:
add an on run function in the same AppleScript, so what I did is just add the main into the run:

on run
    tell application "AppleScript Runner"
        main()
    end tell
end run

I tried to run the above in Python:

import os

def main():
    os.system('osascript -e "tell application "ApplesScript Runner" do script /Users/eee/applescript/iTune.scpt end tell"')

if __name__ == '__main__':
    main()

Neither way works. Can anyone tell me how to do this correctly?
|
When should I use `wait` instead of `communicate` in subprocess?
| 13,849,245
| 7
| 16
| 6,735
| 0
|
python,subprocess,pipe,wait,communicate
|
I suspect (the docs don't explicitly state it as of 2.6) in the case where you don't use PIPEs communicate() is reduced to wait(). So if you don't use PIPEs it should be OK to replace wait().
In the case where you do use PIPEs you can overflow memory buffer (see communicate() note) just as you can fill up OS pipe buffer, so either one is not going to work if you're dealing with a lot of output.
On a practical note, I had communicate() (at least in 2.4) give me one character per line from programs whose output is line-based, which wasn't useful, to put it mildly.
Also, what do you mean by "retcode is not needed"? -- I believe it sets Popen.returncode just as wait() does.
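A small sketch of the difference in practice (Python 3 shown for brevity): communicate() writes stdin, drains stdout, waits, and sets returncode in one call, so the pipes can never fill up and deadlock the parent the way a bare wait() can:

```python
import subprocess
import sys

# Child echoes its stdin uppercased.
proc = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.write(sys.stdin.read().upper())"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
)
out, _err = proc.communicate(b"hello")
print(out.decode())     # HELLO
print(proc.returncode)  # 0 -- communicate() sets returncode, same as wait()
```

With large outputs the trade-off flips: communicate() buffers everything in memory, so for huge streams you'd read the pipe incrementally instead.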
| 0
| 1
| 0
| 0
|
2012-11-23T02:53:00.000
| 1
| 1.2
| true
| 13,522,556
| 1
| 0
| 0
| 1
|
In the documentation of wait() (http://docs.python.org/2/library/subprocess.html#subprocess.Popen.wait), it says:
Warning
This will deadlock when using stdout=PIPE and/or stderr=PIPE and the
child process generates enough output to a pipe such that it blocks
waiting for the OS pipe buffer to accept more data. Use communicate()
to avoid that.
From this, I think communicate() could replace all usage of wait() if the retcode is not needed. And even when stdout or stdin are not PIPEs, I can also replace wait() with communicate().
Is that right? Thanks!
|
Script to Shutdown Ubuntu
| 13,534,648
| 0
| 2
| 4,767
| 0
|
python,linux,bash,ubuntu,sh
|
You can call poweroff from a script, as long as it's running with superuser privileges.
| 0
| 1
| 0
| 1
|
2012-11-23T19:01:00.000
| 6
| 0
| false
| 13,534,541
| 0
| 0
| 0
| 1
|
I want to write a script which can shut down a remote Ubuntu system. Actually, I want my VM to shut down safely when I shut down the main machine on which my VM is installed.
Is there any way of doing this with the help of sh scripts or a script written in any language like Python?
|
New tables created in web2py not seen when running in Google app Engine
| 13,551,914
| 0
| 1
| 100
| 1
|
python,google-app-engine,web2py
|
App Engine datastore doesn't really have tables. That said, if web2py is able to make use of the datastore (I'm not familiar with it), then Kinds (a bit like tables) will only show up in the admin-console (/_ah/admin locally) once an entity has been created (i.e. tables only show up once one row has been inserted, you'll never see empty tables).
| 0
| 1
| 0
| 0
|
2012-11-25T05:29:00.000
| 1
| 0
| false
| 13,548,590
| 0
| 0
| 1
| 1
|
I have created an app using web2py and have declared certain new tables in it using the syntax
db.define_table(), but the tables created are not visible when I run the app in Google App Engine, even on my local server. The tables that web2py creates by itself, like auth_user and the others in auth, are available.
What am I missing here?
I have declared the new table in db.py in my application.
Thanks in advance
|
Force python module to be installed in certain directory
| 13,555,251
| 0
| 1
| 70
| 0
|
python,module,installation
|
Install the module:
sudo pip-2.7 install guess_language
Validate import and functionality:
> Python2.7
>>> import guess_language
>>> print guess_language.guessLanguage(u"שלום לכם")
he
| 0
| 1
| 0
| 1
|
2012-11-25T18:49:00.000
| 2
| 0
| false
| 13,554,241
| 1
| 0
| 0
| 1
|
Is there any way to force python module to be installed in the following directory? /usr/lib/python2.7
|
How to open ppt file using Python
| 13,568,384
| 0
| 3
| 14,562
| 0
|
python
|
Using catdoc/catppt with subprocess to open doc files and ppt files.
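A hedged sketch of that approach (catppt is a real tool from the catdoc package, but this helper function is illustrative, and subprocess.run postdates the original answer's Python 2 era):

```python
import shutil
import subprocess

def extract_ppt_text(path):
    """Return the plain text catppt dumps for a .ppt file, or None if catppt is missing."""
    if shutil.which("catppt") is None:
        return None  # the catdoc package is not installed on this machine
    result = subprocess.run(["catppt", path],
                            capture_output=True, text=True, check=False)
    return result.stdout

text = extract_ppt_text("slides.ppt")  # "slides.ppt" is a placeholder filename
```

On Debian/Ubuntu the tool comes from `apt-get install catdoc`; catdoc handles .doc files the same way.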
| 0
| 1
| 0
| 1
|
2012-11-26T05:28:00.000
| 5
| 0
| false
| 13,559,133
| 0
| 0
| 0
| 1
|
I want to open a ppt file using Python on Linux (like Python opens a .txt file).
I know win32com, but I am working on linux.
So, What do I need to do?
|
Build command in sublime text has stopped functioning
| 20,480,227
| 0
| 2
| 182
| 0
|
python,sublimetext2
|
Yes, you might want to give more detail. Have you made sure you have saved the file as .py? Try something simple like print "Hello" and then see if this works.
| 0
| 1
| 0
| 1
|
2012-11-26T19:47:00.000
| 1
| 1.2
| true
| 13,571,993
| 0
| 0
| 0
| 1
|
I am a new user of Sublime text.
It has been working fine for a few days until it began to refuse to compile anything and I don't know where the problem is. I wrote python programs and pressed cmd+b and nothing happened. When I try to launch repl for this file - that also doesn't work. I haven't installed any plugins and before this issue all has been working well.
Any suggestions on how to identify/fix the problem are greatly appreciated
|
Changing default directories for python and ruby
| 13,577,108
| 1
| 1
| 60
| 0
|
python,ruby,terminal
|
Seems like you ought to just modify your $PATH to include /usr/local/Cellar before /usr/local/bin. Your shell will use the first one it finds.
| 0
| 1
| 0
| 0
|
2012-11-27T03:20:00.000
| 2
| 0.099668
| false
| 13,576,887
| 0
| 0
| 0
| 2
|
Hey :) I'm working on a Mac with Mountain Lion, and installed both Ruby 1.9.3 and Python 2.7.3 from homebrew. However, which python and which ruby return that they are in /usr/local/bin/__, respectively. I would like them to read from /usr/local/Cellar/python or /usr/local/Cellar/ruby. How do I change their paths?
|
Changing default directories for python and ruby
| 13,577,066
| 0
| 1
| 60
| 0
|
python,ruby,terminal
|
I don't know on Mac, but on Linux they're set up as links to /usr/local/bin/*.
If you wanted to change the symbolic link you could run the command
ln -s /usr/local/Cellar/python /usr/local/bin/python, which would make a new symbolic link.
Whether this works on OSX I can't confirm though.
Another method you might want to try is just calling the homebrew versions directly rather than making everything on your system use them. Or just make a symbolic link to something else, such as ln -s /usr/local/Cellar/python /usr/local/bin/pythonH
| 0
| 1
| 0
| 0
|
2012-11-27T03:20:00.000
| 2
| 0
| false
| 13,576,887
| 0
| 0
| 0
| 2
|
Hey :) I'm working on a Mac with Mountain Lion, and installed both Ruby 1.9.3 and Python 2.7.3 from homebrew. However, which python and which ruby return that they are in /usr/local/bin/__, respectively. I would like them to read from /usr/local/Cellar/python or /usr/local/Cellar/ruby. How do I change their paths?
|
Subprocess in Reading Serial Port Read
| 13,585,552
| 1
| 2
| 2,531
| 0
|
python,multiprocessing
|
Use a pipe.
Create two processes using the subprocess module; the first reads from the serial port and writes the set of hex codes to stdout. This is piped to the second process, which reads from stdin and updates the database.
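A toy sketch of that pipeline (the two inline scripts are stand-ins for the real serial reader and the MySQLdb writer):

```python
import subprocess
import sys

# Stage 1 (stand-in for the serial reader): writes hex codes to stdout.
reader = subprocess.Popen(
    [sys.executable, "-c", "print('deadbeef')"],
    stdout=subprocess.PIPE,
)
# Stage 2 (stand-in for the database writer): reads stage 1's stdout as stdin.
writer = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read())"],
    stdin=reader.stdout, stdout=subprocess.PIPE,
)
reader.stdout.close()  # let the reader see a broken pipe if the writer dies
out, _ = writer.communicate()
print(out.decode().strip())  # deadbeef
```

Because the OS pipe buffers between the two processes, the sampling loop never blocks on a slow database insert (until the buffer fills, at which point a queue with an explicit depth gives you more control).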
| 0
| 1
| 0
| 0
|
2012-11-27T13:25:00.000
| 2
| 0.099668
| false
| 13,585,238
| 0
| 0
| 0
| 1
|
Hi, I am using serial port communication to one of my devices using Python code. It sends a set of hex codes, receives a set of data, and processes it.
This data has to be stored in a database.
I have another script that uses the MySQLdb library to push it into the database.
If I do that sequentially in one script I lose a lot in sampling rate. I can sample up to 32 data sets per second if I don't connect to a database and insert the data into the table.
If I use multiprocessing and try to run it, my sampling rate drops to 0.75, because the parent process is waiting for the child to join. So how can I handle this situation?
Is it possible to run them independently by using a queue to fill data?
|
Invoking python under CygWin on Windows hangs
| 13,588,963
| 43
| 40
| 23,262
| 0
|
python,cygwin
|
The problem is that due to the way that the Cygwin terminal (MinTTY) behaves, the native Windows build of Python doesn't realize that stdout is a terminal device -- it thinks it's a pipe, so it runs in non-interactive mode instead of interactive mode, and it fully buffers its output instead of line-buffering it.
The reason that this is new is likely because in your previous Cygwin installation, you didn't have MinTTY, and the terminal used was just the standard Windows terminal.
In order to fix this, you either need to run Python from a regular Windows terminal (Cmd.exe), or install the Cygwin version of Python instead of a native Windows build of Python. The Cygwin version (installable as a package via Cygwin's setup.exe) understands Cygwin terminals and acts appropriately when run through MinTTY.
If the particular version of Python you want is not available as a Cygwin package, then you can also download the source code of Python and build it yourself under Cygwin. You'll need a Cygwin compiler toolchain if you don't already have one (GCC), but then I believe it should compile with a standard ./configure && make && make install command.
| 0
| 1
| 0
| 0
|
2012-11-27T16:17:00.000
| 8
| 1.2
| true
| 13,588,454
| 1
| 0
| 0
| 2
|
Installing a new Windows system, I've installed CygWin and 64
bit Python (2.7.3) in their default locations (c:\cygwin and
c:\Python27\python), and added both the CygWin bin and the
Python directory to my path (in the user variable PATH). From
the normal command window, Python starts up perfectly, but when
I invoke it from bash in the CygWin environment, it hangs,
never giving me the input prompt.
I've done this on other machines, previously, but always with
older versions of Python (32 bits) and CygWin, and with Python
in a decidedly non-standard location. Has anyone else had this
problem, or could someone tell me what it might be due to?
|
Invoking python under CygWin on Windows hangs
| 48,180,292
| -1
| 40
| 23,262
| 0
|
python,cygwin
|
Reinstall mintty with cygwin setup. Didn't have to use python -i after that.
| 0
| 1
| 0
| 0
|
2012-11-27T16:17:00.000
| 8
| -0.024995
| false
| 13,588,454
| 1
| 0
| 0
| 2
|
Installing a new Windows system, I've installed CygWin and 64
bit Python (2.7.3) in their default locations (c:\cygwin and
c:\Python27\python), and added both the CygWin bin and the
Python directory to my path (in the user variable PATH). From
the normal command window, Python starts up perfectly, but when
I invoke it from bash in the CygWin environment, it hangs,
never giving me the input prompt.
I've done this on other machines, previously, but always with
older versions of Python (32 bits) and CygWin, and with Python
in a decidedly non-standard location. Has anyone else had this
problem, or could someone tell me what it might be due to?
|
C++ Python Module not being linked into Python with g++
| 13,590,180
| 0
| 0
| 85
| 0
|
python,linux,gcc,g++,python-2.6
|
I was able to solve my problem by manually editing the Makefile generated by the configure script so that the linker used g++ instead of gcc. Thanks for the possible suggestions!
| 0
| 1
| 0
| 1
|
2012-11-27T16:50:00.000
| 1
| 1.2
| true
| 13,589,075
| 1
| 0
| 0
| 1
|
I have a custom C++ Python module that I want to build into Python. It builds fine but fails when it gets to the linking stage. I have determined that the problem is that it is using gcc to link and not g++, and this is what is causing all of the errors I am seeing when it tries to link in the std libraries. How would I get the Python build process to link with g++ instead of gcc? Do I have to manually edit the Makefile, or is it something I need to set when I am configuring it? I am compiling Python 2.6 on CentOS 5.8.
Thanks in advance for the help!
|
How to distribute and deploy Python 3 code with dependency isolation
| 13,671,961
| 0
| 5
| 1,205
| 0
|
python,deployment,python-3.2
|
Have you looked at buildout (zc.buildout)? With a custom recipe you may be able to automate most of this.
| 0
| 1
| 0
| 0
|
2012-11-27T21:40:00.000
| 2
| 0
| false
| 13,593,594
| 1
| 0
| 0
| 1
|
I'm not happy with the way that I currently deploy Python code and I was wondering if there is a better way. First I'll explain what I'm doing, then the drawbacks:
When I develop, I use virtualenv to do dependency isolation and install all libraries using pip. Python itself comes from my OS (Ubuntu)
Then I build my code into a ".deb" debian package consisting of my source tree and a pip bundle of my dependencies
Then when I deploy, I rebuild the virtualenv environment, source foo/bin/activate and then run my program (under Ubuntu's upstart)
Here are the problems:
The pip bundle is pretty big and increases the size of the debian package significantly. This is not too big a deal, but it's annoying.
I have to build all the C libraries (PyMongo, BCrypt, etc) every time I deploy. This takes a little while (a few minutes) and it's a bit lame to do this CPU bound job on production
Here are my constraints:
Must work on Python 3. Preferably 3.2
Must have dependency isolation
Must work with libraries that use C (like PyMongo)
I've heard things about freezing, but I haven't been able to get this to work. cx_freeze out of Pypi doesn't seem to compile (on my Python, at least). The other freeze utilities don't seem to work with Python 3. How can I do this better?
|
Python command not working in command prompt
| 29,402,992
| 5
| 116
| 713,166
| 0
|
python,windows,windows-8,command
|
I am probably the most novice user here; I have spent six hours just trying to run Python in the command line on Windows 8. Once I installed the 64-bit version, I uninstalled it and replaced it with the 32-bit version. Then I tried most suggestions here, especially defining the path in the system variables, but it still didn't work.
Then I realised when I typed in the command line:
echo %path%
The path still was not directed to C:\python27. So I simply restarted the computer, and now it works.
| 0
| 1
| 0
| 0
|
2012-11-28T01:53:00.000
| 23
| 0.043451
| false
| 13,596,505
| 1
| 0
| 0
| 11
|
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do?
Note: I have Python 2.7 and Python 3.2 installed on my computer.
|