Dataset schema (column, dtype, observed min/max or string lengths):

| Column | dtype | min | max |
|---|---|---|---|
| Title | string (lengths) | 15 | 150 |
| A_Id | int64 | 2.98k | 72.4M |
| Users Score | int64 | -17 | 470 |
| Q_Score | int64 | 0 | 5.69k |
| ViewCount | int64 | 18 | 4.06M |
| Database and SQL | int64 | 0 | 1 |
| Tags | string (lengths) | 6 | 105 |
| Answer | string (lengths) | 11 | 6.38k |
| GUI and Desktop Applications | int64 | 0 | 1 |
| System Administration and DevOps | int64 | 1 | 1 |
| Networking and APIs | int64 | 0 | 1 |
| Other | int64 | 0 | 1 |
| CreationDate | string (lengths) | 23 | 23 |
| AnswerCount | int64 | 1 | 64 |
| Score | float64 | -1 | 1.2 |
| is_accepted | bool | 2 classes | |
| Q_Id | int64 | 1.85k | 44.1M |
| Python Basics and Environment | int64 | 0 | 1 |
| Data Science and Machine Learning | int64 | 0 | 1 |
| Web Development | int64 | 0 | 1 |
| Available Count | int64 | 1 | 17 |
| Question | string (lengths) | 41 | 29k |
Get process name by PID
| 4,189,752
| 20
| 26
| 26,492
| 0
|
python,process,pid
|
Under Linux, you can read the proc filesystem. The file /proc/<pid>/cmdline contains the command line.
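A minimal sketch of that approach (Linux only, assuming /proc is mounted; the helper name is invented for this example). Arguments in /proc/<pid>/cmdline are separated by NUL bytes:

```python
import os

def process_cmdline(pid):
    """Return the command line of a process as a list of strings.

    Reads /proc/<pid>/cmdline, where arguments are separated by NUL
    bytes. Linux-only: assumes the proc filesystem is mounted.
    """
    with open("/proc/%d/cmdline" % pid, "rb") as f:
        raw = f.read()
    # A trailing NUL would yield an empty final element; drop empties.
    return [arg.decode() for arg in raw.split(b"\x00") if arg]

if os.path.exists("/proc"):
    # The current interpreter's own command line should name "python".
    print(process_cmdline(os.getpid()))
```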
| 0
| 1
| 0
| 0
|
2010-11-15T23:00:00.000
| 5
| 1.2
| true
| 4,189,717
| 0
| 0
| 0
| 1
|
This should be simple, but I'm just not seeing it.
If I have a process ID, how can I use that to grab info about the process, such as the process name?
|
Python: open and display a text file at a specific line
| 4,190,782
| 3
| 2
| 2,373
| 0
|
python,vim
|
Once you've got the line number, you can run gvim filename -c 12 and it will go to line 12 (this is because -c <command> is "Execute <command> after loading the first file", so -c 12 is just saying run :12 after loading the file).
So I'm not sure if you really need Python at all in this case; just sending the line number direct to gvim may be all you need.
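A sketch of building that command from Python anyway, in case the line number is computed by the log checker first. The function name and the dry_run flag are invented here so the example can be shown without actually launching an editor, and gvim being on the PATH is an assumption:

```python
import subprocess

def open_at_line(filename, line, editor="gvim", dry_run=False):
    """Build (and optionally run) an editor command that jumps to a line.

    gvim treats "-c <command>" as "run <command> after loading the file",
    so "-c 12" executes :12 and moves the cursor to line 12.
    """
    cmd = [editor, filename, "-c", str(line)]
    if dry_run:
        return cmd          # just show what would be executed
    return subprocess.call(cmd)

print(open_at_line("sas.log", 12, dry_run=True))  # → ['gvim', 'sas.log', '-c', '12']
```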
| 0
| 1
| 0
| 0
|
2010-11-16T02:32:00.000
| 3
| 0.197375
| false
| 4,190,695
| 1
| 0
| 0
| 1
|
I don't use Python very often, but I sometimes develop simple tools in it to make my life easier. My most frequently used is a log checker/crude debugger for SAS. It reads the SAS log line by line checking for any errors in my list and dumps anything it finds into standard out (I'm running Python 2.6 in a RedHat Linux environment) - along with the error, it prints the line number of that error (not that that's super useful).
What I'd really like to do is to optionally feed the script a line number and have it open the SAS log itself in GVIM and display it scrolled down to the line number I've specified. I haven't had any luck finding a way to do this - I've looked pretty thoroughly on Google to no avail. Any ideas would be greatly appreciated. Thanks!
Jeremy
|
Python: Create a tuple from a command line input
| 4,196,432
| 1
| 2
| 5,466
| 0
|
python,command,command-line-arguments,tuples
|
Iterate through sys.argv until you reach another flag.
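One way that iteration might look, sketched as a small helper. The function name is hypothetical, and treating every token that starts with "-" as a flag is a simplification (it would misread negative-number arguments):

```python
def values_after_flag(argv, flag="-p"):
    """Collect the arguments that follow `flag` up to the next flag.

    A "flag" is any token starting with "-"; everything between `flag`
    and the next flag (or the end of argv) becomes the tuple.
    """
    values = []
    try:
        start = argv.index(flag) + 1
    except ValueError:
        return tuple(values)        # flag not present at all
    for token in argv[start:]:
        if token.startswith("-"):   # reached another flag: stop
            break
        values.append(token)
    return tuple(values)

# Simulated sys.argv for: python2.6 prog.py -p a1 b1 c1
print(values_after_flag(["prog.py", "-p", "a1", "b1", "c1"]))  # → ('a1', 'b1', 'c1')
```

In real code the list would be sys.argv rather than a literal.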
| 0
| 1
| 0
| 0
|
2010-11-16T16:25:00.000
| 8
| 0.024995
| false
| 4,196,389
| 1
| 0
| 0
| 1
|
I have a program which provides a command line input like this:
python2.6 prog.py -p a1 b1 c1
Now, we can have any number of input parameters i.e. -p a1 and -p a1 c1 b1 e2 are both possibilities.
I want to create a tuple based on the variable input parameters. Any suggestions on how to do this would be very helpful! A fixed length tuple would be easy, but I am not sure how to implement a variable length one.
thanks.
|
When to use sys.path.append and when modifying %PYTHONPATH% is enough
| 4,208,698
| 0
| 2
| 2,369
| 0
|
python,environment-variables
|
If the other modules belong to the same package, you are responsible for locating them if
they are not stored in the conventional location (i.e. append the path with sys.path).
If the other modules are user-configurable, then the user has to specify the installation
path through PYTHONPATH.
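A minimal sketch of the in-code approach (the directory name is hypothetical). Prepending rather than appending makes the directory win over same-named modules elsewhere:

```python
import sys

def ensure_on_path(directory):
    """Prepend `directory` to sys.path unless it is already there.

    Doing this in code is the program's responsibility; PYTHONPATH is
    the user's. Note that PYTHONPATH entries are separated by ";" on
    Windows and ":" on Unix (the missing semicolon from the question).
    """
    if directory not in sys.path:
        sys.path.insert(0, directory)

ensure_on_path("/opt/myapp/modules")        # hypothetical directory
print("/opt/myapp/modules" in sys.path)     # → True
```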
| 0
| 1
| 0
| 0
|
2010-11-17T20:04:00.000
| 2
| 0
| false
| 4,208,659
| 1
| 0
| 0
| 1
|
So, it turned out I was missing a semi-colon from my PYTHONPATH definition. But this only got me so far. For some reason, my script did NOT work as a scheduled task (on WinXP) until I explicitly added a directory from PYTHONPATH to the top of my script.
Question is:
When do I need to explicitly append something to my path and when can I simply rely on the environment variables?
|
When and how to use Tornado? When is it useless?
| 4,213,777
| 50
| 87
| 29,364
| 0
|
python,asynchronous,nonblocking,tornado
|
There is a server and a web framework. When should we use a framework and when can we replace it with another one?
This distinction is a bit blurry. If you are only serving static pages, you would use one of the fast servers like lighttpd. Otherwise, most servers provide a framework of varying complexity for developing web applications. Tornado is a good web framework. Twisted is even more capable and is considered a good networking framework; it has support for a lot of protocols.
Tornado and Twisted are frameworks that provide support for non-blocking, asynchronous web / networking application development.
When should Tornado be used?
When is it useless?
When using it, what should be taken into account?
By its very nature, async / non-blocking I/O works great when the work is I/O-intensive and not computation-intensive. Most web / networking applications suit this model well. If your application demands computationally intensive tasks, they should be delegated to some other service that can handle them better, while Tornado / Twisted does the job of the web server, responding to web requests.
How can we make an inefficient site using Tornado?
Do computationally intensive tasks in request handlers
Introduce blocking operations
But I guess it's not a silver bullet, and if we just blindly run a Django-based or any other site with Tornado it won't give any performance boost.
Performance is usually a characteristic of the complete web application architecture. You can bring down the performance of most web frameworks if the application is not designed properly. Think about caching, load balancing, etc.
Tornado and Twisted provide reasonable performance, and they are good for building performant web applications. You can check out the testimonials for both Twisted and Tornado to see what they are capable of.
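The advice to delegate computationally intensive work can be sketched with the standard library alone (concurrent.futures, Python 3); Tornado itself is not required to show the idea. An event-loop server would submit the heavy job to a pool and keep handling I/O while it runs:

```python
from concurrent.futures import ThreadPoolExecutor

def cpu_heavy(n):
    """Stand-in for a computationally intensive task."""
    return sum(i * i for i in range(n))

# An event-loop-style server must not run cpu_heavy inline; hand it to
# a pool and keep serving other requests while it runs.
pool = ThreadPoolExecutor(max_workers=2)
future = pool.submit(cpu_heavy, 100000)
# ... the loop keeps handling I/O here ...
print(future.result())        # collect the answer once it is ready
pool.shutdown()
```

In real Tornado code the delegation would go through its own coroutine/executor integration; this is only the shape of the idea.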
| 0
| 1
| 0
| 0
|
2010-11-18T08:29:00.000
| 2
| 1.2
| true
| 4,212,877
| 0
| 0
| 1
| 1
|
OK, Tornado is non-blocking and quite fast, and it can handle a lot of standing requests easily.
But I guess it's not a silver bullet, and if we just blindly run a Django-based or any other site with Tornado it won't give any performance boost.
I couldn't find a comprehensive explanation of this, so I'm asking here:
When should Tornado be used?
When is it useless?
When using it, what should be taken into account?
How can we make an inefficient site using Tornado?
There is a server and a web framework.
When should we use a framework and when can we replace it with another one?
|
Detect if python script is run from console or by crontab
| 4,213,327
| 3
| 15
| 4,781
| 0
|
python,bash,environment-variables,crontab
|
Use a command line option that only cron will use.
Or a symlink to give the script a different name when called by cron. You can then use sys.argv[0] to distinguish between the two ways of calling the script.
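A sketch of the symlink approach, written as plain functions so the behavior can be demonstrated without actually creating a symlink; the symlink name is hypothetical:

```python
import os

def invoked_as(argv0):
    """Return the name the script was called by (symlink-friendly).

    If cron calls the script through a symlink named my-script-cron,
    os.path.basename(argv[0]) distinguishes it from ./my-script.py.
    """
    return os.path.basename(argv0)

def is_cron_invocation(argv0, cron_name="my-script-cron"):
    # cron_name is a hypothetical symlink you would create yourself.
    return invoked_as(argv0) == cron_name

print(is_cron_invocation("/usr/local/bin/my-script-cron"))  # → True
print(is_cron_invocation("./my-script.py"))                 # → False
```

In the real script you would pass sys.argv[0] instead of a literal.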
| 0
| 1
| 0
| 1
|
2010-11-18T09:01:00.000
| 3
| 0.197375
| false
| 4,213,091
| 1
| 0
| 0
| 1
|
Imagine a script is running in these 2 sets of "conditions":
live action, set up in sudo crontab
debug, when I run it from console ./my-script.py
What I'd like to achieve is an automatic detection of "debug mode", without me specifying an argument (e.g. --debug) for the script.
Is there a convention for how to do this? Is there a variable that can tell me who the script owner is? Whether the script has a console at stdout? Should I run a ps | grep to determine that?
Thank you for your time.
|
Getting CPU or motherboard serial number?
| 4,216,127
| 7
| 5
| 7,942
| 0
|
python,c,licensing,cpu,motherboard
|
Under Linux, you could use "lshw -quiet -xml" and parse its output. You'll find plenty of system information here: cpuid, motherboard id and much more.
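A sketch of parsing that XML with the standard library. The fragment below is hand-made and far smaller than real lshw output, and the exact element layout (serial numbers as <serial> children of <node> elements) is an assumption to keep the example self-contained:

```python
import xml.etree.ElementTree as ET

# A tiny, hand-made stand-in for `lshw -quiet -xml` output; the real
# document is larger and more deeply nested.
SAMPLE = """
<list>
  <node id="core" class="bus">
    <description>Motherboard</description>
    <serial>MB-1234-ABCD</serial>
    <node id="cpu" class="processor">
      <description>CPU</description>
      <serial>CPU-0000-XYZ</serial>
    </node>
  </node>
</list>
"""

def serials(xml_text):
    """Map node id -> serial for every node that reports one."""
    root = ET.fromstring(xml_text)
    result = {}
    for node in root.iter("node"):
        serial = node.find("serial")   # direct child only
        if serial is not None:
            result[node.get("id")] = serial.text
    return result

print(serials(SAMPLE))
```

In practice you would feed serials() the stdout of a subprocess call to lshw instead of SAMPLE.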
| 0
| 1
| 0
| 1
|
2010-11-18T14:49:00.000
| 5
| 1
| false
| 4,216,009
| 0
| 0
| 0
| 2
|
I'm trying to get the CPU serial or motherboard serial using C or Python for licensing purposes. Is it possible?
I'm using Linux.
|
Getting CPU or motherboard serial number?
| 4,223,022
| 0
| 5
| 7,942
| 0
|
python,c,licensing,cpu,motherboard
|
CPUs no longer have a serial number, and it's been like that for a while now. As for the CPUID: it's unique per CPU model, so it doesn't help with licensing.
| 0
| 1
| 0
| 1
|
2010-11-18T14:49:00.000
| 5
| 0
| false
| 4,216,009
| 0
| 0
| 0
| 2
|
I'm trying to get the CPU serial or motherboard serial using C or Python for licensing purposes. Is it possible?
I'm using Linux.
|
check ldap access rights with python
| 4,243,802
| 0
| 2
| 1,976
| 0
|
python,ldap,acl
|
I think I will try the "test" approach; if it's too slow, maybe with some caching.
The reason why I want to keep the ACL definition only on the LDAP server is that there are other ways to interact with the server (there will be CLI tools, and also the standard LDAP tools), so I'd like to keep all those interfaces in sync ACL-wise, with just one place to define ACLs.
| 0
| 1
| 0
| 0
|
2010-11-19T11:29:00.000
| 3
| 0
| false
| 4,224,527
| 0
| 0
| 0
| 1
|
I am writing a web frontend (django) for an LDAP server. There are different kinds of people with different kinds of privileges, so I set up LDAP ACL to control who gets to see or edit a specific attribute.
Now it's easy to determine if a specific user has read access: try to read, and you will see what you get.
But I don't see an elegant way to distinguish between read and write access before I actually try to write some changes. That is, I would like to make it clear in the interface whether the logged-in user has write access or can only read, so that users won't try to edit an attribute which they cannot.
The only thing that came to my mind was to try to temporarily write some sort of dummy into an attribute, and see if that was successful. If so, the original value would be restored, and I know that this user has write access. I can then display this attribute as editable.
The problem with this is that if the server crashes after the dummy has been written and before the original value has been restored, I am left with dummy values in my LDAP server.
What I would need is some way to query the ACLs, in a way similar that I can query schema definitions. But maybe that is "forbidden" by LDAP?
Any ideas?
Isaac
|
How should I use Celery when task results are large?
| 18,987,208
| 1
| 7
| 1,893
| 0
|
python,architecture,task,celery,task-queue
|
I handle this by structuring my app to write the multi-megabyte results into files, which I then memmap into memory so they are shared among all processes that use that data. This totally finesses the question of how to get the results to another machine, but if the results are that large, it sounds like these tasks are internal tasks coordinating between server processes.
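A minimal stand-alone sketch of the file-plus-memory-map idea using the stdlib mmap module. The payload here is tiny; real results would be the multi-megabyte dumps discussed above, and the mapped pages would be shared read-only by the processes that consume them:

```python
import mmap
import os
import tempfile

# Write a "result" to a file, then memory-map it so other processes on
# the same machine can share the pages instead of copying the data.
payload = b"x" * 4096

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(payload)

with open(path, "rb") as f:
    mapped = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)  # 0 = whole file
    result = (len(mapped), mapped[:4])
    mapped.close()

os.unlink(path)
print(result)   # → (4096, b'xxxx')
```

In the Celery setting, the task would return only the file path, not the data itself.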
| 0
| 1
| 0
| 0
|
2010-11-22T04:09:00.000
| 2
| 0.099668
| false
| 4,242,205
| 0
| 0
| 1
| 1
|
What's the best way to handle tasks executed in Celery where the result is large? I'm thinking of things like table dumps and the like, where I might be returning data in the hundreds of megabytes.
I'm thinking that the naive approach of cramming the message into the result database is not going to serve me here, much less if I use AMQP for my result backend. However, for some of these, latency is an issue; depending on the particular instance of the export, sometimes I have to block until it returns and directly emit the export data from the task client (an HTTP request came in for the export content; it doesn't exist yet, but must be provided in the response to that request, no matter how long that takes).
So, what's the best way to write tasks for this?
|
URLError: urlopen error timed out
| 4,244,324
| 0
| 2
| 14,600
| 0
|
python,django,ubuntu,apache2
|
Run simple network analysis first,
tracert
ping
wireshark (for network analysis)
Check your firewall and proxy settings on the server and make sure the correct ports, routes and permissions are fine.
| 0
| 1
| 0
| 0
|
2010-11-22T08:23:00.000
| 2
| 0
| false
| 4,243,550
| 0
| 0
| 1
| 1
|
Whenever I try to make an HTTP request to some URL through my Django application, which is running on top of Apache mod_python (machine: Ubuntu 10.04 server edition, 64-bit), it gives a timeout error.
The strange thing is that it works fine on Ubuntu 10.04 server edition, 32-bit.
I feel there could be some proxy connection issue, but I am not sure how to resolve it if that is the case.
What could be the issue? Can anyone please throw some light on this?
Thanks in advance.
|
possible to define a function in commandline window of a programming language?
| 4,251,568
| 2
| 2
| 1,865
| 0
|
python,bash,r,matlab
|
This isn't really a feature of a programming language but of an implementation of that programming language. For example, there exist C interpreters and Lisp compilers. This is normally called a REPL (Read-Eval-Print Loop) and is generally a feature of interpreted implementations.
| 0
| 1
| 0
| 0
|
2010-11-23T00:14:00.000
| 3
| 0.132549
| false
| 4,251,553
| 0
| 0
| 0
| 1
|
Is it possible to define a function in the command-line window of Matlab? It looks like no, to me.
But for R, it is possible to do so. I was wondering why there is this difference, and whether there is more to say about this kind of feature of a programming language, or of interpreted languages in general (such as Python, Bash, ...)?
Thanks!
|
Looking to write my own 'Application Whitelisting Tool' Something like Bit9?
| 4,255,403
| 1
| 1
| 611
| 0
|
c#,python,process,whitelist
|
How do I block ALL other processes from starting?
Deep, mysterious OS API magic. After all, you're interfering with how the OS works. You must, therefore, patch or hook into the OS itself.
Is this possible in .Net? What about Python?
It doesn't involve time travel, anti-gravity or perpetual motion. It can be done.
It's a matter of figuring out (1) which OS API calls are required to put your new hook into the OS, and (2) implementing a call from the OS to your code.
(1) is really hard.
(2) is really easy.
| 0
| 1
| 0
| 0
|
2010-11-23T04:16:00.000
| 2
| 0.099668
| false
| 4,252,725
| 1
| 0
| 0
| 1
|
Playing around with project ideas that I might actually use, I figured I might try to write my own simple version of Bit9 Parity, in either C# or Python. My question is what is the best way to go about doing this. I've googled .Net functionality for preventing processes from executing, but I haven't really found what I'm looking for. What I'd like to do is monitor the system memory as a whole, and deny any process or application from starting unless it is specifically identified in a list. ProcessWatcher caught my eye, but isn't that for a specific process ID? How do I block ALL other processes from starting? Is this possible in .Net? What about Python?
|
where to start programing a server application
| 4,255,361
| 0
| 4
| 186
| 0
|
python
|
Here's an approach.
Write an "agent" in Python. The agent is installed on the various computers. It does whatever processing you need locally. It uses urllib2 to make RESTful HTTP requests of the server. It either posts data, or requests work to do, or whatever is supposed to go on.
Write a "server" in Python. The server is installed on one computer. This is written using wsgiref and is a simple WSGI-based server that serves requests from the various agents scattered around campus.
While this requires agent installation, it's very, very simple. It can be made very, very secure (use HTTP Digest Authentication). And the agent's privileges define the level of vulnerability. If the agent is running in an account with relatively few privileges, it's quite safe. The agent shouldn't run as root and the agent's account should not be allowed to su or sudo.
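The agent/server pair above can be sketched with the standard library. This uses Python 3 (urllib.request stands in for the urllib2 the answer names), picks a free port with port 0, and serves exactly one request so the example terminates; a real deployment would loop forever and add authentication:

```python
import threading
import urllib.request
from wsgiref.simple_server import make_server

def app(environ, start_response):
    """Minimal WSGI 'server' end: hand out a work unit to polling agents."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"work-unit-1"]

server = make_server("127.0.0.1", 0, app)   # port 0: let the OS pick a free port
port = server.server_address[1]
t = threading.Thread(target=server.handle_request)  # serve a single request
t.start()

# The 'agent' end: fetch work over plain HTTP.
with urllib.request.urlopen("http://127.0.0.1:%d/" % port) as resp:
    body = resp.read()
t.join()
server.server_close()
print(body)   # → b'work-unit-1'
```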
| 0
| 1
| 1
| 0
|
2010-11-23T07:04:00.000
| 3
| 0
| false
| 4,253,557
| 0
| 0
| 0
| 1
|
Question: Where is a good starting point for learning to write server applications?
Info:
I'm looking into writing a distributed computing system to harvest the idle cycles of the couple hundred computers sitting idle around my college's campus. There are systems that come close, but don't quite meet all the requirements I need (most notably, all transactions have to be made through SSH because the network blocks everything else). So I've decided to write my own application, partly to get exactly what I want, but also for experience.
Important features:
Written in python
All transactions made through SSH (this is solved through the simple use of pexpect)
Server needs to be able to take potentially hundreds of hits. I'll optimize later; the point being simultaneous sessions.
I feel like those aren't too ridiculous things to try and accomplish. But with the last one I'm not certain where to even start. I've actually already accomplished the first two and written a program that will log into my server and then print ls -l to a file locally, so that isn't hard. But how do I attach several clients asking the server for simulation data to crunch, all at the same time? Obviously it feels like threading comes into play here, but I'm sure there is more to it than that.
This is where my problem is. Where does one even start researching how to write server applications? Am I even using the right wording? What information is freely available on the internet, and/or what books are there on the subject? Again, specifically Python, but a step in the right direction is one more than where I am now.
P.S. This seemed more fitting for Stack Overflow than Server Fault. Correct me if I am wrong.
|
App Engine: CPU Over Quota on urlfetch()
| 4,262,449
| 2
| 2
| 454
| 0
|
python,google-app-engine
|
You should change the design of your application.
Instead of making requests to Twitter from App Engine for every user request:
Do the request in the user's browser with JavaScript if possible.
After a urlfetch, store Twitter's response in the datastore, since a call to the datastore is faster on the next request. If you can cache something in memcache, even better.
Update the stored data regularly with the help of cron jobs and task queue.
| 0
| 1
| 0
| 0
|
2010-11-23T14:05:00.000
| 2
| 0.197375
| false
| 4,256,767
| 0
| 0
| 1
| 1
|
Hey, I'm quite new to App Engine. I created a web-based Twitter app which is now running on App Engine, and I'm constantly hitting my CPU over-quota limits. I did a little profiling and found out that every request consists of two urlfetch queries, each of which takes up to 2 CPU seconds. That time is probably spent waiting; all the rest of the code finishes in under 200 ms (including work with the Datastore). The quota is 6.5 hours per day, and every request of mine takes approx. 4 CPU seconds. I ran out of the free quota this morning in only a few hours.
What is the way around this? I can't make Twitter respond to my API calls quicker, and I cannot cache the results, since every request is for a different Twitter profile.
Any help is appreciated,
Thanks!
|
Python/Erlang: What's the difference between Twisted, Stackless, Greenlet, Eventlet, Coroutines? Are they similar to Erlang processes?
| 9,762,122
| 11
| 30
| 7,391
| 0
|
python,asynchronous,erlang,nonblocking,python-stackless
|
You are almost right when comparing Stackless
to Greenlet. The missing thing is:
Stackless per se does not add anything. Instead, Greenlet, invented 5 years after Stackless, removes certain things. It is written simply enough to be built as an extension module instead of a replacement interpreter.
This is really funny—Stackless has many more features, is about 10 times more efficient on switching, and provides pickling of execution state.
Greenlet still wins, probably only due to ease of use as an extension module. So I'm thinking about reverting the process by extending Greenlet with pickling. Maybe that would change the picture, again :-)
| 0
| 1
| 0
| 0
|
2010-11-24T02:50:00.000
| 3
| 1
| false
| 4,263,059
| 1
| 0
| 0
| 1
|
My incomplete understanding is that Twisted, Stackless, Greenlet, Eventlet, Coroutines all make use of async network IO and userland threads that are very lightweight and quick to switch. But I'm not sure what are the differences between them.
Also they sound very similar to Erlang processes. Are they pretty much the same thing?
Anyone who could help me understand this topic more would be greatly appreciated.
|
Gtk loop or Cron for timer
| 4,268,840
| 2
| 2
| 383
| 0
|
python
|
Cron job. It's more likely to stay "in line" with actual time, since it's a more stable and time-tested choice. It's also less demanding on resources than using a loop in Python, since it doesn't require a constantly running Python interpreter process, and it's probably better optimized than pyGTK (mature, stable software vs. less mature, less stable).
| 0
| 1
| 0
| 1
|
2010-11-24T15:19:00.000
| 1
| 1.2
| true
| 4,268,374
| 0
| 0
| 0
| 1
|
Well, I have created a Python script which checks the number of uncompleted tasks in Tasque and displays it periodically using pynotify. My question is how to implement this timer. I can think of two things: a cron job that executes the Python script periodically, or a Python script that uses a GTK loop to call the checking function periodically.
|
Python subprocess.Popen as different user on Windows
| 4,274,871
| 3
| 7
| 10,165
| 0
|
python,windows,subprocess,runas
|
Another option is to popen not the desired process itself, but a runas ... command. Note that the "Run As" service should be enabled and running.
| 0
| 1
| 0
| 0
|
2010-11-25T05:12:00.000
| 3
| 0.197375
| false
| 4,273,939
| 1
| 0
| 0
| 1
|
What is the best manner of launching a subprocess as a different user in Python on Windows? Preferably XP and up, but if it works only on Vista and 7, I can live with that too.
|
receiving a linux signal and interating with threads
| 4,280,474
| 1
| 1
| 474
| 0
|
python,linux,multithreading,signals
|
About the only thing you can do is set a global variable from your signal handler, and have your threads check its value periodically.
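A sketch of that pattern, using a threading.Event as the "global variable" and calling the handler directly to simulate signal delivery (in production, the OS would invoke it when the process receives SIGTERM):

```python
import signal
import threading

# Global flag: the signal handler sets it, worker threads poll it.
stop_requested = threading.Event()

def handle_term(signum, frame):
    stop_requested.set()

signal.signal(signal.SIGTERM, handle_term)  # must run in the main thread

def worker():
    while not stop_requested.is_set():
        # ... do one small unit of work, then re-check the flag ...
        stop_requested.wait(0.01)

t = threading.Thread(target=worker)
t.start()
handle_term(signal.SIGTERM, None)   # simulate delivery of the signal
t.join(timeout=1)
print(t.is_alive())                 # → False
```

The cleanup-then-stop function mentioned in the question would run in worker() right after the loop exits.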
| 0
| 1
| 0
| 0
|
2010-11-25T19:50:00.000
| 3
| 0.066568
| false
| 4,280,432
| 0
| 0
| 0
| 2
|
Hello to you all :)
I have a program that has n threads (could be a lot) and they do a pretty intensive job. My problem is that sometimes people turn off or reboot the server (the program runs all day on the company servers). I know that there is a way to write a handler for Linux signals; I want to know what I should do to interact with all the threads, making them run a function and then stop working. Is there a way to do that?
Sorry for the bad English :P
|
receiving a linux signal and interating with threads
| 4,280,803
| 3
| 1
| 474
| 0
|
python,linux,multithreading,signals
|
The best way of handling this is to not require any shutdown actions at all.
For example, your signal handler for (e.g.) SIGTERM or SIGQUIT can just call _exit and quit the process with no clean-up.
Under Linux (with non-ancient threads), when one thread calls _exit (or exit if you really want), the other threads get stopped too, whatever they were in the middle of doing.
This is good because it implements a crash-only design.
Crash-only design for a server is based on the principle that the machine may crash at any point, so you need to be able to recover from such a failure anyway; you might as well make it the normal way of quitting. No extra code should be required, as your server should be robust enough anyway.
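A small demonstration of the crash-only exit in a child interpreter: the child starts a 60-second worker thread and still quits instantly via os._exit, taking the thread down with it (no cleanup handlers, no flushes):

```python
import subprocess
import sys

# A child that quits via os._exit(0): all its threads stop immediately,
# which is the crash-only shutdown path the answer describes.
child_code = """
import os, threading, time
threading.Thread(target=time.sleep, args=(60,)).start()
os._exit(0)
"""

proc = subprocess.run([sys.executable, "-c", child_code], timeout=10)
print(proc.returncode)   # → 0
```

The timeout=10 guard would only trigger if os._exit somehow failed to end the child.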
| 0
| 1
| 0
| 0
|
2010-11-25T19:50:00.000
| 3
| 1.2
| true
| 4,280,432
| 0
| 0
| 0
| 2
|
Hello to you all :)
I have a program that has n threads (could be a lot) and they do a pretty intensive job. My problem is that sometimes people turn off or reboot the server (the program runs all day on the company servers). I know that there is a way to write a handler for Linux signals; I want to know what I should do to interact with all the threads, making them run a function and then stop working. Is there a way to do that?
Sorry for the bad English :P
|
Use wildcard with os.path.isfile()
| 38,177,861
| -1
| 58
| 80,871
| 0
|
python,path,wildcard
|
iglob is better than glob here, since you do not actually want the full list of .rar files but just want to check that one .rar exists.
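A sketch of the iglob check (tempfile is used only to make the example self-contained; the helper name is invented). next() with a default stops at the first match instead of building the full list the way glob.glob would:

```python
import glob
import os
import tempfile

def has_rar(directory):
    """True if `directory` contains at least one .rar file.

    glob.iglob returns a lazy iterator, so next() can stop at the
    first match without scanning the rest of the directory.
    """
    return next(glob.iglob(os.path.join(directory, "*.rar")), None) is not None

d = tempfile.mkdtemp()
print(has_rar(d))                                # → False (no archives yet)
open(os.path.join(d, "a.rar"), "w").close()
print(has_rar(d))                                # → True
```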
| 0
| 1
| 0
| 0
|
2010-11-28T09:15:00.000
| 8
| -0.024995
| false
| 4,296,138
| 1
| 0
| 0
| 1
|
I'd like to check if there are any .rar files in a directory. It doesn’t need to be recursive.
Using a wildcard with os.path.isfile() was my best guess, but it doesn't work. What can I do then?
|
GAE - Secure data Connector and Taskqueue/Cron
| 4,300,033
| 2
| 1
| 236
| 0
|
python,google-app-engine
|
'Offline' requests such as Task Queue tasks and Cron jobs have no 'user' as far as systems like SDC are concerned. If your SDC connection requires a logged in user, you will not be able to access it from a cron/task queue job.
| 0
| 1
| 0
| 0
|
2010-11-28T12:56:00.000
| 1
| 1.2
| true
| 4,296,830
| 0
| 0
| 1
| 1
|
Is it possible to use the Secure Data Connector (SDC) to access internal resources in Tasks/Cron Jobs on the Google AppEngine?
The documentation speaks about the currently logged in user but does not further elaborate this scenario.
|
PHP and Python interfacing
| 4,305,158
| 0
| 1
| 354
| 0
|
php,python
|
I think you should implement a web service. I don't know how to do it with Python, but I suppose it would be fairly easy.
| 0
| 1
| 0
| 1
|
2010-11-29T15:16:00.000
| 4
| 0
| false
| 4,305,070
| 0
| 0
| 0
| 1
|
I have a Python application (a command line tool running on machine M1) that has input I1 and output O1. Also, I have a PHP application (a website running on machine M2) that feeds the Python application with input I1 and expects to read the output O1. My question is: what's the best approach to solve this problem? (The environment is GNU/Linux.)
I was thinking of a solution with ssh: the PHP script executes a command via ssh ("ssh M2:./my_script.py arguments output_file") and transfers the output file ("scp M2:output_file .") from M2 to M1. But I don't think this solution is elegant. I was thinking of web services (the Python application would expose a web service), but I'm not sure of the complexity of this solution or whether it would work.
Thanks,
|
How can I protect a logging object from the garbage collector in a multiprocessing process?
| 4,316,259
| 1
| 1
| 534
| 0
|
python,multithreading,logging,garbage-collection,multiprocessing
|
I agree with @THC4k. This doesn't seem like a GC issue. I'll give you my reasons why, and I'm sure somebody will vote me down if I'm wrong (if so, please leave a comment pointing out my error!).
If you're using CPython, it primarily uses reference counting, and objects are destroyed immediately when the ref count goes to zero (since 2.0, supplemental garbage collection is also provided to handle the case of circular references). Keep a reference to your log object and it won't be destroyed.
If you're using Jython or IronPython, the underlying VM does the garbage collection. Again, keep a reference and the GC shouldn't touch it.
Either way, it seems that either you're not keeping a reference to an object you need to keep alive, or you have some other error.
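The reference-counting point can be demonstrated with weakref. This behavior is CPython-specific, as the answer notes; the Logger class here is just a stand-in for a logging object:

```python
import gc
import weakref

class Logger(object):
    """Stand-in for a logging object."""

def make_logger():
    log = Logger()
    return weakref.ref(log)     # only a weak reference escapes

# No strong reference kept: CPython destroys the object the moment
# its refcount hits zero; no full GC pass is needed.
dead = make_logger()
print(dead() is None)           # → True

# Keep a strong reference and the object stays alive.
log = Logger()
alive = weakref.ref(log)
gc.collect()                    # even a full collection won't touch it
print(alive() is None)          # → False
```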
| 0
| 1
| 0
| 0
|
2010-11-30T14:41:00.000
| 3
| 1.2
| true
| 4,314,912
| 1
| 0
| 0
| 2
|
I create a couple of worker processes using Python's Multiprocessing module 2.6.
In each worker I use the standard logging module (with log rotation and file per worker)
to keep an eye on the worker. I've noticed that after a couple of hours no more
events are written to the log. The process doesn't appear to crash and still responds
to commands via my queue. Using lsof I can see that the log file is no longer open.
I suspect the log object may be killed by the garbage collector, if so is there a way
that I can mark it to protect it?
|
How can I protect a logging object from the garbage collector in a multiprocessing process?
| 4,316,922
| 0
| 1
| 534
| 0
|
python,multithreading,logging,garbage-collection,multiprocessing
|
You could run gc.collect() immediately after fork() to see if that causes the log to be closed. But it's not likely garbage collection would take effect only after a few hours.
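For contrast, here is the one case where gc.collect() does matter in CPython: reference cycles, which plain reference counting cannot reclaim (this is a generic illustration, not specific to logging or fork):

```python
import gc

class Node:
    pass

gc.collect()             # start from a clean slate
a, b = Node(), Node()
a.peer, b.peer = b, a    # build a reference cycle
del a, b                 # refcounts never reach zero on their own
found = gc.collect()     # the cycle detector reclaims them
print(found >= 2)        # → True
```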
| 0
| 1
| 0
| 0
|
2010-11-30T14:41:00.000
| 3
| 0
| false
| 4,314,912
| 1
| 0
| 0
| 2
|
I create a couple of worker processes using Python's Multiprocessing module 2.6.
In each worker I use the standard logging module (with log rotation and file per worker)
to keep an eye on the worker. I've noticed that after a couple of hours no more
events are written to the log. The process doesn't appear to crash and still responds
to commands via my queue. Using lsof I can see that the log file is no longer open.
I suspect the log object may be killed by the garbage collector, if so is there a way
that I can mark it to protect it?
|
How to install Python using source files to a custom directory
| 4,315,780
| 4
| 3
| 2,364
| 0
|
python,installation
|
Configure with:
./configure --prefix=/opt/Python27
In general, you can just do ./configure --help to get a list of all the options you're allowed to set for that configure script.
| 0
| 1
| 0
| 0
|
2010-11-30T15:54:00.000
| 2
| 1.2
| true
| 4,315,657
| 1
| 0
| 0
| 2
|
I want to build Python 2.7.1 in addition to the one the Red Hat server already has pre-installed.
What options do I need to modify/use so that Python can be built under e.g. /opt/Python27?
I would appreciate any help!
|
How to install Python using source files to a custom directory
| 4,319,108
| 1
| 3
| 2,364
| 0
|
python,installation
|
You also want to pass --enable-shared to configure since it is an additional installation of Python.
| 0
| 1
| 0
| 0
|
2010-11-30T15:54:00.000
| 2
| 0.099668
| false
| 4,315,657
| 1
| 0
| 0
| 2
|
I want to build Python 2.7.1 in addition to the one the Red Hat server already has pre-installed.
What options do I need to modify/use so that Python can be built under e.g. /opt/Python27?
I would appreciate any help!
|
importError: no module named _winreg python3
| 48,344,053
| 13
| 22
| 38,800
| 0
|
python,cx-freeze,winreg
|
I know this is an old question, but this was the first search result when Googling for ModuleNotFoundError: No module named '_winreg', and perhaps may be helpful for someone.
I got the same error when trying to use a virtual environment folder which had been created using different (already deleted) Python binaries. The solution was to recreate the virtual environment:
Delete the virtual environment folder
Run python -m venv <name_of_virtual_environment>
| 0
| 1
| 0
| 0
|
2010-12-01T02:47:00.000
| 4
| 1
| false
| 4,320,761
| 0
| 0
| 0
| 1
|
Where can I download _winreg for Python 3, if I can at all? I have my 'windir' on E:\Windows. I do not know if cx_Freeze noticed that. I am using cx_Freeze to create an MSI installer.
|
python executable
| 4,322,272
| 0
| 1
| 4,099
| 0
|
python,linux,macos,executable
|
Don't know if it's available for OS X, but take a look at cx_Freeze.
| 0
| 1
| 0
| 0
|
2010-12-01T07:52:00.000
| 3
| 0
| false
| 4,322,250
| 1
| 0
| 0
| 2
|
Is it possible to create a Python executable targeted for Linux, from Mac OS X?
PyInstaller seems to be at an early stage, and I don't know much else.
Thanks
|
python executable
| 4,322,264
| 5
| 1
| 4,099
| 0
|
python,linux,macos,executable
|
Do you really need a standalone executable? For most Linux distributions, the easier and more common way to distribute Python software is to just distribute the source.
Every major Linux distribution already has Python installed, and some have it installed by default.
| 0
| 1
| 0
| 0
|
2010-12-01T07:52:00.000
| 3
| 0.321513
| false
| 4,322,250
| 1
| 0
| 0
| 2
|
Is it possible to create a Python executable targeted for Linux, from Mac OS X?
PyInstaller seems to be at an early stage, and I don't know much else.
Thanks
|
how to install a python package to windows?
| 4,329,365
| 2
| 0
| 240
| 0
|
python,windows,installation
|
My first question would be, do you have administration rights when you try to run setup.py?
| 0
| 1
| 0
| 0
|
2010-12-01T21:28:00.000
| 1
| 0.379949
| false
| 4,329,330
| 1
| 0
| 0
| 1
|
I would like to install a Python package on Windows. I tried running setup.py install at the command prompt, but it returned an error:
could not create 'C:\Program Files\Python...': access is denied.
Please, help.
Špela
|
how to show mime data using python cgi in windows+apache
| 5,335,384
| 0
| 0
| 381
| 0
|
python,windows,apache,cgi,mime
|
Now I know how to solve this problem:
For Windows+IIS:
While adding the application mapping (IIS), write C:\Python20\python.exe -u %s %s. I used to write it as c:\Python26\python.exe %s %s, which produces corrupted MIME data. The "-u" flag means unbuffered binary stdout and stderr.
For Windows+Apache:
Add #!E:/program files/Python26/python.exe -u to the first line of the Python script.
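As a sketch of why the -u flag matters: the response has to be written as raw bytes, with no newline translation and no extra newlines appended by print. Shown here with io.BytesIO so the result can be inspected; a real CGI script would write to the binary stdout stream instead:

```python
import io

def write_binary_response(stream, data, filename="logo.png"):
    # Header block, blank line, then the raw payload bytes untouched.
    headers = (
        "Content-Type: image/png\r\n"
        "Content-Disposition: attachment; filename=%s\r\n"
        "\r\n" % filename
    )
    stream.write(headers.encode("ascii"))
    stream.write(data)  # no print(), so no extra newline is appended

buf = io.BytesIO()
write_binary_response(buf, b"\x89PNG\r\n\x1a\n")  # fake PNG magic bytes
print(buf.getvalue()[:24])
```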
Thank Ignacio Vazquez-Abrams all the same!
| 0
| 1
| 0
| 1
|
2010-12-02T06:26:00.000
| 2
| 1.2
| true
| 4,332,293
| 0
| 0
| 1
| 1
|
I ran into a problem using Python (2.6) CGI to serve MIME data on Windows (Apache).
For example, to show a image, here is my code:
image.py
#!E:/program files/Python26/python.exe
# -*- coding: UTF-8 -*-
data = open('logo.png','rb').read()
print 'Content-Type:image/png;Content-Disposition:attachment;filename=logo.png\n'
print data
But it does not work on Windows (XP or 7) with Apache or IIS.
(I tried writing the code in different ways, and also tried other file formats, jpg and rar, but got no correct output; the output data seems garbled in the beginning lines.)
I tested the same code on Linux+Apache, and it works fine:
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
data = open('logo.png','rb').read()
print 'Content-Type:image/png;Content-Disposition:attachment;filename=logo.png\n'
print data
I'm just confused about why it does not work on Windows.
Could anybody give me some help and advice?
|
Generating a default value with the Google App Engine Bulkloader
| 4,340,593
| 1
| 1
| 296
| 0
|
python,google-app-engine,bulkloader
|
Defining a custom conversion function, as you did, is the correct method. You don't have to modify transform.py, though - put the function in a file in your own app, and import it in the yaml file's python_preamble.
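A sketch of such a conversion function (the function name is hypothetical; you would reference it from the yaml's import_transform after importing its module in python_preamble):

```python
import time

def current_millis(value):
    """Hypothetical transform: ignore the incoming CSV value and
    stamp the current time in milliseconds since the epoch."""
    return int(time.time() * 1000)

print(current_millis(""))
```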
| 0
| 1
| 0
| 0
|
2010-12-02T20:05:00.000
| 1
| 1.2
| true
| 4,339,325
| 0
| 0
| 1
| 1
|
I have successfully used the bulkloader with my project before, but I recently added a new field to timestamp when the record was modified. This new field is giving me trouble, though, because it's defaulting to null. Short of manually inserting the timestamp in the csv before importing it, is there a way to insert the correct data automatically? I assume I need to look toward the import_transform line, but I know nothing of Python (my app is in Java).
Ideally, I'd like to insert the current timestamp (milliseconds since epoch) automatically. If that's non-trivial, maybe set the value statically in the transform statement before running the import. Thanks.
|
Difference between Systems programming language and Application programming languages
| 4,343,035
| 9
| 13
| 13,401
| 0
|
c#,java,python,perl,programming-languages
|
As with a great many things in IT, the line is blurry. For example, C started its life as a systems programming language (and was used to implement Unix), but was and is used for applications development too.
Having said that, there are clearly some languages better suited to systems programming than others (eg. C/C++ are better suited than COBOL/FORTRAN for systems programming). Likewise there are languages that are better suited to applications development and not systems programming eg. VB.NET.
The language features that stand out from the examples above, are the low level features of the systems programming languages like C/C++ (eg. pointers, bit manipulation operators, etc). There is of course the old joke that C is a "Sea" level language (sitting somewhere between the assembly level and the "high" level).
Warning: I'm coming at systems programming from the perspective of OS developer / OS tool developer.
I think it is fair to say, notwithstanding the projects to develop OSes with Java (though I believe most are compiled to native code rather than to byte code and JIT'ed/interpreted), that systems programming languages target the native machine code of their target platforms. So languages that primarily target managed or interpreted code are less likely to be used for systems programming.
Anyway, that is surely enough to stir up some comments both in support and in opposition :)
| 0
| 1
| 0
| 1
|
2010-12-03T06:23:00.000
| 5
| 1
| false
| 4,343,014
| 1
| 0
| 0
| 5
|
What are the differences between a systems programming language and Application programming language?
|
Difference between Systems programming language and Application programming languages
| 4,343,639
| 16
| 13
| 13,401
| 0
|
c#,java,python,perl,programming-languages
|
A few factors should, in my opinion, come into consideration:
In a system programming language you must be able to reach low-level stuff, getting close to the real hardware world. In an application language instead there is a sort of "virtual world" (hopefully nicer and easier to interact with) that has been designed with the language and you only need to be able to cope with that.
In a system programming language there should be no concession in terms of performance. One must be able to write code that squeezes out all the juice from the hardware. This is not the biggest concern in an application programming language, where the time needed to actually write the program plays instead a greater role.
Because of 2 a system programming language is free to assume that the programmer makes no mistake and so there will be no "runtime error" guards. For example indexing out of an array is going to mean the end of the world unless the hardware gives those checks for free (but in that case you could probably choose less expensive or faster hardware instead). The idea is that if you assume that the code is correct there is no point in paying even a small price for checking the impossible. Also a system programming language shouldn't get into the way trying to forbid the programmer doing something s/he wants to do intentionally... the assumption is that s/he knows that is the right thing to do. In an application programming language instead it's considered good helping the programmer with checking code and also trying to force the code to use certain philosophical schemas. In application programming languages things like execution speed, typing time and code size can be sacrificed trying to help programmers avoiding shooting themselves.
Because of 3 a system programming language will be much harder to learn by experimentation. In a sense they're sort of powerful but dangerous tools that one should use carefully thinking to every single statement and for the same reason they're languages where debugging is much harder. In application programming languages instead the try-and-see approach may be reasonable (if the virtual world abstraction is not leaking too much) and letting errors in to remove them later is considered a viable option.
| 0
| 1
| 0
| 1
|
2010-12-03T06:23:00.000
| 5
| 1.2
| true
| 4,343,014
| 1
| 0
| 0
| 5
|
What are the differences between a systems programming language and Application programming language?
|
Difference between Systems programming language and Application programming languages
| 4,343,077
| 5
| 13
| 13,401
| 0
|
c#,java,python,perl,programming-languages
|
These are not exact concepts, but in essence, systems programming languages are suitable for writing operating systems (so they have low-level concepts such as pointers, integration with assembler, data types corresponding to memory and register organization), while the application programming languages are more suitable for writing applications, so they generally use higher-level concepts to represent the computation (such as OOP, closures, in-built complex datatypes and so on).
| 0
| 1
| 0
| 1
|
2010-12-03T06:23:00.000
| 5
| 0.197375
| false
| 4,343,014
| 1
| 0
| 0
| 5
|
What are the differences between a systems programming language and Application programming language?
|
Difference between Systems programming language and Application programming languages
| 4,343,085
| 2
| 13
| 13,401
| 0
|
c#,java,python,perl,programming-languages
|
In general, a systems programming language is lower level than an applications programming language. However, the language itself has little to do with it; it's more the particulars of the language's implementation.
For example, Pascal started life as a teaching language and was pretty much strictly for applications; however, it evolved into a systems language and was used to create early versions of MacOS and Windows.
C# is not typically a systems language because it cannot do low-level work, although even that line is blurred as managed operating systems come into being.
| 0
| 1
| 0
| 1
|
2010-12-03T06:23:00.000
| 5
| 0.07983
| false
| 4,343,014
| 1
| 0
| 0
| 5
|
What are the differences between a systems programming language and Application programming language?
|
Difference between Systems programming language and Application programming languages
| 4,350,156
| 2
| 13
| 13,401
| 0
|
c#,java,python,perl,programming-languages
|
I don't think there is a final answer here anymore.
Perl and Python come by default with almost every Linux distro... both can inline C... both can do job control and other "low level" tasks... threading, etc.
Any language with a good set of system call bindings and/or FFI should be as fundamentally system-aware as C or C++.
The only languages I would discount as systems languages are those that specifically address another platform (JVM, CLR) and actively seek to prevent native interaction.
| 0
| 1
| 0
| 1
|
2010-12-03T06:23:00.000
| 5
| 0.07983
| false
| 4,343,014
| 1
| 0
| 0
| 5
|
What are the differences between a systems programming language and Application programming language?
|
Proper way to publish and find services on a LAN using Python
| 4,343,600
| 2
| 2
| 1,210
| 0
|
python,networking,bonjour,zeroconf
|
Zeroconf/DNS-SD is an excellent idea in this case. It's provided by Bonjour on OS X and Windows (but must be installed separately or as part of an Apple product on Windows), and by Avahi on FOSS *nix.
| 0
| 1
| 1
| 0
|
2010-12-03T08:07:00.000
| 4
| 0.099668
| false
| 4,343,575
| 0
| 0
| 0
| 1
|
My app opens a TCP socket and waits for data from other users on the network using the same application. At the same time, it can broadcast data to a specified host on the network.
Currently, I need to manually enter the IP of the destination host to be able to send data. I want to be able to find a list of all hosts running the application and have the user pick which host to broadcast data to.
Is Bonjour/ZeroConf the right route to go to accomplish this? (I'd like it to cross-platform OSX/Win/*Nix)
|
Celery - minimize memory consumption
| 4,349,979
| 1
| 13
| 19,420
| 0
|
python,django,profiling,memory-management,celery
|
The natural number of workers is close to the number of cores you have. The workers are there so that cpu-intensive tasks can use an entire core efficiently. The broker is there so that requests that don't have a worker on hand to process them are kept queued. The number of queues can be high, but that doesn't mean you need a high number of brokers either. A single broker should suffice, or you could shard queues to one broker per machine if it later turns out fast worker-queue interaction is beneficial.
Your problem seems unrelated to that. I'm guessing that your agencies don't provide a message queue api, and you have to keep around lots of requests. If so, you need a few (emphasis on not many) evented processes, for example twisted or node.js based.
| 0
| 1
| 0
| 0
|
2010-12-03T14:08:00.000
| 4
| 0.049958
| false
| 4,346,318
| 0
| 0
| 1
| 1
|
We have ~300 celeryd processes running under Ubuntu 10.04 64-bit; idle, every process takes ~19 MB RES, ~174 MB VIRT, so it's around 6 GB of RAM in idle for all processes.
In the active state, a process takes up to 100 MB RES and ~300 MB VIRT.
Every process uses minidom (the XML files are < 500 KB, simple structure) and urllib.
The question is: how can we decrease RAM consumption, at least for idle workers? Perhaps some celery or Python options may help?
How can I determine which part takes most of the memory?
UPD: these are flight search agents, one worker per agency/date. We have 10 agencies, and one user search == 9 dates, so we have 10*9 agents per user search.
Is it possible to start celeryd processes on demand to avoid idle workers (something like MaxSpareServers in Apache)?
UPD2: The agent lifecycle is: send an HTTP request, wait ~10-20 s for the response, parse the XML (takes less than 0.02 s), save the result to MySQL
|
Code sharing between small student group
| 4,350,649
| 3
| 3
| 582
| 0
|
python,eclipse
|
I'd suggest using a version control system.
Git might be good for you - it doesn't require a central server and there is also support for Windows these days.
| 0
| 1
| 0
| 1
|
2010-12-03T22:53:00.000
| 4
| 0.148885
| false
| 4,350,627
| 1
| 0
| 0
| 2
|
I'm currently in college, and we work in groups of three to create small python projects on a weekly basis.
We code with Eclipse and PyDev but we've got a problem when it comes to sharing our work. We end up sending an infinite stream of emails with compressed projects.
What we need is a way to keep the source code updated and we need to be able to share it between us. (on both Windows and Linux) What do you recommend?
thanks in advance.
|
Code sharing between small student group
| 4,350,652
| 1
| 3
| 582
| 0
|
python,eclipse
|
What you need is a version control server (SVN, for instance). You will be able to commit your changes and update your local copy of the code to the current server version.
It is for free:
http://code.google.com/
You should set up your repo and share your work! :-)
I hope it helps.
| 0
| 1
| 0
| 1
|
2010-12-03T22:53:00.000
| 4
| 0.049958
| false
| 4,350,627
| 1
| 0
| 0
| 2
|
I'm currently in college, and we work in groups of three to create small python projects on a weekly basis.
We code with Eclipse and PyDev but we've got a problem when it comes to sharing our work. We end up sending an infinite stream of emails with compressed projects.
What we need is a way to keep the source code updated and we need to be able to share it between us. (on both Windows and Linux) What do you recommend?
thanks in advance.
|
Need some help on Cookie Handling and session in python
| 4,368,390
| 1
| 1
| 383
| 0
|
python,google-app-engine,cookies
|
As mentioned above, gaeutilities provides session support. If that's what you're looking for overall, you may want to check it out.
However, to answer your question: cookies persist between requests, so you don't need to keep resetting one unless you set the expiration extremely low. If you do not set an expiration, the cookie will persist until the browser is closed.
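For illustration, here is how a persistent cookie header can be built with the standard library (Python 3's http.cookies; on Python 2.6 the module was named Cookie). The cookie name and lifetime are arbitrary:

```python
from http.cookies import SimpleCookie

# Build a Set-Cookie value that outlives the browser session.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["max-age"] = 7 * 24 * 3600  # persist for one week
cookie["session_id"]["path"] = "/"
header = cookie["session_id"].OutputString()
print(header)
```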
| 0
| 1
| 0
| 0
|
2010-12-04T12:29:00.000
| 2
| 0.099668
| false
| 4,353,491
| 0
| 0
| 1
| 1
|
I want to set some value in a cookie when the user visits the homepage, so that when he hits some URL I can get that value and compare it with what I've stored in the db. Now, do I have to set the same cookie value again to handle the next request (in order to maintain the session)?
I'm using Python on GAE and I couldn't find any session service available. Is the approach I've chosen the correct one, or is there any other way to recognize the user?
Any tutorial on session maintenance and cookie handling in Python would also be very helpful.
I'm using python 2.6 with django on GAE.
Thanks
|
Does app engine have a Deploy Hook or Event?
| 4,355,054
| 4
| 5
| 525
| 0
|
python,google-app-engine
|
No.
However, you could get the desired behavior if you write your own deployment script. This script could be a thin wrapper around appcfg.py which makes a request to your app once the deployment is complete (the request handler can execute the logic you wanted to put in your "deploy hook").
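A minimal sketch of such a wrapper (the app directory and hook URL are made up, and the real calls are injectable so the flow can be exercised without appcfg.py):

```python
import subprocess
import urllib.request

def deploy(app_dir, hook_url,
           runner=subprocess.check_call,
           fetch=urllib.request.urlopen):
    # Upload the new version, then hit the app so its handler can run
    # the "deploy hook" logic (e.g. bumping a version number).
    runner(["appcfg.py", "update", app_dir])
    fetch(hook_url)

# Demo with fakes that just record what would have been executed.
calls = []
deploy("myapp/", "https://myapp.example.com/_post_deploy",
       runner=lambda cmd: calls.append(cmd),
       fetch=lambda url: calls.append(url))
print(calls)
```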
| 0
| 1
| 0
| 0
|
2010-12-04T16:58:00.000
| 1
| 1.2
| true
| 4,354,647
| 0
| 0
| 1
| 1
|
I want to increase the version number on a model whenever there is a new deployment to the server.
So the idea behind this is: every time there is a deployment, I want to run some code.
Is this possible within App Engine using hooks or events?
I'm using App Engine for Python.
|
how to create a user interface that a user can signup and login using django
| 4,361,550
| 1
| 0
| 888
| 0
|
python,django,user-interface,login
|
Django auth also provides generic views for login/logout etc. You can use the built-in templates or extend them.
See the documentation for the generic views: django.contrib.auth.views.login, django.contrib.auth.views.logout
| 0
| 1
| 0
| 0
|
2010-12-04T22:43:00.000
| 2
| 0.099668
| false
| 4,356,234
| 0
| 0
| 1
| 1
|
I love GAE (Google App Engine) because it is easy to create a user interface for user login and signup,
but now my boss wants me to create a site using Django,
so the first task is to create a site that someone can log in to.
What is the easiest way to create such a user interface?
thanks
|
Cx_freeze - How can I Install the shared libraries to /usr/lib
| 5,087,086
| 1
| 3
| 700
| 0
|
python,shared-libraries,compiled
|
bbfreeze will put everything in a single executable.
| 0
| 1
| 0
| 0
|
2010-12-04T22:56:00.000
| 2
| 1.2
| true
| 4,356,293
| 1
| 0
| 0
| 1
|
I am using cx_freeze to compile my Python script, and when I compile the program, all the files are placed in one specified folder. The executable won't run if the shared libs are not in the same directory.
How would I set it up so the executable looks in /usr/lib/PROGRAMNAME/ for the libraries?
|
wx.App (wxPython) crash when calling
| 4,369,662
| 1
| 1
| 1,983
| 0
|
python,wxpython
|
If you are running that in IDLE, then that is your problem. IDLE and wx don't get along very well because you basically end up with two mainloops fighting each other. Try putting it in a file and then run the file from the command line:
c:\python27\python.exe myPyFile.py
That should work just fine. Otherwise, download the correct wxPython for your Python and OS (32/64 bit), uninstall the current one and install the new one. I've been using wxPython on Windows XP, Vista and 7 with no problems like this.
| 1
| 1
| 0
| 0
|
2010-12-06T17:19:00.000
| 3
| 0.066568
| false
| 4,369,102
| 0
| 0
| 0
| 3
|
Recently I installed wxPython to do some work under Windows. Most of the time I work in Linux, so I have little experience here.
With the python.exe interpreter, I just run two lines of code: import wx then tmp = wx.App(False)
Then the interpreter crashes with a Windows error report.
I tried both Python 2.7.1 and 2.6.6 with wxPython 2.8.11, all from their main websites, still no luck.
Is there something I must do after installing Python on Windows? I can see that Python installs just fine and can do some basic jobs, and the wxPython library can be loaded, but I can't call wx.App
|
wx.App (wxPython) crash when calling
| 4,496,450
| 0
| 1
| 1,983
| 0
|
python,wxpython
|
I searched for a while and found that this is a problem with wxPython and Python >2.5. I tried many fixes with manifest files but had no luck, so I think switching to PyQt is the only solution now.
| 1
| 1
| 0
| 0
|
2010-12-06T17:19:00.000
| 3
| 1.2
| true
| 4,369,102
| 0
| 0
| 0
| 3
|
Recently I installed wxPython to do some work under Windows. Most of the time I work in Linux, so I have little experience here.
With the python.exe interpreter, I just run two lines of code: import wx then tmp = wx.App(False)
Then the interpreter crashes with a Windows error report.
I tried both Python 2.7.1 and 2.6.6 with wxPython 2.8.11, all from their main websites, still no luck.
Is there something I must do after installing Python on Windows? I can see that Python installs just fine and can do some basic jobs, and the wxPython library can be loaded, but I can't call wx.App
|
wx.App (wxPython) crash when calling
| 16,407,973
| 1
| 1
| 1,983
| 0
|
python,wxpython
|
In case somebody stumbles into this question like I did: I recently installed wxPython on two machines, Windows 7 and XP. Testing the sample code in simple.py (provided with the wxPython docs-demos installer) from a Python console, I had the following problem on both machines: the first import was OK, but when I reloaded the module, Python crashed.
I added this line at the end of the simple.py file: del app
and that fixed the problem on Windows 7; the next day I tried it on the XP machine.
The same solution worked on the XP machine. So, reloading an unedited module holding a reference to a wx.App whose GUI has been closed seems not to be feasible. Killing the reference with a del statement was enough to solve the problem.
| 1
| 1
| 0
| 0
|
2010-12-06T17:19:00.000
| 3
| 0.066568
| false
| 4,369,102
| 0
| 0
| 0
| 3
|
Recently I installed wxPython to do some work under Windows. Most of the time I work in Linux, so I have little experience here.
With the python.exe interpreter, I just run two lines of code: import wx then tmp = wx.App(False)
Then the interpreter crashes with a Windows error report.
I tried both Python 2.7.1 and 2.6.6 with wxPython 2.8.11, all from their main websites, still no luck.
Is there something I must do after installing Python on Windows? I can see that Python installs just fine and can do some basic jobs, and the wxPython library can be loaded, but I can't call wx.App
|
How can I profile a multithreaded program?
| 4,374,516
| 2
| 4
| 1,368
| 0
|
python,multithreading,performance,profile
|
Depending on how far you've come in your troubleshooting, there are some tools that might point you in the right direction.
"top" is a helpful start to show you if your problem is burning CPU time or simply waiting for stuff.
"dtruss -c" can show you where you spend time and what system calls takes most of your time.
Both these can give you a hint without knowing anything about python.
If you just want to use yappi, it isn't too much work to set up a virtualbox and install some sort of Linux on your machine. I find myself doing that from time to time when I want to try something.
There might of course be things I don't know about that makes it impossible or not worth the effort. Also, profiling on another OS running virtualized might not give the exact same results, but it might still be helpful.
| 0
| 1
| 0
| 0
|
2010-12-07T05:07:00.000
| 2
| 0.197375
| false
| 4,373,585
| 1
| 0
| 0
| 1
|
I have a program that is performing waaaay under par, and I would like to profile it. However, it is multithreaded, so I can't seem to find a good way to profile this thing. Any advice?
I've tried yappi, but it segfaults on OS X :(
EDIT: This is in python, sorry for putting it under profiling...
|
Python is not existing when ran from System Scheduler
| 4,378,686
| 1
| 0
| 78
| 0
|
python,system,scheduler
|
Check the user context that System Scheduler is running under and ensure the location of Python is in its PATH.
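If it helps, a quick diagnostic you can run from the scheduled context to see what that environment's PATH actually contains (a sketch; output depends on the machine):

```python
import os
import shutil

# Is any `python` executable visible on this environment's PATH?
found = shutil.which("python")
print("python on PATH:", found)
# Show the first few PATH entries for comparison with your own shell.
entries = os.environ.get("PATH", "").split(os.pathsep)
print("first PATH entries:", entries[:5])
```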
| 0
| 1
| 0
| 0
|
2010-12-07T15:56:00.000
| 2
| 0.099668
| false
| 4,378,653
| 1
| 0
| 0
| 1
|
I have made a batch script which runs a Python application. This batch script is triggered by a program called System Scheduler, but when that program runs the batch script, it says that Python does not exist.
When I run the batch script manually, I get no error.
Can anyone explain this or come up with a solution?
|
how to bring a background task to foreground in google app engine?
| 4,379,300
| 0
| 2
| 282
| 0
|
python,google-app-engine,task-queue
|
This won't work directly as you describe it.
Once a background task is started, it's a background task for its entire existence. If you want to return some information from the background task to the user, you'll have to add it to the datastore, and have a foreground handler check the datastore for that information.
You may also be able to use the Channel API to have a background task send messages directly to the browser, but I'm not sure if this will work or not (I haven't tried it).
If you give a little more information about exactly what you're trying to accomplish I can try to give more details about how to get it done.
| 0
| 1
| 0
| 0
|
2010-12-07T16:46:00.000
| 2
| 0
| false
| 4,379,200
| 0
| 0
| 1
| 1
|
Currently I have tasks running in the background. After the tasks are done executing, I need to show output. How do I do this in Google App Engine?
Once the tasks are done, is the only option to create another task that shows the output, or is there another way?
|
Can Gnuplot take different arguments at run time? maybe with Python?
| 4,379,452
| 0
| 11
| 16,943
| 0
|
python,scripting,gnuplot
|
I created a file named test.txt containing plot [0:20] x;
I ran gnuplot test.txt and saw that gnuplot had indeed read the contents of my file, so it does support arguments at runtime.
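Building on that, one way to loop over many files from Python is to generate a small script per data file and feed it to gnuplot on stdin; a sketch (the plot options are placeholders, and gnuplot is only actually invoked when the default runner is used):

```python
import subprocess

def make_script(datafile, outfile):
    # A tiny per-file gnuplot script; columns and terminal are placeholders.
    return ('set terminal png\n'
            'set output "%s"\n'
            'plot "%s" using 1:2 with lines\n' % (outfile, datafile))

def plot_file(datafile,
              run=lambda s: subprocess.run(["gnuplot"], input=s.encode(),
                                           check=True)):
    run(make_script(datafile, datafile + ".png"))

# Demo with a recording runner instead of the real gnuplot binary.
scripts = []
for name in ("data_001.dat", "data_002.dat"):
    plot_file(name, run=scripts.append)
print(len(scripts))
```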
| 0
| 1
| 0
| 0
|
2010-12-07T16:57:00.000
| 8
| 0
| false
| 4,379,330
| 1
| 0
| 0
| 1
|
I have 500 files to plot and I want to do this automatically. I have the gnuplot script
that does the plotting with the file name hard-coded. I would like to have a loop that calls gnuplot each iteration with a different file name, but it does not seem that gnuplot supports command line arguments.
Is there an easy way? I also installed the gnuplot-python package in case I can do it via a Python script. However, I couldn't find the API, so it's a bit difficult to figure out.
Thank you!
|
distutils "not a regular file --skipped"
| 4,404,450
| 6
| 5
| 3,673
| 0
|
python,installation,distutils
|
(Already worked, reposting as a proper answer):
Try removing the "MANIFEST" file and re-running it. If you've moved files around, MANIFEST can be wrong (it gets regenerated automatically if it's not there).
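The same fix as a small helper (a sketch; run it from the project root before invoking python setup.py sdist again):

```python
import os
import tempfile

def remove_stale_manifest(path="MANIFEST"):
    # Delete a stale MANIFEST so distutils regenerates it on the next sdist.
    if os.path.exists(path):
        os.remove(path)
        return True
    return False

# Demo against a throwaway file rather than a real project.
demo = os.path.join(tempfile.mkdtemp(), "MANIFEST")
open(demo, "w").close()
print(remove_stale_manifest(demo))   # removed the stale file
print(remove_stale_manifest(demo))   # nothing left to remove
```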
| 0
| 1
| 0
| 0
|
2010-12-09T23:38:00.000
| 3
| 1.2
| true
| 4,404,258
| 0
| 0
| 0
| 2
|
I have a very simple setup:
from distutils.core import setup
setup(name='myscripts',
description='my scripts',
author='Ago',
author_email='blah',
version='0.1',
packages=['myscripts']
)
myscripts folder consists of about 10 Python files. Everything works fine if I just execute my main.py file (an executable which uses those myscripts files). Now I try to do:
python setup.py sdist
But I get:
running sdist
warning: sdist: missing required meta-data: url
reading manifest file 'MANIFEST'
creating myscripts-0.1
making hard links in myscripts-0.1...
'file1.py' not a regular file -- skipping
hard linking setup.py -> myscripts-0.1
'file2.py' not a regular file -- skipping
tar -cf dist/myscripts-0.1.tar myscripts-0.1
gzip -f9 dist/myscripts-0.1.tar
removing 'myscripts-0.1' (and everything under it)
Files file1.py and file2.py are as regular as other files. Any suggestions?
|
distutils "not a regular file --skipped"
| 36,453,425
| 0
| 5
| 3,673
| 0
|
python,installation,distutils
|
In my case this error was caused by inadvertently running distutils with Python 2.7 instead of Python 3. The quick fix:
python3 setup.py register sdist upload
Better still, mark the script correctly:
sed -i '1i #!/usr/bin/python3' setup.py
| 0
| 1
| 0
| 0
|
2010-12-09T23:38:00.000
| 3
| 0
| false
| 4,404,258
| 0
| 0
| 0
| 2
|
I have a very simple setup:
from distutils.core import setup
setup(name='myscripts',
description='my scripts',
author='Ago',
author_email='blah',
version='0.1',
packages=['myscripts']
)
myscripts folder consists of about 10 Python files. Everything works fine if I just execute my main.py file (an executable which uses those myscripts files). Now I try to do:
python setup.py sdist
But I get:
running sdist
warning: sdist: missing required meta-data: url
reading manifest file 'MANIFEST'
creating myscripts-0.1
making hard links in myscripts-0.1...
'file1.py' not a regular file -- skipping
hard linking setup.py -> myscripts-0.1
'file2.py' not a regular file -- skipping
tar -cf dist/myscripts-0.1.tar myscripts-0.1
gzip -f9 dist/myscripts-0.1.tar
removing 'myscripts-0.1' (and everything under it)
Files file1.py and file2.py are as regular as other files. Any suggestions?
|
sending password to command line tools
| 4,408,136
| 2
| 12
| 4,756
| 0
|
python,bash,shell
|
If the password is only accepted on the command line, you're pretty much out of luck. Are you absolutely sure there's no option to send the password in another way? If you can send it over the process's stdin, you can talk to it via a pipe, and send the password securely in that way.
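A sketch of the stdin approach with subprocess (the child process here is just a stand-in that echoes what it reads; a real tool would read the password from its own stdin):

```python
import subprocess
import sys

# Feed the secret over the child's stdin so it never appears in the
# process table (`ps ax` shows argv, not stdin).
child = [sys.executable, "-c",
         "import sys; sys.stdout.write(sys.stdin.read())"]
proc = subprocess.Popen(child, stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)
out, _ = proc.communicate(input=b"s3cret\n")
print(out)
```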
| 0
| 1
| 0
| 1
|
2010-12-10T10:25:00.000
| 6
| 0.066568
| false
| 4,407,843
| 0
| 0
| 0
| 3
|
I'm writing a Python application that uses a command line utility (proprietary, so it can't be modified) to do part of its work. The problem is that I have to pass the password as a command line argument to the tool, where it could easily be seen by any user running 'ps ax'. How can I send the password to the command line tool safely from within Python (or a shell script)?
|
sending password to command line tools
| 13,034,720
| 0
| 12
| 4,756
| 0
|
python,bash,shell
|
You may be able to gain more security by having an encrypted password argument: pass an encrypted password and have the program decrypt it. At least there would be no plain-text password floating around. I used this method when launching a process via another process and passing it arguments. It may not be feasible in your case, though.
| 0
| 1
| 0
| 1
|
2010-12-10T10:25:00.000
| 6
| 0
| false
| 4,407,843
| 0
| 0
| 0
| 3
|
I'm writing a Python application that uses a command line utility (proprietary, so it can't be modified) to do part of its work. The problem is that I have to pass the password as a command line argument to the tool, where it could easily be seen by any user running 'ps ax'. How can I send the password to the command line tool safely from within Python (or a shell script)?
|
sending password to command line tools
| 13,034,839
| 0
| 12
| 4,756
| 0
|
python,bash,shell
|
Write another Python script that accepts the password from the command prompt using getpass.getpass() and stores it in a variable. Then call the command from that script, passing the variable containing the password as a parameter.
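A sketch of that wrapper (the tool name and flag are hypothetical; note the value still lands on the child's command line, so this only keeps the password out of scripts and shell history):

```python
import getpass
import subprocess

def run_tool(password, runner=subprocess.check_call):
    # Hypothetical tool name and flag, for illustration only.
    return runner(["proprietary-tool", "--password", password])

# Interactively you would obtain the value without echoing it:
#     password = getpass.getpass("Password: ")
# Here a fake runner records the command instead of executing it.
calls = []
run_tool("hunter2", runner=calls.append)
print(calls[0])
```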
| 0
| 1
| 0
| 1
|
2010-12-10T10:25:00.000
| 6
| 0
| false
| 4,407,843
| 0
| 0
| 0
| 3
|
I'm writing a Python application that uses a command line utility (proprietary, so it can't be modified) to do part of its work. The problem is that I have to pass the password as a command line argument to the tool, where it could easily be seen by any user running 'ps ax'. How can I send the password to the command line tool safely from within Python (or a shell script)?
|
Is there a portable python interpreter that will run on Mac OS X 10.6 from a USB key?
| 4,414,652
| 2
| 2
| 5,446
| 0
|
python,macos,usb,portability,interpreter
|
Python is already on OS X. I would look at trying to find an editor/shell that will work from a usb drive.
| 0
| 1
| 0
| 0
|
2010-12-11T00:35:00.000
| 2
| 0.197375
| false
| 4,414,484
| 1
| 0
| 0
| 1
|
I've been running myself ragged trying to find a portable interpreter that I can run from a USB key on my work computer. Work comp is running Mac OS X 10.6, fairly restricted environment, no access to terminal, can't install apps but I do know that portable apps can be run from a USB drive. I've been using shell in a box to serve remote access to my comp at home over the web but out of respect for their network integrity I'd prefer not to. I've also just come across ideone.com which seems promising and I plan to give it a go tomorrow. Ideally though, I'd like to have the code running locally. Any help would be greatly appreciated by myself and, I'm sure, a few others that might be in the same situation.
|
Permission denied when trying to install easy_install on OSX
| 4,416,999
| 3
| 17
| 17,459
| 0
|
python,macos,installation,easy-install
|
You should use sudo. You will need to enter your password.
| 0
| 1
| 0
| 0
|
2010-12-11T13:40:00.000
| 4
| 0.148885
| false
| 4,416,984
| 0
| 0
| 0
| 2
|
I'm trying to install easy_install and, well... see for yourself:
sh setuptools-0.6c11-py2.6.egg
Processing setuptools-0.6c11-py2.6.egg
Copying setuptools-0.6c11-py2.6.egg to /Library/Python/2.6/site-packages
Adding setuptools 0.6c11 to easy-install.pth file
Installing easy_install script to /usr/local/bin
error: /usr/local/bin/easy_install: Permission denied
How do I give my computer permission to do this? I tried telling it in a friendly voice, "computer, I hereby grant you permission to install easy_install" but that didn't work.
|
Permission denied when trying to install easy_install on OSX
| 4,417,579
| 3
| 17
| 17,459
| 0
|
python,macos,installation,easy-install
|
Judging from the paths displayed, you are likely using the Apple-supplied Python 2.6 in OS X 10.6. If so, be aware that Apple has already easily installed easy_install for you in /usr/bin. Just try typing easy_install; you may need to use sudo easy_install if the package tries to install a script. If you are using another Python (one you installed yourself), you will need to install a separate version of setuptools (or the newer Distribute) for it.
| 0
| 1
| 0
| 0
|
2010-12-11T13:40:00.000
| 4
| 0.148885
| false
| 4,416,984
| 0
| 0
| 0
| 2
|
I'm trying to install easy_install and, well... see for yourself:
sh setuptools-0.6c11-py2.6.egg
Processing setuptools-0.6c11-py2.6.egg
Copying setuptools-0.6c11-py2.6.egg to /Library/Python/2.6/site-packages
Adding setuptools 0.6c11 to easy-install.pth file
Installing easy_install script to /usr/local/bin
error: /usr/local/bin/easy_install: Permission denied
How do I give my computer permission to do this? I tried telling it in a friendly voice, "computer, I hereby grant you permission to install easy_install" but that didn't work.
|
comparing batch to python commands?
| 4,418,349
| 1
| 1
| 2,645
| 0
|
python,syntax,batch-file,equivalent
|
Python is not a system shell, Python is a multi-paradigm programming language.
If you want to compare .bat with anything, compare it with sh or bash. (You can have those on various platforms too - for example, sh for windows is in the MinGW package).
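That said, several of the individual batch commands in the question do have rough Python counterparts; a hedged sketch (the Windows-only lines are commented out so the example runs anywhere):

```python
import os

# batch: cd <dir>        ->  os.chdir changes the working directory
os.chdir("/tmp")
print(os.getcwd())

# batch: cls             ->  no built-in; shell out instead, e.g.
# os.system("cls")       # ("clear" on Unix) -- commented out to keep output clean

# batch: start notepad   ->  launch an external program without waiting
# import subprocess
# subprocess.Popen(["notepad.exe"])   # Windows-only, hence commented out

# batch: for %i in (a b c) do echo %i  ->  a plain Python loop
for name in ("a", "b", "c"):
    print(name)

# batch: command /?      ->  help(obj) shows the docstring for one object
# help(os.chdir)         # commented out to keep the demo output short
```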
| 0
| 1
| 0
| 0
|
2010-12-11T18:30:00.000
| 5
| 0.039979
| false
| 4,418,252
| 0
| 0
| 0
| 2
|
OK, I have these commands used in batch and I wanted to know the commands in Python that would have a similar effect. Just to be clear, I don't want to just use os.system("command here") for each of them. For example, in batch, if you wanted a list of commands you would type help, but in Python you would type help() and then modules... I am not trying to use batch in a Python script; I just want to know the similarities between the two languages. Like in English you say "Hello" but in French you say "Bonjour"; you don't mix the two languages. (Here's the list of commands/functions I'd like to know:
change the current directory
clear the screen in the console
change the prompt to something other than >>>
how to make a loop function
redirections/pipes
start an exteral program (like notepad or paint) from within a script
how to call or import another python script
how to get help with a specific module without having to type help()
@8: (in batch it would be command /?)
EDITED COMPLETELY
Thanks in advance!
|
comparing batch to python commands?
| 17,412,938
| 0
| 1
| 2,645
| 0
|
python,syntax,batch-file,equivalent
|
I am pretty much facing the same problem as you, daniel11. As a solution, I am learning BATCH commands and their meaning. After I understand those, I am going to write a program in Python that does the same or accomplishes the same task.
Thanks to Adam V. and katrielatex for their insight and suggestions.
| 0
| 1
| 0
| 0
|
2010-12-11T18:30:00.000
| 5
| 0
| false
| 4,418,252
| 0
| 0
| 0
| 2
|
OK, I have these commands used in batch and I wanted to know the commands in Python that would have a similar effect. Just to be clear, I don't want to just use os.system("command here") for each of them. For example, in batch, if you wanted a list of commands you would type help, but in Python you would type help() and then modules... I am not trying to use batch in a Python script; I just want to know the similarities between the two languages. Like in English you say "Hello" but in French you say "Bonjour"; you don't mix the two languages. (Here's the list of commands/functions I'd like to know:
change the current directory
clear the screen in the console
change the prompt to something other than >>>
how to make a loop function
redirections/pipes
start an exteral program (like notepad or paint) from within a script
how to call or import another python script
how to get help with a specific module without having to type help()
@8: (in batch it would be command /?)
EDITED COMPLETELY
Thanks in advance!
|
Wrap all commands entered within a Bash-Shell with a Python script
| 4,420,885
| -2
| 12
| 3,793
| 0
|
python,linux,bash
|
There is no direct way to do it.
But you can make a Python script that emulates a bash terminal, and use the "subprocess" module in Python to execute the commands the way you like.
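A hedged sketch of that idea for the asker's actual use case: parse the property file, merge it into the environment, and run each wrapped command through subprocess (the file path and property names here are made up for illustration):

```python
import os
import subprocess

# Write a sample property file (in the real setup, other scripts maintain it)
with open("/tmp/props.cfg", "w") as fh:
    fh.write("MY_DYNAMIC_VAR=hello\n")

# Parse KEY=VALUE lines into a dict
props = {}
with open("/tmp/props.cfg") as fh:
    for line in fh:
        if "=" in line:
            key, _, value = line.strip().partition("=")
            props[key] = value

# Merge the properties into the inherited environment and run the command;
# every wrapped command now sees them as ordinary environment variables
env = dict(os.environ, **props)
out = subprocess.check_output(["sh", "-c", "echo $MY_DYNAMIC_VAR"], env=env)
print(out.decode().strip())
```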
| 0
| 1
| 0
| 1
|
2010-12-11T19:00:00.000
| 4
| -0.099668
| false
| 4,418,378
| 1
| 0
| 0
| 1
|
What I'd like to have is a mechanism whereby all the commands I enter in a Bash terminal are wrapped by a Python script. The Python script executes the entered command, but adds some additional magic (for example, setting "dynamic" environment variables).
Is that possible somehow?
I'm running Ubuntu and Debian Squeezy.
Additional explanation:
I have a property file which changes dynamically (some scripts alter it at any time). I need the properties from that file as environment variables in all my shell scripts. Of course I could parse the property file somehow from the shell, but I prefer an object-oriented style for that (especially for writing), as can be done with Python (and ConfigObject).
Therefore I want to wrap all my scripts with that Python script (without having to modify the scripts themselves), which hands these properties down to all shell scripts.
This is my current use case, but I can imagine I'll find additional cases to which I can extend my wrapper later on.
|
Django + Adsense on Google App Engine
| 4,426,058
| 1
| 0
| 1,724
| 0
|
python,django,google-app-engine,templates,adsense
|
AdSense is JavaScript; if your Django view is returning nothing, check your HttpResponse and how you are generating the template. It looks like you need to specify the location of the template directory in settings.py.
You may want to add the following (in settings.py):
import os
ROOT_PATH = os.path.dirname(__file__)
TEMPLATE_DIRS = (
    os.path.join(ROOT_PATH, 'templates'),
)
Then put your Django templates in a subdirectory of your project called "templates".
Or try the process of elimination:
comment out any JavaScript and see if you can generate the template from Django.
| 0
| 1
| 0
| 0
|
2010-12-12T16:52:00.000
| 2
| 0.099668
| false
| 4,422,754
| 0
| 0
| 1
| 2
|
I have a problem with Django on Google App Engine. I finished designing the HTML templates for my web application and imported them into Django using the Django template system.
The problem is Google AdSense. I can see the AdSense banner in the plain HTML version of my pages if I open them in my browser, but nothing appears if I try the same with the pages loaded through Django.
I also tried a simple HTML template that contains only the AdSense script; if I load it through Django it returns a white page. No banner, nothing.
What can I do to solve this problem?
|
Django + Adsense on Google App Engine
| 4,448,211
| 0
| 0
| 1,724
| 0
|
python,django,google-app-engine,templates,adsense
|
I solved the problem! Sorry, I had accidentally added some characters with an erroneous encoding.
| 0
| 1
| 0
| 0
|
2010-12-12T16:52:00.000
| 2
| 0
| false
| 4,422,754
| 0
| 0
| 1
| 2
|
I have a problem with Django on Google App Engine. I finished designing the HTML templates for my web application and imported them into Django using the Django template system.
The problem is Google AdSense. I can see the AdSense banner in the plain HTML version of my pages if I open them in my browser, but nothing appears if I try the same with the pages loaded through Django.
I also tried a simple HTML template that contains only the AdSense script; if I load it through Django it returns a white page. No banner, nothing.
What can I do to solve this problem?
|
python on win32: how to get absolute timing / CPU cycle-count
| 4,430,461
| 0
| 4
| 1,794
| 0
|
python,winapi,api,performancecounter
|
You could just call the C# StopWatch class directly from Python couldn't you? Maybe a small wrapper is needed (don't know Python/C# interop details - sorry) - if you are already using C# for data acquisition, doing the same for timings via Stopwatch should be simpler than anything else you can do.
| 0
| 1
| 0
| 1
|
2010-12-13T15:10:00.000
| 2
| 0
| false
| 4,430,227
| 0
| 0
| 0
| 1
|
I have a python script that calls a USB-based data-acquisition C# dotnet executable. The main python script does many other things, e.g. it controls a stepper motor. We would like to check the relative timing of various operations, for that purpose the dotnet exe generates a log with timestamps from C# Stopwatch.GetTimestamp(), which as far as I know yields the same number as calls to win32 API QueryPerformanceCounter().
Now I would like to get similar numbers from the python script. time.clock() returns such values, unfortunately it subtracts the value obtained at the time of 1st call to time.clock(). How can I get around this? Is it easy to call QueryPerformanceCounter() from some existing python module or do I have to write my own python extension in C?
I forgot to mention, the python WMI module by Tim Golden does this:
wmi.WMI().Win32_PerfRawData_PerfOS_System()[0].Timestamp_PerfTime
, but it is too slow, some 48ms overhead. I need something with <=1ms overhead. time.clock() seems to be fast enough, as is c# Stopwatch.GetTimestamp().
TIA,
Radim
|
View Script Output Over SSH?
| 4,430,456
| 1
| 2
| 1,034
| 0
|
python,linux,ubuntu,ssh
|
A very quick alternative is to pipe the output of your python program to a file, and then simply using tail with the second user to see the output as it's being written to the file. However, with a program like you have there, the file will very quickly become massive.
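A minimal sketch of that approach from the Python side (file paths are arbitrary): machine A redirects the script's output into a log file, and machine B, after ssh'ing in, follows it with tail -f. A short-lived command stands in for the endless loop so the example terminates:

```python
import subprocess
import sys

# Machine A: run the script with its output redirected to a log file
with open("/tmp/test_py.log", "w") as log:
    subprocess.call([sys.executable, "-c", "print('hello world')"],
                    stdout=log, stderr=subprocess.STDOUT)

# Machine B would now run:  tail -f /tmp/test_py.log
with open("/tmp/test_py.log") as log:
    content = log.read().strip()
print(content)
```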
| 0
| 1
| 0
| 1
|
2010-12-13T15:26:00.000
| 2
| 0.099668
| false
| 4,430,408
| 0
| 0
| 0
| 1
|
I have a script, called test.py, that does the following:
while (1):
    print "hello world"
(this script simply prints 'hello world' continuously).
Now, I am using two machines (machine A and machine B). Same user is used for both machines. I would like to do the following:
(1) [working with machine A] run test.py programatically on machine A { meaning, a local python script will be running test.py using say os.system(....) }
( at this point, the script test.py is printing "hello world" to the screen of machine A )
(2) [working with machine B] I now want to log in into machine A using ssh and 'view' the output of the script that we ran in (1)
How do I achieve this? I know how to write the script that will be running and starting test.py on machine A. I also know how to ssh from machine B to machine A.
What I don't know is:
(*) What command should I use in (1) in order to run the python script so that its output can be easily viewed while logging from a different machine (machine B) to machine A?
(*) Following the ssh from machine B to machine A, how do I 'navigate' to the screen that shows the output of test.py?
|
How do you submit a form to an app engine python application without refreshing the sending page?
| 4,437,354
| 8
| 0
| 514
| 0
|
python,google-app-engine,forms,submit,datastore
|
Sounds like you want to look into AJAX. The simplest way to do this is probably to use the ajax functions in one of the popular Javascript libraries, like jQuery.
| 0
| 1
| 0
| 0
|
2010-12-14T08:42:00.000
| 3
| 1
| false
| 4,437,307
| 0
| 0
| 1
| 2
|
As a newbie to app engine and python I can follow the examples given by Google and have created a python application with a template HTML page where I can enter data, submit it to the datastore and by reading back the data, just sent, recreate the sending page so I can continue adding data and store again. However what I would like to do is submit the data, have it stored in the datastore without the sending page being refreshed. It seems like a waste of traffic to have all the data sent back again.
|
How do you submit a form to an app engine python application without refreshing the sending page?
| 4,439,461
| -1
| 0
| 514
| 0
|
python,google-app-engine,forms,submit,datastore
|
Have a look at Pyjamas pyjs.org
It's a Python Compiler for web browsers. Write your client side in Python and Pyjamas will compile it into JavaScript.
| 0
| 1
| 0
| 0
|
2010-12-14T08:42:00.000
| 3
| -0.066568
| false
| 4,437,307
| 0
| 0
| 1
| 2
|
As a newbie to app engine and python I can follow the examples given by Google and have created a python application with a template HTML page where I can enter data, submit it to the datastore and by reading back the data, just sent, recreate the sending page so I can continue adding data and store again. However what I would like to do is submit the data, have it stored in the datastore without the sending page being refreshed. It seems like a waste of traffic to have all the data sent back again.
|
pssh and known_hosts file
| 4,441,673
| 18
| 5
| 12,459
| 0
|
python,ssh
|
Try pssh -O StrictHostKeyChecking=no. This works for me.
By default ssh uses the value of "ask", which causes it to ask the user whether to continue connecting to unknown host. By setting the value to "no", you avoid the question, but are no longer protected against certain attacks. E.g. if you are connecting to hostA, and someone puts hostB there with the same IP address, then by default ssh will notice that hostB has changed, and will prompt you about it. With StrictHostKeyChecking=no, it will silently assume everything is OK.
| 0
| 1
| 0
| 1
|
2010-12-14T08:47:00.000
| 2
| 1.2
| true
| 4,437,331
| 0
| 0
| 0
| 1
|
When I use pssh to access a remote machine that is not in the SSH known_hosts file, pssh freezes after I give the password.
After adding the host using a direct ssh command, pssh works.
So is there an option to give the pssh command in order to avoid this problem?
Thanks for your help,
Regards
|
How to start a python file while Windows starts?
| 61,368,121
| 0
| 80
| 132,882
| 0
|
python,windows
|
All the methods mentioned above did not work for me; I tried them all. So here is a simpler solution, an alternative to the Windows Task Scheduler:
Create a .bat file with the content:
"ADDRESS OF YOUR PROJECT INTERPRETER" "ADDRESS OF YOUR PYTHON SCRIPT WITH SCRIPT NAME"
Store this .bat file in the Windows startup folder (hidden by default).
FYI: to find the Windows startup folder,
press Win+R, then
type shell:startup -- it will take you directly to the startup folder.
Copy the .bat file there with the two addresses in the format above,
then simply restart the system or shut down and boot up.
The script will automatically run within about 20 seconds of startup.
Thank me later
| 0
| 1
| 0
| 0
|
2010-12-14T10:12:00.000
| 11
| 0
| false
| 4,438,020
| 1
| 0
| 0
| 4
|
I have a python file and I am running the file.
If Windows is shutdown and booted up again, how I can run that file every time Windows starts?
|
How to start a python file while Windows starts?
| 22,035,422
| 1
| 80
| 132,882
| 0
|
python,windows
|
Try adding an entry to HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce.
Right click -> New -> String Value -> add the file path
| 0
| 1
| 0
| 0
|
2010-12-14T10:12:00.000
| 11
| 0.01818
| false
| 4,438,020
| 1
| 0
| 0
| 4
|
I have a python file and I am running the file.
If Windows is shutdown and booted up again, how I can run that file every time Windows starts?
|
How to start a python file while Windows starts?
| 66,016,922
| 9
| 80
| 132,882
| 0
|
python,windows
|
click Win+R
type shell:startup
drag and drop your python file my_script.py
if you don't need the console:
change extension from my_script.py to my_script.pyw
else:
create run_my_script.cmd with content: python path\to\your\my_script.py
| 0
| 1
| 0
| 0
|
2010-12-14T10:12:00.000
| 11
| 1
| false
| 4,438,020
| 1
| 0
| 0
| 4
|
I have a python file and I am running the file.
If Windows is shutdown and booted up again, how I can run that file every time Windows starts?
|
How to start a python file while Windows starts?
| 4,439,204
| 70
| 80
| 132,882
| 0
|
python,windows
|
Depending on what the script is doing, you may:
package it into a service, that should then be installed
add it to the windows registry (HKCU\Software\Microsoft\Windows\CurrentVersion\Run)
add a shortcut to it to the startup folder of start menu - its location may change with OS version, but installers always have some instruction to put a shortcut into that folder
use windows' task scheduler, and then you can set the task on several kind of events, including logon and on startup.
The actual solution depends on your needs, and what the script is actually doing.
Some notes on the differences:
Solution #1 starts the script with the computer, while solution #2 and #3 start it when the user who installed it logs in.
It is also worth to note that #1 always start the script, while #2 and #3 will start the script only on a specific user (I think that if you use the default user then it will start on everyone, but I am not sure of the details).
Solution #2 is a bit more "hidden" to the user, while solution #3 leaves much more control to the user in terms of disabling the automatic start.
Finally, solution #1 requires administrative rights, while the other two may be done by any user.
Solution #4 is something I discovered lately, and is very straightforward. The only problem I have noticed is that the python script will cause a small command window to appear.
As you can see, it all boils down to what you want to do; for instance, if it is something for your purposes only, I would simply drag it into startup folder.
In any case, lately I am leaning on solution #4, as the quickest and most straightforward approach.
| 0
| 1
| 0
| 0
|
2010-12-14T10:12:00.000
| 11
| 1
| false
| 4,438,020
| 1
| 0
| 0
| 4
|
I have a python file and I am running the file.
If Windows is shutdown and booted up again, how I can run that file every time Windows starts?
|
How to install curl on python 3.x in Windows 7?
| 4,439,344
| 1
| 0
| 907
| 0
|
windows-7,curl,python-3.x
|
There is no module called "curl", so it's unclear what you mean?
PycURL?
friendly_curl?
pylibcurl?
curlwrapper?
pyparallelcurl?
In any case, as far as I can gather, none of them are ported to Python 3, so the answer on how to install them on Python 3 is: Talk to the authors and help port them! It's fun!
| 0
| 1
| 0
| 0
|
2010-12-14T11:29:00.000
| 1
| 0.197375
| false
| 4,438,644
| 1
| 0
| 0
| 1
|
Can you please tell me how to install curl for Python 3.x on Windows 7? easy_install only seems to offer versions for Python 2.
|
Executing a python script from inittab not as root
| 4,456,065
| 1
| 1
| 1,378
| 0
|
python,root
|
You could copy the python binary to python-suid,
chown it to the user you want to run scripts as, and chmod u+s python-suid.
Then in the script use #!/usr/bin/python-suid
| 0
| 1
| 0
| 1
|
2010-12-15T23:21:00.000
| 2
| 0.099668
| false
| 4,455,973
| 0
| 0
| 0
| 2
|
I have a python script which I would like to launch from inittab, shown below
s1:respawn:/home/a_user/app/script.py
I believe inittab executes as root, so a_user's environment is not available.
The script needs to know "a_user" home directory for ini file settings and log file storage. I would like to avoid hard coding these paths in my script. Is it possible to execute this script as a_user and not a root? If this is possible would a_user HOME environment variable be available?
Regards
|
Executing a python script from inittab not as root
| 4,456,006
| 1
| 1
| 1,378
| 0
|
python,root
|
Use runuser (or the distro's equivalent) to run it as a different user. runuser does change $HOME, but other similar commands may not.
| 0
| 1
| 0
| 1
|
2010-12-15T23:21:00.000
| 2
| 0.099668
| false
| 4,455,973
| 0
| 0
| 0
| 2
|
I have a python script which I would like to launch from inittab, shown below
s1:respawn:/home/a_user/app/script.py
I believe inittab executes as root, so a_user's environment is not available.
The script needs to know "a_user" home directory for ini file settings and log file storage. I would like to avoid hard coding these paths in my script. Is it possible to execute this script as a_user and not a root? If this is possible would a_user HOME environment variable be available?
Regards
|
Twisted - should this code be run in separate threads
| 4,464,644
| 2
| 1
| 648
| 0
|
python,twisted
|
No, you shouldn't use threads. You can't call LoopingCall from a thread (unless you use reactor.callFromThread), but it wouldn't help you make your code faster.
If you notice a performance problem, you may want to profile your workload, figure out where the CPU-intensive work is, and then put that work into multiple processes, spawned with spawnProcess. You really can't skip the step where you figure out where the expensive work is, though: there's no magic pixie dust you can sprinkle on your Twisted application that will make it faster. If you choose a part of your code which isn't very intensive and doesn't require blocking resources like CPU or disk, then you will discover that the overhead of moving work to a different process may outweigh any benefit of having it there.
| 0
| 1
| 0
| 0
|
2010-12-16T15:56:00.000
| 3
| 0.132549
| false
| 4,462,678
| 1
| 0
| 0
| 2
|
I am running some code that has X workers, each worker pulling tasks from a queue every second. For this I use twisted's task.LoopingCall() function. Each worker fulfills its request (scrape some data) and then pushes the response back to another queue. All this is done in the reactor thread since I am not deferring this to any other thread.
I am wondering whether I should run all these jobs in separate threads or leave them as they are. And if so, is there a problem if I call task.LoopingCall every second from each thread ?
|
Twisted - should this code be run in separate threads
| 4,462,745
| 1
| 1
| 648
| 0
|
python,twisted
|
You shouldn't use threads for that. Doing it all in the reactor thread is ok. If your scraping uses twisted.web.client to do the network access, it shouldn't block, so you will go as fast as it gets.
| 0
| 1
| 0
| 0
|
2010-12-16T15:56:00.000
| 3
| 0.066568
| false
| 4,462,678
| 1
| 0
| 0
| 2
|
I am running some code that has X workers, each worker pulling tasks from a queue every second. For this I use twisted's task.LoopingCall() function. Each worker fulfills its request (scrape some data) and then pushes the response back to another queue. All this is done in the reactor thread since I am not deferring this to any other thread.
I am wondering whether I should run all these jobs in separate threads or leave them as they are. And if so, is there a problem if I call task.LoopingCall every second from each thread ?
|
Is it possible to download packages for Win32 in zip format?
| 4,465,491
| 0
| 1
| 309
| 0
|
python,packages
|
If it's a "self-extracting" zip file you can just change the .exe extension to .zip and then unzip it with any standard zip file handling utility...assuming you can at least download .exe files. If you can't, you might be able to rename them during the download process (i.e. via a "Save As" dialog).
| 0
| 1
| 0
| 0
|
2010-12-16T20:01:00.000
| 4
| 0
| false
| 4,464,853
| 1
| 0
| 0
| 4
|
Everything seems to be only available in self-extracting .exe. My company blocks executable files from being downloaded.
|
Is it possible to download packages for Win32 in zip format?
| 4,464,917
| 1
| 1
| 309
| 0
|
python,packages
|
You could try to "easy-install" the package
| 0
| 1
| 0
| 0
|
2010-12-16T20:01:00.000
| 4
| 0.049958
| false
| 4,464,853
| 1
| 0
| 0
| 4
|
Everything seems to be only available in self-extracting .exe. My company blocks executable files from being downloaded.
|
Is it possible to download packages for Win32 in zip format?
| 4,465,455
| 0
| 1
| 309
| 0
|
python,packages
|
The self-extracting exe is only necessary if the package contains C-code that needs to be compiled, and you don't have a compiler. Otherwise you can use the source package, which often is a tgz.
| 0
| 1
| 0
| 0
|
2010-12-16T20:01:00.000
| 4
| 0
| false
| 4,464,853
| 1
| 0
| 0
| 4
|
Everything seems to be only available in self-extracting .exe. My company blocks executable files from being downloaded.
|
Is it possible to download packages for Win32 in zip format?
| 4,464,895
| 1
| 1
| 309
| 0
|
python,packages
|
Pretty close. Download the source - a .tar.gz archive, so you need something beyond window's built-in zip handling to unpack it - and run python setup.py install.
| 0
| 1
| 0
| 0
|
2010-12-16T20:01:00.000
| 4
| 0.049958
| false
| 4,464,853
| 1
| 0
| 0
| 4
|
Everything seems to be only available in self-extracting .exe. My company blocks executable files from being downloaded.
|
evaluating buffer in emacs python-mode on remote host
| 4,577,034
| 2
| 5
| 2,594
| 0
|
python,emacs
|
Short answer: not without writing some missing elisp code.
Long version: In python.el, run-python adds data-directory (which on my Ubuntu 10.10 box is /usr/share/emacs/23.1/etc/ ) to $PYTHONPATH, specifically so that it can find emacs.py (as supplied by the local emacs distribution.) Then it does a (python-send-string "import emacs") and expects it to work...
It looks like the defadvice wrappers that tramp uses don't actually pass PYTHONPATH, so this doesn't work even if you have the matching emacs version on the remote system.
If you M-x customize-variable RET tramp-remote-process-environment RET
then hit one of the INS buttons and add PYTHONPATH=/usr/share/emacs/23.1/etc then hit STATE and set it to "current session" (just to test it, or "save for future sessions" if it works for you) it almost works - the complaint goes away, in any case, because the remote python can now find the remote emacs.py. If you now go back to the original question, doing python-send-buffer, you just run into a different error: No such file or directory: '/tmp/py24574XdA' because python-mode just stuffs the content into a temporary file and tells the python subprocess to load that.
You'd have to change python-send-region (the other functions call it) and particularly the way it uses make-temp-file to be tramp-aware - there's even a tramp-make-tramp-temp-file you could probably build upon. (Be sure to post it if you do...)
| 0
| 1
| 0
| 1
|
2010-12-16T21:39:00.000
| 2
| 1.2
| true
| 4,465,615
| 0
| 0
| 0
| 1
|
I'm using emacs23 with tramp to modify python scripts on a remote host.
I found that when I start the python shell within emacs it starts up
python on the remote host.
My problem is that when I then try to call python-send-buffer via C-c C-c it comes up with the error
Traceback (most recent call last):
File "", line 1, in ?
ImportError: No module named emacs
Traceback (most recent call last):
File "", line 1, in ?
NameError: name 'emacs' is not defined
Now, I must admit that I don't really know what's going on here. Is there a way for me to configure emacs so that I can evaluate the buffer on the remote host?
Many thanks.
Edit: I've followed eichin's advice and re-implemented python-send-region. See my answer below.
|
Is there a way to exclude a specific file from epydoc generation?
| 35,426,350
| 0
| 1
| 630
| 0
|
python,epydoc
|
You can also specify the excluded module in the config file, like:
exclude: my_module.my_class
| 0
| 1
| 0
| 0
|
2010-12-16T21:52:00.000
| 2
| 0
| false
| 4,465,721
| 1
| 0
| 0
| 2
|
I'm generating an epydoc for a library of code, and there are a few testing files scattered throughout that I'd like to not include. I could use the --exclude generation option and rename the files, but I'm wondering if there's anything I can add to the files themselves that will be interpreted by epydoc as a command not to include/parse that file.
|
Is there a way to exclude a specific file from epydoc generation?
| 4,696,850
| 3
| 1
| 630
| 0
|
python,epydoc
|
It would seem the answer to this question is no. If you want to exclude an element, the only option is to use
--exclude=PATTERN
which excludes modules whose dotted name matches the regular expression PATTERN
| 0
| 1
| 0
| 0
|
2010-12-16T21:52:00.000
| 2
| 1.2
| true
| 4,465,721
| 1
| 0
| 0
| 2
|
I'm generating an epydoc for a library of code, and there are a few testing files scattered throughout that I'd like to not include. I could use the --exclude generation option and rename the files, but I'm wondering if there's anything I can add to the files themselves that will be interpreted by epydoc as a command not to include/parse that file.
|
Which is better? Using inbuilt python functions or os.system commands?
| 4,470,327
| 11
| 2
| 211
| 0
|
python
|
The inbuilt Python modules/stdlib wherever you can; subprocess (os.system) where you must.
Reasons: Portability, maintenance, code readability just to name a few.
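A minimal stdlib sketch of the two operations from the question (the file names are made up for illustration):

```python
import os
import shutil
import zipfile

# Create a file to archive
src = "/tmp/zip_demo_src.txt"
with open(src, "w") as fh:
    fh.write("some data\n")

# Create the zip archive with the zipfile module
archive = "/tmp/zip_demo.zip"
with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write(src, arcname=os.path.basename(src))

# Copy it to a new location; shutil.move() would rename/relocate it instead
dest = "/tmp/zip_demo_copy.zip"
shutil.copy(archive, dest)
print(os.path.exists(dest))
```

This runs identically on Windows, Linux, and OS X, which is the portability argument in practice.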
| 0
| 1
| 0
| 1
|
2010-12-17T12:08:00.000
| 5
| 1
| false
| 4,470,302
| 1
| 0
| 0
| 4
|
Which is better to use in a Python automation script for the following simple operation:
creating a zip file and copying or renaming it to a new location?
Are Python's built-in functions better, or terminal commands through the os.system module?
|
Which is better? Using inbuilt python functions or os.system commands?
| 4,470,324
| 2
| 2
| 211
| 0
|
python
|
I would say python ones since it'll make the script portable. But sometimes, performance and availability are of concerns.
| 0
| 1
| 0
| 1
|
2010-12-17T12:08:00.000
| 5
| 0.07983
| false
| 4,470,302
| 1
| 0
| 0
| 4
|
Which is better to use in a Python automation script for the following simple operation:
creating a zip file and copying or renaming it to a new location?
Are Python's built-in functions better, or terminal commands through the os.system module?
|
Which is better? Using inbuilt python functions or os.system commands?
| 4,470,331
| 2
| 2
| 211
| 0
|
python
|
In general I'd say use the python libraries where possible - that way it'll be more portable, e.g. you won't need to worry about different commands or command options on various systems, and also if you need to change anything it's easier to do just python code.
| 0
| 1
| 0
| 1
|
2010-12-17T12:08:00.000
| 5
| 0.07983
| false
| 4,470,302
| 1
| 0
| 0
| 4
|
Which is better to use in a Python automation script for the following simple operation:
creating a zip file and copying or renaming it to a new location?
Are Python's built-in functions better, or terminal commands through the os.system module?
|
Which is better? Using inbuilt python functions or os.system commands?
| 4,470,576
| 0
| 2
| 211
| 0
|
python
|
Using Python's internal functions is nice, especially in terms of portability.
But at some point you may be confused by the lack of "os.kill" on Windows in Python older than 2.7, or surprised by the way os.popen works, and then you will discover win32pipe, etc.
Personally, I would suggest always doing a little research first (do you need daemons, etc.) and then deciding. If you don't need the Windows platform, using Python's internals could be more efficient.
| 0
| 1
| 0
| 1
|
2010-12-17T12:08:00.000
| 5
| 0
| false
| 4,470,302
| 1
| 0
| 0
| 4
|
Which is better to use in a Python automation script for the following simple operation:
creating a zip file and copying or renaming it to a new location?
Are Python's built-in functions better, or terminal commands through the os.system module?
|
How can I accurately gauge the CPU time of a python process?
| 4,483,416
| 1
| 4
| 334
| 0
|
python,timing
|
The quick answer, at least for linux, is to use getrusage along with a kernel that has a higher resolution timer.
The reason my initial tests gave the terrible precision of 10ms was because apparently 64-bit ubuntu is configured to a 100hz timer by default.
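For reference, a minimal sketch of reading the counters from Python (Unix-only; the reported resolution still depends on the kernel timer frequency, e.g. a 100 Hz tick gives 10 ms steps):

```python
import resource

# CPU time consumed by this process so far, split into user and system time
usage = resource.getrusage(resource.RUSAGE_SELF)
print("user: %f  system: %f" % (usage.ru_utime, usage.ru_stime))
```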
| 0
| 1
| 0
| 1
|
2010-12-19T13:44:00.000
| 1
| 1.2
| true
| 4,483,182
| 0
| 0
| 0
| 1
|
I'm designing a distributed system where a master node starts a bunch of worker nodes on remote machines. Since I am using Python and want to take advantage of the fact each physical machine has several cores, I want to run multiple worker nodes per machine (GIL etc). Additionally, each worker node may vary quite a bit in the amount of CPU it requires each "cycle". I can however split the worker nodes into quite a few pieces and my initial strategy will be to spawn many more worker nodes than there are cores per machine. The reasoning being that if a few nodes require more CPU, they can occupy a core for a longer duration. (If each node was already CPU bound, it could not suddenly require more CPU.)
This leads me to a question: How can I accurately gauge the CPU time of a python process?
I cannot measure the time naively, I need the time actually spent specifically for a given process. That is, for each process I want a number X, which, as accurately as possible, represents the amount of CPU resources spent exclusively on that process, regardless of unrelated processes. (I have been looking at Python's getrusage but it appears to give only 2 decimal points of precision on ubuntu, which is insufficient. EDIT: This also happens if I use getrusage() directly in C; at most 0.01 second precision. Close, but no cigar)
My specific use-case would be to measure the CPU time of each node cycle, from Start to End, where End happens about 0-30ms after Start.
The best answer would be a portable way to do this in Python. Methods that requires using C extension is fine.
|
Running a Python Script using Cron?
| 4,486,483
| 8
| 6
| 4,736
| 0
|
python,linux,cron,ubuntu-10.04
|
From cron you should be running the script as script_name.py, and your script must meet the following criteria:
The executable bit is set.
The script's hash-bang (shebang) line is set correctly, e.g. #!/usr/bin/env python
It is accessible from the PATH,
e.g. placed in /usr/local/bin/ or /opt/local/bin/ (provided those directories are on your system PATH).
If these conditions are met, you should be able to run it from anywhere on your local system as script_name.py
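For illustration, a minimal script meeting those criteria might look like this (the file name and printed text are placeholders):

```python
#!/usr/bin/env python
# script_name.py
#
# The hash-bang line above tells the kernel which interpreter to use,
# so that (after `chmod +x script_name.py`) cron and the shell can run
# the file directly, without a leading `python`.

def main():
    return "hello from cron"

if __name__ == "__main__":
    print(main())
```

The crontab entry to run it every minute would then be something like `* * * * * /usr/local/bin/script_name.py` (the /usr/local/bin path is assumed from the criteria above).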
| 0
| 1
| 0
| 1
|
2010-12-20T02:37:00.000
| 1
| 1.2
| true
| 4,486,472
| 0
| 0
| 0
| 1
|
I have a python script that I'd like to add to cron.
The script has +x permission on it.
How shall I add it to crontab? (say, I want it to run every minute).
Important: when I navigate (using the shell) to the script's folder, I cannot run it using "./script_name.py"; it doesn't work. Yet, when I run it using "Python script_name.py", everything works.
|
Trying to learn programming; I suck at command prompt
| 4,486,505
| 3
| 1
| 330
| 0
|
python,command-line
|
You have basically two possibilities:
Use the full path to the Python interpreter when starting your program, e.g. c:\Python27\python.exe HelloWorld.py
Add the directory of the Python interpreter to your PATH environment variable. I can't tell you how to do this in Windows 7, though.
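As a quick sketch, you can ask Python itself which directory you would need to add (the typical `C:\Python27` location is an assumption based on the default installer path):

```python
import os
import sys

# Directory containing the interpreter binary, e.g. C:\Python27 on Windows
python_dir = os.path.dirname(sys.executable)

# Is that directory already on the PATH this process sees?
on_path = python_dir in os.environ.get("PATH", "").split(os.pathsep)

print(python_dir, on_path)
```

On Windows 7 you would then add that directory to PATH via Control Panel, System, Advanced system settings, Environment Variables, or temporarily for one cmd session with `set PATH=%PATH%;C:\Python27`.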
| 0
| 1
| 0
| 0
|
2010-12-20T02:41:00.000
| 3
| 0.197375
| false
| 4,486,481
| 1
| 0
| 0
| 1
|
I know this is really basic but it is frustrating. I'm using Python 2 to try and create my first program (in Windows 7 64bit). Every guide I look at says to install Python2, then go to the command prompt and type python. But... I get an error, because python.exe is not in that directory. So I change to the "python27" directory, and it runs fine. But then when I want to run a program, I type python HelloWorld, but of course that doesn't work. I need to be in the directory that has both the python.exe, and the directory that has my program file.
Surely everyone does not have all of their programs in the python install directory; what am I missing?
|
How Google App Engine limit Python?
| 4,497,237
| 0
| 0
| 269
| 0
|
python,hosting,shared-hosting
|
IMO it's not a standard Python, but a version specifically patched for App Engine. In other words you can think of it, more or less, as a "higher level" VM that is not emulating x86 instructions but Python opcodes (if you don't know what these are, try writing a small function named "foo" and then doing "import dis; dis.dis(foo)"; you will see the Python opcodes that the compiler produced).
By patching Python you can impose on it whatever limitations you like. Of course you then have to forbid user-supplied C/C++ extension modules, as a C/C++ module would have access to everything the process can access.
Using such a virtual environment, they are able to run Python code safely without the need for a separate x86 VM for every instance.
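For example, the dis experiment mentioned above looks like this (the function foo is arbitrary):

```python
import dis

def foo(x):
    return x + 1

# Print the bytecode (opcodes) the compiler produced for foo
dis.dis(foo)

# Or collect the opcode names programmatically (dis.Bytecode, Python 3.4+)
opnames = [ins.opname for ins in dis.Bytecode(foo)]
```

These opcodes are what a patched interpreter executes, which is why patching the interpreter is enough to restrict what any pure-Python program can do.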
| 0
| 1
| 0
| 0
|
2010-12-21T06:56:00.000
| 4
| 0
| false
| 4,496,914
| 0
| 0
| 0
| 2
|
Does anybody know, how GAE limit Python interpreter? For example, how they block IO operations, or URL operations.
Shared hosting also do it in some way?
|
How Google App Engine limit Python?
| 4,496,983
| 1
| 0
| 269
| 0
|
python,hosting,shared-hosting
|
The sandbox "internally works" by them having a special version of the Python interpreter. You aren't running the standard Python executable, but one specially modified to run on Google App Engine.
Update:
And no it's not a virtual machine in the ordinary sense. Each application does not have a complete virtual PC. There may be some virtualization going on, but Google isn't saying exactly how much or what.
A process in an operating system normally already has limited access to the rest of the OS and the hardware. Google has limited this even more: you get an environment where you are only allowed to read very specific parts of the file system (and not write to it at all), you are not allowed to open sockets, not allowed to make system calls, etc.
I don't know at which level OS/Filesystem/Interpreter each limitation is implemented, though.
| 0
| 1
| 0
| 0
|
2010-12-21T06:56:00.000
| 4
| 1.2
| true
| 4,496,914
| 0
| 0
| 0
| 2
|
Does anybody know, how GAE limit Python interpreter? For example, how they block IO operations, or URL operations.
Shared hosting also do it in some way?
|
Check if a file is available (not used by another process) in Python
| 4,499,110
| 0
| 0
| 3,773
| 0
|
python,file
|
If another process has the file open and you delete it, it will not be removed from the filesystem until all processes close their handles to it.
If the other process merely needs its existing file handle/object to keep working, you may safely delete the file; it will be removed once that process closes its handle. If instead you need to know that no other process is still using the file before you delete it, both processes need to use locks (see fcntl.flock()).
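A sketch of the cooperative-locking approach on Unix (the function name is mine; note these locks are advisory, so they only guard against processes that also call flock() on the same file):

```python
import fcntl
import os

def delete_if_unlocked(path):
    """Delete path only if no cooperating process holds an flock() on it."""
    fd = os.open(path, os.O_RDONLY)
    try:
        try:
            # Non-blocking exclusive lock: fails immediately if held elsewhere
            fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            return False  # another process holds the lock; try again later
        os.unlink(path)
        return True
    finally:
        os.close(fd)
```

Your cyclic check then just calls this until it returns True.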
| 0
| 1
| 0
| 0
|
2010-12-21T10:32:00.000
| 3
| 0
| false
| 4,498,330
| 1
| 0
| 0
| 1
|
I'm looking for some code to check if a file in the file system is available (not used by another process). How could I do it in Python? Thanks!
How I'll use it: cylically check if file is available and when it is (processes don't need it anymore) delete it.
|
fastest way to invoke a process from python?
| 4,502,661
| 4
| 3
| 2,379
| 0
|
python,linux,subprocess
|
I would expect any speed difference between, say, os.system and os.execv and subprocess.Popen, to be swamped by the expense of starting a new process (and the context-switching needed to actually run it). Therefore I recommend using subprocess first and measuring the performance.
One possible performance consideration: os.system and subprocess.Popen(shell=True, ...) cause an extra shell process to be created. In most cases, that shell isn't necessary. It's wasteful to create it; you get twice as many processes as you need for no benefit.
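A sketch of reading a child process's output lines without that intermediate shell (the `echo` command here is just a stand-in for the real executable):

```python
import subprocess

# Pass an argument list; shell=False (the default) skips spawning /bin/sh
proc = subprocess.Popen(["echo", "hello"],
                        stdout=subprocess.PIPE, text=True)
lines = [line.rstrip("\n") for line in proc.stdout]
proc.wait()
```

Iterating over `proc.stdout` streams the lines as they arrive, which also avoids buffering the whole output in memory.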
| 0
| 1
| 0
| 1
|
2010-12-21T18:11:00.000
| 1
| 1.2
| true
| 4,502,469
| 0
| 0
| 0
| 1
|
What is the fastest/most efficient method to execute an executable from python? It seems to me that os.system is faster than subprocess.popen. I would like to be able to read the lines that come out of this other process, but far more important than anything else is speed.
|
How to specify a condition on which to complete installation with python setuptools
| 4,703,614
| 0
| 1
| 203
| 0
|
python,setuptools,external-dependencies
|
Depending on what you need wget for, have you considered writing your own wget equivalent in Python, using the urllib or urllib2 modules? Then you'll have something that is both cross-platform and under your control, and you'll break your dependency on wget. Look at the urlretrieve function.
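For example, a rough sketch (using the Python 3 spelling, urllib.request.urlretrieve; in Python 2 the same function lives in plain urllib; the wrapper name is mine):

```python
import urllib.request

def fetch(url, filename):
    """Download url to filename -- roughly what `wget url -O filename` does."""
    path, headers = urllib.request.urlretrieve(url, filename)
    return path
```

With this in place, the wget check (and the setuptools condition) disappears entirely.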
| 0
| 1
| 0
| 0
|
2010-12-22T18:36:00.000
| 2
| 0
| false
| 4,512,445
| 1
| 0
| 0
| 1
|
I'm trying to make a distributable egg with setuptools and my program depends on wget being present, which obviously isn't available in PyPi. I have a little script which checks for the presence of wget, asking the user to install it and returning -1 if it isn't installed, or returning 0 if it is installed.
I'd like to complete the installation of my program only if my wget checking script returns 0. How can I do this with setuptools?
|
Where should settings files be stored?
| 4,516,525
| 0
| 4
| 366
| 0
|
python,wxpython,application-settings
|
On Linux, there's really not a standard way. A lot of programs, especially newer Python programs I've seen, use ~/.config/appname/. Of course the older ones like bash, vi, etc. just add a hidden file in ~/. It depends: what kind of settings are these?
Those are obviously user-run programs. System programs generally store their config files somewhere in /etc/
Edit:
~/.config/appname/ seems to be more standard than I thought.
~ $ ll config
total 84K
drwxr-xr-x 2 falmarri 4.0K 2010-12-17 09:48 akonadi/
drwxr-xr-x 2 falmarri 4.0K 2010-12-04 15:48 autokey/
drwxr-xr-x 2 falmarri 4.0K 2010-11-06 01:45 autostart/
drwx------ 2 falmarri 4.0K 2010-11-23 22:32 enchant/
drwxr-xr-x 2 falmarri 4.0K 2010-11-25 21:13 FreeCAD/
drwx------ 2 falmarri 4.0K 2010-12-21 09:16 gtk-2.0/
drwx------ 3 falmarri 4.0K 2010-12-11 13:43 ibus/
drwxr-xr-x 2 falmarri 4.0K 2010-11-06 02:20 kde.org/
drwxr-xr-x 2 falmarri 4.0K 2010-11-06 02:46 qtcurve/
drwxr-xr-x 2 falmarri 4.0K 2010-11-17 13:49 Trolltech/
drwxr-xr-x 2 falmarri 4.0K 2010-11-17 23:29 vlc/
-rw-r--r-- 1 falmarri 31K 2010-12-21 20:51 Trolltech.conf
-rw------- 1 falmarri 632 2010-11-06 01:40 user-dirs.dirs
-rw-r--r-- 1 falmarri 5 2010-11-06 01:40 user-dirs.locale
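A minimal sketch of a platform-agnostic helper along these lines (the per-platform conventions are the usual ones; a dedicated library such as appdirs covers the corner cases more carefully, and the function name here is mine):

```python
import os
import sys

def config_dir(appname):
    """Return a per-user settings directory for appname."""
    if sys.platform.startswith("win"):
        # Windows: roaming application data
        base = os.environ.get("APPDATA", os.path.expanduser("~"))
    elif sys.platform == "darwin":
        # macOS convention
        base = os.path.expanduser("~/Library/Application Support")
    else:
        # Linux / BSD: XDG Base Directory convention
        base = os.environ.get("XDG_CONFIG_HOME",
                              os.path.expanduser("~/.config"))
    return os.path.join(base, appname)
```

Call `config_dir("myapp")` once at startup and create the directory if it doesn't exist.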
| 0
| 1
| 0
| 0
|
2010-12-23T06:42:00.000
| 3
| 0
| false
| 4,516,459
| 0
| 0
| 0
| 2
|
I'm writing an application in python (using wxPython for the gui) and I'm looking for a platform independent way to decide where to store application settings files. On linux systems, where is it customary to store application settings files? How about on Mac, and Windows (all modern versions)?
Ideally I'd like to have a module that provides a platform agnostic interface to locate these files. Does something like this already exist?
|