Dataset columns (name: type, range):
- Title: string, lengths 15-150
- A_Id: int64, 2.98k-72.4M
- Users Score: int64, -17-470
- Q_Score: int64, 0-5.69k
- ViewCount: int64, 18-4.06M
- Database and SQL: int64, 0-1
- Tags: string, lengths 6-105
- Answer: string, lengths 11-6.38k
- GUI and Desktop Applications: int64, 0-1
- System Administration and DevOps: int64, 1-1
- Networking and APIs: int64, 0-1
- Other: int64, 0-1
- CreationDate: string, lengths 23-23
- AnswerCount: int64, 1-64
- Score: float64, -1-1.2
- is_accepted: bool, 2 classes
- Q_Id: int64, 1.85k-44.1M
- Python Basics and Environment: int64, 0-1
- Data Science and Machine Learning: int64, 0-1
- Web Development: int64, 0-1
- Available Count: int64, 1-17
- Question: string, lengths 41-29k
Access Arguments to Parent Task from Subtask in Celery
71,510,516
0
2
960
0
python,celery
I think you can, at least in a chord. When you set bind=True on your task, you can access self.request. In self.request.chord you will find a detailed dict; in its kwargs or options['chord'] you will find what you're looking for, though it's not an elegant solution. Also, if the parent has been replaced, you will only be able to see the final state.
0
1
0
0
2013-02-17T16:59:00.000
2
0
false
14,923,583
0
0
0
1
Is it possible to access the arguments with which a parent task A was called, from its child task Z? Put differently, when Task Z gets called in a chain, can it somehow access an argument V that was invoked when Task A was fired, but that was not passed through any intermediary nodes between tasks A and Z? And if so, how? Using Celery 3.0 with RabbitMQ for results backend.
Is there a Python Memcached library with support for AWS ElastiCache's auto-discovery feature?
14,924,764
1
2
2,949
0
python,memcached,amazon-elasticache
As far as I know, an ElastiCache cluster is just a bunch of memcached servers, so you need to give your memcached client the list of all of your servers and have the client do the relevant load balancing. For Python, you have a couple of options: pylibmc, a wrapper around libmemcached and one of the best and fastest memcached clients there is; and python-memcached, a native Python client that is very basic but easy to install and use. Unfortunately, no Python client has been provided yet that deals with the new auto-discovery feature.
0
1
0
0
2013-02-17T18:35:00.000
3
1.2
true
14,924,586
0
0
1
1
Recently, AWS announced ElastiCache's auto-discovery feature, although they only officially released a client for Java. Does anyone know of a Python Memcached library with support for this feature?
Cluster job scheduler: tools
42,817,839
0
3
1,313
0
python,cluster-computing,scheduler
Take a look at the ipcluster_tools. The documentation is sparse but it is easy to use.
0
1
0
0
2013-02-18T16:54:00.000
2
0
false
14,941,334
0
0
0
1
We are trying to solve a problem related to cluster job scheduling. We have a set of Python scripts that are executed in a cluster; the launching process is currently done through human interaction, meaning that to start a test we have a bash script that interacts with the cluster to request the resources needed for the execution. What we intend to build is an automatic launching process (which should be sound, in the sense that it tracks the job status and, based on that, waits for the job to end, restarts the execution, etc.). Basically we have to implement a layer between the user workstation and the cluster. An additional difficulty is that our layer must be clever enough to interact with different cluster job schedulers. We wonder whether there exists a tool or framework that would help us interact with the cluster without having to deal with each scheduler's details. We have searched the web but did not find anything suitable for our needs. By the way, the programming language we use is Python. Thanks in advance! Br.-
Will os.fork() use copy on write or do a full copy of the parent-process in Python?
14,942,111
24
9
3,526
0
python,linux,fork
Even if COW is employed, CPython uses reference counting and stores the reference count in each object's header. So unless you avoid doing anything at all with that data, you'll quickly have spurious writes to the memory in question, which will force the system to copy the data. Pass it to a function? That's another reference, an INCREF, a write to the COW'd memory. Store it in a variable or object attribute? Same. Even just look up a method on it? Ditto. Some builtin data structures allocate the bulk of their data separately from the object (e.g., most collections) for various reasons. If these end up on a different page -- or whatever granularity COW works on -- you may get lucky with those. However, an object referenced from such a collection is not exempt -- using it manipulates its refcount just the same. In addition, a bit of data will be shared because there are no writes to it by design (e.g., the native CPython code), and some objects your fork'd process does not touch may be shared (I'm honestly not sure; I think the cycle GC does not write to the object). But Python objects used by Python code are virtually guaranteed to get written to. Similar reasoning applies to PyPy, Jython, IronPython, etc. (only that they fiddle with bits in the object header instead of doing reference counting), though I can't vouch for all possible configurations.
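The load-then-fork pattern the question asks about can be sketched as follows (POSIX-only, since os.fork() does not exist on Windows). The data built before fork() is visible in the child with no explicit copy, though, per the refcounting caveat above, even reading it dirties the copy-on-write pages it touches:

```python
import os

big = list(range(100_000))  # build the large structure once, pre-fork

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: `big` is visible without any explicit copy. Note that even
    # this read-only use bumps refcounts, dirtying the COW'd pages.
    os.close(r)
    os.write(w, str(sum(big)).encode())
    os._exit(0)

# Parent: collect the child's result over the pipe and reap it.
os.close(w)
result = int(os.read(r, 64))
os.close(r)
os.waitpid(pid, 0)
print(result)
```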
0
1
0
0
2013-02-18T17:15:00.000
2
1.2
true
14,941,729
1
0
0
1
I would like to load a rather large data structure into a process and then fork in the hope to reduce total memory consumption. Will os.fork work that way or copy all of the parent process in Linux (RHEL)?
How to disable Google App Engine python SDK import hook?
14,950,038
0
1
158
0
python,google-app-engine
Easiest thing is to modify google/appengine/tools/dev_appserver_import_hook.py and add the module you want to the whitelist. This will allow you to import whatever you want. Now there's a good reason that the imports are restricted in the development server. The restricted imports match what's available on the production environment. So if you add libraries to the whitelist, your code may run on your local development server, but it will not run on the production environment. And no, you can't import restricted modules on production.
0
1
0
0
2013-02-19T00:35:00.000
1
0
false
14,947,860
0
0
1
1
I am playing around with local deployment of GAE python SDK. The code that I am trying to run contains many external libraries which are not part of GAE import whitelist. I want to disable the import restrictions and let GAE app import any locally installed module. After walking through the code, I figured out that they use custom import hooks for restricting imports. However, I have not been able to figure out how to disable the overridden import hook. Let me know if you have any idea how this can be accomplished.
What is difference between os.getuid() and os.geteuid()?
14,950,419
6
40
27,249
0
python,uid
os.getuid() returns the ID of the user who runs your program, while os.geteuid() returns the ID of the user whose permissions your program runs with. In most cases these will be the same. The well-known case in which they differ is when the setuid bit is set on your program's executable file and the user running the program is different from the user who owns the executable. In that case os.getuid() returns the ID of the user who ran the program, while os.geteuid() returns the ID of the user who owns the executable.
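A quick illustration (POSIX only; in a normal, non-setuid run the two values match):

```python
import os

ruid = os.getuid()   # real UID: the user who launched the process
euid = os.geteuid()  # effective UID: the user whose permissions apply
print(ruid, euid)

# For a root check, the effective UID is the one the kernel actually
# consults for permission decisions:
is_root = (euid == 0)
```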
0
1
0
0
2013-02-19T05:21:00.000
2
1
false
14,950,378
0
0
0
1
The documentation for os.getuid() says: Return the current process’s user id. And that of os.geteuid() says: Return the current process’s effective user id. So what is the difference between the user id and the effective user id? For me both work the same (on both 2.x and 3.x). I am using it to check whether the script is being run as root.
need assisstance to set python2.7 as default & Django to install
14,955,895
1
0
143
0
linux,django,python-2.7,centos
Your CentOS relies on Python 2.4, so that's not going to work. You should probably create a new system user and install Python 2.7 in its home directory (or use your root user and install Python in /opt for global usage); you can find plenty of tutorials on Google. After successfully doing so, you can set an alias in your user's bash profile to define which Python version to use. It's also common practice to create a virtualenv for each project and/or user.
0
1
0
0
2013-02-19T10:43:00.000
1
1.2
true
14,955,468
0
0
1
1
I am not able to install Django. I am using CentOS 5 and am not able to set the python2.7 environment variable. Previously python 2.4.3 was available on my system, but after installing Python 2.7, when I check the version in the terminal with "python -V" it reports Python 2.4.3, while "python2.7 -V" shows Python 2.7. Please help me with this: 1. I need to set python2.7 as the default version. 2. Help me with the installation of Django.
Running a script on EC2 start and stop
14,966,165
1
1
1,561
0
python,linux,amazon-web-services,amazon-ec2,amazon-rds
It is possible. You just have to write an init script and set up the proper symbolic links in the /etc/rc#.d directories. It will be started with a parameter, start or stop, depending on whether the machine is starting up or shutting down.
0
1
1
1
2013-02-19T16:26:00.000
1
1.2
true
14,962,414
0
0
1
1
I have an Amazon Ubuntu instance which I stop and start (not terminate). I was wondering if it is possible to run a script on start and stop of the server. Specifically, I am looking at writing a Python boto script to take my RDS volume offline when the EC2 server is not running. Can anyone tell me if this is possible please?
Retranslation audio stream with python on Google App Engine
14,964,896
1
1
193
0
python,google-app-engine,audio-streaming
You can't make long running external calls with App Engine. Maximum deadline (task queue and cron job handler) for UrlFetch is 10 minutes. So, I think it is not possible.
0
1
0
0
2013-02-19T18:25:00.000
1
1.2
true
14,964,717
0
0
1
1
I have a URL address for an audio stream; how can I retranslate (rebroadcast) it on the web under my own address (myapp.appspot.com)? Let me explain why I need this: I have a very narrow channel that will not stand many connections, so I have to do it with GAE. Thanks!
Anyway to provide progress from os.walk?
14,970,241
3
2
931
0
python
The walk itself can't give you progress, because there's no way of knowing in advance how many entries are under some directory tree.* However, in most programs that use walk, you're actually doing something with the files, which usually takes a whole lot longer than the implicit stat call. For example, grabbing my first program with os.walk in it, list(os.walk(path)) takes 2.301 seconds, while my actual function (despite only operating on a small percentage of those files) takes 139.104 seconds. And I think this kind of thing is pretty typical. So, you can first read in the entire walk (e.g., by using list(os.walk(path))), and then use that information to generate the progress for your real work. In a realistic program, you'd probably want to show an "indeterminate progress bar" with a label like "Determining size..." while doing the list(os.walk(path)), and then replace it with a percentage progress bar with "0/12345 files" once that's done. (In fact, I'm going to go add exactly that indeterminate progress bar to my program, now that I've thought of the idea…) (For a single-threaded interactive program, you obviously wouldn't want to just block on list(os.walk(path)); you might do it in a background thread with a callback to your main thread, or do one iteration of the walk object and runLater the rest each time through the event loop, etc.) * This isn't because no filesystem or OS ever could do such a thing, just because they don't. There would obviously be some tradeoffs—for example, creating and deleting lots of tiny files would be a lot slower if you had to walk up the whole tree updating counts. Classic Mac used to solve this problem by keeping a cached count in the Finder Info… which was great, except that it meant a call that could take either 1us or 1min to return, with no way of predicting which in advance (or interrupting it) programmatically.
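The two-pass idea above can be sketched like this (the helper name and the throwaway demo directory are my own, not a stdlib API):

```python
import os
import tempfile

def walk_with_progress(root):
    """Yield (done, total, path) for every file under root.

    First pass counts the files; second pass does the real walk,
    so progress can be reported against a known total.
    """
    total = sum(len(files) for _, _, files in os.walk(root))
    done = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            done += 1
            yield done, total, os.path.join(dirpath, name)

# Demo on a throwaway directory containing three files.
demo = tempfile.mkdtemp()
for i in range(3):
    open(os.path.join(demo, "file%d.txt" % i), "w").close()

for done, total, path in walk_with_progress(demo):
    print("%d/%d %s" % (done, total, path))
```

In a GUI you would show an indeterminate progress bar during the counting pass, then switch to a percentage bar for the second pass, as described above.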
0
1
0
0
2013-02-19T22:07:00.000
1
0.53705
false
14,968,441
1
0
0
1
for root, dirs, files in os.walk(rootDir, topdown='true'): is something regularly used in Python scripts. I'm just wondering: is there any well-known way to provide progress here? When you have a large folder structure, this API can take a while. Thanks.
Python shell access to a separate running script
14,970,626
1
0
196
0
python,shell,process
You can't generally access one Python interpreter from another. The most general way to do something like is to put an interpreter-on-a-socket (or -pipe or whatever) into your server program, and just connect your shell up to that interpreter. Doing this on top of the code module isn't hard, but to make it as nice as the normal interactive interpreter shell takes a bit more work. I believe IDLE and IPython both contain lots of useful source code, and possibly even something you can use out of the box, or with minimal changes. It's also possible to share data directly between two separate programs. For example, use multiprocessing.Value on top of mmap—or, more simply, just keep the data in a database file instead of in memory. Then your shell can just read the data without interacting directly with the server. However, this means having appropriate locks in place, or trying to write as atomically as possible and accepting that the shell will still occasionally get garbage because of races. But really, most of the time, if you can afford to dump the data by pickling/JSON/whatever, that's both the easiest and the safest solution.
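A minimal sketch of the interpreter idea using the stdlib code module, with the socket plumbing omitted and hypothetical data names: an InteractiveInterpreter whose namespace is the server's live objects lets commands typed into it read and mutate those objects directly.

```python
import code

# Stand-in for the server's live data (hypothetical names).
app_state = {"cache": {"a": 1}, "hits": 42}

# An interpreter whose namespace *is* the server's state: anything
# executed in it reads and mutates the real objects.
interp = code.InteractiveInterpreter(locals=app_state)
interp.runsource("hits += 1")
interp.runsource("cache['b'] = cache['a'] * 2")

print(app_state["hits"])   # mutated by the interpreter
print(app_state["cache"])
```

To turn this into a real remote shell you would feed runsource() lines read from a socket, which is essentially what the interpreter-on-a-socket approach described above amounts to.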
0
1
0
0
2013-02-19T23:59:00.000
1
0.197375
false
14,969,929
1
0
0
1
I have a Python application running on a web server via mod_wsgi, and I am able to access a Python shell through SSH on the server. Part of the application generates a dictionary and a small number of lists in memory over time while the application is running. Is there a way to start the Python shell on the server and access the dictionary and lists through it, or is the only option to program the application to pickle or JSON them and store them in a file periodically or on an event trigger? Even outside the web-server situation, is it possible for a Python shell to access an already running Python application?
Can the compiled opencl program be stored as a seperate binary file?
15,017,872
1
2
1,010
0
python,opencl,gpgpu,pyopencl,amd-processor
On NVIDIA the binary will be in the PTX format. First, obtain the binary sizes with clGetProgramInfo() using the flag CL_PROGRAM_BINARY_SIZES. Then retrieve the binaries with clGetProgramInfo() using the flag CL_PROGRAM_BINARIES and store them in a .ptx file. Later, call clCreateProgramWithBinary() with the .ptx file as input.
0
1
0
0
2013-02-20T15:32:00.000
2
0.099668
false
14,983,709
0
0
0
1
I have two Python scripts in separate files. The first one has an OpenCL program that performs some image processing on the image passed to it and returns the results. The second script reads an image from a file and calls the first script, passing the read image as a parameter, and obtains the returned results, which are used for further processing. Now, I have about 100 images in the folder, so the second script calls the first script 100 times, and each time the first script is called the OpenCL kernel is compiled, which is absolutely unnecessary as all the images have the same format and dimensions. Is there a way to compile the OpenCL kernel once, store it in a binary format, and call it whenever required? Of course, I can put all the code in one large file, compile the kernel once and call it in a loop 100 times, but I want separate files for convenience. Hardware: CPU: AMD A8 APU, AMD Phenom 2 X4; GPU: AMD Radeon HD 7640G + 7670M Dual Graphics, ATI Radeon HD5770
Description of Hadoop job
15,004,130
0
1
109
0
python,hadoop,mapreduce,hadoop-streaming
Yes, you can specify a name for each job using job.setJobName(String). If you set the job name to something distinguishing, you should be able to tell the jobs apart. For example, by using something like ManagementFactory.getRuntimeMXBean().getName() you can get the process id and machine name (on Linux anyway; I'm unsure of the behaviour on other operating systems) in the format 1234@localhost, where 1234 is the process id, which you could set as the job name to tell them apart.
0
1
0
0
2013-02-21T13:15:00.000
2
0
false
15,003,202
0
0
0
1
I have a Hadoop cluster, and different processes are able to submit mapreduce jobs to this cluster (they all use the same user account). Is there a way to distinguish these jobs? Some kind of description which can be added to a job during submit, like 'This is a job of process "1234", do not touch'? I am using Python and Hadoop Streaming, and would like to distinguish jobs using a simple hadoop job -list (or at least using the web management interface).
How to kill downstream jobs if upstream job is stopped?
15,006,928
2
3
1,600
0
java,python,jenkins,jenkins-plugins
Instead of killing the job, have another job that programmatically terminates all the required jobs. You could reuse the same property file to know which jobs to kill, and use a Groovy script to terminate them.
0
1
0
0
2013-02-21T15:40:00.000
2
0.197375
false
15,006,278
0
0
0
1
I have a parent job that triggers many downstream jobs dynamically. I use Python code to generate the list of jobs to be triggered, write it to a properties file, inject the file using the EnvInject plugin, and then use the Parameterized Trigger plugin with the job-list variable (comma separated) to launch the jobs (if anyone knows an easier way of doing this I would love to hear that also!). It works great, except that when killing the parent job the triggered jobs continue to run, and I want them dead also when killing the parent. Is there a plugin or way to implement this? Maybe a hook that is called when a job is killed? EDIT: Sorry for the confusion; I wasn't clear about what I meant by "killing" the job. I mean clicking the red 'x' button in the Jenkins GUI, not the Unix signal. Thanks in advance.
python distutils include shell scripts in module directory
47,823,714
0
3
1,820
0
python
Another issue might be that such pypi packages containing Bash scripts might not run correctly on e.g. Windows?
0
1
0
1
2013-02-21T17:59:00.000
3
0
false
15,009,146
1
0
0
1
What is the best way to include a 'helper' shell script in setup.py that is used by a Python module? I don't want to include it as a script since it is not run on its own. Also, data_files just copies things into the install path (not the module install path), so that does not really seem like the best route. I guess the question is: is there a way of including non-Python (non-C) scripts/binaries in a Python distutils package in a generic way?
Why is my clojure shell result not like what works in python?
15,010,740
1
3
189
0
python,clojure
It sounds like you want sh to return immediately instead of waiting for notepad's exit code. How about writing a sh! macro or somesuch that runs the original sh command on a new Thread? If you're only using this as a convenience in the REPL, it would be entirely unproblematic. EDIT Arthur's answer is better and more Clojurian - go with that.
0
1
0
0
2013-02-21T19:10:00.000
2
0.099668
false
15,010,364
1
0
0
1
When working in the python repl I often need to edit multiline code. So I use import os then os.system("notepad npad.py") In clojure I first run (use '[clojure.java.shell :only [sh]]) Then I run (sh "notepad" "jpad.clj") This starts notepad but not in a useful way because the clojure repl now hangs. In other words, until I close notepad I cannot enter code in the repl and I want to keep both open. I know I can easily open notepad without clojure so it is no big deal. However, is there a way for clojure to start an external process without hanging?
Hadoop Streaming Job with binary input?
15,181,203
0
1
546
0
python,hadoop,hadoop-streaming
You may consider using NullWritable as output and generating the SequenceFile directly inside your Python script. You can look up the hadoop-python project on GitHub to see candidate code: though it is admittedly a bit large-ish/heavy, it does handle the SequenceFile generation.
0
1
0
0
2013-02-21T21:02:00.000
1
0
false
15,012,162
0
0
0
1
I wish to convert a binary file in one format to a SequenceFile. I have a Python script that takes that format on stdin and can output whatever I want. The input format is not line-based. The individual records are binary themselves, hence the output format cannot be \t delimited or broken into lines with \n. Can I use the Hadoop Streaming interface to consume a binary format? How do I produce a binary output format? I assume the answer is "No" unless I hear otherwise.
opening a usb device in python -- what is the path in winXP?
15,013,500
0
0
1,609
0
python,usb
"Everything is a file" is one of the core ideas of Unix. Windows does not share this philosophy and, as far as I know, doesn't provide an equivalent interface. You're going to have to find a different way. The first way would to be to continue handling everything at a low level & have your code use a different code path under Windows. The only real reason to do this is if your goal is to learn about USB programming at a low level. The other way is to find a library that's already abstracted out the differences between platforms. PySDL immediately comes to mind (followed by PyGame, which is a higher level wrapper around that) but, as that's a gaming/multimedia library, it might be overkill for what you're doing. Google tells me that PyUSB exists and appears to just focus on handing USB devices. PySDL/PyGame have been around a while & are probably more mature so, unless you've got a particular aversion to them, I'd probably stick with them.
0
1
0
0
2013-02-21T21:37:00.000
2
0
false
15,012,694
0
0
0
2
I'm trying to access a usb device through python but I'm unsure how to find the path to it. The example I'm going from is: pipe = open('/dev/input/js0','r') In which case this is either a mac or linux path. I don't know how to find the path for windows. Could someone steer me in the proper direction? I've sifted through the forums but couldn't quite find my answer. Thanks, -- Mark
opening a usb device in python -- what is the path in winXP?
15,012,889
0
0
1,609
0
python,usb
The default USB path on windows is D:\. So, if we have a text document named mydoc.txt, which is in the folder myData the appropriate path is D:\myData\mydoc.txt
0
1
0
0
2013-02-21T21:37:00.000
2
0
false
15,012,694
0
0
0
2
I'm trying to access a usb device through python but I'm unsure how to find the path to it. The example I'm going from is: pipe = open('/dev/input/js0','r') In which case this is either a mac or linux path. I don't know how to find the path for windows. Could someone steer me in the proper direction? I've sifted through the forums but couldn't quite find my answer. Thanks, -- Mark
another transaction already in progress
15,033,087
5
2
3,012
0
python,google-app-engine
appcfg.py rollback C:\path\to\my\app is the required command. If you are using Java, the rollback command is same as above, but the path to the application should be to the application's target directory. Otherwise, rollback will fail.
0
1
0
0
2013-02-22T16:54:00.000
1
1.2
true
15,029,252
0
0
1
1
I am using the App Engine Launcher on Windows, and for some reason the last time I deployed my app the transaction wouldn't finish; now every time I try to deploy I get the error "another transaction by user is already in progress for app: s~myapp, version 1". I have tried running appcfg.py rollback, which brings up a Python window that then closes again almost immediately (I think it says error, but it closes so fast I can't tell for sure). I have tried appcfg.py rollback C:\my\apps\directory\path, which leads to the same as above. I have tried "C:\Program Files\Google\google_appengine appcfg.py rollback c:\my\app\path", but Windows then tells me it can't find C:\program. Now I'm stuck for things to try.
Python on the AWS Beanstalk. How to snapshot custom logs?
15,183,303
6
15
5,607
0
python,logging,amazon-elastic-beanstalk
If you need the ability to snapshot log files from the Beanstalk management console, just write your log files to the "/opt/python/log/" folder. The Elastic Beanstalk scripts use this folder for log tailing.
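A sketch of a file logger aimed at that directory; the logger name, file name, and format are my own choices, and only the /opt/python/log path comes from the answer:

```python
import logging
import os
import tempfile

def make_app_logger(log_dir="/opt/python/log"):
    # On a Python Beanstalk instance, files in /opt/python/log/ are
    # picked up by the console's snapshot-logs machinery.
    logger = logging.getLogger("myapp")
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(os.path.join(log_dir, "application.log"))
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger

# Local demo: write to a throwaway directory instead of /opt/python/log,
# which exists only on the Beanstalk instance.
demo_dir = tempfile.mkdtemp()
log = make_app_logger(demo_dir)
log.info("hello from the app")
for h in log.handlers:
    h.flush()
```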
0
1
0
0
2013-02-23T07:12:00.000
2
1.2
true
15,038,135
0
0
1
1
I'm developing a Python application which works in an AWS Beanstalk environment. For error handling and debugging purposes I write logs to a custom log file in the directory /var/logs/. What should I do in order to be able to snapshot the logs from the Elastic Beanstalk management console?
keep getting access violation after setting a breakpoint with winappdbg
15,235,645
0
0
656
0
python,debugging,breakpoints,access-violation,ida
Apparently the pida_dump script didn't get the right base address, so when I did a rebase the address was computed as address - old_base_address + new_base_address, and because the old_base_address was wrong it messed up my breakpoints. Thanks anyway for the help!
0
1
0
0
2013-02-24T19:20:00.000
2
1.2
true
15,055,561
1
0
0
1
I am using the winappdbg framework to build a debugger in Python. I can set breakpoints using event.debug.break_at(event.get_pid(), address), but after setting certain breakpoints (and not while setting them, but once the program hits them!) I get an access-violation exception. For example, I can set a breakpoint at 0x48d1ea or 0x47a001, but if I set one at 0x408020 I get the exception. The module base address is 0x400000. 0048D0BE: xor esi,eax 0048D0C0: call [winamp!start+0x25c1] 760DCC50: add [ebx],dh Access Violation Exception event (00000001) at address 779315DE, process 9172, thread 9616. By the way, I am taking the addresses for the breakpoints from a pida file generated by IDA. I rebased the file, so the addresses should be aligned. Thanks!
User Idle time in Linux
15,069,543
1
2
3,300
0
python,linux,bash
In the depths of the internet I found something like this, and it seems to work, but it is still not a perfect solution because it returns much more info than I need, and the position of the proper value is not constant (it differs on other systems): ls -l /dev/pts | fgrep username
0
1
0
0
2013-02-25T14:08:00.000
4
0.049958
false
15,068,887
0
0
0
1
I need to check how much time has passed since the last user input occurred (preferred way: in Python) on Linux (Lucid, 10.04). I know this is easy the normal way (just using XScreenSaverQueryInfo), but the tricky part is that I don't have the x11/extensions/scrnsaver.h header and I HAVE to do it some other way (even if I install the needed package, I cannot install packages on the 100 other computers this will run on; I don't have permission to do that).
Unable to exit with ^C
15,073,287
7
2
1,083
0
python,terminal,pytest
I would suggest trying Ctrl-Z. That should suspend it; you can then do kill %1 (or kill -9 %1) to kill it (assuming you don't have anything else running in the background). What I'm guessing is happening (from personal experience) is that one of your tests is running inside a while loop in a try/except that catches all exceptions (including the KeyboardInterrupt that Ctrl-C triggers) and ignores them.
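The catch-all-exceptions guess can be made concrete via the exception hierarchy: KeyboardInterrupt derives from BaseException, not Exception, so a bare except: swallows Ctrl-C while except Exception: lets it propagate. A small sketch:

```python
# A test loop like this can never be stopped with Ctrl-C,
# because the bare `except` also catches KeyboardInterrupt:
#
#     while True:
#         try:
#             do_work()
#         except:        # swallows KeyboardInterrupt too
#             pass

def swallowed_by_except_exception(exc):
    """Would `except Exception:` catch this exception?"""
    return isinstance(exc, Exception)

print(swallowed_by_except_exception(ValueError("ordinary error")))
print(swallowed_by_except_exception(KeyboardInterrupt()))  # Ctrl-C
```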
0
1
0
1
2013-02-25T17:52:00.000
1
1.2
true
15,073,210
0
0
0
1
I am using pytest to run tests and, during the execution of a test, interrupted with ctrl-C. No matter how many times I ctrl-C to get out of the test session (I've also tried ctrl-D to get out of the environment I'm using), my terminal prompt does not return. I accidentally pressed F as well... test.py ^CF^C Does the F have something to do with my being stuck in the captured stderr section and the prompt not returning? Are there any logic explanations why I'm stuck here, and if so, are there any alternatives to exiting this state without closing the window and force exiting the session?
pros and cons between os.path.exists vs os.path.isdir
15,077,527
0
81
120,324
0
python,directory,os.path
Most of the time they are the same. However, a path can exist physically while os.path.exists() returns False; this happens when os.stat() fails for that path (for example, because of a permission error). If the path physically exists and is a directory, os.path.isdir() will always return True. This does not depend on the platform.
0
1
0
0
2013-02-25T22:06:00.000
5
0
false
15,077,424
1
0
0
3
I'm checking to see if a directory exists, but I noticed I'm using os.path.exists instead of os.path.isdir. Both work just fine, but I'm curious as to what the advantages are for using isdir instead of exists.
pros and cons between os.path.exists vs os.path.isdir
56,261,695
4
81
120,324
0
python,directory,os.path
os.path.isdir() checks whether the path exists and is a directory, and returns True in that case. Similarly, os.path.isfile() checks whether the path exists and is a file. os.path.exists() checks only whether the path exists, and doesn't care whether it points to a file or a directory; it returns True in either case.
0
1
0
0
2013-02-25T22:06:00.000
5
0.158649
false
15,077,424
1
0
0
3
I'm checking to see if a directory exists, but I noticed I'm using os.path.exists instead of os.path.isdir. Both work just fine, but I'm curious as to what the advantages are for using isdir instead of exists.
pros and cons between os.path.exists vs os.path.isdir
15,077,442
6
81
120,324
0
python,directory,os.path
Just like it sounds: if the path exists but is a file rather than a directory, isdir will return False, while exists will return True in both cases.
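A quick demonstration of the three cases, using throwaway paths:

```python
import os
import tempfile

# A regular file: exists() is True, isdir() is False.
fd, file_path = tempfile.mkstemp()
os.close(fd)
print(os.path.exists(file_path), os.path.isdir(file_path))

# A directory: both are True.
dir_path = tempfile.mkdtemp()
print(os.path.exists(dir_path), os.path.isdir(dir_path))

# A missing path: both are False.
missing = os.path.join(dir_path, "does-not-exist")
print(os.path.exists(missing), os.path.isdir(missing))
```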
0
1
0
0
2013-02-25T22:06:00.000
5
1
false
15,077,424
1
0
0
3
I'm checking to see if a directory exists, but I noticed I'm using os.path.exists instead of os.path.isdir. Both work just fine, but I'm curious as to what the advantages are for using isdir instead of exists.
Can you read variable data of an already running Python Script from its DMP file in Windows?
15,094,305
1
0
165
0
python,windows,python-2.7,save
Does it contain some or all of the current execution state of your program? Yes. Is it in a form that you could easily extract the information in the user-level format you are probably looking for from it? Probably not. It will dump the state of the entire Python interpreter, including the data as represented in memory for the specific Python program that is running. To reconstruct that data, I'm pretty sure you'd need to run the Python interpreter itself in debug mode, then try to reconstruct your data from whatever your C debugger can piece together. If this sounds very difficult or impossible to you, then you probably have some understanding of what it entails.
0
1
0
0
2013-02-26T16:06:00.000
1
1.2
true
15,093,780
1
0
0
1
I have a Python program that had some kind of error that prevents it from saving my data. The program is still running, but I cannot save anything. Unfortunately, I really need to save this data and there seems to be no other way to access it. Does the DMP file created for the process through the task manager contain the data my program collected, and if so, how do I access it? Thanks.
Using pip in pythonbrew
15,259,856
3
2
2,019
0
python,pip,pythonbrew
I found a solution. I uninstalled my Python 3.3.0 by issuing pythonbrew uninstall 3.3.0, then installed it again with pythonbrew install --configure="--with-zlib" 3.3.0. This allowed pip to be installed, and now I can use it to install to this Python version. Maybe somebody else will find this helpful. Cheers!
0
1
0
0
2013-02-26T18:15:00.000
1
1.2
true
15,096,306
0
0
0
1
I have started using pythonbrew to manage different Python installs. The main reason I wanted to do this is to install third-party modules without affecting my system's Python install. For example, I thought I would install the requests library using: pip install requests However this causes an error saying: error: could not create '/usr/local/lib/python2.7/dist-packages/requests': Permission denied Obviously I don't want to install it to the system's Python, which is Python 2.7.3. I did have to install pip with my package manager, and the resultant path is /usr/bin/pip. How can I use pip to install to my pythonbrew installs? (My current pythonbrew Python version is 3.3.0.) Am I missing something?
Google App Engine development server random (?) slowdowns
15,098,634
2
2
237
1
python,google-app-engine
Don't worry about it. It (IIRC) keeps the whole DB (datastore) in memory using an "emulation" of the real thing. There are lots of other issues that you won't see when deployed. I'd suggest that your hard drive is spinning down and the delay you see is it taking a few seconds to wake back up. If this becomes a problem, develop using the deployed version. It's not so different.
0
1
0
0
2013-02-26T19:54:00.000
2
0.197375
false
15,098,051
0
0
1
2
I'm doing a small web application which might need to eventually scale somewhat, and am curious about Google App Engine. However, I am experiencing a problem with the development server (dev_appserver.py): Seemingly at random, requests will take 20-30 seconds to complete, even if there is no hard computation or data usage. One request might be really quick, even after changing a script or static file, but the next might be very slow. It seems to occur more systematically if the box has been left for a while without activity, but not always. CPU and disk access is low during the period. There is not a lot of data in my application either. Does anyone know what could cause such random slowdowns? I've Googled and searched here, but need some pointers. :/ I've also tried --clear_datastore and --use_sqlite, but the latter gives an error: DatabaseError('file is encrypted or is not a database',). Looking for the file, it does not seem to exist. I am on Windows 8, Python 2.7 and the most recent version of the App Engine SDK.
Google App Engine development server random (?) slowdowns
15,106,246
0
2
237
1
python,google-app-engine
Does this happen in all web browsers? I had issues like this when viewing a local app engine dev site in several browsers at the same time for cross-browser testing. IE would then struggle, with requests taking about as long as you describe. If this is the issue, I found the problems didn't occur with IETester. Sorry if it's not related, but I thought this was worth mentioning just in case.
0
1
0
0
2013-02-26T19:54:00.000
2
0
false
15,098,051
0
0
1
2
I'm doing a small web application which might need to eventually scale somewhat, and am curious about Google App Engine. However, I am experiencing a problem with the development server (dev_appserver.py): Seemingly at random, requests will take 20-30 seconds to complete, even if there is no hard computation or data usage. One request might be really quick, even after changing a script or static file, but the next might be very slow. It seems to occur more systematically if the box has been left for a while without activity, but not always. CPU and disk access is low during the period. There is not a lot of data in my application either. Does anyone know what could cause such random slowdowns? I've Googled and searched here, but need some pointers. :/ I've also tried --clear_datastore and --use_sqlite, but the latter gives an error: DatabaseError('file is encrypted or is not a database',). Looking for the file, it does not seem to exist. I am on Windows 8, Python 2.7 and the most recent version of the App Engine SDK.
Is it possible to reload (Windows) environment variables?
15,099,480
3
0
7,103
0
python,windows,environment-variables
If you set an environment variable in the registry (or via the System Properties > Advanced > Environment Variables UI), it will be global and persistent for every process launched from the top level context created after the variable was set. Shells and contexts initialized before your change will not pick up those changes unless you explicitly merge the values from the registry with the existing values in that context though. Each context inherits the environment of its parent, but after that point, changes to the parent or child environments do not propagate in either direction. Contexts created at the top level get their environment from the registry.
0
1
0
0
2013-02-26T21:11:00.000
1
1.2
true
15,099,344
0
0
1
1
Exactly what it says. I can set per-user environment variables, either from Windows > type "Path", or using RegEdit, or even from a Python script. But if I run an application (e.g. from Launchy, or launch it from Chrome), it won't pick up the new variables. I've got to start a new cmd or Windows Explorer (I think) to get the new values. Now, obviously I can set them on a per-use basis, but I want to set them globally for my account, and also for whatever process I happen to be using at the time. Is this possible? And is it possible (or easier) to do from a Python script?
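The inheritance behaviour described in the answer above can be demonstrated from Python: a child process sees a copy of whatever environment it was launched with, and later changes in the parent never reach it. A minimal, cross-platform sketch (the variable name GR_DEMO_VAR is made up for illustration):

```python
import os
import subprocess
import sys

def read_var_in_child(name, parent_env):
    """Launch a child interpreter and report what it sees for `name`."""
    code = "import os; print(os.environ.get({!r}, '<unset>'))".format(name)
    result = subprocess.run(
        [sys.executable, "-c", code],
        env=parent_env, capture_output=True, text=True, check=True)
    return result.stdout.strip()

# A child inherits the environment it is launched with; a value added to
# this copy never reaches contexts that were created earlier.
env = os.environ.copy()
env["GR_DEMO_VAR"] = "hello"   # hypothetical variable name
```

Launching with the modified copy yields "hello" in the child, while launching with a plain copy of the unmodified parent environment yields the unset marker, which is exactly the per-context isolation the answer describes.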
How to get full user name in the output of 'top' command in *nix?
15,111,135
2
0
2,320
0
python,unix
Consider using ps instead of top, as I don't know of any reason why top would be better for this task. You can configure ps output much more flexibly than top's.
0
1
0
1
2013-02-27T11:29:00.000
2
0.197375
false
15,110,982
0
0
0
1
I need to extract process details from the top command on a few *nix systems I monitor. The details needed are username, command executed, PID, PPID, and resident memory consumption. If memory usage is greater than a threshold or the command is illegal, I need to send a warning to the user at username@company.com. I am writing a script to do this in Python and get the required data by executing 'top -bc -n 1' and grepping for the command keyword. However, I also need to extract the username for the illegal processes to send the mail warning. However, top automatically truncates usernames greater than 8 characters. How do I retrieve the full user names?
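As a complement to the answer above: rather than fighting top's 8-character truncation, the full login name can be recovered from the passwd database by joining on UID. A sketch using only the standard library (Unix only, since the pwd module has no Windows counterpart):

```python
import pwd

def full_usernames():
    """Map numeric UID -> full (untruncated) login name.

    top cuts names at 8 characters, but the passwd database always
    stores the complete name, so join your parsed process list on the
    UID column instead of the displayed username.
    """
    return {entry.pw_uid: entry.pw_name for entry in pwd.getpwall()}
```

With ps you can ask for the UID directly (e.g. the `uid` output column), so the lookup above gives the full name regardless of display width.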
threads or processes
15,121,552
2
0
91
0
python,python-3.x
If you don't have time to wait for a performance test, you presumably just want guesses. So: There's probably no real advantage to multiprocessing over threading here. There is a disadvantage to multiprocessing in the overhead per task. You can get around that by tuning the batch size, but with threading, you don't have to. So, I'd use threading. However, I'd do it using concurrent.futures.ThreadPoolExecutor, so when you get a bit of time later, you can try the one-liner change to ProcessPoolExecutor and compare performance.
0
1
0
0
2013-02-27T20:10:00.000
3
1.2
true
15,121,405
1
0
0
1
I am writing basically a port scanner (not really, but it's close). Pinging machines one by one is just slow, so I definitely need some kind of parallel processing. The bottleneck is definitely network I/O, so I was thinking that threads would suffice (with Python's GIL existing); they're easier to use. But would utilization of processes instead bring a significant performance increase (15%+)? Sadly, I don't have time to try both approaches and pick the better of them based on some measurements or something :/ Thanks :)
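A minimal sketch of the ThreadPoolExecutor shape suggested in the answer above; check_host is a hypothetical stand-in for the real ping/connect call:

```python
from concurrent.futures import ThreadPoolExecutor

def check_host(host):
    # Hypothetical placeholder: the real version would open a socket or
    # send a ping.  Network I/O releases the GIL, so threads scale fine.
    return host, len(host) % 2 == 0

def scan(hosts, workers=32):
    """Probe many hosts concurrently and collect host -> result."""
    # Swapping in ProcessPoolExecutor later is a one-line change, which
    # makes the threads-vs-processes comparison cheap to run.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(check_host, hosts))
```

Because the two executors share one interface, measuring both later needs only the import and class name changed.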
How to check if I'm running in a shell (have a terminal) in Python?
15,121,639
2
2
206
0
python,python-2.6
If you really need to check this, Pavel Anossov's answer is the way to do it, and it's pretty much the same as your initial guess. But do you really need to check this? Why not just write a Python script that writes to stdout and/or stderr, and your cron job can just redirect to log files? Or, even better, use the logging module and let it write to syslog or whatever else is appropriate and also write to the terminal if there is one?
0
1
0
1
2013-02-27T20:13:00.000
2
0.197375
false
15,121,468
0
0
0
1
I have a Python script that normally runs out of cron. Sometimes, I want to run it myself in a (Unix) shell, and if so, have it write its output to the terminal instead of writing to a log file. What is the pythonic way of determining if a script is running out of cron or in an interactive shell (I mean bash, ksh, etc. not the python shell)? I could check for the existence of the TERM environment variable perhaps? That makes sense but seems deceptively simple... Could os.isatty somehow be used? I'm using Python 2.6 if it makes a difference. Thanks!
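As the question guesses, os.isatty (or the isatty() method on the stream itself) is the usual test. A small sketch of the pattern, assuming you switch on stdout; the log path is just an example:

```python
import sys

def running_interactively():
    """True when stdout is a terminal, False when redirected.

    cron runs jobs without a controlling terminal, so stdout there is a
    pipe or a file and isatty() returns False.
    """
    return sys.stdout.isatty()

def emit(message, log_path="myscript.log"):
    # Write to the terminal when run by hand, to a log file otherwise.
    if running_interactively():
        print(message)
    else:
        with open(log_path, "a") as log:   # hypothetical log location
            log.write(message + "\n")
```

Note the check is per-stream: stdout may be redirected while stderr still points at the terminal, so test the stream you actually write to.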
Differentiate celery, kombu, PyAMQP and RabbitMQ/ironMQ
15,131,010
2
9
7,047
0
python,heroku,rabbitmq,celery,kombu
"One of the biggest differences between IronMQ and RabbitMQ/AMQP is that IronMQ is hosted and managed, so you don't have to host the server yourself and worry about uptime." Currently there are at least two hosted managed RabbitMQ-as-a-service options: Bigwig and CloudAMQP. Celery should work well with both.
0
1
0
0
2013-02-27T20:16:00.000
2
0.197375
false
15,121,519
0
0
1
1
I want to upload images to an S3 server, but before uploading I want to generate thumbnails of 3 different sizes, and I want it to be done out of the request/response cycle, hence I am using celery. I have read the docs; here is what I have understood. Please correct me if I am wrong. Celery helps you manage your task queues outside the request response cycle. Then there is something called carrot/kombu - it's a django middleware that packages tasks that get created via celery. Then the third layer PyAMQP that facilitates the communication of carrot to a broker, e.g. RabbitMQ, AmazonSQS, ironMQ etc. The broker sits on a different server and does stuff for you. Now my understanding is - if multiple users upload images at the same time, celery will queue the resizing, and the resizing will actually happen at the ironMQ server, since it offers a cool addon on heroku. Now the doubts: But what happens after the image is resized? Will ironMQ push it to the S3 server, or will it notify once the process is completed? I am not clear about it. What is the difference between celery and kombu/carrot? Could you explain vividly?
How do I get a python program to run instead of opening in Notepad?
31,030,969
3
5
20,810
0
python,notepad
Okay. 1) I tried turning it off and on again. 2) I uninstalled and reinstalled Python. Still no joy. And then! In Windows Explorer there's an "Open with" option that sets the default program that Windows points toward if you click on the filename or enter it on the command line. Change that from Notepad (or whatever it is) if it's not Python. Change it to Python. Then presto: no problem-o.
0
1
0
0
2013-02-27T20:26:00.000
4
0.148885
false
15,121,714
1
0
0
2
I am having some trouble with opening a .py file. I have a program that calls this .py file (i.e. pathname/example.py file.txt), but instead of running the python program, it opens it in Notepad. How do I get it to run? The program itself takes in a file, and creates an output that is more readable. Edit: The operating system is Windows 7. And the file that is calling the python is a .bat file. Edit 2: It looks like I had to reinstall python for some reason... but it looks like it is finally working. Why reinstalling never comes to mind in the first place... And then I had to change how the file extension was opened. Thanks guys
How do I get a python program to run instead of opening in Notepad?
38,337,893
6
5
20,810
0
python,notepad
This happened most probably because you have set Notepad as the default program to open .py files. Go to the Default Programs app in Windows. Select "choose an app by extension". There, search for .py files. Change the option from Notepad to Python. This should solve your problem.
0
1
0
0
2013-02-27T20:26:00.000
4
1
false
15,121,714
1
0
0
2
I am having some trouble with opening a .py file. I have a program that calls this .py file (i.e. pathname/example.py file.txt), but instead of running the python program, it opens it in Notepad. How do I get it to run? The program itself takes in a file, and creates an output that is more readable. Edit: The operating system is Windows 7. And the file that is calling the python is a .bat file. Edit 2: It looks like I had to reinstall python for some reason... but it looks like it is finally working. Why reinstalling never comes to mind in the first place... And then I had to change how the file extension was opened. Thanks guys
Python version in terminal after download Python for OSX
15,133,546
1
1
58
0
python-3.x
python3 should start the correct version for you.
0
1
0
0
2013-02-28T05:52:00.000
1
0.197375
false
15,128,411
1
0
0
1
I downloaded and successfully installed Python 3.3 for OSX. After executing "python" in terminal it opened the python terminal window, stating: "Python 2.7.2 (default, June 20 2012...) Is there another update I need to do? Thanks!
Django: Automatically executing statements in `manage.py shell`
15,164,238
-1
2
158
0
python,django,shell
I ended up monkeypatching IPython.frontend.terminal.embed.InteractiveShellEmbed.__call__ to add the definitions I wanted. (I know many people are opposed to monkeypatching but I find it to be a good solution in this case.)
0
1
0
0
2013-03-01T15:19:00.000
2
1.2
true
15,161,051
0
0
1
1
Every time I start a shell using python manage.py shell, I want a few lines to be executed automatically. (In my case it would be a few import lines in the style of import django, my_app.) How do I do this?
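The monkeypatching approach from the answer above can be illustrated generically; Shell below is a hypothetical stand-in for IPython's InteractiveShellEmbed, whose __call__ the answer patches:

```python
class Shell:
    """Hypothetical stand-in for IPython's InteractiveShellEmbed."""
    def __call__(self):
        return "shell started"

# Keep a reference to the original, then install a wrapper that runs the
# wanted setup (here: recording the desired imports) before delegating.
_original_call = Shell.__call__

def _patched_call(self):
    self.preamble = ["import django", "import my_app"]  # the desired setup
    return _original_call(self)

Shell.__call__ = _patched_call
```

The wrapper preserves the original behaviour while adding the setup step, which is the usual shape of a well-behaved monkeypatch.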
py2exe cmd failure
15,165,112
3
0
125
0
python
This has nothing to do with py2exe, and everything to do with the machine's setup. You don't have python on your path. You can test this by just running python by itself to open the interactive interpreter. If cmd can't find python, it can't run it. Here are some ways around this: Explicitly use the full path to Python—e.g., if it's C:\Python27\bin\Python.exe, type that instead of just python. Temporarily edit your PATH environment variable in the cmd window. With the above example, this would be set PATH=%PATH%;C:\Python27\bin. You will have to do this again every time you reboot, open a new cmd window, etc. Permanently edit your PATH environment variable. This is done in the Advanced System Settings controls, which I believe are still accessible through the Properties on the context menu for My Computer in Windows 7. Uninstall and reinstall Python, and this time allow it to put itself on your path. Ask for further help at superuser or some other site that's focused on system configuration problems rather than programming problems.
0
1
0
0
2013-03-01T19:09:00.000
1
1.2
true
15,165,070
1
0
0
1
I wrote a Python script and installed py2exe 0.6.9 (win32) on a 32-bit Windows 7 machine with Python 2.7. I could successfully run "python setup.py py2exe" via cmd. Now I installed py2exe 0.6.9 (win64) on a 2nd PC (Win7, 64-bit, Python 2.7) and tried the same with exactly the same script. But "python setup.py py2exe" returned this message (hope I translated it correctly into English): "The command 'python' is either written wrong or couldn't be found." Why does this happen? How can I solve this?
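The "not recognized" error comes from the shell's PATH search failing, and that search can be reproduced from Python itself: shutil.which walks PATH the same way the shell does, so a None result predicts exactly this error. A small diagnostic sketch:

```python
import os
import shutil

def path_entries():
    """The directories the shell searches, in search order."""
    return os.environ.get("PATH", "").split(os.pathsep)

def resolve(command):
    """Return the full path the shell would run, or None when the shell
    would report the command as not recognized / not found."""
    return shutil.which(command)
```

Running resolve("python") in a working interpreter (whose own directory is usually on PATH) shows where the shell finds it; None means the install directory is missing from PATH in that context.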
Call Python code from LLVM JIT
15,175,018
2
6
1,071
0
python,llvm,llvm-py
You can call external C functions from LLVM JIT-ed code. What else do you need? These external functions will be found in the executing process, meaning that if you link Python into your VM you can call Python's C API functions. The "VM" is probably less magic than you think it is :-) In the end, it's just machine code that gets emitted at runtime into a buffer and executed from there. To the extent that this code has access to other symbols in the process in which it's running, it can do everything any other code in that process can do.
0
1
0
0
2013-03-02T00:08:00.000
2
0.197375
false
15,169,015
1
0
0
1
I am writing a language lexer/parser/compiler in Python that should run in the LLVM JIT-VM (using llvm-py) later. The first two steps are quite straightforward for now, but (even though I haven't started the compile task yet) I see a problem when my code wants to call Python code (in general), or interact with the Python lexer/parser/compiler (in particular). My main concern is that the code should be able to dynamically load additional code into the VM at runtime, and thus it must trigger the whole lexer/parser/compiler chain in Python from within the VM. First of all: Is this even possible, or is the VM "immutable" once it is started? If it is possible, I currently see 3 possible solutions (I am open to other suggestions): "Break out" of the VM and make it possible to call Python functions of the main process directly (maybe by registering it as an LLVM function that redirects to the main process somehow). I didn't find anything about this, and anyway I am not sure if this is a good idea (security and such). Compile the runtime (statically or dynamically at runtime) into LLVM Assembly/IR. This requires that the IR code is able to modify the VM it runs in. Compile the runtime (statically) into a library and load it directly into the VM. Again it must be able to add functions (etc.) to the VM it runs in.
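The point in the answer above, that JIT-ed code can reach any symbol visible in the running process, can be illustrated without LLVM at all: ctypes performs the same in-process symbol lookup. A POSIX-only sketch (passing None to CDLL opens the current process's own symbol table, dlopen(NULL) semantics):

```python
import ctypes

# Symbols already linked into this process (here, the C library's abs)
# are visible the same way they would be to JIT-emitted machine code.
current_process = ctypes.CDLL(None)
current_process.abs.restype = ctypes.c_int
current_process.abs.argtypes = [ctypes.c_int]

def c_abs(x):
    """Call the C library's abs() that is already loaded in-process."""
    return current_process.abs(x)
```

In the same spirit, a C function exposing the Python C API could be registered with the JIT so that generated code can call back into the embedding interpreter.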
python is not recognised as an internal or external command
39,730,308
0
11
50,485
0
python,cmd
Okay. As you said, your Python install directory is C:\Python27. Open My Computer, then open the C: drive. If you don't see a folder named "Python27" there, try to find it using the search option (in my case I found it in the old.window folder; I don't know how it moved there). Cut and paste it into the C: drive along with folders like Program Files, Users, etc. Now open cmd and type python and hit Enter to check if it is working now.
0
1
0
0
2013-03-02T05:36:00.000
8
0
false
15,171,157
1
0
0
6
This is a really annoying problem. I've prowled the web for solutions, but all I found was tips about changing the PATH variable, which I did, of course. My Python install directory is C:\Python27. It's a 32-bit version. Whenever I type python in the command prompt, it says that it isn't recognised as an internal or external command. Currently, my PATH variable is set to C:\Python27;C:\Python27\Lib\site-packages\;C:\Python27\Scripts. Does anyone have any ideas? I run Windows 7, by the way (64-bit). I'm pretty desperate. Heck, if nothing works I guess I'll try dual-booting Linux and Windows 7...
python is not recognised as an internal or external command
15,171,325
1
11
50,485
0
python,cmd
After adding the python folder to the system PATH variable, you should reboot your computer. Another simple solution is: create a shortcut of the python.exe executable (probably it is in C:\Python27\python.exe, or similar) in a place like C:\Windows\system32 (that is, a place that already is listed in the PATH variable). The name of your shortcut should be python (maybe python.exe should work too). I mean, it can't be python - shortcut or similar, for your purposes. To see the contents of the PATH variable, go to the cmd and enter set PATH.
0
1
0
0
2013-03-02T05:36:00.000
8
0.024995
false
15,171,157
1
0
0
6
This is a really annoying problem. I've prowled the web for solutions, but all I found was tips about changing the PATH variable, which I did, of course. My Python install directory is C:\Python27. It's a 32-bit version. Whenever I type python in the command prompt, it says that it isn't recognised as an internal or external command. Currently, my PATH variable is set to C:\Python27;C:\Python27\Lib\site-packages\;C:\Python27\Scripts. Does anyone have any ideas? I run Windows 7, by the way (64-bit). I'm pretty desperate. Heck, if nothing works I guess I'll try dual-booting Linux and Windows 7...
python is not recognised as an internal or external command
41,311,109
0
11
50,485
0
python,cmd
This is only a partial answer, but I found (repeatedly) that I'd have similar issues when I would use the GUI installer and not go through the custom setup. Using the custom setup option, then using the same settings, the "install for all users" choice (that then installs to C://python.version/blah instead of the user-based default structure) WOULD allow the installer to set up PATH correctly.
0
1
0
0
2013-03-02T05:36:00.000
8
0
false
15,171,157
1
0
0
6
This is a really annoying problem. I've prowled the web for solutions, but all I found was tips about changing the PATH variable, which I did, of course. My Python install directory is C:\Python27. It's a 32-bit version. Whenever I type python in the command prompt, it says that it isn't recognised as an internal or external command. Currently, my PATH variable is set to C:\Python27;C:\Python27\Lib\site-packages\;C:\Python27\Scripts. Does anyone have any ideas? I run Windows 7, by the way (64-bit). I'm pretty desperate. Heck, if nothing works I guess I'll try dual-booting Linux and Windows 7...
python is not recognised as an internal or external command
15,171,259
4
11
50,485
0
python,cmd
Quick fix: May not be the most elegant or long term fix but if you are really frustrated and just want to get it to run, just copy paste the python.exe file to your current directory. This worked for me.
0
1
0
0
2013-03-02T05:36:00.000
8
0.099668
false
15,171,157
1
0
0
6
This is a really annoying problem. I've prowled the web for solutions, but all I found was tips about changing the PATH variable, which I did, of course. My Python install directory is C:\Python27. It's a 32-bit version. Whenever I type python in the command prompt, it says that it isn't recognised as an internal or external command. Currently, my PATH variable is set to C:\Python27;C:\Python27\Lib\site-packages\;C:\Python27\Scripts. Does anyone have any ideas? I run Windows 7, by the way (64-bit). I'm pretty desperate. Heck, if nothing works I guess I'll try dual-booting Linux and Windows 7...
python is not recognised as an internal or external command
15,171,248
1
11
50,485
0
python,cmd
After changing the PATH variable in Windows, you need to reboot your system before it takes effect. Edit: As stated by @tdelaney, only a restart of cmd.exe should be required. This is true at least for Windows 7 64-bit.
0
1
0
0
2013-03-02T05:36:00.000
8
0.024995
false
15,171,157
1
0
0
6
This is a really annoying problem. I've prowled the web for solutions, but all I found was tips about changing the PATH variable, which I did, of course. My Python install directory is C:\Python27. It's a 32-bit version. Whenever I type python in the command prompt, it says that it isn't recognised as an internal or external command. Currently, my PATH variable is set to C:\Python27;C:\Python27\Lib\site-packages\;C:\Python27\Scripts. Does anyone have any ideas? I run Windows 7, by the way (64-bit). I'm pretty desperate. Heck, if nothing works I guess I'll try dual-booting Linux and Windows 7...
python is not recognised as an internal or external command
35,899,777
0
11
50,485
0
python,cmd
When installing, there is a checkbox that is by default not selected, but it asks to add python to the environment variable. Re-install and check that box. I'd rather the installer do it than struggle in the weeds myself.
0
1
0
0
2013-03-02T05:36:00.000
8
0
false
15,171,157
1
0
0
6
This is a really annoying problem. I've prowled the web for solutions, but all I found was tips about changing the PATH variable, which I did, of course. My Python install directory is C:\Python27. It's a 32-bit version. Whenever I type python in the command prompt, it says that it isn't recognised as an internal or external command. Currently, my PATH variable is set to C:\Python27;C:\Python27\Lib\site-packages\;C:\Python27\Scripts. Does anyone have any ideas? I run Windows 7, by the way (64-bit). I'm pretty desperate. Heck, if nothing works I guess I'll try dual-booting Linux and Windows 7...
Does initialize in tornado.web.RequestHandler get called every time for a request?
15,202,174
8
3
1,206
0
python,tornado
Yes, Tornado calls initialize for each request. If you want to share state between requests (like a db connection), store it in self.application.
0
1
0
0
2013-03-04T10:09:00.000
1
1.2
true
15,199,028
0
0
1
1
There's an initialize method in the tornado.web.RequestHandler class; is it called every time there's a request?
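A toy model (plain Python, not Tornado itself) of the behaviour the answer describes: the application object lives for the whole process and holds shared state, while a handler, and therefore its initialize, is created fresh for every request:

```python
class ToyApplication:
    """Stand-in for tornado.web.Application: one instance per process."""
    def __init__(self):
        self.db = object()      # shared resource, e.g. a connection pool
        self.requests_seen = 0

class ToyHandler:
    """Stand-in for RequestHandler: instantiated once per request."""
    def __init__(self, application):
        self.application = application
        self.initialize()       # runs on every instantiation

    def initialize(self):
        self.application.requests_seen += 1

def serve(app, n_requests):
    # Simulate n requests hitting the same application object.
    for _ in range(n_requests):
        ToyHandler(app)
    return app.requests_seen
```

The counter grows with every simulated request while the application (and its shared db handle) stays the same object throughout, mirroring the advice to keep shared state on self.application.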
Twisted and libtorrent - do I need to worry about blocking?
15,299,811
2
4
370
0
python,twisted,bittorrent,libtorrent
I may be able to provide answers to some of those questions. All of libtorrent's logic, including networking and disk I/O, is done in separate threads. So, overall, the concern of "blocking" is not that great, assuming you mean libtorrent functions not returning immediately. Some operations are guaranteed to return immediately: functions that don't return any state or information. However, functions that do return something must synchronize with the libtorrent main thread, and if it is under heavy load (especially when built in debug mode with invariant checks and no optimization) this synchronization may be noticeable, especially when making many such calls, and often. There are ways to use libtorrent that are more asynchronous in nature, and there is an ongoing effort in minimizing the need for using functions that synchronize. For example, instead of querying the status of all torrents individually, one can subscribe to torrent status updates. Asynchronous notifications are returned via pop_alerts(). As for whether it would interfere with Twisted's epoll, I can't say for sure, but it doesn't seem very likely. I don't think there's much need to interact with libtorrent via another layer of threads, since all of the work is already done in separate threads.
0
1
0
0
2013-03-04T10:30:00.000
1
1.2
true
15,199,448
1
0
0
1
I am looking into building a multi-protocol application using twisted. One of those protocols is bittorrent. Since libtorrent is a fairly complete implementation and its python bindings seems to be a good choice. Now the question is: When using libtorrent with twisted, do I need to worry about blocking? Does the libtorrent networking layer (using boost.asio, a async networking loop) interfere with twisted epoll in any way? Should I perhaps run the libtorrent session in a thread or target a multi-process application design?
AppEngine NDB property validations
15,202,052
0
0
744
0
python,google-app-engine,app-engine-ndb,wtforms,google-cloud-endpoints
Not that this is always the best solution, but you could roll your own. Just pre-define a bunch of properties with regexes/mins and maxes etc. It seems like your properties are straightforward enough that it wouldn't be too difficult.
0
1
0
0
2013-03-04T11:53:00.000
1
0
false
15,200,952
0
0
1
1
I wonder what the best approach is for validating NDB entity properties like: a date must be in the future; a grade (integer property) must be between 1 and 10; a reference to another entity must have certain property values (e.g. book.category.active must be True). I'm also using WTForms to validate submitted requests, but I want to enforce validations also on a lower level, like the datastore entities themselves. So basically what I'm looking for is to call a validate on a datastore entity to see if it's valid or not. In case it's valid I can put the entity to the datastore, but if it's not valid I want to retrieve the invalid properties including the applied validator that did not validate successfully. Another reason WTForms might not be sufficient is that I'm experimenting with the new Cloud Endpoints. In this model I'm receiving the actual entity and not an http request. How are other AppEngine users solving this?
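The "roll your own" suggestion might look like the following sketch: each validator returns an error message or None, and a collecting function reports every failing property at once, matching the asker's wish to retrieve invalid properties together with the reason. All names here are hypothetical:

```python
import datetime

def future_date(value):
    # Rule: a date must lie strictly in the future.
    if value <= datetime.date.today():
        return "date must be in the future"

def in_range(lo, hi):
    # Rule factory: numeric value must fall inside [lo, hi].
    def check(value):
        if not lo <= value <= hi:
            return "must be between {} and {}".format(lo, hi)
    return check

def validate(entity, rules):
    """Return {property_name: error_message} for every failed rule."""
    errors = {}
    for name, rule in rules.items():
        message = rule(entity[name])
        if message is not None:
            errors[name] = message
    return errors
```

An empty dict means the entity may be put to the datastore; otherwise the dict names each invalid property and the rule it broke.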
How to start app without navigating to its directory
15,210,530
1
1
113
0
python,console
http://stackoverflow.com/questions/7054424/python-not-recognised-as-a-command Add python to your environment path. You should then be able to use it anywhere.
0
1
0
0
2013-03-04T20:25:00.000
2
1.2
true
15,210,454
1
0
0
1
I've looked around for an answer to this, but I don't know how to phrase it in a way that Google will understand. I'm trying to learn Python, and I've installed it on my machine. However, when I just type "python" in cmd.exe, the Python app is not found/launched. I have to manually go to the directory in which python.exe is found in order to run my Python commands. Is this normal? Online tutorials seem to indicate that I should be able to run the app from anywhere :s I'm on Win7, and trying to run Python from the Django stack by BitNami.
Can I use uwsgi + (tornado, gevent, etc) at the same time?
15,225,962
2
3
1,970
0
python,django,tornado,uwsgi
uWSGI+gevent is a solid combo, while there is currently no way to run uWSGI with the Tornado API (and as uWSGI dropped support in 1.9 for the callback-based approach, I think we will never see that combo working). The problem you need to solve before starting to work with gevent is ensuring that all of your pieces are gevent-friendly (redis and celery are ok; you need to check your database adapter). After that, simply add --gevent <n> to your uWSGI instance, where <n> is the maximum number of concurrent requests per worker.
0
1
0
0
2013-03-05T13:43:00.000
2
0.197375
false
15,225,320
0
0
1
2
Why? Because I have a Django project that captures data from the user and consumes many web services, displaying the results to the user in order to compare information - something like aggregator websites that search flight tickets via airline web services and show the results in real time in order to compare tickets. Nowadays I'm doing this with a "waiting page", where celery hits the web services while jQuery asks every 5 seconds if all results are ready, and when they are ready it redirects to the results page. What I want to do is not use this "waiting page"; I want to feed the results page in real time as the results are coming, and I want to do it following the best practices - I mean I don't want jQuery to get the results every X seconds to feed the table. I think some coroutine-based Python library can help me with this, but I want to learn more about your experience first and see some examples. I am confused because this part of the project was designed to run asynchronously, I mean, consuming web services with celery chords, but not designed for dispatching the information in real time through the app server. Actual architecture: Python 2.7, Django 1.3, PostgreSQL 9, Celery 3 + Redis, uWSGI, nginx, all hosted on AWS. Thank you in advance.
Can I use uwsgi + (tornado, gevent, etc) at the same time?
45,253,612
0
3
1,970
0
python,django,tornado,uwsgi
I don't know about uWSGI+gevent, but you can use Tornado with uWSGI. Tornado basically gives you inbuilt WSGI support in the tornado.wsgi.WSGIContainer module to make it compatible with other WSGI servers like uWSGI and gunicorn. But it depends on your use, and I think it's not a good idea to use an asynchronous framework with a synchronous server (like uWSGI). Tornado has this warning for this: WSGI is a synchronous interface, while Tornado's concurrency model is based on single-threaded asynchronous execution. This means that running a WSGI app with Tornado's WSGIContainer is less scalable than running the same app in a multi-threaded WSGI server like gunicorn or uwsgi. Use WSGIContainer only when there are benefits to combining Tornado and WSGI in the same process that outweigh the reduced scalability.
0
1
0
0
2013-03-05T13:43:00.000
2
0
false
15,225,320
0
0
1
2
why? because I have a django project that capture data from user and consume many webservices displaying the results to the user in order to compare information, something like aggregators websites who search flights tickets via airlines webservices and show the result in real time in order to compare tickets. nowaday im doing this in a "waiting page", where celery hits webservices while jquery is asking every 5 seconds if all results are ready, so when ready redirect to the results page. what I want to do is not to use this "waiting page", I want to feed the results page in real time as the results are comming, and I want to make it following the best practices, I mean I dont want to jquery get the results each X seconds to feed the table. I think some coroutine-based python library can help me with this, but I want to learn more about your experience first and see some examples, I am confused because this part of the project was designed to run asynchronously, I mean, consuming webservices with celery-chords, but not designed for dispatching the information in real time through the app server. actual architecture: python 2.7, django 1.3, postgresql 9, celery 3 + redis, uwsgi, nginx, all hosted on aws. thank you in advance.
Python script will not run in Task Scheduler for "Run whether user is logged on or not"
15,235,210
0
0
2,604
0
python,automation,task,scheduler
I would try it with the script not in your Users directory
0
1
0
0
2013-03-05T22:06:00.000
3
0
false
15,235,085
0
0
0
1
I have written a python script and wanted to have it run at a set period everyday with the use of Task Scheduler. I have had no problems with Task Scheduler for running programs while logged off, before creating this task. If I select "Run only when user is logged on" my script runs as expected with the desired result and no error code (0x0). If I select "Run whether user is logged on or not" with "Run with highest privileges" and then leave it overnight or log off to test it, it does not do anything and has an error code of 0x1. I have the action to "Start a program" with the Details as follows: Program/script: C:\Python27\python2.7.exe Add arguments: "C:\Users\me\Desktop\test.py" I think it has to do with permissions to use python while logged off but I can't figure this one out. Wondering if anyone has suggestions or experience on this. This is on Windows 7 (fyi) Thanks, JP
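One way to debug a task that only fails when run unattended is to have the script record its own environment to an absolute-path log file on every run. A hypothetical sketch (the log path is up to you — pick any directory the task's account can write to):

```python
# Diagnostic sketch for unattended Task Scheduler runs: append the facts
# that usually differ between interactive and logged-off execution.
import os
import sys

def environment_report():
    """Collect the details that typically change when a task runs unattended."""
    return {
        "executable": sys.executable,   # which python.exe actually ran
        "cwd": os.getcwd(),             # the task's effective 'Start in' directory
        "user": os.environ.get("USERNAME") or os.environ.get("USER", ""),
    }

def write_report(path):
    # Append, so repeated scheduled runs accumulate in one file.
    with open(path, "a") as log:
        for key, value in sorted(environment_report().items()):
            log.write("%s=%s\n" % (key, value))
```

Comparing the logged-on and logged-off entries usually reveals whether the wrong interpreter, working directory, or account is the culprit.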
Invoking scons from a Python script
15,252,510
3
8
2,664
0
python,scons
Well, I guess it's possible in theory. The scons executable itself is just a python script and all it does is execute SCons.Script.main() after modifying the sys.path variable. But you would probably have to start digging deep into the source to really understand how to make it do things you want. A much cleaner solution is calling your script from the SConscript file, which is probably the intended usage and should prove much easier.
0
1
0
0
2013-03-06T14:53:00.000
3
0.197375
false
15,250,538
0
0
0
2
I'm new to scons and Python. I was wondering if there's a way to invoke scons from within a python script. My python script accepts from the user, a list of directories where the code to be compiled together is located (in addition to doing some other non-trivial things). It also generates a string which is to be used as a name of the executable created by scons. I want to pass this information from my python script to scons, and then invoke scons. Is there a simple way to do this? I can think of the following possibilities: use subprocess.call("scons"...) I'm not sure if scons accepts all the info I need to pass as command line arguments have the python script write into a file. Have the SConscript parse the file and get the information passed.
Invoking scons from a Python script
15,271,954
1
8
2,664
0
python,scons
Thanks for the answers. I ended up using this method: the Python script writes a list of options into a text file, closes it, and invokes scons -f MySConscript_file using a subprocess call. The SConscript reads the values from the text file into a list and uses them.
0
1
0
0
2013-03-06T14:53:00.000
3
0.066568
false
15,250,538
0
0
0
2
I'm new to scons and Python. I was wondering if there's a way to invoke scons from within a python script. My python script accepts from the user, a list of directories where the code to be compiled together is located (in addition to doing some other non-trivial things). It also generates a string which is to be used as a name of the executable created by scons. I want to pass this information from my python script to scons, and then invoke scons. Is there a simple way to do this? I can think of the following possibilities: use subprocess.call("scons"...) I'm not sure if scons accepts all the info I need to pass as command line arguments have the python script write into a file. Have the SConscript parse the file and get the information passed.
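The hand-off described in the accepted approach — write the options to a text file, then shell out to scons — can be sketched as below. The `key=value` file format and every name here are illustrative assumptions; only the `scons -f` flag comes from the answer, and scons itself is not run in this sketch:

```python
# Hypothetical sketch of the options-file + subprocess hand-off to scons.
import subprocess

def write_options(path, options):
    # One "key=value" pair per line; the SConscript parses these back in.
    with open(path, "w") as f:
        for key, value in options.items():
            f.write("%s=%s\n" % (key, value))

def build_scons_command(sconscript_path):
    # `scons -f` points scons at a non-default SConscript/SConstruct file.
    return ["scons", "-f", sconscript_path]

def run_build(sconscript_path, options_path, options):
    write_options(options_path, options)
    return subprocess.call(build_scons_command(sconscript_path))
```

Passing the command as a list (rather than one shell string) avoids quoting problems when directory names contain spaces.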
standard way to handle user session in tornado
15,265,556
14
14
14,722
1
python,tornado
Tornado is designed to be stateless and doesn't have session support out of the box. Use secure cookies to store sensitive information like user_id. Use standard cookies to store non-critical information. For storing large objects, use the standard scheme: MySQL + memcache.
0
1
0
0
2013-03-06T17:55:00.000
4
1.2
true
15,254,538
0
0
1
3
So, in order to avoid the "no one best answer" problem, I'm going to ask, not for the best way, but the standard or most common way to handle sessions when using the Tornado framework. That is, if we're not using 3rd party authentication (OAuth, etc.), but rather we have want to have our own Users table with secure cookies in the browser but most of the session info stored on the server, what is the most common way of doing this? I have seen some people using Redis, some people using their normal database (MySQL or Postgres or whatever), some people using memcached. The application I'm working on won't have millions of users at a time, or probably even thousands. It will need to eventually get some moderately complex authorization scheme, though. What I'm looking for is to make sure we don't do something "weird" that goes down a different path than the general Tornado community, since authentication and authorization, while it is something we need, isn't something that is at the core of our product and so isn't where we should be differentiating ourselves. So, we're looking for what most people (who use Tornado) are doing in this respect, hence I think it's a question with (in theory) an objectively true answer. The ideal answer would point to example code, of course.
standard way to handle user session in tornado
16,320,593
17
14
14,722
1
python,tornado
Here's how it seems other micro frameworks handle sessions (CherryPy, Flask for example): Create a table holding session_id and whatever other fields you'll want to track on a per session basis. Some frameworks will allow you to just store this info in a file on a per user basis, or will just store things directly in memory. If your application is small enough, you may consider those options as well, but a database should be simpler to implement on your own. When a request is received (RequestHandler initialize() function I think?) and there is no session_id cookie, set a secure session-id using a random generator. I don't have much experience with Tornado, but it looks like setting a secure cookie should be useful for this. Store that session_id and associated info in your session table. Note that EVERY user will have a session, even those not logged in. When a user logs in, you'll want to attach their status as logged in (and their username/user_id, etc.) to their session. In your RequestHandler initialize function, if there is a session_id cookie, read in whatever session info you need from the DB and perhaps create your own Session object to populate and store as a member variable of that request handler. Keep in mind sessions should expire after a certain amount of inactivity, so you'll want to check for that as well. If you want a "remember me" type log in situation, you'll have to use a secure cookie to signal that (read up on this at OWASP to make sure it's as secure as possible, though again it looks like Tornado's secure_cookie might help with that), and upon receiving a timed out session you can re-authenticate a new user by creating a new session and transferring whatever associated info into it from the old one.
0
1
0
0
2013-03-06T17:55:00.000
4
1
false
15,254,538
0
0
1
3
So, in order to avoid the "no one best answer" problem, I'm going to ask, not for the best way, but the standard or most common way to handle sessions when using the Tornado framework. That is, if we're not using 3rd party authentication (OAuth, etc.), but rather we have want to have our own Users table with secure cookies in the browser but most of the session info stored on the server, what is the most common way of doing this? I have seen some people using Redis, some people using their normal database (MySQL or Postgres or whatever), some people using memcached. The application I'm working on won't have millions of users at a time, or probably even thousands. It will need to eventually get some moderately complex authorization scheme, though. What I'm looking for is to make sure we don't do something "weird" that goes down a different path than the general Tornado community, since authentication and authorization, while it is something we need, isn't something that is at the core of our product and so isn't where we should be differentiating ourselves. So, we're looking for what most people (who use Tornado) are doing in this respect, hence I think it's a question with (in theory) an objectively true answer. The ideal answer would point to example code, of course.
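The per-session bookkeeping in the steps above (random id, inactivity timeout) can be sketched with the standard library alone. The 30-minute timeout, the field names, and storing sessions as dicts are all illustrative assumptions — in a real app the dict would be a database row:

```python
# Hypothetical session bookkeeping sketch: unguessable ids plus an
# inactivity timeout, as described in the answer above.
import os
import time

SESSION_TIMEOUT = 30 * 60  # seconds of inactivity; an arbitrary choice

def new_session_id():
    # 16 random bytes from the OS, hex-encoded: unguessable and URL-safe.
    return os.urandom(16).hex()

def make_session():
    return {"session_id": new_session_id(),
            "logged_in": False,
            "last_seen": time.time()}

def is_expired(session, now=None):
    now = time.time() if now is None else now
    return now - session["last_seen"] > SESSION_TIMEOUT

def touch(session, now=None):
    # Call on every request so active users never expire.
    session["last_seen"] = time.time() if now is None else now
```

The session_id value is what you would put in Tornado's set_secure_cookie(), so the browser never sees any of the server-side fields.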
standard way to handle user session in tornado
16,346,968
4
14
14,722
1
python,tornado
The key issue with sessions is not where to store them but how to expire them intelligently. Regardless of where sessions are stored, as long as the number of stored sessions is reasonable (i.e. only active sessions plus some surplus are stored), all this data will fit in RAM and be served fast. If there is a lot of old junk you can expect unpredictable delays (the need to hit the disk to load a session).
0
1
0
0
2013-03-06T17:55:00.000
4
0.197375
false
15,254,538
0
0
1
3
So, in order to avoid the "no one best answer" problem, I'm going to ask, not for the best way, but the standard or most common way to handle sessions when using the Tornado framework. That is, if we're not using 3rd party authentication (OAuth, etc.), but rather we have want to have our own Users table with secure cookies in the browser but most of the session info stored on the server, what is the most common way of doing this? I have seen some people using Redis, some people using their normal database (MySQL or Postgres or whatever), some people using memcached. The application I'm working on won't have millions of users at a time, or probably even thousands. It will need to eventually get some moderately complex authorization scheme, though. What I'm looking for is to make sure we don't do something "weird" that goes down a different path than the general Tornado community, since authentication and authorization, while it is something we need, isn't something that is at the core of our product and so isn't where we should be differentiating ourselves. So, we're looking for what most people (who use Tornado) are doing in this respect, hence I think it's a question with (in theory) an objectively true answer. The ideal answer would point to example code, of course.
glibc detected *** free(): invalid pointer: Python c++ and Swig
15,264,849
1
1
2,165
0
c++,python,swig
I finally figured it out! I executed the command ulimit -c unlimited. After this I see a core dump, and now I can analyse it via gdb /usr/bin/python2.3 core.31685.
0
1
0
0
2013-03-06T19:14:00.000
2
0.099668
false
15,255,989
0
0
0
1
I have to run some unit tests which are written in Python. We have the code to test in C++, so I compiled it into a shared object and, using SWIG, provided an interface for the Python scripts to call into the necessary APIs to test. Now when I run one of the Python scripts (it is obviously accessing the C++ codebase which I intend to test), I am getting a "glibc detected free(): invalid pointer". I do understand that there is some memory issue — either a double free or I am freeing inaccessible memory. Now what I am requesting from you experts: 1] I am not getting any backtrace (not even a line number); is there any way to know where the issue is happening? I am not getting any info other than the script stopping abruptly at some point and printing something like this: *** glibc detected *** free(): invalid pointer: 0x099e9b28 ***. Can I get a backtrace somehow? By setting some flag maybe? 2] I ran valgrind: "valgrind --leak-check=yes ./myscript.py". I did not get much; some lines from it: glibc detected free(): invalid pointer: 0x099e9b28 ==25728== ==25728== Conditional jump or move depends on uninitialised value(s) ==25728== at 0x625AEA: PyObject_Free (in /usr/lib/libpython2.3.so.1.0) ==25728== by 0x614C7F: (within /usr/lib/libpython2.3.so.1.0) ==25728== by 0x61EA53: (within /usr/lib/libpython2.3.so.1.0). I am not getting anything related to my code, basically. So is there something else I should do with valgrind? 3] I tried printfs; it's getting me nowhere. 4] I tried gdb: prompt>gdb python gdb> set args myscript.py gdb> run. This runs the script; I could not set any breakpoints — it runs and prints the error. No real help. Is there something else I should do with gdb? Any way to set breakpoints? Thanks a lot for any kind of pointer you guys can give me.
How to make Jython work with PIG?
20,953,873
0
2
432
0
python,jython,apache-pig
From my short experience in Pig there are two ways of doing this: you can either place the jar in your Pig's lib folder, somewhere about /usr/share/pig/lib/, or register the jar using its specific location from grunt (Pig shell), using: REGISTER /path/to/your/jar/jython.jar; Once available, register your UDF from grunt using: REGISTER '/path/to/your/udf/udf.py' USING jython as py_udf; And you can use it like this: py_udf.my_method(*) my_method being the name of the python method you created.
0
1
0
0
2013-03-06T22:37:00.000
1
0
false
15,259,499
0
0
1
1
I have jython jar and Pig installed on the server. Have Pig jars as well. Can someone help me out with the proper steps to bundle them so that I can use my Python UDFs ? Thanks
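A minimal udf.py to register this way might look as follows. The @outputSchema decorator is provided by Pig's Jython runtime, so it is shown only in a comment here; the function body itself is plain Python, and the name my_method matches the usage shown in the answer above (the uppercasing behaviour is just an illustrative assumption):

```python
# udf.py -- hypothetical sketch of a Pig Python (Jython) UDF.
# Inside Pig you would decorate the function with the schema, e.g.:
#   @outputSchema('upper:chararray')
def my_method(value):
    # Pig passes None for null fields; pass nulls through unchanged.
    if value is None:
        return None
    return value.upper()
```

After `REGISTER '/path/to/your/udf/udf.py' USING jython as py_udf;` in grunt, a call like `py_udf.my_method(name)` applies this per tuple.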
Controlling the distribution of tests with py.test xdist
25,073,350
0
3
1,185
0
python,testing,nose,pytest
I am not sure if this would help. But if you know ahead of time how you want to divide up your tests, instead of having pytest distribute your tests, you could use your continuous integration server to call a different run of pytest for each different machine. Using -k or -m to select a subset of tests, or simply specifying different test dir paths, you could control which tests are run together.
0
1
0
1
2013-03-06T23:45:00.000
1
0
false
15,260,422
0
0
0
1
I have several thousand tests that I want to run in parallel. The tests are all compiled binaries that give a return code of 0 or non-zero (on failure). Some unknown subsets of them try to use the same resources (files, ports, etc). Each test assumes that it is running independently and just reports a failure if a resources isn't available. I'm using Python to launch each test using the subprocess module, and that works great serially. I looked into Nose for parallelizing, but I need to autogenerate the tests (to wrap each of the 1000+ binaries into Python class that uses subprocess) and Nose's multiprocessing module doesn't support parallelizing autogenerated tests. I ultimately settled on PyTest because it can run autogenerated tests on remote hosts over SSH with the xdist plugin. However, as far as I can tell, it doesn't look like xdist supports any kind of control of how the tests get distributed. I want to give it a pool of N machines, and have one test run per machine. Is what I want possible with PyTest/xdist? If not, is there a tool out there that can do what I'm looking for?
Python stdin filename
15,261,083
0
8
7,109
0
python,filenames,stdin
I don't think it's possible. As far as your Python script is concerned, it's just reading from stdin. The shell opens the file and wires it to stdin before Python even starts, so the filename is consumed by the shell and never reaches the script.
0
1
0
0
2013-03-07T00:27:00.000
4
0
false
15,260,888
1
0
0
1
I'm trying to get the filename that's given on the command line. For example: python3 ritwc.py < DarkAndStormyNight.txt — I'm trying to get DarkAndStormyNight.txt. When I try fileinput.filename() I get back the same as with sys.stdin. Is this possible? I'm not looking for sys.argv[0], which returns the current script name. Thanks!
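The workaround implied by the answers is to pass the filename as an argument (python3 ritwc.py DarkAndStormyNight.txt) instead of redirecting it with `<`; fileinput then does report the name. A small sketch of that usage:

```python
# Sketch: with explicit file arguments, fileinput.filename() works.
# With shell redirection ('< file') the shell opens the file itself,
# so the name never reaches Python and sys.stdin cannot recover it.
import fileinput

def first_line_and_name(paths):
    """Read the first line via fileinput and report which file it came from."""
    fi = fileinput.input(files=paths)
    try:
        line = next(fi)
        return line, fi.filename()
    finally:
        fi.close()
```

In a real script you would pass sys.argv[1:] as `paths`; fileinput falls back to stdin only when no filenames are given.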
Calling Python script from JAVA MySQLdb imports
15,318,731
0
3
534
0
java,python,jakarta-ee
So, I discovered that the issue was with the arguments I was passing in Java to run the Python program. The first argument was python2.6, but it should have been just python, without a version number, because there was a compatibility issue between MySQLdb and that Python version. I finally decided to use the MySQL Python connector instead of MySQLdb in the Python code. It worked like a charm and the problems got solved!
0
1
0
1
2013-03-07T05:33:00.000
1
1.2
true
15,263,854
0
0
1
1
I am calling a Python script from my Java code. This is the code: import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; public class JavaRunCommand { public static void main(String args[]) throws IOException { // set up the command and parameter String pythonScriptPath = "my-path"; String[] cmd = new String[2]; cmd[0] = "python2.6"; cmd[1] = pythonScriptPath; // create runtime to execute external command Runtime rt = Runtime.getRuntime(); Process pr = rt.exec(cmd); // retrieve output from python script BufferedReader bfr = new BufferedReader(new InputStreamReader( pr.getInputStream())); String line = ""; while ((line = bfr.readLine()) != null) { // display each output line from python script System.out.println(line); } } } python.py which works: import os from stat import * c = 5 print c python.py which does not work: import MySQLdb import os from stat import * c = 5 print c # some database code down So, I am at a critical stage where I have a deadline for my startup and I have to show my MVP project to the client, and I was thinking of calling the Python script like this. It works when I am printing anything without the DB connection and the MySQLdb library. But when I include them, it does not run the Python script. What's wrong here? Isn't it supposed to run the process, handling all the inputs? I have MySQLdb installed and the script runs without the Java code. I know this is not the best way to solve the issue, but to show something to the client I need this thing working. Any suggestions?
Does Heroku no longer support Celery?
15,734,662
3
13
6,436
0
python,heroku,celery,amqp
I think there are issues with Celery as a background task on Heroku. We tried to create such tasks and they take up all the memory after running for about 20 minutes, even with DEBUG=False, on Redis or RabbitMQ. Worse still, the memory is NEVER released: we have to restart the worker every time. The same code runs flawlessly on bare Linux or on a Mac with Foreman. It happens with very simple tasks, like reading a text file in a loop and writing to a Django model.
0
1
0
0
2013-03-07T07:21:00.000
5
0.119427
false
15,265,319
0
0
1
1
I was finally getting to the point where I had some free time and wanted to add Celery to my Python/Flask project on Heroku. However, almost all mentions of Celery from the Heroku docs are gone. There used to be article with a tutotial in the "Getting started with Django", but it's gone. Will "just doing it" myself work? What's a good AMQP addon to use as backend on Heroku?
gtk allow background processes
15,276,223
0
0
159
0
python,gtk
If your program has a GTK UI then the program is supposed to be driven by the GTK main loop. You may be able to run a GTK main loop in a secondary thread, but that is not really a supported configuration and you may run into many more problems.
1
1
0
0
2013-03-07T12:58:00.000
1
0
false
15,271,870
0
0
0
1
I'm creating a program in Python, and use GTK for part of it. Whenever GTK opens, it causes the whole program to stop responding and not move on to the next process. Ideally, I would like the GTK window to open independently of the rest of the program, since the GTK window is just to display information and the functionality is text based. Is there any way to continue executing the Python program after the GTK window has opened? I can't find anything in the PyGTK manuals. Any help would be great, thanks.
How to invoke a specific Python version WITHIN a script.py -- Windows
15,279,856
0
8
7,048
0
python,windows,python-2.7
There is no "shebang" notation on Windows. You'll need to change the file association for .py files to use your 2.7.2 installation ("Open With", "Use application as default").
0
1
0
0
2013-03-07T19:15:00.000
3
0
false
15,279,793
1
0
0
1
What line of text should I place at the top of a script.py to invoke the specific version of Python that I need to use? I have two versions of Python on Windows XP, 2.6.5 and 2.7.2. They each have their own special modules and were installed by separate applications. My scripts are placed on the desktop so that I can double-click and run them conveniently. The problem is that all my scripts invoke 2.6.5, which is fine for the scripts that use the modules installed with 2.6.5, but my scripts intended for 2.7.2 don't run. They invoke the Python 2.6.5 without the modules I need to import. I've tried typing various headers without and without the #! to invoke 2.7.2 when I need to, but either my syntax is wrong or it just isn't possible to specify under Windows. Could anyone tell me the precise syntax of the line I need to add to my script. The python.exe for 2.7.2 is stored under C:\OSGeo4W\bin Thanks for letting me know what line to place at the top of a script.py to invoke the exact version of Python I need to use.
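When the file association can't be changed, another workaround is to have the script check which interpreter is running it and re-launch itself with the right one. A hypothetical sketch — the required version and the python.exe path come from the question, everything else is an illustrative assumption, and only the version check runs here (os.execv replaces the current process):

```python
# Sketch: self re-exec under the desired interpreter on Windows.
import os
import sys

REQUIRED = (2, 7)                               # major, minor from the question
TARGET_PYTHON = r"C:\OSGeo4W\bin\python.exe"    # assumed filename under the path given

def wrong_interpreter(version_info, required=REQUIRED):
    # Compare only (major, minor); the micro version doesn't matter here.
    return tuple(version_info[:2]) != required

def reexec_if_needed():
    if wrong_interpreter(sys.version_info):
        # Replace the current process with the desired interpreter,
        # re-running this same script with the same arguments.
        os.execv(TARGET_PYTHON, [TARGET_PYTHON] + sys.argv)
```

Calling reexec_if_needed() at the very top of the script means a double-click under 2.6.5 silently hops over to the 2.7.2 install before any version-specific imports run.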
How to distribute my Python/shell script?
15,283,656
1
5
894
0
python,bash,shell
chmod +x cmd.py, then they can type ./cmd.py. They can also use it in a pipe. I would add that Unix users would probably already know how to make a file executable and run it, so all you'd have to do is make the file available to them. Do make sure they know what version(s) of Python they need to run your script.
0
1
0
1
2013-03-07T22:59:00.000
2
0.099668
false
15,283,483
0
0
0
1
I have written a very simple command line utility for myself. The setup consists of: A single .py file containing the application/source. A single executable (chmod +x) shell script which runs the python script. A line in my .bash_profile which aliases my command like so: alias cmd='. shellscript' (So it runs in the same terminal context.) So effectively I can type cmd to run it, and everything works great. My question is, how can I distribute this to others? Obviously I could just write out these instructions with my code and be done with it, but is there a faster way? I've occasionally seen those one-liners that you paste into your console to install something. How would I do that? I seem to recall them involving curl and piping to sh but I can't remember.
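The .py + shell-script pair can usually be collapsed into one self-executing file: a shebang line plus chmod +x. A hypothetical minimal cmd.py — the env lookup means users don't need your exact interpreter path, and the greeting logic is just a stand-in for the real utility:

```python
#!/usr/bin/env python
# cmd.py -- a hypothetical sketch of a single-file distributable script.
# After `chmod +x cmd.py`, users run it directly as ./cmd.py.
import sys

def greeting(argv):
    # Stand-in for real work: greet the first argument, if any.
    name = argv[1] if len(argv) > 1 else "world"
    return "hello, %s" % name

def main(argv):
    print(greeting(argv))
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv))
```

This removes the separate shell wrapper entirely; the alias in .bash_profile then becomes optional convenience rather than a required install step.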
How bottle return binary files
15,287,095
1
5
5,164
0
python,nginx,uwsgi,bottle
If you are not in a hurry, I suggest you try uWSGI 1.9 (it is still in development but the first stable release will be in 10 days) and use offload-threads = n (set it to the number of CPUs). That way, when you send a file from your app it will be served asynchronously (and non-blocking) by a different thread, immediately freeing your worker. Offloading is available in 1.4 too, but it is not automatic for apps as it is in 1.9.
0
1
0
0
2013-03-07T23:48:00.000
2
0.099668
false
15,284,154
0
0
0
1
I want to make a Bottle Python web service to serve binary files like PDFs, pictures and executables, with authentication. Is it possible to serve all these files using Bottle? I'm having a hard time finding a tutorial for that. How about performance — can Bottle handle hundreds of thousands of downloads simultaneously? I am planning to use it with nginx and uWSGI.
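Bottle itself serves files via bottle.static_file(); the authentication gate around it can be a plain function like the hypothetical sketch below. This is stdlib-only, so the Bottle-specific calls are left as a comment, and the credential names are assumptions:

```python
# Sketch: an HTTP Basic-auth check you might wrap around bottle.static_file().
import base64

def parse_basic_auth(header_value):
    """Return (user, password) from an 'Authorization: Basic ...' header,
    or None if the header is missing or malformed."""
    if not header_value or not header_value.startswith("Basic "):
        return None
    try:
        decoded = base64.b64decode(header_value[len("Basic "):]).decode("utf-8")
        user, _, password = decoded.partition(":")
        return user, password
    except (ValueError, UnicodeDecodeError):
        return None

def is_authorized(header_value, expected_user, expected_password):
    return parse_basic_auth(header_value) == (expected_user, expected_password)

# In a Bottle route you might then do (illustrative sketch only):
#   if not is_authorized(request.headers.get("Authorization"), USER, PW):
#       abort(401, "Authentication required")
#   return static_file(filename, root=FILES_DIR)
```

For heavy download traffic, the usual design is to let this check run in Python and hand the actual byte-pushing to nginx (e.g. via X-Accel-Redirect), which fits the uWSGI offloading advice in the answer above.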
Installed Python from Package, Terminal didn't update?
56,398,790
0
2
2,274
0
python
You have a few options: in your ~/.bash_profile, add alias python='python3'; use the python3 command instead of python; or install Python 3 via Homebrew.
0
1
0
0
2013-03-08T00:08:00.000
3
0
false
15,284,381
1
0
0
1
Terminal is still showing Python 2.7.2 after an install of 3.3.0 I'm new to python- just want to get a good development environment working on Mac 10.8.
Hyde static page site generator - issue with running hyde command from command line
15,325,334
0
0
203
0
python,pip,hyde
I need to have /usr/local/share/python in my PATH.
0
1
0
0
2013-03-08T02:15:00.000
1
0
false
15,285,564
0
0
0
1
I installed Hyde with pip. I can see hyde in /usr/local/share/python. But when I am running hyde from command line, I am getting "Bash - Command not found error". I am on Mac OSX (ML) and python 2.7.3 Please help.
Quick writing to log file after http request
15,291,684
1
0
457
0
php,python,performance,logging
One possible option I can think of is a separate logging process, so that your web.py app is shielded from the performance hit. This is the classical way of handling a logging module. You can use IPC or any other bus communication infrastructure. With this you will be able to address two issues: logging will not be a huge bottleneck for high-capacity call flows, and a separate module can provide a switch-on/off facility. As such, there would not be any significant process memory usage. However, you should bear in mind the points below. You need to be sure that logging is restricted to just logging — it must not become a data store for business processing, else you may have many synchronization problems in your business logic. The logging process (here I mean an actual Unix process) will become critical and slightly complex (i.e. you may have to handle a form of IPC). HTH!
0
1
0
0
2013-03-08T10:01:00.000
1
0.197375
false
15,291,294
0
0
1
1
I recently finished building a web server whose main responsibility is simply to take the contents of the body data in each HTTP POST request and write it to a log file. The contents of the post data are obfuscated when received, so I'm de-obfuscating the post data and writing it to a log file on the server. The contents, once de-obfuscated, are a series of random key-value pairs that differ between every request. It is not fixed data. The server is running Linux with a 2.6+ kernel and is configured to handle heavy traffic (open files limit 32k, etc.). The application is written in Python using the web.py framework. The HTTP server is Gunicorn behind Nginx. After using Apache Benchmark to do some load testing, I noticed that it can handle up to about 600-700 requests per second without any log writing issues; Linux natively does a good job at buffering. Problems start to occur when more than this many requests per second attempt to write to the same file at the same moment: data will not get written and information will be lost. I know that the "write directly to a file" design might not have been the right solution from the get-go. So I'm wondering if anyone can propose a solution that I can implement quickly, without altering too much infrastructure and code, that can overcome this problem? I have read about in-memory storage like Redis, but I realized that if data is sitting in memory during a server failure then that data is lost. I have read in the docs that Redis can be configured as a persistent store; there just needs to be enough memory on the server for Redis to do it. This solution would mean I would have to write a script that dumps the data from Redis (memory) to the log file at a certain interval. Is there an even quicker solution? Any help would be greatly appreciated!
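A lighter-weight variant of the "separate logging process" idea above, using the standard library's queue-based handlers: request handlers drop records on an in-memory queue and return immediately, while a single listener thread drains the queue and does the slow file write. A sketch (in real use you would pass a FileHandler as the sink, and this trades durability for speed exactly as the Redis discussion notes — queued records die with the process):

```python
# Sketch: decouple log producers from the disk write with
# logging.handlers.QueueHandler / QueueListener (Python 3.2+ stdlib).
import logging
import logging.handlers
import queue

def build_decoupled_logger(sink_handler):
    """Return (logger, listener): the logger enqueues records without
    blocking, and the listener thread drains them into sink_handler."""
    q = queue.Queue(-1)  # unbounded
    logger = logging.getLogger("post_log")
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.handlers.QueueHandler(q))
    listener = logging.handlers.QueueListener(q, sink_handler)
    listener.start()  # remember to call listener.stop() on shutdown
    return logger, listener
```

Because only the one listener thread ever touches the file, the concurrent-writers contention described in the question disappears, at the cost of the queue's memory and the in-flight records on a crash.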
When to transition from Datastore to NDB?
15,311,815
0
1
151
0
google-app-engine,python-2.7,google-cloud-datastore,gql
To add to Dan's correct answer, remember ndb and the older db are just APIs, so you can seamlessly begin switching to ndb without worrying about schema changes etc. Your question asks about switching from the datastore to NDB, but you're not switching away from the datastore — NDB still uses the datastore. Make sense?
0
1
0
0
2013-03-08T17:35:00.000
2
0
false
15,299,975
0
0
1
1
From what I have heard, it is better to move to NDB from Datastore. I would be doing that eventually since I hope my website will be performance intensive. The question is when. My project is in its early stages. Is it better to start in NDB itself? Does NDB take care of Memcache also. So I don't need to have an explict Memcache layer?
In homebrew how do I change the python3 symlink to only "python"
15,304,867
8
13
11,485
0
python,symlink,homebrew
You definitely do not want to do this! You may only care about Python 3, but many people write code that expects python to symlink to Python 2. Changing this can seriously mess your system up.
0
1
0
0
2013-03-08T23:01:00.000
5
1
false
15,304,785
0
0
0
2
I want to install python using homebrew and I noticed there are 2 different formulas for it, one for python 2.x and another for 3.x. The first symlinks "python" and the other uses "python3". so I ran brew install python3. I really only care about using python 3 so I would like the default command to be "python" instead of having to type "python3" every time. Is there a way to do this? I tried brew switch python 3.3 but I get a "python is not found in the Cellar" error.
In homebrew how do I change the python3 symlink to only "python"
42,743,923
1
13
11,485
0
python,symlink,homebrew
As mentioned, this is not the best idea. However, the simplest thing to do is just run python3 in the terminal whenever you need Python 3.
0
1
0
0
2013-03-08T23:01:00.000
5
0.039979
false
15,304,785
0
0
0
2
I want to install python using homebrew and I noticed there are 2 different formulas for it, one for python 2.x and another for 3.x. The first symlinks "python" and the other uses "python3". so I ran brew install python3. I really only care about using python 3 so I would like the default command to be "python" instead of having to type "python3" every time. Is there a way to do this? I tried brew switch python 3.3 but I get a "python is not found in the Cellar" error.
Default working directory for Python IDLE?
49,071,459
0
11
37,378
0
python,python-idle
Here's a way to reset IDLE's default working directory for macOS if you launch IDLE as an application by double-clicking it. You need a different solution if you launch IDLE from a command line in Terminal. This solution is a permanent fix: you don't have to re-change the directory every time you launch IDLE. I wish it were easier. The idea is to edit a resource file inside of the IDLE package in Applications. Start by finding the file. In Finder, go to IDLE in Applications (in the Python folder) as if you wanted to open it. Right-click and select "Show Package Contents". Open Contents, then open Resources. In Resources, you'll see a file called idlemain.py. This file executes when you launch IDLE and sets, among other things, the working directory. We're going to edit that. But before you can edit it, you need to give yourself permission to write to it. To do that, right-click on idlemain.py and select Get Info. Scroll to the bottom of the Get Info window and you'll see the Sharing & Permissions section. On the bottom right there's a lock icon. Click the lock and follow the prompts to unlock it. Once it's unlocked, look to the left for the + (under the list of users with permissions). Click it. That will bring up a window with a list of users you can add. Select yourself (probably the name of your computer or your user account) and click Select. You'll see yourself added to the list of names with permissions. Click where it says "Read only" next to your name and change it to "Read & Write". Be careful not to change anything else. When you're done, click the lock again to lock the changes. Now go back to idlemain.py and open it with any text editor (you could use IDLE, TextEdit, or anything). Right under the import statement at the top is the code to change the default working directory. Read the comment if you like, then replace the single line of code under the comment with os.chdir('path of your desired working directory'). Mine looks like this: os.chdir('/Users/MyName/Documents/Python'). Save your changes (which should work because you gave yourself permission). Next time you start IDLE, you should be in your desired working directory. You can check with the following commands: import os; os.getcwd()
0
1
0
0
2013-03-12T17:11:00.000
9
0
false
15,367,688
0
0
0
3
Is there a configuration file where I can set its default working directory? It currently defaults to my home directory, but I want to set it to another directory when it starts. I know I can do "import os" followed by "os.chdir("")" but that's kind of troublesome. It'd be great if there is a conf file that I can edit and change that setting, but I am unable to find it. In particular, I've looked into my OS (Ubuntu)'s desktop entry '/usr/share/applications/idle-python3.2.desktop', which doesn't contain a conf file, but points to '/usr/lib/python3.2/idlelib/PyShell.py', which points to config-*.def conf files under the same folder, with 'config-main.def' being the most likely candidate. However I am unable to find where the default path is specified or how it can be changed. It seems that the path is hard-coded in PyShell.py, though I could be wrong with my limited knowledge on Python. I will keep looking, but would appreciate it if somebody knows the answer on top of his or her head. Thanks in advance.
Default working directory for Python IDLE?
54,316,970
-1
11
37,378
0
python,python-idle
This ought to be the number one answer. I have been playing around with this for an hour or more and nothing worked. Paul explains this perfectly. It's just like the PATH statement in Windows. I successfully imported a module by appending my personal "PythonModules" path/dir on my Mac (starting at "/users/etc") using a simple import xxxx command in IDLE.
0
1
0
0
2013-03-12T17:11:00.000
9
-0.022219
false
15,367,688
0
0
0
3
Is there a configuration file where I can set its default working directory? It currently defaults to my home directory, but I want to set it to another directory when it starts. I know I can do "import os" followed by "os.chdir("")" but that's kind of troublesome. It'd be great if there is a conf file that I can edit and change that setting, but I am unable to find it. In particular, I've looked into my OS (Ubuntu)'s desktop entry '/usr/share/applications/idle-python3.2.desktop', which doesn't contain a conf file, but points to '/usr/lib/python3.2/idlelib/PyShell.py', which points to config-*.def conf files under the same folder, with 'config-main.def' being the most likely candidate. However I am unable to find where the default path is specified or how it can be changed. It seems that the path is hard-coded in PyShell.py, though I could be wrong with my limited knowledge on Python. I will keep looking, but would appreciate it if somebody knows the answer on top of his or her head. Thanks in advance.
Default working directory for Python IDLE?
15,367,752
0
11
37,378
0
python,python-idle
It can change depending on where you installed Python. Open up IDLE, import os, then call os.getcwd() and that should tell you exactly which directory IDLE is currently working in.
0
1
0
0
2013-03-12T17:11:00.000
9
0
false
15,367,688
0
0
0
3
Is there a configuration file where I can set its default working directory? It currently defaults to my home directory, but I want to set it to another directory when it starts. I know I can do "import os" followed by "os.chdir("")" but that's kind of troublesome. It'd be great if there is a conf file that I can edit and change that setting, but I am unable to find it. In particular, I've looked into my OS (Ubuntu)'s desktop entry '/usr/share/applications/idle-python3.2.desktop', which doesn't contain a conf file, but points to '/usr/lib/python3.2/idlelib/PyShell.py', which points to config-*.def conf files under the same folder, with 'config-main.def' being the most likely candidate. However I am unable to find where the default path is specified or how it can be changed. It seems that the path is hard-coded in PyShell.py, though I could be wrong with my limited knowledge on Python. I will keep looking, but would appreciate it if somebody knows the answer on top of his or her head. Thanks in advance.
Embedded scripting language with C/C++ API for multithreading environment
15,369,077
7
0
338
0
c++,python,c,scripting,lua
Reconsider Lua: Thread friendly without a global interpreter lock: yes. Cheap interpreter-instance creation: yes; Lua does not create any OS threads at all (so there is no GC thread per lua_State). Garbage collection does not start until you've created lots of objects, and you can simply turn it off; to destroy all variables once the script is executed, simply close the state. Fast with a low memory footprint: yes. Windows, GNU/Linux and Mac OS X: yes.
0
1
0
0
2013-03-12T17:45:00.000
1
1
false
15,368,366
1
0
0
1
I'm looking for an embedded scripting language. I don't need anything fancy, just basic constructs like conditionals, loops, logic and arithmetic operations etc. I have the following requirements Thread friendly - i.e. without "global interpreter lock" (python is out for this reason) Cheap "interpreter instance" creation - I will have potentially 100s of these. I understand that lua creates a separate gc thread per every Lua_State which means lua is out. No gc or refcounting or any other "on the fly" memory management. It should simply destroy any variables once the script is executed. Again both python and lua are out. And of course it should be fast and have low memory footprint. Should work on windows, GNU/Linux and MacOS X Any help is highly appreciated.
Is it possible to get output from windows application through wine?
16,091,620
1
4
172
0
python,winapi,wine
So you're controlling a Windows downloader running in Wine. Is this downloader graphical? Is this icon in a window or what? Assuming yes to both: If your Python application is running natively in *nix (not Wine), the only sure way is to take a screenshot around the cursor and use Optical Character Recognition to recognize the numbers in the image :-). This is because under Wine, not every Windows window is an X11 window. If you are running your application on a Windows version of Python installed in Wine, you're in luck. Tooltips are just windows - you should be able to iterate over all windows in the downloader, and get the contained text.
0
1
0
0
2013-03-13T04:27:00.000
1
1.2
true
15,377,078
0
0
0
1
I'm writing a python GUI for a downloader in windows. Currently I can wine that application in some way to download things from the website. I want to write a GUI which calls the downloader so that it's easier for myself to use it. So one important thing for my GUI is to display the progress. When the downloader is running using wine, if I move the cursor onto the icon, it will display progress in percentage. That's the number I want for my code. So is there any way that I can get that information through some kind of API of wine?
OAuth to authenticate my app and allow it to access data at Google App Engine
15,399,334
0
2
556
0
python,google-app-engine,rest,mobile,oauth
I see you say you're not using Endpoints, but not why. It's likely the solution you want, as it's designed precisely for same-party (i.e. you own the backend and the client application) use cases, which is exactly what you've described.
0
1
0
0
2013-03-13T10:02:00.000
2
1.2
true
15,382,094
0
0
1
1
I have a web server at Google App Engine and I want to protect my data. My mobile app will access the server to get this data. The idea is with OAuth authenticate my app, when it requests some data via REST. After the first authentication, the app will always be able to access the content. I don't want user's data, as Google Account or Facebook. My mobile app will assume the role of user to my services. Is it possible? Someone has another idea to create these structure? I'm not using Google End Point and my GAE is developed with Python. Thank you in advance! Regards, Mario Jorge Valle
Windows "open with" python py2exe application
15,393,682
1
0
262
0
python,python-2.7,py2exe
Yep, the path to the file gets passed in as a command-line argument and can be accessed via sys.argv[1].
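A minimal sketch of how that looks on the Python side (the helper name is made up; py2exe itself doesn't change how sys.argv works):

```python
import sys

def opened_file(argv):
    # Under "Open with", Windows runs:  C:\myapp.exe C:\path\to\file
    # so the chosen file's path arrives as the first command-line argument.
    return argv[1] if len(argv) > 1 else None

if __name__ == "__main__":
    path = opened_file(sys.argv)
    print("Opening:", path)
```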
0
1
0
0
2013-03-13T18:10:00.000
2
0.099668
false
15,393,202
1
0
0
1
I wonder how the Windows "Open file with..." feature works. Or rather, how would do if I write a program in python, compile a executable with py2exe and then want to be able to open certain files in that program by right-clicking and choose it in "Open with". Is the file simply passed as an argument, like "CMD>C:/myapp.exe file"?
How to make your own commands
15,393,788
1
1
240
0
python,linux
Put the program in a directory that is on your $PATH, such as /usr/local/bin (the conventional place for your own commands); those directories are where Linux searches for commands.
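A sketch of the mechanism, using a temporary directory as a stand-in for /usr/local/bin (the script name and contents here are placeholders, not the asker's actual search script):

```python
import os
import shutil
import stat
import tempfile

# Stand-in for /usr/local/bin; in real use you'd copy your script there instead.
bindir = tempfile.mkdtemp()
script = os.path.join(bindir, "search")
with open(script, "w") as f:
    f.write("#!/usr/bin/env python\nprint('searching...')\n")
# Mark the file executable so the shell will run it.
os.chmod(script, os.stat(script).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

# Commands are resolved by searching the PATH directories in order.
os.environ["PATH"] = bindir + os.pathsep + os.environ["PATH"]
print(shutil.which("search"))
```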
0
1
0
0
2013-03-13T18:32:00.000
2
0.099668
false
15,393,622
0
0
0
1
I am new to linux. I made a python script that takes two input Input 1> directory path ex:- ~/home/user/apps Input 2> file path File contains pattern in each line And the output of script is all the file that matches the pattern and are in directory or in subdirectories of input directory path. Now using this python script I want to make a command in Linux like: core_dump@core_dump-VPCCB15FG:~/python$search directory_path file_path
How to run a Python script from IDLE command line?
61,899,486
1
5
16,394
0
python,python-idle
If what you mean is executing in Python IDLE's interactive shell instead of the command prompt or command line, then I usually use this approach: python -m idlelib.idle -r "C:/dir1/dir2/Your script.py" It works well for me. Tested on Windows 10, Python 3.7.3. Please ensure that you have added your desired Python version to your PATH environment variable.
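For the bash source-style behaviour inside an already-open shell window, exec'ing the file's contents is the closest equivalent. A self-contained sketch (the temporary file stands in for your script; in IDLE you'd just pass your real path to open):

```python
import os
import tempfile

# Stand-in for your script file.
script_path = os.path.join(tempfile.mkdtemp(), "myscript.py")
with open(script_path, "w") as f:
    f.write("greeting = 'hello from script'\n")

# In an interactive shell this behaves like `source`: names defined by
# the script land in the current namespace.
exec(open(script_path).read())
print(greeting)
```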
0
1
0
0
2013-03-14T01:10:00.000
3
0.066568
false
15,399,444
1
0
0
1
In a bash shell, I can use 'bash ' or 'source ' to invoke a script by hand. Can I do the similar thing in the Python IDLE's interactive shell? I know I can go to File >> Open Module, then run it in a separate window, but that's troublesome.
Python: get MAC address of default gateway
15,407,559
2
3
5,009
0
python,linux,ip,mac-address,arp
You can read from /proc/net/arp and parse the content; that will give you pairs of known IP and MAC addresses. The gateway is probably known at all times; if not, ping it and an ARP request will be generated automatically. You can find the default gateway in /proc/net/route.
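A sketch of that parsing, run here on sample file contents so it is self-contained (on a real Linux box you would read /proc/net/route and /proc/net/arp directly; the sample rows are made up):

```python
import socket
import struct

def default_gateway(route_text):
    # /proc/net/route: the default route has destination 00000000; the
    # gateway field is a hex IPv4 address in host (little-endian) byte order.
    for line in route_text.splitlines()[1:]:
        fields = line.split()
        if len(fields) >= 3 and fields[1] == "00000000":
            return socket.inet_ntoa(struct.pack("<I", int(fields[2], 16)))
    return None

def mac_for_ip(arp_text, ip):
    # /proc/net/arp columns: IP, HW type, flags, HW address, mask, device.
    for line in arp_text.splitlines()[1:]:
        fields = line.split()
        if fields and fields[0] == ip:
            return fields[3]
    return None

route_sample = ("Iface\tDestination\tGateway\tFlags\n"
                "eth0\t00000000\t0101A8C0\t0003\n")
arp_sample = ("IP address  HW type  Flags  HW address         Mask  Device\n"
              "192.168.1.1  0x1     0x2    aa:bb:cc:dd:ee:ff  *     eth0\n")

gw = default_gateway(route_sample)
print(gw, mac_for_ip(arp_sample, gw))
```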
0
1
1
0
2013-03-14T10:59:00.000
3
0.132549
false
15,407,354
0
0
0
2
Is there any quick way in Python to get the MAC address of the default gateway? I can't make any ARP requests from the Linux machine I'm running my code on, so it has to come directly from the ARP table.
Python: get MAC address of default gateway
15,407,512
2
3
5,009
0
python,linux,ip,mac-address,arp
Are you using Linux? You could parse the /proc/net/arp file. It contains the HW address of your gateway.
0
1
1
0
2013-03-14T10:59:00.000
3
0.132549
false
15,407,354
0
0
0
2
Is there any quick way in Python to get the MAC address of the default gateway? I can't make any ARP requests from the Linux machine I'm running my code on, so it has to come directly from the ARP table.
Persistent in-memory Python object for nginx/uwsgi server
45,383,617
1
8
4,976
0
python,optimization,nginx,redis,uwsgi
"python in-memory data will NOT be persisted across all requests to my knowledge, unless I'm terribly mistaken." You are mistaken. The whole point of using uwsgi over, say, the CGI mechanism is to persist data across requests and save the overhead of initialization for each call. You must set processes = 1 in your .ini file, or, depending on how uwsgi is configured, it might launch more than 1 worker process on your behalf. Log the env and look for 'wsgi.multiprocess': False and 'wsgi.multithread': True, and all uwsgi.core threads for the single worker should show the same data. You can also see how many worker processes, and "core" threads under each, you have by using the built-in stats server. That's why uwsgi provides lock and unlock functions for manipulating data stores shared by multiple threads. You can easily test this by adding a /status route in your app that just dumps a JSON representation of your global data object, and viewing it every so often after actions that update the store.
0
1
0
1
2013-03-15T23:31:00.000
4
0.049958
false
15,443,732
0
0
1
2
I doubt this is even possible, but here is the problem and proposed solution (the feasibility of the proposed solution is the object of this question): I have some "global data" that needs to be available for all requests. I'm persisting this data to Riak and using Redis as a caching layer for access speed (for now...). The data is split into about 30 logical chunks, each about 8 KB. Each request is required to read 4 of these 8KB chunks, resulting in 32KB of data read in from Redis or Riak. This is in ADDITION to any request-specific data which would also need to be read (which is quite a bit). Assuming even 3000 requests per second (this isn't a live server so I don't have real numbers, but 3000ps is a reasonable assumption, could be more), this means 96KBps of transfer from Redis or Riak in ADDITION to the already not-insignificant other calls being made from the application logic. Also, Python is parsing the JSON of these 8KB objects 3000 times every second. All of this - especially Python having to repeatedly deserialize the data - seems like an utter waste, and a perfectly elegant solution would be to just have the deserialized data cached in an in-memory native object in Python, which I can refresh periodically as and when all this "static" data becomes stale. Once in a few minutes (or hours), instead of 3000 times per second. But I don't know if this is even possible. You'd realistically need an "always running" application for it to cache any data in its memory. And I know this is not the case in the nginx+uwsgi+python combination (versus something like node) - python in-memory data will NOT be persisted across all requests to my knowledge, unless I'm terribly mistaken. 
Unfortunately this is a system I have "inherited" and therefore can't make too many changes in terms of the base technology, nor am I knowledgeable enough of how the nginx+uwsgi+python combination works in terms of starting up Python processes and persisting Python in-memory data - which means I COULD be terribly mistaken with my assumption above! So, direct advice on whether this solution would work + references to material that could help me understand how the nginx+uwsgi+python would work in terms of starting new processes and memory allocation, would help greatly. P.S: Have gone through some of the documentation for nginx, uwsgi etc but haven't fully understood the ramifications per my use-case yet. Hope to make some progress on that going forward now If the in-memory thing COULD work out, I would chuck Redis, since I'm caching ONLY the static data I mentioned above, in it. This makes an in-process persistent in-memory Python cache even more attractive for me, reducing one moving part in the system and at least FOUR network round-trips per request.
Persistent in-memory Python object for nginx/uwsgi server
45,384,113
1
8
4,976
0
python,optimization,nginx,redis,uwsgi
You said nothing about writing this data back; is it static? In that case, the solution is very simple, and I have no clue what is up with all the "it's not feasible" responses. uWSGI workers are always-running applications, so data absolutely gets persisted between requests. All you need to do is store stuff in a global variable; that is it. And remember it's per-worker, and workers do restart from time to time, so you need proper loading/invalidation strategies. If the data is updated very rarely (rarely enough to restart the server when it does), you can save even more. Just create the objects during app construction. This way, they will be created exactly once, and then all the workers will fork off the master and reuse the same data. Of course, it's copy-on-write, so if you update it, you will lose the memory benefits (the same thing can happen if Python touches the objects during a gc run, so it's not super predictable).
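A minimal sketch of the per-worker global cache both answers describe. All names, the TTL, and the loader are illustrative; the loader stands in for the real Redis/Riak read plus JSON parse:

```python
import json
import time

_CACHE = {"data": None, "loaded_at": 0.0}
TTL_SECONDS = 300  # refresh the "static" data every few minutes, not per request

def load_static_data():
    # Stand-in for the real Redis/Riak fetch and JSON deserialization.
    return json.loads('{"chunk_1": [1, 2, 3]}')

def get_static_data():
    # Module-level state persists for the life of the uWSGI worker process,
    # so deserialization happens once per TTL instead of 3000 times a second.
    now = time.time()
    if _CACHE["data"] is None or now - _CACHE["loaded_at"] > TTL_SECONDS:
        _CACHE["data"] = load_static_data()
        _CACHE["loaded_at"] = now
    return _CACHE["data"]
```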
0
1
0
1
2013-03-15T23:31:00.000
4
0.049958
false
15,443,732
0
0
1
2
I doubt this is even possible, but here is the problem and proposed solution (the feasibility of the proposed solution is the object of this question): I have some "global data" that needs to be available for all requests. I'm persisting this data to Riak and using Redis as a caching layer for access speed (for now...). The data is split into about 30 logical chunks, each about 8 KB. Each request is required to read 4 of these 8KB chunks, resulting in 32KB of data read in from Redis or Riak. This is in ADDITION to any request-specific data which would also need to be read (which is quite a bit). Assuming even 3000 requests per second (this isn't a live server so I don't have real numbers, but 3000ps is a reasonable assumption, could be more), this means 96KBps of transfer from Redis or Riak in ADDITION to the already not-insignificant other calls being made from the application logic. Also, Python is parsing the JSON of these 8KB objects 3000 times every second. All of this - especially Python having to repeatedly deserialize the data - seems like an utter waste, and a perfectly elegant solution would be to just have the deserialized data cached in an in-memory native object in Python, which I can refresh periodically as and when all this "static" data becomes stale. Once in a few minutes (or hours), instead of 3000 times per second. But I don't know if this is even possible. You'd realistically need an "always running" application for it to cache any data in its memory. And I know this is not the case in the nginx+uwsgi+python combination (versus something like node) - python in-memory data will NOT be persisted across all requests to my knowledge, unless I'm terribly mistaken. 
Unfortunately this is a system I have "inherited" and therefore can't make too many changes in terms of the base technology, nor am I knowledgeable enough of how the nginx+uwsgi+python combination works in terms of starting up Python processes and persisting Python in-memory data - which means I COULD be terribly mistaken with my assumption above! So, direct advice on whether this solution would work + references to material that could help me understand how the nginx+uwsgi+python would work in terms of starting new processes and memory allocation, would help greatly. P.S: Have gone through some of the documentation for nginx, uwsgi etc but haven't fully understood the ramifications per my use-case yet. Hope to make some progress on that going forward now If the in-memory thing COULD work out, I would chuck Redis, since I'm caching ONLY the static data I mentioned above, in it. This makes an in-process persistent in-memory Python cache even more attractive for me, reducing one moving part in the system and at least FOUR network round-trips per request.
Is it possible to create Python-based Application in Xcode or equivalent?
15,469,982
0
3
3,884
0
python,xcode,interface
Open Automator, then:
1. Choose "Application".
2. Drag a "Run Shell Script" onto the workflow panel.
3. Choose "/usr/bin/python" as the shell.
4. Paste in your script, and select Pass Input: "to stdin".
Or, choose bash as the shell, and simply have the Automator script run your Python script with Pass Input: "as arguments" selected on the top right. You'll then use the contents of $@ as your arguments. Save the application. Done. You have a .app onto which files can be dragged.
1
1
0
0
2013-03-18T04:31:00.000
3
1.2
true
15,469,799
0
0
0
1
So I have a lot of python scripts that I have written for my work but no one in my lab knows how to use Python so I wanted to be able to generate a simple Mac App where you can 'Browse' for a file on your computer and type in the name of the file that you want to save . . . everything else will be processed by the application for the python script I have generated. Does anyone know if this is possible? I watched some tutorials on people generating applications in Xcode with Objective C but I don't want to have to learn a new language to reconstruct my Python scripts. Thank you
Message queue with random read\write access to queue elements before dequeing (rabbitmq or)
15,501,068
0
0
875
0
python,rabbitmq,amqp
What you're describing sounds like a pretty typical middleware pipeline. While that achieves the same effect of modifying messages before they are delivered to their intended consumer, it doesn't work by accessing queues. The basic idea is that all messages first go into a special queue where they are delivered to the middleware. The middleware then composes a new message, based on the one it just received, and publishes that to the intended recipient's queue.
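Stripped of the AMQP plumbing, the shape of that pipeline is just the following. All names here are hypothetical, and the publish callback stands in for your AMQP client's publish call:

```python
def transform(message):
    # Compose a new message based on the one just consumed from the intake queue.
    return {**message, "body": message["body"].upper(), "seen_by": "middleware"}

def run_pipeline(incoming, publish):
    # Consume from the intake queue, rewrite, re-publish to the recipient's queue.
    for message in incoming:
        publish(transform(message))

delivered = []
run_pipeline([{"id": 1, "body": "hello"}], delivered.append)
print(delivered)
```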
0
1
0
0
2013-03-18T19:06:00.000
1
0
false
15,484,850
0
0
0
1
I use RabbitMQ in Python via amqplib. I am trying to use AMQP for something more than just a queue, if that's possible: searching messages by ID, modifying them before dequeuing, deleting them from the queue before dequeuing. These operations are used to store/update a real users queue for a balancer, and that queue could be updated asynchronously when a real user's state changes (for example, if a user is dead, his AMQP message must be deleted; or if a user changed his state, every such change must be reflected in the users' AMQP queue, in the appropriate user's AMQP message), before the real dequeuing of a message happens. My questions are the following: Is there a way through amqplib to modify an AMQP message body in some queueN before it is dequeued, searching for it by some ID in its header? I mean, I want to modify the message body before it is dispatched to the receiver. Is there a way for a worker to pop exactly 5 (any number of) last messages from queueN via amqplib? Can I asynchronously delete a message from queueN before it is dequeued, so that its neighbors take its place in queueN? What is the way for a message ID1 from queueN to get its real current queue position, counted from the beginning of queueN? Does AMQP store/update the real queue position for any message? Thanks in advance. UPDATE: according to the RabbitMQ documentation, there are problems with such random access to messages in an AMQP queue. Please advise another suitable queue solution in Python which supports fast asynchronous access to its elements: searching a message by its body, updating/deleting queue messages, and getting a fast queue index for any queue message. We tried a deque plus an additional dict with user_info, but in that case we need to lock the deque+dict on each update to avoid race conditions. The main purpose is to serve a load balancer's queue and get rid of blocking when counting changes in the queue.
Build package for OSX when on Windows (Python 3.3, tkinter)
15,486,937
0
0
98
0
python
I am pretty sure the macOS build tools (Xcode et al.) exist only on Apple platforms, and there is no business rationale why Apple would have ported them to Windows. So the probable answer is "buy a Mac".
0
1
0
0
2013-03-18T19:47:00.000
1
0
false
15,485,567
1
0
0
1
Given that the code has been written indepdently of platform, how do I build a package for MAC OS when I am on Windows and the package has been successfully built there? I can use python setup.py bdist_msi on windows, but not python setup.py bdist_dmg, since I am not on MAC. What to do about that? Python 3.3, tkinter, cxFreeze, Windows 8.
How to run python in different directory?
15,487,877
1
2
1,363
0
python,linux
Run /aaa/python2.5 python_code.py, or, if /aaa/python2.5 is the installation directory rather than the interpreter binary, the interpreter inside it (e.g. /aaa/python2.5/bin/python python_code.py). If you use Python 2.5 more often, consider changing the $PATH variable to make Python 2.5 the default.
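For the module-search part of the question, the same idea applies to sys.path: whichever directory comes first wins. A self-contained sketch, where a temporary directory stands in for /aaa/python2.5/lib and a tiny generated bbb.py stands in for the real module:

```python
import os
import sys
import tempfile

# Stand-in for /aaa/python2.5/lib containing the bbb module you want.
libdir = tempfile.mkdtemp()
with open(os.path.join(libdir, "bbb.py"), "w") as f:
    f.write("where = 'custom lib'\n")

# Putting the directory first on sys.path (or in PYTHONPATH) makes it win
# over any same-named module installed elsewhere, e.g. under /usr/ccc/.
sys.path.insert(0, libdir)
import bbb
print(bbb.__file__)
```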
0
1
0
1
2013-03-18T22:06:00.000
3
0.066568
false
15,487,848
1
0
0
1
I am doing maintenance for a python code. Python is installed in /usr/bin, the code installed in /aaa, a python 2.5 installed under /aaa/python2.5. Each time I run Python, it use /usr/bin one. How to make it run /aaa/python2.5? Also when I run Python -v; import bbb; bbb.__file__; it will automatically show it use bbb module under /usr/ccc/(don't know why), instead of use bbb module under /aaa/python2.5/lib How to let it run python2.5 and use `/aaa/python2.5/lib' module? The reason I asking this is if we maintain a code, but other people is still using it, we need to install the code under a new directory and modify it, run it and debug it.
Pymodbus (Serial) over a tcp serial connection
15,494,099
0
0
1,866
0
python,serial-port,tty,modbus
There is no straightforward way to trick your Linux server into thinking that a Modbus RTU connection is actually a Modbus TCP connection. In all cases, your modem will have to transfer data from TCP to serial (and the other way around). So I assume that either: 1) somehow you can program your modem and instruct it to do whatever you want, or 2) the manufacturer of the modem has provided a built-in mechanism for this. If 1): you should program your modem so that it can replace TCP ADUs with RTU ADUs (and the other way around) when copying data between the TCP connection and the RS-232 link. If 2): simply provide your RTU frame to whatever API the manufacturer devised.
0
1
0
1
2013-03-19T00:20:00.000
2
0
false
15,489,371
0
0
0
2
I will be creating a connection between my Linux server and a cellular modem where the modem will act as a server for serial over TCP. The modem itself is connected to a modbus device (industrial protocol) via an RS232 connection. I would like to use pymodbus to facilitate talking to the end modbus device. However, I cannot use the TCP modbus option in PyModbus as the end device speaks serial modbus (Modbus RTU). And I cannot use the serial modbus option in Pymodbus as it expects to open an actual local serial port (tty device) on the linux server. How can I bridge the serial connection such that the pymodbus library will see the connection as a local serial device?
Pymodbus (Serial) over a tcp serial connection
16,742,894
0
0
1,866
0
python,serial-port,tty,modbus
I actually was working on something similar and decided to make my own Serial/TCP bridge. Using virtual serial ports to handle the communication with each of the modems. I used the minimalmodbus library although I had to modify it a little in order to handle the virtual serial ports. I hope you solved your problem and if you didn't I can try to help you out.
0
1
0
1
2013-03-19T00:20:00.000
2
0
false
15,489,371
0
0
0
2
I will be creating a connection between my Linux server and a cellular modem where the modem will act as a server for serial over TCP. The modem itself is connected to a modbus device (industrial protocol) via an RS232 connection. I would like to use pymodbus to facilitate talking to the end modbus device. However, I cannot use the TCP modbus option in PyModbus as the end device speaks serial modbus (Modbus RTU). And I cannot use the serial modbus option in Pymodbus as it expects to open an actual local serial port (tty device) on the linux server. How can I bridge the serial connection such that the pymodbus library will see the connection as a local serial device?
Create a fake TTY device from a serial-over TCP connection
15,680,046
1
1
1,308
0
python,serial-port,tty,modbus
If I understand correctly, you need a connection of this form: [pyModbus <-(fake serial)-> process] <-(tcp/ip)-> [modem <-(serial)-> device] I suggest using socat for this.
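socat can create exactly that fake-serial end (a pty linked to a TCP connection). For reference, here is the same idea sketched by hand in Python; all names are illustrative, this is a sketch rather than production code, and pymodbus is simply handed the returned pty path as if it were a real serial device:

```python
import os
import pty
import select
import socket
import threading
import tty

def open_virtual_serial(sock):
    """Create a pty whose slave side looks like a local serial device;
    bytes are pumped between it and `sock` (the TCP link to the modem)."""
    master_fd, slave_fd = pty.openpty()
    tty.setraw(slave_fd)  # raw mode: no echo/line discipline mangling RTU frames

    def pump():
        while True:
            readable, _, _ = select.select([master_fd, sock], [], [])
            if master_fd in readable:
                data = os.read(master_fd, 4096)
                if not data:
                    break
                sock.sendall(data)      # tty -> TCP
            if sock in readable:
                data = sock.recv(4096)
                if not data:
                    break
                os.write(master_fd, data)  # TCP -> tty

    threading.Thread(target=pump, daemon=True).start()
    return os.ttyname(slave_fd)  # e.g. /dev/pts/3 -- open this as the "serial port"
```

A serial Modbus client could then be pointed at the returned path as if it were a real tty, one bridge per device.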
0
1
0
1
2013-03-19T04:07:00.000
1
0.197375
false
15,491,308
0
0
0
1
I have a library (PyModbus) I would like to use that requires a tty device as it will be communicating with a device using serial connection. However, the device I am going to talk to is going to be behind a modem that supports serial over tcp (the device plugs into a com port on the modem). Without the modem in the way it would be trivial. I would connect a usb serial cable to the device and the other end to the computer. With the modem in the way, the server has to connect to a tcp port on the modem and pump serial data through that. The modem passes the data received to the device connected to the com port. In linux, whats the best way to create a fake tty from the "serial over tcp connection" for momentary use and then be destroyed. This would happen periodically, and an individual linux server may have 10~500 of these emulated device open at any given time.
while downloading app from google app engine its throwing error <400>
15,499,700
0
0
304
0
python,google-app-engine,sdk
@Bharadwaj: please check that the version number you have specified in the command actually exists in App Engine. Also make sure that you are providing the right App Engine credentials.
0
1
0
0
2013-03-19T09:48:00.000
2
0
false
15,496,065
0
0
1
1
My app name is nfcVibe, but I am still getting the error below. Can anyone suggest how to download my app? I think I gave the command correctly, but I don't know where it is going wrong. C:\Program Files\Google\google_appengine>appcfg.py download_app -A nfcVibe -V 1 "e:\nfcvibe1" 03:11 PM Host: appengine.google.com 03:11 PM Fetching file list... Error 400: --- begin server output --- Client Error (400) The request is invalid for an unspecified reason. --- end server output ---
File Mod Time Discrepancies On Upload
15,529,123
1
0
77
0
python,unix,python-2.7,unix-timestamp,dropbox-api
The modified time on the Dropbox server isn't necessarily going to be the modified time on the client, but rather the time the file was uploaded to the server. You can use the 'rev' property on files from the /metadata call to keep track of files instead.
0
1
0
1
2013-03-19T20:55:00.000
1
0.197375
false
15,510,254
0
0
0
1
I'm doing a file sync between a client, a server and Dropbox (Mac client, Debian server). I'm looking at the mod times of files to determine which is newest. On the client I'm using os.path.getmtime(filePath) to get the modified time. When I check the last modification time of the file on the client and then, after uploading, check again on the server or Dropbox, there is a varying difference in the time between them all for the same file. I thought file mod times were associated with the file rather than the OS they are on, so if the file was last modified on the client, that mod time stamp should be the same when checked on the server? Could anyone clarify if uploading the file has an impact on the mod time, or suggest where this variation in time for one file could be coming from? Any advice would be greatly appreciated!
how to check the request is https in app engine python
44,672,760
0
1
428
0
google-app-engine,python-2.x
If you are using GAE Flex (where the secure: directive doesn't work), the only way I've found to detect this (to redirect http->https myself) is to check if request.environ['HTTP_X_FORWARDED_PROTO'] == 'https'
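As a tiny helper around the environ check the answer describes (a sketch; webapp2 exposes the WSGI environ as request.environ, and the key name is the one given above):

```python
def is_https(environ):
    # Behind GAE Flex's load balancer, TLS terminates upstream, so the
    # original scheme arrives in the X-Forwarded-Proto header.
    return environ.get("HTTP_X_FORWARDED_PROTO") == "https"

print(is_https({"HTTP_X_FORWARDED_PROTO": "https"}))
```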
0
1
1
0
2013-03-20T04:24:00.000
3
0
false
15,515,299
0
0
1
1
I would like to know if is there a way to validate that a request (say a POST or a GET) was made over https, I need to check this in a webapp2.RequestHandler to invalidate every request that is not sent via https best regards
AppEngine 1.7.6 and Django 1.4.2 release
15,525,796
0
0
154
0
python,django,google-app-engine,python-2.7,django-nonrel
The django library built into GAE is straight up normal django that has an SQL ORM. So you can use this with Cloud SQL but not the HRD. django-nonrel is up to 1.4.5 according to the messages on the newsgroup. The documentation, unfortunately, is sorely behind.
0
1
0
0
2013-03-20T07:33:00.000
2
0
false
15,517,766
0
0
1
1
AppEngine 1.7.6 has promoted Django 1.4.2 to GA. I wonder how and if people this are using The reason for my question is that Django-nonrel seems to be stuck on Django 1.3 and there are no signs of an updated realease. What I would like to use from Djano are controllers, views and especially form validations.