Dataset schema (field: type, observed range; each record below lists these fields in this order, one per line):
Title: string, lengths 15 to 150
A_Id: int64, 2.98k to 72.4M
Users Score: int64, -17 to 470
Q_Score: int64, 0 to 5.69k
ViewCount: int64, 18 to 4.06M
Database and SQL: int64, 0 to 1
Tags: string, lengths 6 to 105
Answer: string, lengths 11 to 6.38k
GUI and Desktop Applications: int64, 0 to 1
System Administration and DevOps: int64, 1 to 1
Networking and APIs: int64, 0 to 1
Other: int64, 0 to 1
CreationDate: string, lengths 23 to 23
AnswerCount: int64, 1 to 64
Score: float64, -1 to 1.2
is_accepted: bool, 2 classes
Q_Id: int64, 1.85k to 44.1M
Python Basics and Environment: int64, 0 to 1
Data Science and Machine Learning: int64, 0 to 1
Web Development: int64, 0 to 1
Available Count: int64, 1 to 17
Question: string, lengths 41 to 29k
What is the possibility of using a Python app, deployed online, that has access to a user's local disk?
9,560,968
4
0
139
0
python,google-app-engine,web.py
You'd have to create a "client" and "server" type of interface to do this. So it wouldn't be a solely JavaScript with Python on the server program. They'd have to have something running on their end as well, communicating in the background. HTML5 allows some local storage, but not what you're looking for.
0
1
0
0
2012-03-05T02:30:00.000
2
1.2
true
9,560,950
0
0
1
2
I currently have a local Python application that scans a user's drive, maps it into a tree and displays this information with JavaScript. I would really like to try to develop something with a Dropbox-like system to manage drive trees. I have searched and read that App Engine specifically doesn't allow access to a user's local disk. Is there a way to use web.py or something else to access a user's local drive to create a tree directory out of it?
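For the piece that must run on the user's machine, building the drive tree itself is straightforward; a minimal sketch (the nested-dict shape and the upload step are assumptions, not part of the original answer):

```python
import os

def build_tree(root):
    """Map a directory into a nested dict: sub-dicts for dirs, sizes for files."""
    tree = {}
    for entry in sorted(os.listdir(root)):
        path = os.path.join(root, entry)
        if os.path.isdir(path):
            tree[entry] = build_tree(path)
        else:
            tree[entry] = os.path.getsize(path)
    return tree

# The local client would serialize this and POST it to the web.py server,
# e.g. (hypothetical endpoint):
# requests.post("https://example.invalid/tree", json=build_tree("/home/user"))
```

The HTTP upload step is what makes this a "client and server" design, as the answer says: the scan cannot happen server-side.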
Gevent installation error in Mac OS X Lion
11,092,390
0
2
590
0
python,macos,osx-lion,gevent
I encountered this error as well. I believe it is due to a conflict between libev and libevent (in my case, libev-4.11 and libevent-1.4.14b). libev replaces /usr/local/include/event.h and /usr/local/include/evutil.h with its own version of those files, and trying to compile gevent with the versions from libev results in the error: /usr/local/include/evhttp.h:223: error: expected specifier-qualifier-list before ‘ev_int64_t’ After removing libev and reinstalling libevent, I was able to install gevent using easy_install.
0
1
0
0
2012-03-05T09:20:00.000
1
0
false
9,564,203
1
0
0
1
Tried installing gevent using pip install gevent and also tried compiling from source. Both times the installation stopped because of the following error: /usr/local/include/evhttp.h:223: error: expected specifier-qualifier-list before ‘ev_int64_t’ I have libevent installed in /usr/local/lib and it's being picked up during installation. Any help would be highly appreciated. -Avinash
Feed input of MS-DOS interactive executable with a string value
9,567,478
0
0
373
0
python,perforce
Have you tried merely piping the input to the command? In cmd.exe: C:\> echo m | p4 resolve
0
1
0
0
2012-03-05T13:19:00.000
5
0
false
9,567,322
1
0
0
1
p4.exe is the Perforce command line tool (a git/cvs/svn-like tool). I am trying to launch several MS-DOS commands 'p4 resolve' in a Python script, because I have a hundred files to resolve. However I cannot launch 'p4 resolve -m' as I want (which automatically opens my 3-way merge tool on the conflicting files); p4 doesn't accept the m as an executable parameter. Instead, manually, I must do 'p4 resolve', then wait for the prompt to ask me for an option, and then only type 'm' there. Do you know how, in Python, I could feed the input, since I cannot pass the 'm' parameter to the command line tool p4.exe? For the moment I use os.system(myDosCommand)
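The piping idea from the answer can be done from Python with the subprocess module (replacing os.system); a generic sketch, with the p4 invocation shown only as an illustrative, untested example:

```python
import subprocess

def run_with_input(cmd, text):
    """Run a command, feed `text` to its standard input, and return stdout."""
    result = subprocess.run(cmd, input=text, capture_output=True, text=True)
    return result.stdout

# Hypothetical Perforce call: answer the resolve prompt with "m".
# run_with_input(["p4", "resolve"], "m\n")
```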
Why does Tornado take so long to die when I hit ctrl-c?
9,578,432
2
3
1,548
0
python,tornado
I don't know why it takes so long to exit with Ctrl+C, but it worked for me in some cases to press Ctrl+\ (Linux terminal)
0
1
0
0
2012-03-06T05:14:00.000
3
0.132549
false
9,578,245
0
0
0
1
When developing a Tornado application, I frequently want to restart the server to pick up new changes. I hit ctrl-c to stop the server, but with Tornado, this seems to be very slow. It waits for many seconds before shutting down, or doesn't shut down at all when issued a ctrl-c. What's weird is that if, after pressing ctrl-c, I make a new request to the server (by, for example, refreshing my browser that is pointing at the server), it shuts down right away. Anyone know how to explain this or fix it? Anyone experienced something similar? (Note, this is on Windows.)
One Command/Script to Upload X Files Using FTP
9,592,357
0
0
313
0
python,shell,upload,ftp
I would pick a different FTP client. The libcurl library (and its "curl" command line program) has an FTP upload method.
0
1
0
0
2012-03-06T21:18:00.000
1
0
false
9,592,083
0
0
0
1
If there is a thread that answers this question, I'm sorry for being a fool (reading through the titles and the threads that looked like they might have been about the same as mine, none of them were close enough for me to be able to figure it out for myself). I'm looking to have a script (I suppose bash or Python would be preferable, since I'm working on learning Python; but at the moment really I'd settle for almost anything). From a directory (so I'm already "cd"'d to the directory, so it can adapt to whichever directory) I'd like to be able to: RUN_SCRIPT "File 1", "File 2", "File 3", "File 4", "File 5"..."File (n-1)", "File (n)" or RUN_SCRIPT "File *" and have it (in some form) accomplish the same thing I'm doing when (at the moment) from the directory I'm using: ftp -inv << FTP open ftp.HOST.com user USER_NAME PASSWORD mput "FILE NAME" bye FTP and having to do that in multiple shell windows (one for each file). Being able to have it run X instances of ftp at a time would be nice, but isn't at all necessary (just being able to tell it to do all of them at once is fine). Thanks in advance! This site is epic (can't wait for the time when I can be answering questions instead of just asking them). Side question: if what I want to be able to do isn't possible for some reason I can't possibly comprehend, is there a way to smash the command list I have into one line?
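Since the asker is learning Python, a standard-library alternative to curl is ftplib, which can replicate the mput loop in a few lines; a sketch (host and credentials below are the question's placeholders, and the upload is factored out so it works with any FTP-like object):

```python
import os
from ftplib import FTP

def upload_files(ftp, paths):
    """STOR each local file on an already-logged-in FTP connection."""
    sent = []
    for path in paths:
        name = os.path.basename(path)
        with open(path, "rb") as fh:
            ftp.storbinary("STOR " + name, fh)
        sent.append(name)
    return sent

# Hypothetical usage, mirroring the ftp -inv heredoc:
# ftp = FTP("ftp.HOST.com")
# ftp.login("USER_NAME", "PASSWORD")
# upload_files(ftp, sys.argv[1:])   # RUN_SCRIPT "File 1" "File 2" ...
# ftp.quit()
```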
How to change a python path in sudo state?
9,594,398
0
2
2,960
0
python,bash,sudo,pythonpath
It uses the first one found in $PATH. Try doing echo $PATH and then sudo bash -c 'echo $PATH'; I bet these are different. In any case, there is usually an rc script of some sort for the shell you use in both /root and your current user's home; just rearrange the paths in the environment variable for the one you want.
0
1
0
0
2012-03-07T01:05:00.000
3
0
false
9,594,369
0
0
0
1
My problem is that when I do : $ which python => I get /a/b/c/python as my directory but if I do $ sudo which python => I get /d/e/python as the result How do I change the sudo one to match with the normal case, it is making it impossible to install libraries from source.
Reducing the size of executable from py2exe
9,602,835
1
1
794
0
python,py2exe
There is an option to compress, which you can enable in the config file, but the result will always be relatively large (~megabytes) because the Python interpreter must also be bundled in with the source code.
0
1
0
0
2012-03-07T13:48:00.000
1
1.2
true
9,602,691
1
0
0
1
Are there any practices to minimize the size of the .exe file created by py2exe when creating an executable of a Python script? My first impression upon using py2exe is that it creates a relatively large file.
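The compress option the answer mentions lives in the py2exe section of setup.py's options dict; a hedged sketch of the knobs that typically shrink the output (the setup() call is commented out since it needs py2exe installed, and the excludes list is illustrative):

```python
# Hypothetical setup.py fragment for reducing py2exe output size.
# from distutils.core import setup
# import py2exe

options = {
    "py2exe": {
        "compressed": True,     # zip-compress the bundled library archive
        "optimize": 2,          # -OO bytecode: docstrings stripped
        "bundle_files": 1,      # fold everything into a single executable
        "excludes": ["Tkinter", "unittest"],  # drop modules you don't use
    }
}

# setup(console=["myscript.py"], options=options, zipfile=None)
```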
apache mpm worker run only single process
9,608,216
1
0
452
0
python,django,apache,process
Another alternative could be to use something like mod_wsgi in daemon mode configured with only one process, then hand off to that. This is all assuming that your web server only ever needs to be single process, and that no other request should be served in parallel? Do you have other views which aren't rate limited? In which case, you could do this via some sort of lock file for this view only, rather than trying to make the web server single process?
0
1
0
0
2012-03-07T16:50:00.000
1
1.2
true
9,605,759
0
0
1
1
I need to make the Apache MPM worker use only a single process to run my Django server. I have a view which needs to be run only once, when the first request hits Apache, but I see it running twice. I changed the runprocess configuration from 2 to 1. What else should I do to make Apache run only one process?
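The lock-file idea from the answer can be sketched inside the view itself, so the run-once guarantee no longer depends on how many Apache processes exist (the path and function names here are hypothetical):

```python
import errno
import os

LOCK_PATH = "/tmp/first_request.lock"   # hypothetical location shared by all workers

def run_once(task):
    """Run `task` only if this process wins the race to create the lock file."""
    try:
        # O_EXCL makes creation atomic: exactly one process succeeds.
        fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except OSError as e:
        if e.errno == errno.EEXIST:
            return False        # some other process/thread already ran it
        raise
    try:
        task()
        return True
    finally:
        os.close(fd)
```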
Python Daemons - Program Structure and Exception Control
9,611,336
3
2
476
0
python,daemon
Exceptions are designed for the purpose of (potentially) not being caught immediately-- that's how they differ from when a function returns a value that means "error". Each exception can be caught at the level where you want to (and can) do something about it. At a minimum, you could start by catching all exceptions at the main loop and logging a message. This is simple and ensures that your daemon won't die. At the main loop it's probably too late to fix most problems, so you can catch specific exceptions sooner. E.g. if a file has the wrong format, catch the exception in the routine that opens and tries to use the file, not deep in the parsing code where the problem is discovered; perhaps you can try another format. Basically if there's a place where you could recover from a particular error condition, catch it there and do so.
0
1
0
0
2012-03-07T18:16:00.000
2
0.291313
false
9,606,937
1
0
0
2
I've been doing amateur coding in Python for a while now and feel quite comfortable with it. Recently though I've been writing my first Daemon and am trying to come to terms with how my programs should flow. With my past programs, exceptions could be handled by simply aborting the program, perhaps after some minor cleaning up. The only consideration I had to give to program structure was the effective handling of non-exception input. In effect, "Garbage In, Nothing Out". In my Daemon, there is an outside loop that effectively never ends and a sleep statement within it to control the interval at which things happen. Processing of valid input data is easy but I'm struggling to understand the best practice for dealing with exceptions. Sometimes the exception may occur within several levels of nested functions and each needs to return something to its parent, which must, in turn, return something to its parent until control returns to the outer-most loop. Each function must be capable of handling any exception condition, not only for itself but also for all its subordinates. I apologise for the vagueness of my question but I'm wondering if anyone could offer me some general pointers into how these exceptions should be handled. Should I be looking at spawning sub-processes that can be terminated without impact to the parent? A (remote) possibility is that I'm doing things correctly and actually do need all that nested handling. Another very real possibility is that I haven't got a clue what I'm talking about. :) Steve
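The "catch everything at the main loop and log a message" baseline from the answer looks roughly like this (step, interval and max_iterations are illustrative names; a real daemon would loop forever):

```python
import logging
import time

def main_loop(step, interval=1.0, max_iterations=None):
    """Top-level daemon loop: log any exception and keep running."""
    done = 0
    while max_iterations is None or done < max_iterations:
        try:
            step()
        except Exception:
            # Last line of defence: the daemon must not die.
            logging.exception("iteration failed")
        done += 1
        time.sleep(interval)
```

Specific exceptions would still be caught deeper down, where recovery is possible; this handler only guarantees survival.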
Python Daemons - Program Structure and Exception Control
9,610,525
0
2
476
0
python,daemon
The answer will be "it depends". If an exception occurs in some low-level function, it may be appropriate to catch it there if there is enough information available at this level to let the function complete successfully in spite of the exception. E.g. when reading triangles from an .stl file, the normal vector of the triangle is both explicitly given and implicitly given by the sequence of the three points that make up the triangle. So if the normal vector is given as (0,0,0), which is a 0-length vector and should trigger an exception in the constructor of a Normal vector class, this can be safely caught in the constructor of a Triangle class, because it can still be calculated by other means. If there is not enough information available to handle an exception, it should trickle upwards to a level where it can be handled. E.g. if you are writing a module to read and interpret a file format, it should raise an exception if the file it was given doesn't match the file format. In this case it is probably the top level of the program using that module that should handle the exception and communicate with the user. (Or in case of a daemon, log the error and carry on.)
0
1
0
0
2012-03-07T18:16:00.000
2
0
false
9,606,937
1
0
0
2
I've been doing amateur coding in Python for a while now and feel quite comfortable with it. Recently though I've been writing my first Daemon and am trying to come to terms with how my programs should flow. With my past programs, exceptions could be handled by simply aborting the program, perhaps after some minor cleaning up. The only consideration I had to give to program structure was the effective handling of non-exception input. In effect, "Garbage In, Nothing Out". In my Daemon, there is an outside loop that effectively never ends and a sleep statement within it to control the interval at which things happen. Processing of valid input data is easy but I'm struggling to understand the best practice for dealing with exceptions. Sometimes the exception may occur within several levels of nested functions and each needs to return something to its parent, which must, in turn, return something to its parent until control returns to the outer-most loop. Each function must be capable of handling any exception condition, not only for itself but also for all its subordinates. I apologise for the vagueness of my question but I'm wondering if anyone could offer me some general pointers into how these exceptions should be handled. Should I be looking at spawning sub-processes that can be terminated without impact to the parent? A (remote) possibility is that I'm doing things correctly and actually do need all that nested handling. Another very real possibility is that I haven't got a clue what I'm talking about. :) Steve
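The .stl example from the answer can be made concrete: catch the zero-length-normal exception right in the Triangle constructor, where the three points give enough information to recover (the class and function names are invented for illustration):

```python
def normal_from_points(p1, p2, p3):
    """Recompute the normal as the cross product of the triangle's edge vectors."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

class Normal:
    def __init__(self, x, y, z):
        if x == y == z == 0:
            raise ValueError("zero-length normal vector")
        self.xyz = (x, y, z)

class Triangle:
    def __init__(self, p1, p2, p3, normal_xyz):
        try:
            self.normal = Normal(*normal_xyz)
        except ValueError:
            # Enough context at this level to recover: derive it from the points.
            self.normal = Normal(*normal_from_points(p1, p2, p3))
```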
What is the best utility/library/strategy with Python to copy files across multiple computers?
9,619,361
0
0
836
0
python,file,rsync,unison
I think rsync is the solution. If you are concerned about data integrity, look at the explanation of the "--checksum" parameter in the man page. Other arguments that might come in handy are "--delete" and "--archive". Make sure the exit code of the command is checked properly.
0
1
0
1
2012-03-08T13:49:00.000
3
0
false
9,618,641
0
0
0
1
I have data across several computers stored in folders. Many of the folders contain 40-100 G of files of size from 500 K to 125 MB. There are some 4 TB of files which I need to archive, and build a unified metadata system depending on metadata stored in each computer. All systems run Linux, and we want to use Python. What is the best way to copy the files and archive them? We already have programs to analyze files and fill the metadata tables, and they are all running in Python. What we need to figure out is a way to successfully copy files without data loss, and ensure that the files have been copied successfully. We have considered using rsync and unison, run via subprocess.Popen, but they are essentially sync utilities. These are essentially copy once, but copy properly. Once files are copied the users would move to the new storage system. My worries are: 1) when the files are copied there should not be any corruption; 2) the file copying must be efficient, though there are no speed expectations. The LAN is 10/100 with ports being Gigabit. Are there any scripts which can be incorporated, or any suggestions? All computers will have ssh-keygen enabled so we can do passwordless connection. The directory structures would be maintained on the new server, which is very similar to that of the old computers.
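Since rsync would be run via subprocess, the answer's "make sure the exit code of the command is checked properly" amounts to raising on a nonzero return code; a generic sketch (the rsync flags are the ones named in the answer, the host and paths are placeholders):

```python
import subprocess

def run_checked(cmd):
    """Run a command and fail loudly on a nonzero exit code."""
    result = subprocess.run(cmd)
    if result.returncode != 0:
        raise RuntimeError("%s exited with code %d" % (cmd[0], result.returncode))

# Hypothetical invocation for one source directory:
# run_checked(["rsync", "--archive", "--checksum", "--delete",
#              "/data/folder/", "newserver:/archive/folder/"])
```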
Environment variables getting preserved in Python script even after exiting
9,634,524
1
2
2,224
0
python,environment-variables
A subshell can change variables it inherited from the parent, but the changes made by the child don't affect the parent. When a new subshell is started, the variables exported from the parent are visible in it; del os.environ['var'] unsets the variable only in that child process, and the value of the variable in the parent stays the same.
0
1
0
1
2012-03-09T13:00:00.000
2
1.2
true
9,634,473
0
0
0
1
I am running my Test Harness which is written in Python. Before running a test through this test harness, I am exporting some environment variables through a shell script which calls the test harness after exporting the variables. When the harness comes in picture, it checks if the variables are in the environment and does operations depending on the values in the env variables. However after the test is executed, I think the environment variables values aren't getting cleared as the next time, it picks up those values even if those aren't set through the shell script. If they are set explicitly, the harness picks up the new values but if we clear it next time, it again picks up the values set in 1st run. I tried clearing the variables using "del os.environ['var']" command after every test execution but that didn't solve the issue. Does anybody know why are these values getting preserved? On the shell these variables are not set as seen in the 'env' unix command. It is just in the test harness that it shows the values. None of the env variables store their values in any text files.
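The parent/child behaviour the answer describes is easy to demonstrate: deleting the variable in a child Python process leaves the parent's copy untouched (DEMO_VAR is an invented name):

```python
import os
import subprocess
import sys

os.environ["DEMO_VAR"] = "42"

# The child inherits DEMO_VAR, deletes it, and reports what it sees.
child = 'import os; del os.environ["DEMO_VAR"]; print("DEMO_VAR" in os.environ)'
out = subprocess.run([sys.executable, "-c", child],
                     capture_output=True, text=True).stdout.strip()

print(out)                      # False: gone in the child
print(os.environ["DEMO_VAR"])   # 42: still set in the parent
```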
celery with multiple django instances
9,635,351
1
5
1,580
0
python,django,linux,celery
If you make changes in tasks.py for Celery, then you will have to restart it once to apply the changes, by running ./manage.py celeryd start (or python manage.py celeryd start --settings=settings to use settings.py as the configuration for Celery). It will not be affected by the changes in your projects until you make changes in the Celery configuration.
0
1
0
0
2012-03-09T13:26:00.000
2
0.099668
false
9,634,800
0
0
1
1
I'm using several django instances, each in a virtualenv, on the same server. How can I start the celery server and make sure it is always running and updated? I.e. after a server restart or code update? The /etc/init.d script and the config file assume a single Django installation. Do I have to use the ./manage.py celeryd command? Regards Simon
stack dump in twisted app.py 'application' error when using twistd but works with python?
9,650,823
0
0
528
0
python,twisted,twistd
To use 'twistd -y', your .tac file must create a suitable object (e.g., by calling service.Application()) and store it in a variable named 'application'. twistd loads your .tac file and scans the global variables for one of this name. Please read the 'Using Application' HOWTO for details.
1
1
0
0
2012-03-10T20:24:00.000
2
0
false
9,649,879
0
0
0
1
I am trying to use twisted but when i try to run some of the example code provided with the twisted package, it seems to always crash when i use "twistd" instead of "python" for example, using the example code given with twisted, if i run to command : twisted -ny echoserv.py Unhandled Error Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/twisted/application/app.py", line 652, in run runApp(config) File "/usr/lib/python2.7/site-packages/twisted/scripts/twistd.py", line 23, in runApp _SomeApplicationRunner(config).run() File "/usr/lib/python2.7/site-packages/twisted/application/app.py", line 386, in run self.application = self.createOrGetApplication() File "/usr/lib/python2.7/site-packages/twisted/application/app.py", line 451, in createOrGetApplication application = getApplication(self.config, passphrase) --- --- File "/usr/lib/python2.7/site-packages/twisted/application/app.py", line 462, in getApplication application = service.loadApplication(filename, style, passphrase) File "/usr/lib/python2.7/site-packages/twisted/application/service.py", line 405, in loadApplication application = sob.loadValueFromFile(filename, 'application', passphrase) File "/usr/lib/python2.7/site-packages/twisted/persisted/sob.py", line 211, in loadValueFromFile value = d[variable] exceptions.KeyError: 'application' Failed to load application: 'application' Could not find 'application' in the file. To use 'twistd -y', your .tac file must create a suitable object (e.g., by calling service.Application()) and store it in a variable named 'application'. twistd loads your .tac file and scans the global variables for one of this name. Please read the 'Using Application' HOWTO for details. I was using Twisted version 11.0.0 but then i tried 12.0.0 but i have the same problem. The version of python i am using is 2.7.2 Any ideas on what to do would be helpful. I have been trying to deal with this problem for a few days now. thanks!
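The structure the error message asks for is small; a hedged sketch of a minimal .tac file (the protocol class is assumed to exist elsewhere, and this fragment needs Twisted installed and must be run under twistd -ny, not python):

```python
# echoserv.tac -- what `twistd -ny` expects to find
from twisted.application import internet, service
from twisted.internet import protocol

factory = protocol.ServerFactory()
# factory.protocol = Echo   # your Protocol subclass goes here

# twistd scans the file's globals for a variable with exactly this name.
application = service.Application("echo")
internet.TCPServer(8007, factory).setServiceParent(application)
```

Running the same file with `python` does nothing useful, which is the flip side of the asker's problem: .tac files are deployment configuration for twistd, not standalone scripts.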
Not finding other files in the same workspace folder in Eclipse with PyDev
9,658,901
1
0
712
0
python,eclipse,python-3.x,pydev
If I understand you correctly, you need to import the other class from the other file into your original file to be able to use the class. For example, from otherfile import OtherClass Otherwise, please add more info.
0
1
0
0
2012-03-11T20:58:00.000
1
0.197375
false
9,658,846
1
0
0
1
This is probably a very simple question, but it's driving me nuts and I can't find the solution. I set up Eclipse and PyDev on my Windows box and I've written a class to do something. Then I created another .py file to run a program that uses that class, but the program cannot find the class. It works well if I put all the code in the same file, which is unmanageable, but not in separate files. Also, I looked at my PYTHONPATH variable in Eclipse and it has the path of the folder I have my code in. Any ideas why it doesn't recognize all the files in the same folder?
getting compilation order from SCons
9,662,504
1
0
74
0
python,python-3.x,scons
You can use the SCons command-line parameter --no-exec, which prints the build commands without executing them.
0
1
0
0
2012-03-11T21:21:00.000
1
1.2
true
9,659,004
0
0
0
1
Is there a quick way to get the order that SCons will process your program's files? I'd like to snag the ordered list of program file names and skip the compilation process. thanks in advance!
Python framework for task execution and dependencies handling
28,289,788
1
13
4,879
0
python,build,build-process,build-automation,jobs
Another option is to use make: write a Makefile manually, or let a Python script write it; use meaningful intermediate output file stages; then run make, which calls out to the processes. The processes would be a Python (build) script with parameters that tell it which files to work on and what task to do. Parallel execution is supported with -j, and make also deletes output files if tasks fail. This circumvents some of the Python parallelisation problems (GIL, serialisation). Obviously it is only straightforward on *nix platforms.
0
1
0
0
2012-03-12T09:49:00.000
3
0.066568
false
9,664,809
0
0
0
1
I need a framework which will allow me to do the following: Allow to dynamically define tasks (I'll read an external configuration file and create the tasks/jobs; task=spawn an external command for instance) Provide a way of specifying dependencies on existing tasks (e.g. task A will be run after task B is finished) Be able to run tasks in parallel in multiple processes if the execution order allows it (i.e. no task interdependencies) Allow a task to depend on some external event (don't know exactly how to describe this, but some tasks finish and they will produce results after a while, like a background running job; I need to specify some of the tasks to depend on this background-job-completed event) Undo/Rollback support: if one tasks fail, try to undo everything that has been executed before (I don't expect this to be implemented in any framework, but I guess it's worth to ask..) So, obviously, this looks more or less like a build system, but I don't seem to be able to find something that will allow me to dynamically create tasks, most things I've seem already have them defined in the "Makefile". Any ideas?
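The "let a Python script write the Makefile" step from the answer can be sketched directly; build.py, the task names and the file paths are invented placeholders:

```python
# Generate a Makefile from a dynamically discovered task list.
# Each entry: target -> (dependencies, command).
tasks = {
    "out/b.dat": (["in/b.src"], "python build.py --task B $< > $@"),
    "out/a.dat": (["out/b.dat"], "python build.py --task A $< > $@"),  # A runs after B
}

lines = ["all: " + " ".join(tasks)]
for target, (deps, cmd) in tasks.items():
    lines += ["", "%s: %s" % (target, " ".join(deps)), "\t" + cmd]
makefile = "\n".join(lines) + "\n"

with open("Makefile", "w") as fh:
    fh.write(makefile)
# Then: make -j4   (parallel wherever dependencies allow)
```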
Python - List issues (multiple lists?)
9,666,450
0
1
189
0
python,list
re.findall returns a list of matches. In your case you are getting lists with only one value. If that is always the case, then @x539's answer will get the first item in the list.
0
1
0
0
2012-03-12T10:42:00.000
5
0
false
9,665,586
0
0
0
1
The below is a part of a script i'm trying to write. The script opens my iptables log, each line in the log contains the details in the example below. #example of a single line #Mar 9 14:57:51 machine kernel: [23780.638839] IPTABLES Denied UDP: IN=p21p1 OUT= MAC=ff:ff:ff:ff:ff:ff:00:00:00:00:00:00:00:00 SRC=10.100.1.4 DST=10.100.1.63 LEN=78 TOS=0x00 PREC=0x00 TTL=128 ID=10898 PROTO=UDP$ # Read file in a line at a time for line in iptables_log.readlines(): #find time based on 4 letters, 2 spaces, up to 2 numbers, 1 space, then standard 10:10:10 time format time = re.findall('(^\w{1,4}\s\s\d{1,2}\s\d\d:\d\d:\d\d)', line) #mac lookup mac = re.findall('MAC=(?:\w\w:\w\w:\w\w:\w\w\:\w\w:\w\w:\w\w:\w\w:\w\w:\w\w:\w\w:\w\w:\w\w:\w\w)', line) #source port src = re.findall('SRC=(?:[\d]{1,3})\.(?:[\d]{1,3})\.(?:[\d]{1,3})\.(?:[\d]{1,3})', line) #destination port dst = re.findall('DST=(?:[\d]{1,3})\.(?:[\d]{1,3})\.(?:[\d]{1,3})\.(?:[\d]{1,3})', line) #protocol proto = re.findall('PROTO=(?:\w{3,4})', line) #sourceport sourceport = re.findall('SPT=(?:\w{1,5})', line) #destport destport = re.findall('DPT=(?:\w{1,5})', line) print time, mac, src, dst, proto, sourceport, destport print '======================================================' I'm trying to get the script to print only the items i want, but when its output by the script it looks like this, which would seem to be a list. I want it to print without the [] ''. Looking online it seems like every variable (time, mac, src, etc) are a list themselves. I'm not sure how to combine them. I have seen reference to join but am not sure how to use it this example. Can someone assist please? ['Mar 9 14:57:51'] ['MAC=ff:ff:ff:ff:ff:ff:00:00:00:00:00:00:00:00'] ['SRC=10.100.1.4'] ['DST=10.100.1.63'] ['PROTO=UDP'] ['SPT=137'] ['DPT=137']
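Since each field occurs at most once per log line, re.search with a capturing group returns a plain string directly and avoids the one-element lists; a sketch using a trimmed, padded version of the sample line (only three of the fields are shown):

```python
import re

line = ("Mar  9 14:57:51 machine kernel: [23780.638839] IPTABLES Denied UDP: "
        "IN=p21p1 OUT= MAC=ff:ff:ff:ff:ff:ff:00:00:00:00:00:00:00:00 "
        "SRC=10.100.1.4 DST=10.100.1.63 LEN=78 TOS=0x00 PREC=0x00 TTL=128 "
        "ID=10898 PROTO=UDP SPT=137 DPT=137")

def field(pattern, text):
    """One match per line, so re.search + group(1) yields the bare string."""
    m = re.search(pattern, text)
    return m.group(1) if m else None

src = field(r'SRC=([\d.]+)', line)
dst = field(r'DST=([\d.]+)', line)
proto = field(r'PROTO=(\w+)', line)
print(src, dst, proto)   # 10.100.1.4 10.100.1.63 UDP
```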
Hadoop-streaming : PYTHONPATH not working when mapper runs
10,921,996
0
2
1,189
0
python,hadoop,mapreduce,hadoop-streaming
I had the same issue, and I think the problem is that the Hadoop virtual environments won't recognize your system's pythonpath. If you install packages to /Library/Python/2.7/site-packages, Hadoop will pick them up and it will work.
0
1
0
0
2012-03-12T18:14:00.000
2
0
false
9,672,495
0
0
0
2
I have a PYTHONPATH set up in and it works fine too except when I run map-reduce job It fails saying Traceback (most recent call last): File "/work/app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201203091218_0006/attempt_201203091218_0006_m_000020_0/work/./mapper.py", line 57, in from src.utilities import utilities ImportError: No module named src.utilities java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1 at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:311) at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:545) at org.apache.hadoop.streaming.PipeMapper.map(PipeMapper.java:121) at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50) at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36) at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372) at org.apache.hadoop.mapred.Child$4.run(Child.java:261) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059) at org.apache.hadoop.mapred.Child.main(Child.java:255) java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1 at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:311) at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:545) at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:132) at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57) at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36) at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372) at org.apache.hadoop.mapred.Child$4.run(Child.java:261) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059) at org.apache.hadoop.mapred.Child.main(Child.java:255) Question: - Is it that during hadoop-streaming we have to setup Python path specifically? where?
Hadoop-streaming : PYTHONPATH not working when mapper runs
21,020,422
0
2
1,189
0
python,hadoop,mapreduce,hadoop-streaming
We needed to add it to the MapReduce Service Environment Safety Valve. In my case we are using the Cloudera Manager GUI; I added PYTHONPATH there and it's working.
0
1
0
0
2012-03-12T18:14:00.000
2
0
false
9,672,495
0
0
0
2
I have a PYTHONPATH set up in and it works fine too except when I run map-reduce job It fails saying Traceback (most recent call last): File "/work/app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201203091218_0006/attempt_201203091218_0006_m_000020_0/work/./mapper.py", line 57, in from src.utilities import utilities ImportError: No module named src.utilities java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1 at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:311) at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:545) at org.apache.hadoop.streaming.PipeMapper.map(PipeMapper.java:121) at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50) at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36) at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372) at org.apache.hadoop.mapred.Child$4.run(Child.java:261) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059) at org.apache.hadoop.mapred.Child.main(Child.java:255) java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1 at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:311) at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:545) at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:132) at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57) at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36) at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372) at org.apache.hadoop.mapred.Child$4.run(Child.java:261) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059) at org.apache.hadoop.mapred.Child.main(Child.java:255) Question: - Is it that during hadoop-streaming we have to setup Python path specifically? where?
PyDev: how remove command line switch -u
9,687,892
0
0
336
0
python,eclipse,command-line,pydev
Actually, so far you can't do that without grabbing the code and changing it yourself (i.e., that's hardcoded). But still, if you don't use unbuffered output (i.e., the -u option), the PyDev console will end up not showing the I/O output as it's printed (as it'll be buffered). So, what is it that breaks because of -u? Maybe it'd be better to fix that than to change PyDev to launch without -u, as you may end up without any output until the run is finished if you do that.
0
1
0
0
2012-03-13T08:48:00.000
3
0
false
9,680,762
1
0
0
1
My code is broken by Eclipse but works normally if I launch it from the command prompt with python and no options. I need to delete the -u option when the Python interpreter is launched from Eclipse and PyDev; how can I do that?
Asynchronous versions of Google APIs?
9,948,934
0
6
2,692
0
python,google-app-engine,google-docs-api
Currently the Documents List API library for Python (The GData Library) is rigidly synchronous. One solution would be to serialize the requests as tasks for a task queue and run them later, but the library itself won't help, I'm afraid.
0
1
0
0
2012-03-13T14:54:00.000
2
1.2
true
9,686,505
0
0
1
1
Is there any way to queue up document list API requests and handle them asynchronously (similar to the google app engine async urlfetch requests)? I could conceivably copy/rewrite a lot of the client request modification logic in DocsClient around a urlfetch request, but I'd rather avoid that if there's some other method already available. The target environment is google app engine, and I'm aware of the async datastore APIs. EDIT I've now implemented basic functionality on DocsClient.request to accept a callback kwarg, so any higher-level client request will use async urlfetch and call the callback function with the result of the call.
Get the memory address pointed to by a ctypes pointer
9,784,508
16
11
12,060
0
python,ctypes
I have fixed this myself by reading the documentation. I wanted to know the memory location of a block of memory allocated by a library. I had the ctypes pointer that pointed to said block. To get the memory address of the block I used ctypes.addressof(p_block.contents). The confusion arose around my understanding that p_block.contents != p_block.contents, but then I realised all p_block.contents objects have the same underlying buffer. The address of the underlying buffer is obtained with ctypes.addressof.
1
1
0
0
2012-03-13T15:26:00.000
2
1.2
true
9,687,002
0
0
0
1
Short version: How can I get the address that a ctypes pointer points to? Long version: I have registered a python function as a callback with a C library. The C library expects function signature of void (*p_func)(char stat, char * buf, short buf_len) so I register an appropriate python function. When I get into the python function, I want to know the memory address pointed to by buf. How can I do this?
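The fix described in the answer can be illustrated with a small self-contained sketch (the buffer below stands in for the library-allocated block; the name p_block is taken from the answer):

```python
import ctypes

# A buffer standing in for the block the C library allocated.
buf = ctypes.create_string_buffer(b"hello", 16)

# p_block is a ctypes pointer to that block, as in the answer.
p_block = ctypes.cast(buf, ctypes.POINTER(ctypes.c_char))

# Each access to p_block.contents builds a fresh wrapper object, so the
# wrappers are distinct objects -- but they all share one underlying
# buffer, and ctypes.addressof reveals that shared address.
addr = ctypes.addressof(p_block.contents)
assert addr == ctypes.addressof(buf)
print(hex(addr))
```

This is exactly the distinction the answer draws: object identity of `.contents` differs on every access, while `addressof` of any of those wrappers is stable.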
Does app engine automatically cache frequent queries?
9,689,883
1
3
1,313
1
python,google-app-engine,memcached,bigtable
I think that App Engine does not cache anything for you. While it may internally cache some things for a split second, I don't think you should rely on that. I think you will be charged the normal number of read operations for every entity you read from every query.
0
1
0
0
2012-03-13T18:06:00.000
3
0.066568
false
9,689,588
0
0
1
2
I seem to remember reading somewhere that google app engine automatically caches the results of very frequent queries into memory so that they are retrieved faster. Is this correct? If so, is there still a charge for datastore reads on these queries?
Does app engine automatically cache frequent queries?
9,690,080
1
3
1,313
1
python,google-app-engine,memcached,bigtable
No, it doesn't. However depending on what framework you use for access to the datastore, memcache will be used. Are you developing in java or python? On the java side, Objectify will cache GETs automatically but not Queries. Keep in mind that there is a big difference in terms of performance and cachability between gets and queries in both python and java. You are not charged for datastore reads for memcache hits.
0
1
0
0
2012-03-13T18:06:00.000
3
0.066568
false
9,689,588
0
0
1
2
I seem to remember reading somewhere that google app engine automatically caches the results of very frequent queries into memory so that they are retrieved faster. Is this correct? If so, is there still a charge for datastore reads on these queries?
Checking "liveness" of a Windows application?
9,694,585
3
5
368
0
python,windows,winapi,pywin32
You'll need to get an HWND handle to the window in question (EnumWindows might be a good start), and then try calling IsHungAppWindow to see if the system thinks it's unresponsive.
0
1
0
0
2012-03-13T20:09:00.000
2
0.291313
false
9,691,306
0
0
0
1
I've got a Windows application running some expensive equipment; this application dies in a variety of creative ways. Usually when it goes, the process dies completely. I wrote a little monitoring program which looks for the process' name in the list of things which are currently running, and that works great for those failures. But sometimes it just becomes completely nonresponsive and requires termination via the task manager, but is still "running" in some unhelpful sense. I'm completely unfamiliar with the Windows API, so this is perhaps quite a stretch, but is there anything I can do to programmatically check the "liveness" of other processes? Or which I might use to make guesses? (Watching for it to stop processing events from the OS, or for all disk access/memory allocation to halt, etc, etc) Preferably it would be something I could do via the Python win32 module, but I'll branch out to anything that can successfully detect when this thing locks up. And, I realize "liveness" is vague, but I don't want to rule anything out, particularly when I don't have any insight into how this thing is really failing.
How do I know timestamp when my Python app was deployed on GAE?
9,696,354
4
3
266
0
python,google-app-engine
Wrap appcfg.py in a shell script. Before actually running appcfg.py update, save the current time, possibly adjusting for your time zone, if necessary, in a file that's marked as a resource. You can open and read that file from the deployed app. Alternatively, have that script substitute the current time directly into code, obviating the need for a file open.
0
1
0
0
2012-03-14T02:59:00.000
2
0.379949
false
9,695,320
0
0
0
1
I need to know a value holding timestamp when my app was deployed on the GAE server. In runtime. Surely I could generate some Python constant in the deployment script. But is there an easier and more correct way to reach the goal? (I'd like not to use data store for that.)
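A minimal sketch of the wrapper idea from the answer (the file name and the app.yaml wiring are assumptions; the real deploy script would run "appcfg.py update ." right after writing the stamp):

```python
import datetime

# Assumed file name; it must be shipped as a readable resource in app.yaml.
STAMP_FILE = "deploy_time.txt"

def write_deploy_stamp(path=STAMP_FILE):
    """Record the current UTC time so the deployed app can read it back."""
    stamp = datetime.datetime.utcnow().isoformat()
    with open(path, "w") as f:
        f.write(stamp)
    return stamp

# The deploy wrapper would call this and then invoke "appcfg.py update .";
# at runtime the app simply opens STAMP_FILE.
print(write_deploy_stamp())
```

The alternative the answer mentions works the same way, except the wrapper substitutes the timestamp directly into a Python constant instead of writing a separate file.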
Google app engine how to schedule Crons one after another
9,715,283
1
0
112
0
python,google-app-engine
Though I agree with the suggestions in the comments, I think I have a better solution to your problem (hopefully :)). Although it's not strictly necessary, you can use a pull queue in your application to simplify the design. The pattern I am suggesting is this: 1) A servlet centrally handles execution of the various tasks (let's call it the controller) and is exposed at a URL. 2) The jobs are initiated by the controller by hitting each job's URL (assuming a pull queue again). 3) After a job completes, it hits back at the controller URL to report completion. 4) The controller in turn deletes the finished job from the queue and adds the next logical job to the queue, and this is repeated. In this case your job code is unchanged even if the sequence logic changes or new jobs are added; you might need to change only the controller.
0
1
0
0
2012-03-15T04:16:00.000
2
0.099668
false
9,713,908
0
0
0
1
Hi, I'm struggling with a problem. I created a number of crons and I want to run them one after another in a specific order. Let's say I have crons A, B, C, and D, and I want to run cron B after completion of cron A, then cron D after that, and then cron C. I searched for a way to accomplish this but could not find one. Can anyone help?
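The controller pattern from the answer can be sketched in plain Python (all names here are hypothetical; run_job stands in for hitting a cron's URL, and on App Engine each call would go over HTTP/task queue rather than as a direct function call):

```python
ORDER = []  # records the sequence the jobs actually ran in

class Controller:
    """Central handler that owns the job sequence (step 1 of the answer)."""

    def __init__(self, jobs):
        self.pending = list(jobs)

    def start(self):
        self._launch_next()

    def report_done(self, name):
        # Step 3: a finished job calls back; step 4: drop it, launch the next.
        assert self.pending and self.pending[0] == name
        self.pending.pop(0)
        self._launch_next()

    def _launch_next(self):
        # Step 2: on App Engine this would enqueue/hit the job's URL.
        if self.pending:
            run_job(self.pending[0], self)

def run_job(name, controller):
    ORDER.append(name)            # stand-in for the real cron body
    controller.report_done(name)  # report back to the controller

Controller(["A", "B", "D", "C"]).start()
print(ORDER)  # → ['A', 'B', 'D', 'C']
```

The sequence A → B → D → C lives only in the controller, so reordering or adding jobs never touches the job code itself.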
Popen-ing a python call that invokes a script using multiprocessing (pgrp issue)?
10,034,142
1
1
241
0
python,unix
You're asking about something pretty messy here. I suspect that none of this is what you want to do at all, and that you really want to accomplish this some simpler way. However, presuming you really want to mess with process groups... Generally, a new process group is created only by the setpgrp(2) system call. Otherwise, processes created by fork(2) are always members of the current process group. That said, upon creating a new process group, the processes in that group aren't even controlled by any tty and doing what you appear to want to do properly requires understanding the whole process group model. A good reference for how all this works is Stevens, "Advanced Programming in the Unix Environment", which goes into it in gory detail. If you really want to go down this route, you're going to have to implement popen or the equivalent yourself with all the appropriate system calls made.
0
1
0
1
2012-03-15T15:21:00.000
1
1.2
true
9,722,778
0
0
0
1
I'm writing a unittesting framework for servers that uses popen to basically execute "python myserver.py" with shell=False, run some tests, and then proceed to take the server down by killpg. This myserver.py can and will use multiprocessing to spawn subprocesses of its own. The problem is, from my tests, it seems that the pgrp pid of the server processes shares the same group pid as the actual main thread running the unittests, therefore doing an os.killpg on the group pid will not only take down the server but also the process calling the popen (not what I want to do). Why does it do this? And how can I make them be on separate group pids that I can kill independently?
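Assuming a POSIX system, one practical way out of the shared-group problem is to put the server into its own process group at spawn time with preexec_fn (on Python 3.2+, start_new_session=True achieves the same via setsid); killpg on the child's group then leaves the test runner untouched. A sketch, with sleep standing in for myserver.py:

```python
import os
import signal
import subprocess

# preexec_fn runs in the child between fork() and exec(), so the child
# (and any multiprocessing children it later spawns) gets its own group.
proc = subprocess.Popen(["sleep", "30"], preexec_fn=os.setpgrp)

# The groups now differ, so killpg on the child's group is safe.
assert os.getpgid(proc.pid) != os.getpgrp()

# Take down the whole server group without touching our own group.
os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
proc.wait()
```

This sidesteps the setpgrp/tty machinery the answer warns about for the common case of "kill the server and everything it forked".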
bottle framework: getting requests and routing to work
10,681,349
0
1
693
0
python,bottle
I actually resolved the issue. The Bottle framework tutorial encourages first-time users to set up the server on a high port (to avoid conflict with apache, etc) for development. I was missing two parts of the process: 1. import the python script so that it can be called from the main bottle file 2. in the main bottle file, add a route to the api link (for the javascript to work) I'm not sure if I would have had to add the route if I was running the server on port 80
0
1
0
0
2012-03-15T20:25:00.000
1
1.2
true
9,727,608
0
0
1
1
I have written a webapp using traditional cgi. I'm now trying to rewrite it with bottle The page is simple...the user fills out a form, hits submit and the data object is sent to a python script that used to live in my cgi-bin The python script generates an image, and prints the url for that image out to standard out On callback, I use javascript to display the newly generated image on the page formatted with html. The issue that I'm having with bottle is getting the image-generating script to execute when it receives the post request. I'm used to handling the post request and callback with javascript (or jquery). should I be using a bottle method instead?
Python subprocess in parallel
9,743,899
4
22
25,715
0
python,subprocess
You don't need to run a thread for each process. You can peek at the stdout streams for each process without blocking on them, and only read from them if they have data available to read. You do have to be careful not to accidentally block on them, though, if you're not intending to.
0
1
0
0
2012-03-16T20:08:00.000
4
0.197375
false
9,743,838
1
0
0
1
I want to run many processes in parallel with ability to take stdout in any time. How should I do it? Do I need to run thread for each subprocess.Popen() call, a what?
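The non-blocking approach the answer describes can be sketched with select(), which reports which stdout pipes actually have data before we read (the worker commands are hypothetical stand-ins):

```python
import select
import subprocess
import sys

# Hypothetical workers; each prints one line and exits.
procs = [
    subprocess.Popen(
        [sys.executable, "-c", "print('worker {} done')".format(i)],
        stdout=subprocess.PIPE,
    )
    for i in range(3)
]

lines = []
streams = {p.stdout: p for p in procs}
while streams:
    # select() blocks only until *some* pipe is readable, so a slow
    # worker never stalls reads from the others.
    ready, _, _ = select.select(list(streams), [], [])
    for stream in ready:
        line = stream.readline()
        if not line:                  # EOF: that worker is finished
            stream.close()
            del streams[stream]
        else:
            lines.append(line.decode().strip())

for p in procs:
    p.wait()
print(sorted(lines))  # ['worker 0 done', 'worker 1 done', 'worker 2 done']
```

No extra thread per process is needed; one loop multiplexes all the pipes, which is the answer's point about peeking without blocking.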
Google App Engine library imports
9,748,040
0
0
2,099
0
python,google-app-engine,google-api,google-api-client,google-api-python-client
The packages need to be locally available. Where did you put the packages: in the Python folder or in your project folder?
0
1
1
0
2012-03-17T04:30:00.000
2
0
false
9,747,258
0
0
1
1
I've been experimenting with Google App Engine, and I'm trying to import certain libraries in order to execute API commands. I've been having trouble importing, however. When I try to execute "from apiclient.discovery import build", my website no longer loads. When I test locally in IDLE, this command works.
change directory (python) doesnt work in localhost
9,757,219
6
0
213
0
python,google-app-engine,python-2.7
AppEngine restricts you from doing things that don't make sense. Your AppEngine application can't go wandering all over the filesystem once it is running on Google's servers, and Google's servers certainly don't have a C: drive. Whatever you are trying to accomplish by changing directories, it's something that you need to accomplish in a different way in an AppEngine application.
0
1
0
0
2012-03-18T09:15:00.000
1
1
false
9,757,203
0
0
1
1
import os; os.chdir("c:\Users") works in the command prompt but not on localhost (Google App Engine). Can anyone help?
default python does not locate modules installed with homebrew
10,824,368
0
0
311
0
python,macos,module,homebrew
From the Homebrew page: "Homebrew installs packages into their own isolated prefix and then symlinks everything into /usr/local" I think that the OS X preinstalled python looks for modules in /Library/Frameworks/Python.framework/Versions/Current//lib/python2.7/site-packages So maybe you need to symlink your Homebrew installed packages to there.
0
1
0
0
2012-03-18T22:54:00.000
1
0
false
9,763,056
0
0
0
1
I am installing modules with homebrew and other installers, and they are not recognized by my default python. Module installations with easy_install (such as pip) appear to be available for my system and system python). My default python is located here and is this version: 15:49 [~]: which python /usr/local/bin/python 15:49 [~]: python -d Python 2.7.2 (default, Mar 18 2012, 15:13:08) [GCC 4.2.1 (Apple Inc. build 5577)] on darwin Type "help", "copyright", "credits" or "license" for more information. The packages do appear to be located in /library/frameworks/, GEOS.framework is one example. What do I need to modify to gain access to my modules? System: Mac os x 10.5.8
How to use Coffeescript on Google App Engine
9,764,949
2
2
1,031
0
python,google-app-engine,coffeescript,go
Coffeescript compiles to Javascript, which can be run in a web browser. In that case, App Engine can serve up the resulting javascript. I don't know of any way to compile coffeescript to python, java or go though, so you can't use it as a server side language.
0
1
0
0
2012-03-19T04:03:00.000
2
1.2
true
9,764,895
0
0
1
1
Does anyone know if it is possible to use Coffeescript on Google App Engine? If so how can this be done with the app engine Python or Go platforms?
Practical server side includes with Python on Google App Engine
9,782,676
0
0
1,024
0
python,google-app-engine,server-side-includes,static-files
Or use a framework like Django, whose template inheritance helps with exactly this.
0
1
0
0
2012-03-19T15:43:00.000
2
0
false
9,773,232
0
0
1
1
Is there a decent way to "simulate" server side includes using Python on Google App Engine? I would really like to split my static html files up into smaller pieces for two reasons: 1) they will be easier to manage from a development perspective, and 2) HTML that is redundant across multiple pages can be more easily re-used, and updates to the HTML will show on all pages instead of having to copy and paste updates.
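Absent a framework, a tiny include expander over static fragments is enough to get SSI-like reuse; this sketch (the fragment names and directive syntax are my own choices, not an App Engine feature) could run once at startup with the rendered result cached:

```python
import re

# Shared fragments; on App Engine these could be read from files at startup.
FRAGMENTS = {
    "header.html": "<header>My Site</header>",
    "footer.html": "<footer>(c) 2012</footer>",
}

# SSI-style directive: <!--#include file="header.html"-->
INCLUDE_RE = re.compile(r'<!--#include file="([^"]+)"-->')

def render(page):
    """Replace each include directive with its fragment's contents."""
    return INCLUDE_RE.sub(lambda m: FRAGMENTS[m.group(1)], page)

page = ('<!--#include file="header.html"-->\n'
        "<p>Body</p>\n"
        '<!--#include file="footer.html"-->')
print(render(page))
```

Editing header.html then updates every page that includes it, which is the reuse the questioner is after.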
How to connect android device to specific AP with adb shell or monkeyrunner
10,211,905
0
1
799
0
android,python,android-intent,android-emulator,monkeyrunner
wpa_cli should work. Open wpa_cli, then run: add_network; set_network ssid "APSSID"; set_network key_mgmt NONE (if the AP is configured as open/none); save_config; enable. This set of commands should work if Wi-Fi is ON in the UI. Using monkeyrunner, navigating via keycodes is the only option, OR you need to make an APK for your specific operations.
0
1
0
1
2012-03-19T19:24:00.000
1
0
false
9,776,529
0
0
0
1
I am trying to connect an Android device to a specific AP without keycodes. I am looking for adb shell commands or a monkeyrunner script that can perform this. Hope you guys can help me with this. PS: After researching for days, the only way I found is using wpa_cli in adb shell, but I couldn't actually connect because I was not able to find the exact codes.
Is writing a daemon in Python a good idea?
9,779,553
1
18
2,480
0
python,daemon
I've written many things in C/C++ and Perl that are initiated when a Linux box's OS boots, launching them using rc.d. I've also written a couple of Java and Python scripts that are started the same way, but I needed a little shell script (.sh file) to launch them, and I used rc.5. Your concerns about the runtime environment are completely valid: you will have to be careful about which runlevel you use (only from rc.2 to rc.5, because rc.1 and rc.6 are for the system). If the runlevel is too low, the Python runtime might not be up at the time you launch your program and it could flop. E.g.: in a LAMP server, MySQL and Apache are started in rc.3, where the network is already available. I think your best shot is to write your script in Python and launch it using a .sh file from rc.5. Good luck!
0
1
0
1
2012-03-19T23:00:00.000
3
0.066568
false
9,779,200
1
0
0
2
I have to write a daemon program that constantly runs in the background and performs some simple tasks. The logic is not complicated at all, however it has to run for extended periods of time and be stable. I think C++ would be a good choice for writing this kind of application, however I'm also considering Python since it's easier to write and test something quickly in it. The problem that I have with Python is that I'm not sure how its runtime environment is going to behave over extended periods of time. Can it eat up more and more memory because of some GC quirks? Can it crash unexpectedly? I've never written daemons in Python before, so if anyone here did, please share your experience. Thanks!
Is writing a daemon in Python a good idea?
9,779,293
14
18
2,480
0
python,daemon
I've written a number of daemons in Python for my last company. The short answer is, it works just fine. As long as the code itself doesn't have some huge memory bomb, I've never seen any gradual degradation or memory hogging. Be mindful of anything in the global or class scopes, because they'll live on, so use del more liberally than you might normally. Otherwise, like I said, no issues I can personally report. And in case you're wondering, they ran for months and months (let's say 6 months usually) between routine reboots with zero problems.
0
1
0
1
2012-03-19T23:00:00.000
3
1.2
true
9,779,200
1
0
0
2
I have to write a daemon program that constantly runs in the background and performs some simple tasks. The logic is not complicated at all, however it has to run for extended periods of time and be stable. I think C++ would be a good choice for writing this kind of application, however I'm also considering Python since it's easier to write and test something quickly in it. The problem that I have with Python is that I'm not sure how its runtime environment is going to behave over extended periods of time. Can it eat up more and more memory because of some GC quirks? Can it crash unexpectedly? I've never written daemons in Python before, so if anyone here did, please share your experience. Thanks!
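The accepted answer's advice about scopes and del can be made concrete with a minimal worker-loop sketch (do_work and the pacing are placeholders; a bounded run is shown so the example terminates):

```python
import time

def do_work():
    # Stand-in for the daemon's real task.
    return list(range(1000))

def run(iterations=None):
    """Run forever when iterations is None; bounded runs are for testing."""
    done = 0
    while iterations is None or done < iterations:
        result = do_work()   # keep per-iteration state local...
        # ... use result here ...
        del result           # ...and drop it before idling, as the answer suggests
        done += 1
        if iterations is None:
            time.sleep(1.0)  # a real daemon paces itself
    return done

print(run(iterations=3))  # → 3
```

Because nothing from one iteration survives into the next at module or class scope, the process's memory footprint stays flat over months of uptime, which matches the answer's experience.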
bash: pip: command not found
40,450,261
2
580
1,748,193
0
python,macos,pip,python-2.6
(Context: My OS is Amazon Linux on AWS. It seems similar to RedHat but stripped down a bit.) Exit the shell, then open a new shell. The pip command now works. That's what solved the problem at this location. You might want to know as well: the pip commands to install software then needed to be written like this example (jupyter, for example) to work correctly on my system: pip install jupyter --user. Specifically, note the lack of sudo and the presence of --user. Would be real nice if pip docs had said anything about all this, but that would take typing in more characters I guess.
0
1
0
0
2012-03-20T02:43:00.000
36
0.011111
false
9,780,717
1
0
0
10
I downloaded pip and ran python setup.py install and everything worked just fine. The very next step in the tutorial is to run pip install <lib you want> but before it even tries to find anything online I get an error "bash: pip: command not found". This is on Mac OS X, which I'm new to, so I'm assuming there's some kind of path setting that was not set correctly when I ran setup.py. How can I investigate further? What do I need to check to get a better idea of the exact cause of the problem? EDIT: I also tried installing Python 2.7 for Mac in the hopes that the friendly install process would do any housekeeping like editing PATH and whatever else needs to happen for everything to work according to the tutorials, but this didn't work. After installing, running 'python' still ran Python 2.6 and PATH was not updated.
bash: pip: command not found
51,316,425
1
580
1,748,193
0
python,macos,pip,python-2.6
What I did to overcome this was sudo apt install python-pip. It turned out my virtual machine did not have pip installed yet. It's conceivable that other people could have this scenario too.
0
1
0
0
2012-03-20T02:43:00.000
36
0.005555
false
9,780,717
1
0
0
10
I downloaded pip and ran python setup.py install and everything worked just fine. The very next step in the tutorial is to run pip install <lib you want> but before it even tries to find anything online I get an error "bash: pip: command not found". This is on Mac OS X, which I'm new to, so I'm assuming there's some kind of path setting that was not set correctly when I ran setup.py. How can I investigate further? What do I need to check to get a better idea of the exact cause of the problem? EDIT: I also tried installing Python 2.7 for Mac in the hopes that the friendly install process would do any housekeeping like editing PATH and whatever else needs to happen for everything to work according to the tutorials, but this didn't work. After installing, running 'python' still ran Python 2.6 and PATH was not updated.
bash: pip: command not found
54,471,268
2
580
1,748,193
0
python,macos,pip,python-2.6
Not sure why this wasn't mentioned before, but the only thing that worked for me (on my NVIDIA Xavier) was: sudo apt-get install python3-pip (or sudo apt-get install python-pip for Python 2).
0
1
0
0
2012-03-20T02:43:00.000
36
0.011111
false
9,780,717
1
0
0
10
I downloaded pip and ran python setup.py install and everything worked just fine. The very next step in the tutorial is to run pip install <lib you want> but before it even tries to find anything online I get an error "bash: pip: command not found". This is on Mac OS X, which I'm new to, so I'm assuming there's some kind of path setting that was not set correctly when I ran setup.py. How can I investigate further? What do I need to check to get a better idea of the exact cause of the problem? EDIT: I also tried installing Python 2.7 for Mac in the hopes that the friendly install process would do any housekeeping like editing PATH and whatever else needs to happen for everything to work according to the tutorials, but this didn't work. After installing, running 'python' still ran Python 2.6 and PATH was not updated.
bash: pip: command not found
62,098,214
1
580
1,748,193
0
python,macos,pip,python-2.6
The problem seems to be that your Python version and the library you want to install do not match. For example: if the library is Django 3 and your Python version is 2.7, you may get this error (hence the questioner's note that "after installing, running 'python' still ran Python 2.6 and PATH was not updated"). 1) Install the latest version of Python. 2) Change your PATH manually to point at it (e.g. python38) and compare them. 3) Try to reinstall. I solved this problem by replacing the PATH entry with the latest version of Python. As for Windows: ;C:\python38\Scripts
0
1
0
0
2012-03-20T02:43:00.000
36
0.005555
false
9,780,717
1
0
0
10
I downloaded pip and ran python setup.py install and everything worked just fine. The very next step in the tutorial is to run pip install <lib you want> but before it even tries to find anything online I get an error "bash: pip: command not found". This is on Mac OS X, which I'm new to, so I'm assuming there's some kind of path setting that was not set correctly when I ran setup.py. How can I investigate further? What do I need to check to get a better idea of the exact cause of the problem? EDIT: I also tried installing Python 2.7 for Mac in the hopes that the friendly install process would do any housekeeping like editing PATH and whatever else needs to happen for everything to work according to the tutorials, but this didn't work. After installing, running 'python' still ran Python 2.6 and PATH was not updated.
bash: pip: command not found
60,697,532
4
580
1,748,193
0
python,macos,pip,python-2.6
To overcome the issue "bash: pip: command not found" on Mac: I found two Python versions on my Mac, 2.7 and 3.7. When I run sudo easy_install pip, pip gets installed under 2.7; when I run sudo easy_install-3.7 pip, pip gets installed under 3.7. But whenever I do pip install, I want the package installed under Python 3.7, so I set an alias (alias pip=pip3) in .bash_profile. Now, whenever I do pip install, it gets installed under Python 3.7.
0
1
0
0
2012-03-20T02:43:00.000
36
0.022219
false
9,780,717
1
0
0
10
I downloaded pip and ran python setup.py install and everything worked just fine. The very next step in the tutorial is to run pip install <lib you want> but before it even tries to find anything online I get an error "bash: pip: command not found". This is on Mac OS X, which I'm new to, so I'm assuming there's some kind of path setting that was not set correctly when I ran setup.py. How can I investigate further? What do I need to check to get a better idea of the exact cause of the problem? EDIT: I also tried installing Python 2.7 for Mac in the hopes that the friendly install process would do any housekeeping like editing PATH and whatever else needs to happen for everything to work according to the tutorials, but this didn't work. After installing, running 'python' still ran Python 2.6 and PATH was not updated.
bash: pip: command not found
62,380,015
6
580
1,748,193
0
python,macos,pip,python-2.6
Using sudo easy_install pip solved my problem.
0
1
0
0
2012-03-20T02:43:00.000
36
1
false
9,780,717
1
0
0
10
I downloaded pip and ran python setup.py install and everything worked just fine. The very next step in the tutorial is to run pip install <lib you want> but before it even tries to find anything online I get an error "bash: pip: command not found". This is on Mac OS X, which I'm new to, so I'm assuming there's some kind of path setting that was not set correctly when I ran setup.py. How can I investigate further? What do I need to check to get a better idea of the exact cause of the problem? EDIT: I also tried installing Python 2.7 for Mac in the hopes that the friendly install process would do any housekeeping like editing PATH and whatever else needs to happen for everything to work according to the tutorials, but this didn't work. After installing, running 'python' still ran Python 2.6 and PATH was not updated.
bash: pip: command not found
64,437,923
15
580
1,748,193
0
python,macos,pip,python-2.6
Latest update, 2021: works perfectly on Ubuntu 20 64-bit. Install Python 3: sudo apt install python3. Install pip: sudo apt install python3-pip. Add the following alias in $HOME/.bash_aliases (in some cases the file may be hidden): alias pip="/usr/bin/python3 -m pip". Refresh the current terminal session: . ~/.profile. Check pip usage: pip. Install a package: pip install {{package_name}}. Extra info: to get the home path, run echo $HOME and you will get your home path.
0
1
0
0
2012-03-20T02:43:00.000
36
1
false
9,780,717
1
0
0
10
I downloaded pip and ran python setup.py install and everything worked just fine. The very next step in the tutorial is to run pip install <lib you want> but before it even tries to find anything online I get an error "bash: pip: command not found". This is on Mac OS X, which I'm new to, so I'm assuming there's some kind of path setting that was not set correctly when I ran setup.py. How can I investigate further? What do I need to check to get a better idea of the exact cause of the problem? EDIT: I also tried installing Python 2.7 for Mac in the hopes that the friendly install process would do any housekeeping like editing PATH and whatever else needs to happen for everything to work according to the tutorials, but this didn't work. After installing, running 'python' still ran Python 2.6 and PATH was not updated.
bash: pip: command not found
38,888,588
23
580
1,748,193
0
python,macos,pip,python-2.6
Installing using apt-get installs a system wide pip, not just a local one for your user. Try this command to get pip running on your system ... $ sudo apt-get install python-pip python-dev build-essential Then pip will be installed without any issues and you will be able to use "sudo pip...".
0
1
0
0
2012-03-20T02:43:00.000
36
1
false
9,780,717
1
0
0
10
I downloaded pip and ran python setup.py install and everything worked just fine. The very next step in the tutorial is to run pip install <lib you want> but before it even tries to find anything online I get an error "bash: pip: command not found". This is on Mac OS X, which I'm new to, so I'm assuming there's some kind of path setting that was not set correctly when I ran setup.py. How can I investigate further? What do I need to check to get a better idea of the exact cause of the problem? EDIT: I also tried installing Python 2.7 for Mac in the hopes that the friendly install process would do any housekeeping like editing PATH and whatever else needs to happen for everything to work according to the tutorials, but this didn't work. After installing, running 'python' still ran Python 2.6 and PATH was not updated.
bash: pip: command not found
9,781,267
15
580
1,748,193
0
python,macos,pip,python-2.6
To solve: 1) Add this line to ~/.bash_profile: export PATH="/usr/local/bin:$PATH" 2) In a terminal window, run: source ~/.bash_profile
0
1
0
0
2012-03-20T02:43:00.000
36
1
false
9,780,717
1
0
0
10
I downloaded pip and ran python setup.py install and everything worked just fine. The very next step in the tutorial is to run pip install <lib you want> but before it even tries to find anything online I get an error "bash: pip: command not found". This is on Mac OS X, which I'm new to, so I'm assuming there's some kind of path setting that was not set correctly when I ran setup.py. How can I investigate further? What do I need to check to get a better idea of the exact cause of the problem? EDIT: I also tried installing Python 2.7 for Mac in the hopes that the friendly install process would do any housekeeping like editing PATH and whatever else needs to happen for everything to work according to the tutorials, but this didn't work. After installing, running 'python' still ran Python 2.6 and PATH was not updated.
bash: pip: command not found
69,341,076
0
580
1,748,193
0
python,macos,pip,python-2.6
If on Windows and using the Python installer, make sure to check the "Add Python to environment variables" option. After installation, restart your shell and retry to see if pip exists.
0
1
0
0
2012-03-20T02:43:00.000
36
0
false
9,780,717
1
0
0
10
I downloaded pip and ran python setup.py install and everything worked just fine. The very next step in the tutorial is to run pip install <lib you want> but before it even tries to find anything online I get an error "bash: pip: command not found". This is on Mac OS X, which I'm new to, so I'm assuming there's some kind of path setting that was not set correctly when I ran setup.py. How can I investigate further? What do I need to check to get a better idea of the exact cause of the problem? EDIT: I also tried installing Python 2.7 for Mac in the hopes that the friendly install process would do any housekeeping like editing PATH and whatever else needs to happen for everything to work according to the tutorials, but this didn't work. After installing, running 'python' still ran Python 2.6 and PATH was not updated.
Should Python library modules start with #!/usr/bin/env python?
9,783,492
0
8
1,029
0
python,coding-style,shebang
If you want your script to be directly executable, you have to include this line; a module that is only ever imported doesn't need it.
0
1
0
1
2012-03-20T08:27:00.000
3
0
false
9,783,482
1
0
0
1
Should Python library modules start with #!/usr/bin/env python? Looking at first lines of *.py in /usr/share/pyshared (where Python libs are stored in Debian) reveals that there are both files that start with the hashbang line and those that do not. Is there a reason to include or omit this line?
Google App Engine Development and Production Environment Setup
9,793,302
0
1
398
0
django,google-app-engine,github,development-environment,python-2.7
I'm on a pretty similar setup, though I'm still running on py2.5 and django-nonrel. 1) I usually use 'git status' or 'git gui' to see if I forgot to check in files. 2) I personally don't check in my datastore. Are you familiar with .gitignore? It's a text file in which you list files for git to ignore when you run 'git status' and other functions. I put in .gaedata as well as .pyc and backup files. To manage the database I use "python manage.py dumpdata > file", which dumps the database to a JSON-encoded file. Then I can reload it using "python manage.py loaddata". 3) I don't know of any deploy-from-git. You can probably write a little Python script to check whether git is up to date before you deploy. Personally though, I deploy stuff to test to make sure it's working before I check it in.
0
1
0
0
2012-03-20T13:58:00.000
1
1.2
true
9,788,264
0
0
1
1
Here is my current setup: GitHub repository, a branch for dev. myappdev.appspot.com (not real url) myapp.appspot.com (not real url) App written on GAE Python 2.7, using django-nonrel Development is performed on a local dev server. When I'm ready to release to dev, I increment the version, commit, and run "manage.py upload" to the myappdev.appspot.com Once testing is satisfactory, I merge the changes from dev to main repo. I then run "manage.py upload" to upload the main repo code to the myapp.appspot.com domain. Is this setup good? Here are a few issues I've run into. 1) I'm new to git, so sometimes I forget to add files, and the commit doesn't notify me. So I deploy code to dev that works, but does not match what is in the dev branch. (This is bad practice). 2) The datastore file in the git repo causes issues. Merging binary files? Is it ok to migrate this file between local machines, or will it get messed up? 3) Should I be using "manage.py upload" for each release to the dev or prod environment, or is there a better way to do this? Heroku looks like it can pull right from GitHub. The way I'm doing it now seems like there is too much room for human error. Any overall suggestions on how to improve my setup? Thanks!
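A sketch of point 2 from the answer above as concrete commands (the entry names are guesses at the project layout; adjust as needed). Keeping the binary datastore file out of the repository sidesteps the merge problem entirely, while the JSON dump stays diff-able:

```shell
# Write a .gitignore for the project root (entries are illustrative).
cat > .gitignore <<'EOF'
.gaedata
*.pyc
*~
EOF

# Instead of versioning the binary datastore, dump/reload it as JSON:
#   python manage.py dumpdata > fixtures.json
#   python manage.py loaddata fixtures.json
```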
Google AppEngine and Threaded Workers
9,790,858
5
0
152
0
python,multithreading,google-app-engine,queue
You can use "backends" or "task queues" to run processes in the background. Tasks have a 10-minute run time limit, and backends have no run time limit. There's also a cronjob mechanism which can trigger requests at regular intervals. You can fetch the data from external servers with the "URLFetch" service.
0
1
0
0
2012-03-20T14:21:00.000
2
0.462117
false
9,788,635
0
0
1
2
I am currently trying to develop something using Google App Engine. I am using Python as my runtime and need some advice on setting up the following. I am running a web server that provides JSON data to clients. The data comes from an external service that I have to pull it from. What I need is a background system that will check memcache to see if there are any required IDs; if there is an ID, I need to fetch some data for that ID from the external source and place the data in memcache. If there are multiple IDs (> 30), I need to be able to make all 30 requests as quickly and efficiently as possible. I am new to Python development and App Engine, so any advice you can give would be great. Thanks.
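A pure-Python sketch of batching the 30+ IDs into fetch batches (function names are illustrative, and the `fetch` callable is injected so the sketch stays runnable anywhere; on App Engine it would wrap the URLFetch service and write results to memcache):

```python
def chunk(ids, size):
    """Split a list of IDs into batches of at most `size` items."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

def fetch_all(ids, fetch, batch_size=10):
    # On App Engine, `fetch` would wrap urlfetch (possibly the async
    # variant) and the results would be stored in memcache; here it is
    # a plain callable for illustration.
    results = {}
    for batch in chunk(ids, batch_size):
        for item_id in batch:
            results[item_id] = fetch(item_id)
    return results
```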
Google AppEngine and Threaded Workers
9,809,659
1
0
152
0
python,multithreading,google-app-engine,queue
Note that using memcache as the communication mechanism between front-end and back-end is unreliable -- the contents of memcache may be partially or fully erased at any time (and it does happen from time to time). Also note that you can't query memcache if you don't know the exact keys ahead of time. It's probably better to use the task queue to queue up requests instead of memcache, or to use the datastore as the storage mechanism.
0
1
0
0
2012-03-20T14:21:00.000
2
0.099668
false
9,788,635
0
0
1
2
I am currently trying to develop something using Google App Engine. I am using Python as my runtime and need some advice on setting up the following. I am running a web server that provides JSON data to clients. The data comes from an external service that I have to pull it from. What I need is a background system that will check memcache to see if there are any required IDs; if there is an ID, I need to fetch some data for that ID from the external source and place the data in memcache. If there are multiple IDs (> 30), I need to be able to make all 30 requests as quickly and efficiently as possible. I am new to Python development and App Engine, so any advice you can give would be great. Thanks.
Is it possible to find out which USB port a MIDI device is connected to in portmidi / pyportmidi
9,790,821
0
3
455
0
python,usb,midi,pyportmidi
lsusb should do the trick. All devices and their respective hubs are listed there.
0
1
0
1
2012-03-20T16:19:00.000
1
0
false
9,790,715
0
0
0
1
I'm connecting a several identical USB-MIDI devices and talking to them using Python and pyportmidi. I have noticed that when I run my code on Linux, occasionally the MIDI ports of the devices are enumerated in a different order, so I send messages to the wrong devices. As the devices do not have unique identifiers, I am told that I should identify them by which USB port they are connected to. Is there any way to retrieve this information? My app will run on Linux, but Mac OS support is useful for development. It's annoying because they usually enumerate in a sensible order - the first device in the hub is the first device in portmidi, but sometimes they don't - usually the first 2 devices are switched. I have to physically move the devices without unplugging to fix them.
hook into wndproc of another application?
10,608,464
2
7
1,449
0
python,winapi,wndproc
You are going about this the wrong way. If you think about it, you'll realize that responding to menu events with your custom "actions" must require some code to run in the process that you're targeting. This means you'll need to inject code into the other process in order to achieve what you want. Since you're going to need to inject code anyway, I strongly suggest you look at DLL-injecting into the other process (search "Dll Injection example"). This will bootstrap your code into the other process, and you can construct your menu there. This also has the advantage that the foreign app won't be reliant on your app being responsive - it'll all be in-process.
1
1
0
0
2012-03-20T21:58:00.000
1
1.2
true
9,795,645
0
0
0
1
I have a small question and hope someone can help me. Is there any way to hook into another application's WNDPROC? The situation is that I want to insert a menu in the other app's menu bar, and I want to define the commands for every menu item. I was able to insert the menu with menu items using some Win32 API functions (user32.dll), but I can't set the commands of the menu items so that they actually do something when clicked. With some googling, I got some information about WNDPROC: I should intercept the command ID sent and trigger some function, but I'm stuck. Can anyone help me?
GAE wait response from other source
9,808,831
1
0
68
0
android,python,http,google-app-engine
You don't seem to have understood how web applications work. They don't wait for signals - or, rather, that's all they do. Every page served by a Web service is in response to a signal, ie a request. Your web service just needs to respond to normal requests in the normal way.
0
1
0
0
2012-03-21T15:56:00.000
1
0.197375
false
9,808,250
0
0
1
1
I don't know how to phrase the title more clearly. I want to build a site on Google App Engine (GAE) and an app on Android. The site on GAE should wait for a signal from the Android app and handle that signal to do something. I don't know what keywords to search Google for ("waiting for a signal on GAE"?), or what method I should use. Does anyone have related experience with this? Thank you very much.
Apache httpd doesn't load .bashrc
9,815,735
0
2
2,452
0
python,bash,cgi,apache
What? Surely you don't mean that your scripts rely on configurations in some account's personal home directory. Apache config files can export environment variables to CGI scripts, etc. Maybe your program is too dependent on too many environment variables. How about supporting a configuration file: /etc/mypythonprogram.rc. There can be a single environment variable telling the program to use an alternative config file, for flexibility.
0
1
0
1
2012-03-22T02:33:00.000
2
0
false
9,815,655
0
0
0
1
I'm running Python scripts as a CGI under Apache 2.2. These scripts rely on environment variables set in my .bashrc to run properly. The .bashrc is never loaded, and my scripts fail. I don't want to duplicate my bashrc by using a bunch of SETENV commands; the configuration files will easily get out of sync and cause hard-to-find bugs. I'm running apache as my user, not as root. I'm starting/stopping it manually, so the /etc/init.d script shouldn't matter at all (I think). Given these constraints, what can I do to have my .bashrc loaded when my CGI is called? Edit: I use /usr/sbin/apache2ctl to do the restarting.
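A sketch of the config-file idea from the answer (the file name and KEY=VALUE format are assumptions): the CGI script parses its settings once at startup instead of depending on a shell's .bashrc being sourced.

```python
def load_rc(path):
    """Read KEY=VALUE pairs from an rc file such as /etc/mypythonprogram.rc.

    Blank lines and '#' comments are skipped. A minimal sketch, not a
    full shell parser (no quoting or variable expansion).
    """
    settings = {}
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
    return settings
```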
How do I see the Python doc on Linux?
9,818,314
6
31
15,843
0
python,linux,doc
If you use the Fedora distribution, then yum install python-docs. Other distributions may provide similar packages.
0
1
0
0
2012-03-22T06:56:00.000
9
1
false
9,817,712
0
0
0
1
On Windows, Python ships with a CHM-format document, which is very convenient to read. On Linux, is there a document I can read in the same way?
Model to create threaded comments on Google Appengine with Python
9,819,945
0
1
380
0
python,google-app-engine
It depends on how deeply nested you expect the threads to get and whether you want to optimize reading or writing. If we assume that the threads are normally quite shallow and that you want to optimize reading all subcomments of a comment at every level, I think you should store each comment in a separate entity and then put a reference to it in a list on the parent, the parent's parent, and so forth. That way getting all subcomments is always just one call, but writing a new comment is a bit slower, since you have to modify all the parents.
0
1
0
0
2012-03-22T09:15:00.000
2
0
false
9,819,260
0
0
1
1
I am trying to write a comments model which can be threaded (no limits to the number of child threads). How do I do this in Appengine? and what is the fastest way to read all comments? I am trying to do this in a scalable way so that app engine's new pricing does not kill my startup :)
When invoking a Python script, what is the difference between "./script.py" and "python script.py"
9,826,394
0
6
2,513
0
python,django,posix,sh
./script.py runs the interpreter defined in the #! at the beginning of the file. For example, the first line might be #! /usr/bin/env python or #! /usr/bin/python or something else like that. If you look at what interpreter is invoked, you might be able to fix that problem.
0
1
0
1
2012-03-22T16:16:00.000
4
0
false
9,826,313
0
0
0
2
One difference is that "./script.py" only works if script.py is executable (as in file permissions), but "python script.py" works regardless. However, I strongly suspect there are more differences, and I want to know what they are. I have a Django website, and "python manage.py syncdb" works just fine, but "./manage.py syncdb" creates a broken database for some reason that remains a mystery to me. Maybe it has to do with the fact that syncdb prompts for a superuser name and password from the command line, and maybe using "./manage.py syncdb" changes the way it interacts with the command line, thus mangling the password. Maybe? I am just baffled by this bug. "python manage.py syncdb" totally fixes it, so this is just curiosity. Thanks. Edit: Right, right, I forgot about the necessity of the shebang line #!/usr/bin/python. But I just checked, "python manage.py syncdb" and "./manage.py syncdb" are using the same Python interpreter (2.7.2, the only one installed, on Linux Mint 12). Yet the former works and the latter does not. Could the environment variables seen by the Python code be different? My code does require $LD_LOADER_PATH and $PYTHON_PATH to be set special for each shell.
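To chase down a discrepancy like this, a small diagnostic sketch (field names are illustrative) can be dropped at the top of the script; run it once each way and compare the output:

```python
import os
import sys

def interpreter_info():
    """Collect details that commonly differ between './script.py'
    and 'python script.py' invocations."""
    return {
        "executable": sys.executable,                 # which interpreter binary ran
        "version": sys.version.split()[0],            # its version
        "argv0": sys.argv[0],                         # how the script was invoked
        "pythonpath": os.environ.get("PYTHONPATH", ""),  # env seen by the process
    }
```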
When invoking a Python script, what is the difference between "./script.py" and "python script.py"
9,826,923
1
6
2,513
0
python,django,posix,sh
On Linux, using the terminal, you can execute any file -- if the user has execute permission -- by typing ./fileName. When the OS sees a valid header like #! /usr/bin/python (or, for Perl, #! /usr/bin/perl), it will call the appropriate interpreter to execute the program. You can use the command python script.py directly because python is an executable program located at /usr/bin (or somewhere else) that is on $PATH, the environment variable listing the directories searched for executables.
0
1
0
1
2012-03-22T16:16:00.000
4
0.049958
false
9,826,313
0
0
0
2
One difference is that "./script.py" only works if script.py is executable (as in file permissions), but "python script.py" works regardless. However, I strongly suspect there are more differences, and I want to know what they are. I have a Django website, and "python manage.py syncdb" works just fine, but "./manage.py syncdb" creates a broken database for some reason that remains a mystery to me. Maybe it has to do with the fact that syncdb prompts for a superuser name and password from the command line, and maybe using "./manage.py syncdb" changes the way it interacts with the command line, thus mangling the password. Maybe? I am just baffled by this bug. "python manage.py syncdb" totally fixes it, so this is just curiosity. Thanks. Edit: Right, right, I forgot about the necessity of the shebang line #!/usr/bin/python. But I just checked, "python manage.py syncdb" and "./manage.py syncdb" are using the same Python interpreter (2.7.2, the only one installed, on Linux Mint 12). Yet the former works and the latter does not. Could the environment variables seen by the Python code be different? My code does require $LD_LOADER_PATH and $PYTHON_PATH to be set special for each shell.
Storing constant data: python module v. the datastore
9,838,594
2
0
269
0
python,google-app-engine,google-cloud-datastore
If it's small and read-only, it's a much better idea to store the data locally - nothing beats the latency of local memory. Note you don't have to store it as a Python module - any data file will work, if you write the code to read it into memory.
0
1
0
0
2012-03-23T08:14:00.000
1
1.2
true
9,835,895
1
0
0
1
I am developing a simple data visualization application in python with Google AppEngine. The data has the following properties: structure : simple key - tuple-of-int size : in the order (1-10mb on disk or in memory when loaded by the python interpreter) read-only (uploaded once and for all by me, not modified by users) This data could be stored in: the datastore a large (1-10mb) python module Since imported python module are cached, the costly import would be rare and the data would be held directly in memory most of the time which is bound to be more efficient (in time and money) than placing datastore requests. Has anybody debated this before? Any experience to share? Would there be any cons to using the python module approach with that use case? Many thanks, Nic
How to determine if Python script was run via command line?
9,839,781
1
36
35,970
0
python,command-line
This is typically done manually; I don't think there is an automatic way to do it that works for every case. You should add a --pause argument to your script that prompts for a key at the end. When the script is invoked from a command line by hand, the user can add --pause if desired, but by default there won't be any wait. When the script is launched from an icon, the arguments in the icon should include --pause, so that there is a wait. Unfortunately you will need to either document the use of this option so that the user knows it needs to be added when creating an icon, or else provide an icon-creation function in your script that works for your target OS.
0
1
0
1
2012-03-23T12:32:00.000
13
0.015383
false
9,839,240
0
0
0
1
Background I would like my Python script to pause before exiting using something similar to: raw_input("Press enter to close.") but only if it is NOT run via command line. Command line programs shouldn't behave this way. Question Is there a way to determine if my Python script was invoked from the command line: $ python myscript.py verses double-clicking myscript.py to open it with the default interpreter in the OS?
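A sketch of the --pause approach suggested in the answer above (the flag name follows that suggestion; the prompt function is injectable so the snippet stays testable without a terminal):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument("--pause", action="store_true",
                        help="wait for Enter before exiting (use in shortcuts/icons)")
    return parser

def maybe_pause(pause, prompt=input):
    """Call at the very end of the script; blocks only when --pause was given."""
    if pause:
        prompt("Press enter to close.")
```

A desktop shortcut would then invoke `python myscript.py --pause`, while command-line users just run `python myscript.py`.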
how to show an object's type in google app engine
9,885,834
1
0
89
0
python,google-app-engine,types
When the browser renders the HTML, it thinks that <type 'str'> is an (unknown) tag, so it renders it as <type 'str'></type>; hence it becomes part of your page markup rather than visible text... You can see this with Firebug or any similar tool.
0
1
0
0
2012-03-26T12:22:00.000
1
1.2
true
9,872,029
0
0
1
1
For debugging I want to show the type of a variable in Google App Engine. In a traditional environment, I would use "print type( x )" to do it. But in GAE I don't know why I can't use self.response.out.write( str( type( x ) ) ) to echo it in the browser. I got confused because the str() call did transform the <type 'type'> into a string. Since that doesn't work, I have to use self.response.out.write( str( type( x ) == type( "123" ) ) ) instead of directly echoing the type. So what am I missing? I am also using the logging module to echo the type, which works well. But I still want to know why self.response.out.write( ) doesn't work. Thanks all for the help!
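What the answer describes is an HTML-escaping issue; a minimal sketch of the fix (the manual replacements are illustrative, the standard library's cgi.escape on Python 2, or html.escape on Python 3, does the same job):

```python
def html_escape(text):
    """Escape the characters that make "<type 'str'>" look like an HTML tag.

    Minimal sketch; prefer the standard library's escaping helpers in
    real code. The '&' replacement must come first.
    """
    return (text.replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;"))

# self.response.out.write(html_escape(str(type(x)))) would then show the
# literal text in the browser instead of an unknown tag.
```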
How to send emails from django App in Google App Engine
9,875,058
2
3
1,520
0
python,django,google-app-engine
Google App Engine only allows you to send emails from addresses on domains that Google controls. So you will either have to send from the test domain they give you, use a Gmail account, or use their name servers for your own domain name.
0
1
0
0
2012-03-26T15:26:00.000
4
0.099668
false
9,874,959
0
0
1
1
I have created a Django application and hosted it on Google App Engine. I can send emails from the Django application, but after hosting it on App Engine I can't do that. I'm really stuck with this problem, so please tell me if there is a solution for using Django's email functions on Google App Engine. I have tried appengine_django but it is not working. Django version 1.3.1, Python version 2.6.5.
SGE script: print to file during execution (not just at the end)?
15,344,911
3
5
4,083
0
python,qsub,sungridengine
This is SGE buffering the output of your process; it happens whether it's a Python process or any other. In general you can decrease or disable the buffering in SGE by changing it and recompiling, but it's not a great thing to do: all that data is going to be slowly written to disk, affecting your overall performance.
0
1
0
0
2012-03-26T17:40:00.000
8
0.07486
false
9,876,967
0
0
0
1
I have an SGE script that executes some Python code, submitted to the queue using qsub. In the Python script, I have a few print statements (updating me on the progress of the program). When I run the Python script from the command line, the print statements are sent to stdout. For the SGE script, I use the -o option to redirect the output to a file. However, it seems that the script will only send these to the file after the Python script has completed running. This is annoying because (a) I can no longer see real-time updates on the program and (b) if my job does not terminate correctly (for example if it gets kicked off the queue) none of the updates are printed. How can I make sure that the script writes to the file each time I want to print something, as opposed to lumping it all together at the end?
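On the Python side, the buffering can be worked around without touching SGE by flushing after every progress message (running the interpreter as `python -u` is another common option); a sketch, with the stream injectable for testing:

```python
import sys

def log_progress(message, stream=sys.stdout):
    """Write a progress line and flush immediately, so the file that SGE's
    -o option redirects to is updated in real time rather than only when
    the job exits (or gets kicked off the queue)."""
    stream.write(message + "\n")
    stream.flush()
```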
enthought python distribution wx
11,151,178
1
1
682
0
python,64-bit,pygame,wxwidgets,enthought
I had the same problem. The only way around it that has worked for me is to uninstall your EPD version ($ sudo remove-EPD-7.2-1, or whichever version you have) and reinstall the 32 bit version. Wx comes as part of the EPD package, so once you have downloaded the 32 bit version there is no need to download and install wx.
0
1
0
1
2012-03-27T19:53:00.000
1
1.2
true
9,896,687
0
0
0
1
For some reason my 64-bit EPD can't import wx. I also tried to install the wxPython2.8-osx-unicode-py2.7 version from the wx site. It installed successfully, but is nowhere to be found on my hard drive. I checked the site-packages for 2.7 and for EPD 7.2.2, where all the modules usually should be installed. I am confused. This raises a similar question: how can I install modules that are not part of EPD? I also didn't have luck installing other modules. And every time I try to import older modules it doesn't work either. Often I get an error message that the architectures in the universal wrapper are wrong. For example pygame doesn't have a 64-bit version that works with 2.7, so I installed the 32-bit version. If I try to do the trick arch -i386 /Path to python , I get "Bad CPU type in executable". I am running a 64-bit version of Python on a 64-bit Mac OS. I wonder if Enthought 7.2 is equivalent to Python 2.7, and if not (as I assume), what the differences are. Any hints that could solve this would be awesome. Thanks for your patience.
How do i run the python 'sdist' command from within a python automated script without using subprocess?
9,918,260
0
6
1,605
0
python,subprocess,packaging,pip,distutils
If you don’t have a real reason to avoid subprocesses (i.e. lack of platform support, not just aesthetics (“I see no point”)), then I suggest you should just not care and run in a subprocess. There are a few ways to achieve what you request, but they have their downsides (like having to catch exceptions and reporting errors).
0
1
0
0
2012-03-28T10:36:00.000
2
0
false
9,905,743
1
0
0
1
I am writing a script to automate the packaging of a 'home-made' Python module and distributing it to a remote machine. I am using pip and have created a setup.py file, but I then have to call the subprocess module to run the "python setup.py sdist" command. I have looked at the "run_setup" method in distutils.core, but I am trying to avoid using the subprocess module altogether (I see no point in opening a shell to run a Python command if I am already in Python...). Is there a way to import the distutils module into my script and pass the setup information directly to one of its methods, avoiding the shell command entirely? Any other suggestions that may help are welcome. Thanks.
Trouble running waitress as a service on Windows Server 2003
13,188,519
0
1
659
0
python,windows-services,waitress
Hmm, March, a bit late. You can't debug without more info. I'd turn the logging all the way up to debug, if not already done. I'd also check the NT event log. The only things coming to mind are the firewall and extra restrictions on apps in Server 2003. If possible, try adding an exception for your app.
0
1
0
0
2012-03-28T15:27:00.000
1
0
false
9,910,788
0
0
0
1
So I've written a windows service in python which starts a subprocess that runs a Waitress server, monitors a directory for changes, and restarts the server when a change is detected. On Windows 7, everything works fine. On Windows Server 2003, where I have to deploy this server, the server fails to bind to its port. I've tried running the service as several different users, including NetworkService, but nothing seems to work. There's nothing in the waitress logs, either. How can I even debug this?
Determing PID of CGI Script on Windows
9,912,713
0
0
222
0
python,windows,process,kill
Why are you sure that your CGI script is still running when you try to kill it? The web server starts one instance of the CGI script per request, and when the script finishes, it... just finishes.
0
1
0
1
2012-03-28T16:47:00.000
1
0
false
9,912,121
0
0
0
1
I'm obtaining a PID, using Python, of a CGI script; however, the PID is not valid, i.e. I can't Taskkill it from the command line. I get "Process: no process found with pid xxxx", where xxxx is the PID. I thought maybe I have to kill a parent Python shell instance, but os.getppid doesn't work on Windows. So I installed the psutil Python module and can now get the parent PID, but it just shows the parent as the actual web server (Abyss), which I don't think I want to kill, since that http process runs constantly and is not just a CGI interpreter instance. Using psutil I CAN get the process status of the actual script, using the PID returned by os.getpid(), and see that it is running. So the PID works for purposes of process-information retrieval using psutil. But this gets me no further to obtaining the actual PID I need to kill the script, using EITHER Taskkill on the command line or kill() from psutil! What exactly is a CGI script from a process perspective, and if it is not a process, why is os.getpid() returning a PID?
different PYSTARTUP variables for different python installations
9,915,173
0
0
184
0
python,environment-variables
You can always export PYSTARTUP="whatever" in the shell before starting your script. You can also put PYSTARTUP="whatever" in front of the command you want to run, e.g. PYSTARTUP="whatever" python something.py
0
1
0
0
2012-03-28T20:14:00.000
1
0
false
9,914,985
1
0
0
1
We have an in-house, network-based application built on Python; it provides a command-line environment in the same way Python does. I also have a local Python installation on my machine. Sometimes, for simple Python tasks, I prefer using my local Python install... Is it possible to have a different PYSTARTUP env variable for each installation?
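A sketch of the per-invocation form from the answer above (paths and command names are placeholders; note that stock CPython reads the variable PYTHONSTARTUP, while an in-house interpreter may read its own name such as PYSTARTUP):

```shell
# An assignment placed before a command applies only to that one process,
# so each installation can be pointed at its own startup file.
PYSTARTUP=/path/to/inhouse.rc printenv PYSTARTUP
# e.g. for the stock interpreter:
#   PYTHONSTARTUP="$HOME/.pystartup" python
```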
How can I get gevent-py2.7-win64.egg
9,923,444
0
0
377
0
python,gevent
First try the command easy_install gevent. If it fails, search Google for Python setuptools, install it, and then run easy_install gevent again; setuptools will help you choose the suitable module.
0
1
0
1
2012-03-29T10:03:00.000
2
0
false
9,923,175
0
0
0
1
I'm new to Python. How can I get gevent-py2.7-win64.egg? My system is win32, and I need a 64-bit module of gevent.
eclipse pydev refuses to parse a .py file
9,926,632
2
1
439
0
python,eclipse,pydev
Right click on the python file you care about and see what the default editor is. You can manually switch any file type here. If it's plain text, just switch it back to PyDev. To ensure that the setting is global, go to Window > Preferences > General > Editors > File associations and look for .py, .pyw, and .pyx. They should all be set to "Python Editor (default)". If not, just select it and select default. If it's not there at all, you can select the "add" button and add it from there.
0
1
0
1
2012-03-29T13:14:00.000
1
1.2
true
9,926,088
0
0
0
1
I'm having a problem with Eclipse + PyDev. It suddenly refuses to parse a .py file as a Python script, which means no syntax highlighting, code completion, etc. It worked up until now, but I couldn't find a way to convince it to re-parse the file. Re-opening the file and restarting the IDE do not help. I suspect deleting some kind of metadata file would do the trick. Has anyone here encountered this and has a quick solution? I would greatly appreciate it!
How do I create Python eggs from distutils source packages?
9,960,426
26
17
15,618
0
python,setuptools,distutils
setuptools monkey-patches some parts of distutils when it is imported. When you use easy_install to get a distutils-based project from PyPI, it will create an egg (pip may do that too). To do the same thing locally (i.e. in a directory that’s a code checkout or an unpacked tarball), use this trick: python -c "import setuptools; execfile('setup.py')" bdist_egg.
0
1
0
0
2012-03-30T21:01:00.000
2
1.2
true
9,950,362
1
0
0
1
I vaguely remember some sort of setuptools wrapper that would generate .egg files from distutils source. Can someone jog my memory?
How do I generate code under Eclipse+PyDev?
9,998,578
1
0
339
0
eclipse,python-3.x,pydev,generated-code
It should be possible to do what you want using an external builder inside Eclipse... Right click project > Properties > Builders > New > Program, then configure the program as python, with the module to run as a parameter and the ${build_files} variable also passed in the arguments (if it's a Python script, you have to set your python executable as the program, your main file as an argument, and then the ${build_files} variable).
0
1
0
1
2012-03-31T01:49:00.000
2
1.2
true
9,952,327
0
0
1
1
I'm developing a system, and I have build a code generator that emits a bunch of classes based on a configuration file. I would like to configure PyDev to invoke the generator for me whenever the configuration file (or the generator source) changes. I know that this is possible "in theory" because e.g., the ANTLR plugin for Eclipse does this in Java. Is there any kind of support in PyDev for doing this? If not, is there some other Eclipse hackery that I can use to get this working?
How to simulate Google login using gaeunit
9,953,084
0
0
401
0
python,google-app-engine,google-apps,google-signin,gaeunit
Two situations: Local Dev server: login is mocked via a simple web form. You can do a http POST to log in. Production server: login goes through the Google auth infrastructure. No way to mock this. To make this work you'd need to code around it.
0
1
0
1
2012-03-31T03:50:00.000
2
0
false
9,952,873
0
0
1
1
I am currently using gaeunit to perform automated tests on my Google App Engine application. I am wondering whether it's possible to simulate the user's login action with their Google account using gaeunit? Thank you very much.
How do you make an installer for your python program
9,960,652
3
15
28,302
0
python,installation,cross-platform,python-3.2
Python is an interpreted, not a compiled language. So any standalone program you may want to distribute must bundle the entire interpreter inside it plus the libraries you used. This will result in an enormous file size for a single program. Also, this huge install cannot be used to run other python programs. An only marginally more complicated, but far better solution than this, is to just install the interpreter once and run any python program with it. I understand that the point of your question was to make a standalone executable, so I know I'm not answering it. But not being able to create executable standalones is one of the caveats of interpreted languages. However, a fundamental point here is about the whole objective of an interpreted language, which is a single interpreter with all the generic functions required to run any python code (which now happily needn't be any longer than they need to be). So you see, it's not really a caveat, this was the whole intention of interpreted languages. You might finally find a way to create a standalone python executable that runs on your friends' computers, but that would defeat the entire framework and idea behind an interpreted language.
0
1
0
0
2012-03-31T23:24:00.000
5
0.119427
false
9,960,583
1
0
0
1
I'm new to Python, but I was thinking about making a program with Python to give to my friends. They don't know much about computers, so if I asked them to install Python by themselves they couldn't do it. But what if I could make an installer that downloads some version of Python that only has what is needed for my file to run, and make an exe file that would run the .py file in its own Python interpreter? I also did a Google search and saw the freezing applications I could use to make the code into exe files to distribute (cx_Freeze; I use Python 3.2), but not all of my friends have Windows computers, and I'd rather have my program auto-update each new version by patching the .py file instead of completely re-installing it. ** I am not looking for anything to make a stand-alone executable. Just some kind of installer that bundles a minimalistic version of the Python version I'm using, with an exe that is just a link to run the Python file in the portable Python interpreter (just for Windows) and a .sh file that would do the same for Linux.
Python + Tornado Restart after editing files
9,961,405
12
12
6,556
0
python,reload,tornado,restart
If you are looking for automatic reloading of .py files during development, put debug=True after your handlers in your tornado.web.Application(). I don't think you should do this in a production environment, because such an implementation typically uses a background thread to actively scan files for changes, which may slow down your application.
0
1
0
0
2012-04-01T02:03:00.000
2
1.2
true
9,961,357
0
0
0
1
I'm just starting to learn Python + Tornado for my web servers. Every time I modify some code in my Python scripts or templates, I have to stop the server in my terminal (CTRL+C) and restart it (python server.py), and I want a more effective way to do this: after modifying code in some files, the server should restart automatically. Previously I worked with NodeJS and used supervisor to do this. Also, is there a way to reload my tab in Google Chrome so I can see the changes without reloading (F5)? Currently I'm using Ubuntu 11.10 and Sublime Text 2, running the server with CTRL+B in Sublime Text, but if the server is already running it generates an error because the address and port are in use. Is there a fix for that without changing the port? Thanks.
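A sketch of the debug=True setup from the answer (the handler and port are placeholders): with debug=True, Tornado watches the source files and restarts the server process when one changes, which is meant for development only.

```python
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("hello")

def make_app():
    return tornado.web.Application(
        [(r"/", MainHandler)],
        debug=True,  # enables autoreload on source changes (dev only)
    )

# To run during development:
#   make_app().listen(8888)
#   tornado.ioloop.IOLoop.current().start()
```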
Is it possible to have shared python library in GAE?
9,965,775
0
2
161
0
python,google-app-engine,shared-libraries
The deploy tools for Google App Engine (in the Python world at least) don't know anything about shared projects in Eclipse. You point the deploy script to a folder and that's about it. You can create a script that will copy the projects into a deploy folder and deploy that folder.
0
1
0
0
2012-04-01T15:41:00.000
2
0
false
9,965,725
1
0
0
2
I want to share some modules between a python GAE project, and another python project but want to use the same source files so that I can change them without having to worry about keeping the source files in each project up to date. Is there a way to let two python projects share the same source files outside of their root? Also let GAE know which source files exist outside of the source tree so it deploys them to the server. I'm using PyDev on Eclipse.
Is it possible to have shared python library in GAE?
9,968,299
4
2
161
0
python,google-app-engine,shared-libraries
If your dev environment is Linux, you can use symlinks in your project folder to the shared source. The deployment will treat the symlinks as actual files/folders.
0
1
0
0
2012-04-01T15:41:00.000
2
1.2
true
9,965,725
1
0
0
2
I want to share some modules between a python GAE project, and another python project but want to use the same source files so that I can change them without having to worry about keeping the source files in each project up to date. Is there a way to let two python projects share the same source files outside of their root? Also let GAE know which source files exist outside of the source tree so it deploys them to the server. I'm using PyDev on Eclipse.
Take all input in Python (like UAC)
9,965,932
3
0
507
0
python
You cannot do this without cooperation from the operating system. Whatever you do, Ctrl-Alt-Del will allow the user to circumvent your lock.
0
1
0
0
2012-04-01T16:02:00.000
4
0.148885
false
9,965,881
1
0
0
3
Is there any way I can create a UAC-like environment in Python? I want to basically lock the workstation without actually using the Windows lock screen. The user should not be able to do anything except, say, type a password to unlock the workstation.
Take all input in Python (like UAC)
9,968,755
1
0
507
0
python
I would try pygame, because it can lock the mouse to itself and thus keep all input to itself, but I wouldn't call this secure without much testing; Ctrl-Alt-Del probably escapes it. I can't try it on Windows right now. (Not very different from Bryan Oakley's answer, except with pygame.)
0
1
0
0
2012-04-01T16:02:00.000
4
0.049958
false
9,965,881
1
0
0
3
Is there any way I can create a UAC-like environment in Python? I want to basically lock the workstation without actually using the Windows lock screen. The user should not be able to do anything except, say, type a password to unlock the workstation.
Take all input in Python (like UAC)
9,966,066
1
0
507
0
python
You might be able to get the effect you desire using a GUI toolkit that draws a window that covers the entire screen, then do a global grab of the keyboard events. I'm not sure if it will catch something like ctrl-alt-del on windows, however. For example, with Tkinter you can create a main window, then call the overrideredirect method to turn off all window decorations (the standard window titlebar and window borders, assuming your window manager has such things). You can query the size of the monitor, then set this window to that size. I'm not sure if this will let you overlay the OSX menubar, though. Finally, you can do a grab which will force all input to a specific window. How effective this is depends on just how "locked out" you want the user to be. On a *nix/X11 system you can pretty much completely lock them out (so make sure you can remotely log in while testing, or you may have to forcibly reboot if your code has a bug). On windows or OSX the effectiveness might be a little less.
0
1
0
0
2012-04-01T16:02:00.000
4
1.2
true
9,965,881
1
0
0
3
Is there any way I can create a UAC-like environment in Python? I want to basically lock the workstation without actually using the Windows lock screen. The user should not be able to do anything except, say, type a password to unlock the workstation.
Securely passing credentials to a program via plaintext file or command line
9,986,152
0
0
1,344
0
python,security,passwords,pipe
Tough problem. If you have access to the source code of the program in question, you can change argv[0] after startup; on most flavors of *nix, this will work. The config file approach may be better from a security perspective. If the config file can be specified at run time, you could generate a temp file (see mkstemp), write the password there, and invoke the subprocess. You could even add a small delay (to give the subprocess time to do its thing) and then remove the config file. Of course the best solution is to change the program in question to read the password from stdin (but it sounds like you already knew that).
0
1
0
1
2012-04-03T01:38:00.000
2
0
false
9,986,059
0
0
0
2
I have a program (not written by me) that I would like to use. It authenticates to an online service using a username and password that I would like to keep private. The authentication information may be passed to the program in two ways: either directly as command-line arguments or via a plaintext configuration file, neither of which seem particularly secure. I would like to write a Python script to manage the launching of this program and keep my credentials away from the prying eyes of other users of the machine. I am running in a Linux environment. My concerns with the command-line approach are that the command line used to run the program is visible to other users via the /proc filesystem. Likewise, a plaintext configuration file could be vulnerable to reading by someone with the appropriate permissions, like a sysadmin. Does anyone have any suggestions as to a good way to do this? If I had some way of obscuring the arguments used at the command line from the rest of the system, or a way to generate a configuration file that could be read just once (conceptually, if I could pipe the configuration data from the script to the program), I would avoid the situation where my credentials are sitting around potentially readable by someone else on the system.
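The "read password from stdin" fix suggested at the end of the answer above can be sketched with the standard library; the child command here is a hypothetical stand-in for the real program (which you would change to read its credentials from stdin):

```python
import subprocess
import sys

def run_with_secret(secret):
    # The secret travels over a pipe to the child's stdin, so it never
    # appears in the child's argv, which other users can read via
    # /proc/<pid>/cmdline on Linux.
    child = [sys.executable, "-c", "import sys; print(sys.stdin.read())"]
    result = subprocess.run(child, input=secret, capture_output=True, text=True)
    return result.stdout.strip()
```

Unlike a command-line argument, the piped value exists only in the two processes' memory and the kernel pipe buffer.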
Securely passing credentials to a program via plaintext file or command line
9,986,183
0
0
1,344
0
python,security,passwords,pipe
About the only thing you can do in this situation is store the credentials in a text file and then deny all other users of the machine permissions to read or write it. Create a user just for this script, in fact. Encrypting doesn't do much because you still have to have the key in the script, or somewhere the script can read it, so it's the same basic attack.
0
1
0
1
2012-04-03T01:38:00.000
2
0
false
9,986,059
0
0
0
2
I have a program (not written by me) that I would like to use. It authenticates to an online service using a username and password that I would like to keep private. The authentication information may be passed to the program in two ways: either directly as command-line arguments or via a plaintext configuration file, neither of which seem particularly secure. I would like to write a Python script to manage the launching of this program and keep my credentials away from the prying eyes of other users of the machine. I am running in a Linux environment. My concerns with the command-line approach are that the command line used to run the program is visible to other users via the /proc filesystem. Likewise, a plaintext configuration file could be vulnerable to reading by someone with the appropriate permissions, like a sysadmin. Does anyone have any suggestions as to a good way to do this? If I had some way of obscuring the arguments used at the command line from the rest of the system, or a way to generate a configuration file that could be read just once (conceptually, if I could pipe the configuration data from the script to the program), I would avoid the situation where my credentials are sitting around potentially readable by someone else on the system.
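The owner-only permissions described in the answer above can be enforced at file-creation time, so there is no window in which the file is world-readable (the filename and contents are illustrative):

```python
import os
import stat

def write_private_file(path, contents):
    # Pass mode 0o600 to os.open so the credentials file is created
    # readable and writable by its owner only; other users of the
    # machine get a permission error instead of the plaintext.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as handle:
        handle.write(contents)
    return stat.S_IMODE(os.stat(path).st_mode)
```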
dll in a different directory than c:\windows\system32
10,012,400
1
0
239
0
python,dll,windows-7,pylucene
The answer is probably that, if something like this happens to you, there is a wrong DLL somewhere in the path that is called first (and putting the DLL in system32 ensures it is first). So if you put the relevant DLL in your PATH, make sure it is first (or look into each element to find out what is getting ahead of you). Answering myself for the future reference of others.
0
1
0
0
2012-04-03T09:15:00.000
1
1.2
true
9,990,358
0
0
0
1
I finally managed to get PyLucene working on my Windows 7 machine, which raised a more general question: how come that when I had a DLL in a directory on the PATH, Python couldn't find it, but when I put the DLL in c:\windows\system32 it did work? Using Windows 7 32-bit.
Generating a unique data store key from a federated identity
10,023,490
2
2
194
1
python,google-app-engine,authentication
User.federated_identity() "Returns the user's OpenID identifier.", which is unique by definition (it's a URL that uniquely identifies the user).
0
1
0
0
2012-04-03T22:13:00.000
1
1.2
true
10,002,209
0
0
1
1
I need a unique datastore key for users authenticated via openid with the python 2.7 runtime for the google apps engine. Should I use User.federated_identity() or User.federated_provider() + User.federated_identity()? In other words is User.federated_identity() unique for ALL providers or just one specific provider?
How to install the py2exe modul in Linux
10,009,715
9
9
23,189
0
python,installation,py2exe
Py2exe has to run on Windows; you cannot run it on Linux. (Maybe Wine can help, but I'm not sure.)
0
1
0
0
2012-04-04T10:53:00.000
2
1
false
10,009,660
0
0
0
1
I downloaded the actual py2exe package, but I've no idea how to get it onto my system. I mean, I can follow the tutorial 100%, but I can't find anything about how to install py2exe on my Kubuntu 11.10. I also can't find a py2exe.py which I could include in my working folder. Could someone please help me? The project has to be finished by tomorrow. Thanks for your help. Cheers, Chris
Completely locking down Windows 7 using Python 3.2?
28,282,261
-1
0
545
0
winapi,windows-7,python-3.x,block,pywin32
Write a while loop. Into the while loop, write the command to taskkill dwm.exe. It's a poor solution, but the only one I know. Regards!
0
1
0
0
2012-04-04T22:52:00.000
1
-0.197375
false
10,020,389
0
0
0
1
This might be a more difficult question since I don't even know how to do it outside of Python... I want to write a terminal program that completely locks up my PC until a password is entered. In the locked state no one should be able to do anything outside the terminal. Inside it, the user may rampage and write silly commands, but he should not be able to switch windows, click outside of it, open the task manager, open the menu with Ctrl + Alt Gr + Del, and so on. While searching for a way to accomplish this I've thought of two approaches that Python is also able to do: Modifying the registry -> can disable the task manager and some other functions, but neither the app switcher nor the menu. Task-killing explorer.exe and dwm.exe -> killing explorer.exe just removed the taskbar; killing dwm.exe seems like the right way, but as it's the window manager it automatically boots itself up again as soon as it gets killed. I know this is a kind of weird question and doesn't contain any code snippet, but the front end is no problem, and as said, I don't even have a working approach for the back end.
Determine a JDK install directory through Python
10,029,856
3
0
970
0
python,path,directory,java
You can check the JAVA_HOME or JDK_HOME system variables. But there is no guarantee that there are no other JDKs installed. You have to be more specific about what you are trying to achieve.
0
1
0
0
2012-04-05T13:47:00.000
1
1.2
true
10,029,823
1
0
0
1
I'm currently writing a program in Python, and I would like to determine the path to the JDK install directory if it's on the system. Is there a way of doing this in Python? If not, is there a way of doing it in Java (or another language)? If it is the latter, I could open a subprocess from within Python to obtain it. Thanks.
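The environment-variable check suggested in the answer above might look like this; the search order and the PATH fallback are assumptions, and as the answer says, there is no guarantee the result is the only JDK on the system:

```python
import os
import shutil

def find_jdk_home():
    # Prefer the conventional environment variables, then fall back to
    # locating the javac compiler (JDK-only, unlike java) on the PATH.
    for var in ("JAVA_HOME", "JDK_HOME"):
        value = os.environ.get(var)
        if value:
            return value
    javac = shutil.which("javac")
    if javac:
        # javac normally lives in <jdk>/bin, so strip two components.
        return os.path.dirname(os.path.dirname(javac))
    return None
```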
Android Back End Technology - Language (Java, Python) & IDE (CoderBuddy, exo Cloud, Cloud 9)
10,033,297
1
5
2,053
0
java,android,python,ide,cloud
My guess is that with Java you will have lots of frameworks in which to find solutions, and I really don't think Python will offer you that. About IDEs: I don't think you should worry about it with Python; you can use Sublime Text 2 or Eclipse (you have to install a Python editor). Both work great, and Python is easy to deploy. With Java I use Eclipse, but a friend is using NetBeans, and it has some "shortcuts" to create things like services, for instance. Also, with Java you'll be more familiarized because of Android, so I think it is a plus; it makes more sense. You need to at least start so you can have a better idea of what is best for you. And get ready, it will be a LOT different from Delphi ;)
0
1
0
0
2012-04-05T16:09:00.000
1
1.2
true
10,032,093
0
0
1
1
I've done my research and narrowed this down. OK, so I am deciding on the language and the tool to use for the backend (server side) of developing cloud-based Android applications. I've decided on Google App Engine as my framework. As I am going to be developing on my Android tablet, I want a cloud-based IDE. (I am going to use a native Android IDE app for the client side.) App Engine supports the Go programming language, Java, and Python. As there doesn't appear to be a stable cloud IDE that supports Go, I am left with Java & Python. I've narrowed my vast list of IDEs down to: Coderbuddy (designed for App Engine, but Python only); eXo Cloud (Java & Python supported); Cloud 9 (Java & Python supported). I know neither language. I have to learn Java in any case for Android client-side development. I understand that Python is faster to code in, so that's definitely a factor, but I absolutely don't want to sacrifice performance or scalability. I will be doing lots of SQL database stuff. Finally, if you think I am way off and should look in another direction, please let me know. Thanks! Edit: My background language is Delphi (Object Pascal)
Authenticate with sudo on a crontab job
10,198,435
1
2
489
0
python,crontab
Everything was solved with sudo crontab -u username -e. Credits to 9000.
0
1
0
1
2012-04-05T17:19:00.000
1
1.2
true
10,033,057
0
0
0
1
I have a call to a Python script, sudo -u user_name python python_script.py, and I need to schedule it to run every 30 minutes with crontab. The problem is: how can I authenticate it with sudo in crontab?
how to track revoked tasks in across multiple celeryd processes
10,063,062
0
2
803
0
python,celery,celeryd
I implemented something similar to this some time ago, and the solution I came up with was very similar to yours. The way I solved this problem was to have the worker fetch the Task object from the database when the job ran (by passing it the primary key, as the documentation recommends). In your case, before the reminder is sent the worker should perform a check to ensure that the task is "ready" to be run. If not, it should simply return without doing any work (assuming that the ETA has changed and another worker will pick up the new job).
0
1
0
0
2012-04-08T13:01:00.000
3
0
false
10,062,999
0
0
0
1
I have a reminder type app that schedules tasks in celery using the "eta" argument. If the parameters in the reminder object changes (e.g. time of reminder), then I revoke the task previously sent and queue a new task. I was wondering if there's any good way of keeping track of revoked tasks across celeryd restarts. I'd like to have the ability to scale celeryd processes up/down on the fly, and it seems that any celeryd processes started after the revoke command was sent will still execute that task. One way of doing it is to keep a list of revoked task ids, but this method will result in the list growing arbitrarily. Pruning this list requires guarantees that the task is no longer in the RabbitMQ queue, which doesn't seem to be possible. I've also tried using a shared --statedb file for each of the celeryd workers, but it seems that the statedb file is only updated on termination of the workers and thus not suitable for what I would like to accomplish. Thanks in advance!
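The run-time check described in the first answer can be sketched without Celery itself; the dict here is a hypothetical stand-in for the database row the worker would re-fetch, and the version field is one way to detect that the reminder changed after the job was queued:

```python
REMINDERS = {}  # pk -> {"eta": ..., "version": ...}; stand-in for a DB table

def send_reminder(pk, queued_version):
    # Re-fetch the reminder when the job actually fires.  If it was
    # deleted, or rescheduled since this job was queued (version bumped),
    # return without doing any work: the replacement job, queued with the
    # new version, will send the reminder instead.  This sidesteps the
    # need to track revoked task ids across celeryd restarts.
    reminder = REMINDERS.get(pk)
    if reminder is None or reminder["version"] != queued_version:
        return "skipped"
    return "sent"
```

With this pattern, rescheduling never requires revoking anything; stale jobs simply no-op when they run.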
how to compile a python .app from ubuntu or windows
10,861,704
0
2
1,261
0
python,ubuntu,osx-snow-leopard,cx-freeze
As far as I remember, last I used Mac OSX, it already had Python installed. This was with Mac OSX Snow Leopard.
0
1
0
0
2012-04-08T16:52:00.000
2
0
false
10,064,560
0
0
0
1
I am still very new to Python and any freezing programs. Recently I made a very short text adventure game I'd like to send to a few friends, most of whom have Snow Leopard. I don't want them to have to bother downloading python just to play the game though. Is there a way to convert my game from Ubuntu so that it is playable on Mac? That is, make an .app file from ubuntu? Or even from Windows, I suppose. I tried using cx_freeze on Windows but that just compiles an exe which is not playable on Mac. Thanks for any help and suggestions. EDIT: I am using Python 3.2.2. I think Macs come standard with an older version else there would be no problem just sending them the game, I imagine.
How to get program to not throw "Error: Can't load Floyd's algorithm library"
10,067,738
0
0
144
0
python,windows,linux,isometric
This source code is over 5 years old, and the build script for floyd appears to hard-code python2.4. It seems pretty clear that your floyd module did not build. You will most likely have to go back to the build step and ensure that you are properly generating a _floyd.so. If you built it correctly, then this should not fail for you: python -c "import _floyd"
0
1
0
0
2012-04-08T21:11:00.000
3
0
false
10,066,554
0
0
0
1
Another question for all of you: I am trying to get a program called Pysomap to work (it's basically ISOMAP, but for Python [http://web.vscht.cz/spiwokv/pysomap/]). I follow the directions as best as I can, building it on Ubuntu, Windows, and Fedora (prebuilt libraries), but can't seem to get it to work. On Windows (which is the preferred implementation platform), every time I go to Python and import pysomap, it gives me the above error. Does anybody know how to solve this? Thanks -J
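The import check from the answer above can also be done from within a script; importlib reports whether a compiled module such as _floyd.so is importable at all (the missing-module name below is illustrative):

```python
import importlib.util

def extension_built(name):
    # find_spec returns None when no module by that name (pure-Python
    # or compiled .so/.pyd) can be found on sys.path -- a quick way to
    # confirm the build step actually produced an importable _floyd.
    return importlib.util.find_spec(name) is not None
```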
Python 2.5.6 build error on Mac Lion
10,090,778
2
1
483
0
python,build,osx-lion,web2py
web2py works fine with Lion's stock Python 2.7. Unless you have a compelling reason to use 2.5, stick with 2.7.
0
1
0
0
2012-04-10T06:56:00.000
3
1.2
true
10,084,379
0
0
0
2
Here is what I would like to do: use web2py with MySQL. To do that, I need to use source web2py rather than web2py.app. To use web2py, I need Python 2.5. I am having trouble building and installing Python 2.5. I downloaded Python-2.5.6.tgz from the Python release page. Now, I did ./configure and then make. Then I get the following error: gcc -c -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -IInclude -I./Include -DPy_BUILD_CORE -o Modules/python.o ./Modules/python.c cc1: error: unrecognized command line option "-Wno-long-double" make: * [Modules/python.o] Error 1 Can anybody help me get rid of this error and install Python 2.5? Here is the gcc I am using: gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.9.00) Your help would be greatly appreciated. Thanks.
Python 2.5.6 build error on Mac Lion
10,114,744
0
1
483
0
python,build,osx-lion,web2py
I have web2py on my iMac OSX Lion using the web2py app and MySQL. I haven't run into any reason why you can't use the app with MySQL.
0
1
0
0
2012-04-10T06:56:00.000
3
0
false
10,084,379
0
0
0
2
Here is what I would like to do: use web2py with MySQL. To do that, I need to use source web2py rather than web2py.app. To use web2py, I need Python 2.5. I am having trouble building and installing Python 2.5. I downloaded Python-2.5.6.tgz from the Python release page. Now, I did ./configure and then make. Then I get the following error: gcc -c -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -IInclude -I./Include -DPy_BUILD_CORE -o Modules/python.o ./Modules/python.c cc1: error: unrecognized command line option "-Wno-long-double" make: * [Modules/python.o] Error 1 Can anybody help me get rid of this error and install Python 2.5? Here is the gcc I am using: gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.9.00) Your help would be greatly appreciated. Thanks.
Securing data in the google app engine datastore
10,097,467
3
17
3,792
0
python,security,google-app-engine,rsa,sha
You can increase your hashing security by using HMAC with a secret key and a unique salt per entry (I know people will disagree with me on this, but it's my belief from my research that it helps avoid certain attacks). You can also use bcrypt or scrypt to hash, which will make reversing the hash an extremely time-consuming process (but you'll also have to factor in the time it takes your app to compute the hash). By disabling code downloads and keeping your secret key protected, I can't imagine how someone could get hold of it. Just make sure your code is kept under similar safeguards, or that you remove the secret key from your code during development and only pull it out to deploy. I assume you will keep your secret key in your code (I've heard many people say to keep it in memory to be ultra-secure, but given the nature of App Engine and instances, this isn't feasible). Update: Be sure to enable two-factor authentication for all Google accounts that have admin rights to your app. Google offers this, so I'm not sure if your restriction on enabling it was imposed by an outside force or not.
0
1
0
0
2012-04-10T20:50:00.000
3
0.197375
false
10,096,268
0
0
1
2
Our google app engine app stores a fair amount of personally identifying information (email, ssn, etc) to identify users. I'm looking for advice as to how to secure that data. My current strategy: Store the sensitive data in two forms: Hashed - using SHA-2 and a salt; Encrypted - using public/private key RSA. When we need to do look-ups: Do look-ups on the hashed data (hash the PII in a query, compare it to the hashed PII in the datastore). If we ever need to re-hash the data or otherwise deal with it in a raw form: Decrypt the encrypted version with our private key. Never store it in raw form, just process it then re-hash & re-encrypt it. My concerns: Keeping our hash salt secret. If an attacker gets ahold of the data in the datastore, as well as our hash salt, I'm worried they could brute-force the sensitive data. Some of it (like SSN, a 9-digit number) does not have a big key space, so even with a modern hash algorithm I believe it could be done if the attacker knew the salt. My current idea is to keep the salt out of source control and in its own file. That file gets loaded onto GAE during deployment and the app reads the file when it needs to hash incoming data. In between deployments the salt file lives on a USB key protected by an angry bear (or a safe deposit box). With the salt only living in two places (the USB key; deployed to google apps) and with code download permanently disabled, I can't think of a way for someone to get ahold of the salt without stealing that USB key. Am I missing something? Keeping our private RSA key secret: Less worried about this. It will be rare that we'll need to decrypt the encrypted version (only if we change the hash algorithm or data format). The private key never has to touch the GAE server; we can pull down the encrypted data, decrypt it locally, process it, and re-upload the encrypted / hashed versions. We can keep our RSA private key on a USB stick guarded by a bear AND a tiger, and only bring it out when we need it. I realize this question isn't exactly google apps specific, but I think GAE makes the situation somewhat unique. If I had total control, I'd do things like lock down deployment access and access to the datastore viewer with two-factor authentication, but those options aren't available at the moment (having a GAE-specific password is good, but I like having RSA tokens involved). I'm also neither a GAE expert nor a security expert, so if there's a hole I'm missing or something I'm not thinking of specific to the platform, I would love to hear it.
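The HMAC-with-per-entry-salt scheme from the answer above might be sketched like this; the key value is a placeholder (in practice it lives outside source control, as the question describes), and the salt is stored alongside the digest so later lookups can recompute it:

```python
import hashlib
import hmac
import os

SECRET_KEY = b"placeholder-key-kept-out-of-source-control"

def hash_pii(value, salt=None):
    # A fresh random salt per entry defeats precomputed lookup tables;
    # the HMAC key adds a second secret an attacker must also obtain.
    if salt is None:
        salt = os.urandom(16)
    digest = hmac.new(SECRET_KEY, salt + value.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return salt, digest
```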
Securing data in the google app engine datastore
10,098,781
11
17
3,792
0
python,security,google-app-engine,rsa,sha
When deciding on a security architecture, the first thing in your mind should always be threat models. Who are your potential attackers, what are their capabilities, and how can you defend against them? Without a clear idea of your threat model, you've got no way to assess whether or not your proposed security measures are sufficient, or even if they're necessary. From your text, I'm guessing you're seeking to protect against some subset of the following: An attacker who compromises your datastore data, but not your application code. An attacker who obtains access to credentials to access the admin console of your app and can deploy new code. For the former, encrypting or hashing your datastore data is likely sufficient (but see the caveats later in this answer). Protecting against the latter is tougher, but as long as your admin users can't execute arbitrary code without deploying a new app version, storing your keys in a module that's not checked in to source control, as you suggest, ought to work just fine, since even with admin access, they can't recover the keys, nor can they deploy a new version that reveals the keys to them. Make sure to disable downloading of source, obviously. You rightly note some concerns about hashing of data with a limited amount of entropy - and you're right to be concerned. To some degree, salts can help with this by preventing precomputation attacks, and key stretching, such as that employed in PBKDF2, scrypt, and bcrypt, can make your attacker's life harder by increasing the amount of work they have to do. However, with something like SSN, your keyspace is simply so small that no amount of key stretching is going to help - if you hash the data, and the attacker gets the hash, they will be able to determine the original SSN. In such situations, your only viable approach is to encrypt the data with a secret key. Now your attacker is forced to brute-force the key in order to get the data, a challenge that is orders of magnitude harder. In short, my recommendation would be to encrypt your data using a standard (private key) cipher, with the key stored in a module not in source control. Using hashing instead will only weaken your data, while using public key cryptography doesn't provide appreciable security against any plausible threat model that you don't already have by using a standard cipher. Of course, the number one way to protect your users' data is to not store it in the first place, if you can. :)
0
1
0
0
2012-04-10T20:50:00.000
3
1.2
true
10,096,268
0
0
1
2
Our google app engine app stores a fair amount of personally identifying information (email, ssn, etc) to identify users. I'm looking for advice as to how to secure that data. My current strategy: Store the sensitive data in two forms: Hashed - using SHA-2 and a salt; Encrypted - using public/private key RSA. When we need to do look-ups: Do look-ups on the hashed data (hash the PII in a query, compare it to the hashed PII in the datastore). If we ever need to re-hash the data or otherwise deal with it in a raw form: Decrypt the encrypted version with our private key. Never store it in raw form, just process it then re-hash & re-encrypt it. My concerns: Keeping our hash salt secret. If an attacker gets ahold of the data in the datastore, as well as our hash salt, I'm worried they could brute-force the sensitive data. Some of it (like SSN, a 9-digit number) does not have a big key space, so even with a modern hash algorithm I believe it could be done if the attacker knew the salt. My current idea is to keep the salt out of source control and in its own file. That file gets loaded onto GAE during deployment and the app reads the file when it needs to hash incoming data. In between deployments the salt file lives on a USB key protected by an angry bear (or a safe deposit box). With the salt only living in two places (the USB key; deployed to google apps) and with code download permanently disabled, I can't think of a way for someone to get ahold of the salt without stealing that USB key. Am I missing something? Keeping our private RSA key secret: Less worried about this. It will be rare that we'll need to decrypt the encrypted version (only if we change the hash algorithm or data format). The private key never has to touch the GAE server; we can pull down the encrypted data, decrypt it locally, process it, and re-upload the encrypted / hashed versions. We can keep our RSA private key on a USB stick guarded by a bear AND a tiger, and only bring it out when we need it. I realize this question isn't exactly google apps specific, but I think GAE makes the situation somewhat unique. If I had total control, I'd do things like lock down deployment access and access to the datastore viewer with two-factor authentication, but those options aren't available at the moment (having a GAE-specific password is good, but I like having RSA tokens involved). I'm also neither a GAE expert nor a security expert, so if there's a hole I'm missing or something I'm not thinking of specific to the platform, I would love to hear it.
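The key stretching the answer mentions (PBKDF2) is available in the Python standard library; a sketch with illustrative parameters, subject to the answer's caveat that no iteration count rescues a keyspace as small as an SSN:

```python
import hashlib
import os

def stretch(secret, salt=None, iterations=100_000):
    # PBKDF2 repeats the underlying hash `iterations` times, multiplying
    # the work an attacker must spend per guess by the same factor.
    if salt is None:
        salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", secret.encode("utf-8"),
                              salt, iterations)
    return salt, key
```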
Google App Engine & Google Storage
10,115,635
2
2
644
0
python,google-app-engine,google-cloud-storage
Since you created the API console project with an Apps account (one @yourdomain.com), the project is automatically treated as an Apps project, and only users from your domain can be added to it. To avoid this, create a new project using a @gmail.com account, and then add all the developers you want to have access to it. You can then remove the @gmail.com account.
0
1
0
0
2012-04-11T13:41:00.000
2
1.2
true
10,107,136
0
0
1
1
I am trying to enable Cloud Storage for my GAE app. I read in the docs: Add the service account as a project editor to the Google APIs Console project that the bucket belongs to. For information about permissions in Cloud Storage, see Scopes and Permissions in the Cloud Storage documentation. However, when I try to add the service account to Team Members at the API Console I get the following message: User *@*.gserviceaccount.com may not be added to Project "**". Only members from domain *.com may be added. Any ideas?