Dataset schema (each record below gives these fields, in this order, separated by | ):
Title: string, length 15 to 150
A_Id: int64, 2.98k to 72.4M
Users Score: int64, -17 to 470
Q_Score: int64, 0 to 5.69k
ViewCount: int64, 18 to 4.06M
Database and SQL: int64, 0 to 1
Tags: string, length 6 to 105
Answer: string, length 11 to 6.38k
GUI and Desktop Applications: int64, 0 to 1
System Administration and DevOps: int64, 1 to 1
Networking and APIs: int64, 0 to 1
Other: int64, 0 to 1
CreationDate: string, length 23
AnswerCount: int64, 1 to 64
Score: float64, -1 to 1.2
is_accepted: bool, 2 classes
Q_Id: int64, 1.85k to 44.1M
Python Basics and Environment: int64, 0 to 1
Data Science and Machine Learning: int64, 0 to 1
Web Development: int64, 0 to 1
Available Count: int64, 1 to 17
Question: string, length 41 to 29k
Set environment variables in GAE control panel
| 35,254,560
| 4
| 2
| 2,796
| 0
|
python,google-app-engine
|
You can store your keys in the datastore. Later, when you need them in the code, you can fetch them from the datastore and cache them with memcache.
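A minimal sketch of that pattern on the GAE Python runtime (the model and key names are hypothetical):

from google.appengine.api import memcache
from google.appengine.ext import ndb

class ApiKey(ndb.Model):
    # One entity per key, stored under a string id such as "github".
    value = ndb.StringProperty(indexed=False)

def get_api_key(name):
    # Try memcache first, fall back to the datastore.
    cached = memcache.get("apikey:" + name)
    if cached is not None:
        return cached
    entity = ApiKey.get_by_id(name)
    if entity is None:
        return None
    memcache.set("apikey:" + name, entity.value)
    return entity.value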
| 0
| 1
| 0
| 0
|
2014-07-10T10:06:00.000
| 3
| 0.26052
| false
| 24,673,772
| 0
| 0
| 1
| 1
|
I deploy my project to GAE over Github. There is some foreign API-key which I don't want to save in repository and make them public. Is it possible to set an environment variable for a project in GAE control panel so I can catch it in my application?
|
exiting a program with a cached exit code
| 24,686,774
| 1
| 0
| 56
| 0
|
python,c,caching
|
Since no one has really proposed anything I'll drop my idea here. If you need an example let me know and I'll include one.
The easiest thing to do would be to serialize a dictionary that contains the system health and the last time.time() it was checked. At the beginning of your program, unpickle the dictionary and check the time; if less than your 60-second interval has passed, quit. Otherwise check the health as normal and cache it (with the time).
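A rough sketch of that idea (the cache path and probe function are hypothetical):

import os
import pickle
import sys
import time

CACHE = "/tmp/prober_cache.pickle"

def real_probe():
    return 0  # hypothetical: 0 means the service is healthy

if os.path.exists(CACHE):
    with open(CACHE, "rb") as fh:
        cache = pickle.load(fh)
    # Cache still valid: exit with the remembered status.
    if time.time() - cache["checked_at"] < 60:
        sys.exit(cache["status"])

status = real_probe()
with open(CACHE, "wb") as fh:
    pickle.dump({"status": status, "checked_at": time.time()}, fh)
sys.exit(status)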
| 0
| 1
| 0
| 0
|
2014-07-10T21:16:00.000
| 1
| 0.197375
| false
| 24,686,448
| 0
| 0
| 0
| 1
|
I have a "healthchecker" program, that calls a "prober" every 10 seconds to check if a service is running. If the prober exits with return code 0, the healthchecker considers the tested service fine. Otherwise, it considers it's not working.
I can't change the healthchecker (I can't make it check with a bigger interval, or using a better communication protocol than spawning a process and checking its exit code).
That said, I don't want to really probe the service every 10 seconds because it's overkill. I just want to probe it every minute.
My solution to that is to make the prober keep a "cache" of the last answer valid for 1 minute, and then just really probe when this cache expires.
That seems fine, but I'm having trouble thinking of a decent approach to do that, considering the program must exit (to return an exit code). My best bet so far would be to transform my prober into a daemon (that will keep the cache in memory) and create a client that just queries it and exits with its response, but it seems like too much work (and dealing with threads, and so on).
Another approach would be to use SQLite/memcached/redis.
Any other ideas?
|
Check to see if data is the same before writing over it
| 24,715,910
| 0
| 0
| 51
| 0
|
python,bash,arduino
|
Just save the output from the Arduino to a temporary variable and compare it to the last value written to the file. If it is different, update the last-written value to the new temperature and write it to the file.
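A small sketch of that loop, assuming pyserial and a 9600 baud rate (both are assumptions):

import serial  # third-party: pip install pyserial

port = serial.Serial("/dev/tty.usbserial-A5025XZE", 9600)
last_written = None
with open("temperature.log", "a") as log:
    while True:
        reading = port.readline().strip()
        # Skip empty reads and repeats of the last written value.
        if reading and reading != last_written:
            log.write(reading.decode("ascii") + "\n")
            log.flush()
            last_written = reading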
| 0
| 1
| 0
| 1
|
2014-07-11T15:10:00.000
| 3
| 0
| false
| 24,700,966
| 0
| 0
| 0
| 1
|
I have the temperature coming from my Arduino through the serial port on my Mac. I need to write the data to a file, but I don't want my script to write the data from /dev/tty.usbserial-A5025XZE (the serial port) if the data is the same as the last value or if it is empty. The temperature is in the format "12.32" and is sent every 5s.
|
python subprocess popen starts immediately
| 24,713,362
| 2
| 0
| 524
| 0
|
python,linux,process
|
Just fork and before exec of the shell you call ptrace() with PTRACE_TRACEME so the exec doesn't start immediately, giving the parent all the time it needs to prepare before it tells the child to continue (PTRACE_CONT, PTRACE_SYSCALL, or PTRACE_SINGLESTEP).
When using subprocess.Popen() you may use the preexec_fn argument mentioned by @RuiSilva to do the PTRACE_TRACEME call.
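A sketch of that combination using ctypes (assuming Linux, where PTRACE_TRACEME is 0; the traced command is just an example):

import ctypes
import subprocess

libc = ctypes.CDLL("libc.so.6", use_errno=True)
PTRACE_TRACEME = 0

def traceme():
    # Runs in the child between fork() and exec(); the kernel then stops
    # the child with SIGTRAP at exec, giving the parent time to prepare.
    libc.ptrace(PTRACE_TRACEME, 0, 0, 0)

proc = subprocess.Popen(["/bin/sh", "-c", "echo hi"], preexec_fn=traceme)
print("child pid:", proc.pid)
# The parent would now drive the child with PTRACE_CONT etc. (not shown).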
| 0
| 1
| 0
| 0
|
2014-07-12T07:53:00.000
| 2
| 0.197375
| false
| 24,710,900
| 0
| 0
| 0
| 2
|
My goal is to be able to start a shell script in a separate process and inspect it with the Linux ptrace syscall.
The problem is that I need to get the process PID before it even starts. Stuff like subprocess.Popen(['ls', '-l']) or python-sh runs the command immediately, so by the time I try to inspect the process by its PID it has likely finished.
On the other hand, I can't use os.fork + exec, because the bash command I start replaces the Python code.
|
python subprocess popen starts immediately
| 24,711,374
| 3
| 0
| 524
| 0
|
python,linux,process
|
If you're using Unix, I think that you can use the preexec_fn argument in the Popen constructor.
According to the documentation of subprocess:
If preexec_fn is set to a callable object, this object will be called in the child process just before the child is executed. (Unix only)
As it runs in the child process, you can use os.getpid() to get the child pid.
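A minimal sketch of that (the command is arbitrary):

import os
import subprocess

def in_child():
    # Executed in the child process just before exec() (Unix only).
    print("child pid as seen by the child:", os.getpid())

proc = subprocess.Popen(["ls", "-l"], preexec_fn=in_child)
proc.wait()
print("child pid as seen by the parent:", proc.pid)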
| 0
| 1
| 0
| 0
|
2014-07-12T07:53:00.000
| 2
| 1.2
| true
| 24,710,900
| 0
| 0
| 0
| 2
|
My goal is to be able to start a shell script in a separate process and inspect it with the Linux ptrace syscall.
The problem is that I need to get the process PID before it even starts. Stuff like subprocess.Popen(['ls', '-l']) or python-sh runs the command immediately, so by the time I try to inspect the process by its PID it has likely finished.
On the other hand, I can't use os.fork + exec, because the bash command I start replaces the Python code.
|
Logging Handlers Empty - Why Logging TimeRoatingFileHandler doesn't work
| 24,719,986
| 1
| 4
| 3,512
| 0
|
python,windows,logging,logrotate,log-rotation
|
Firstly, the issue is that if you use a config file to initialise logging with file and console handlers, it does not populate the logger's handlers list, so you cannot iterate over it and close+flush the streams prior to opening a new one with a new log file name.
If you want to use TimedRotatingFileHandler or RotatingFileHandler, it sits under logging/handlers.py, and when it tries to do a rollover it only closes its own stream, as it has no idea what streams the parent logging (mostly singleton) class may have open. And so when you do a rollover, there is a lock on the file handler and boom, it all fails.
So the solution (for me) is to initialise logging programmatically and use addHandler on the logger, which also populates the handlers list; I then iterate over my console/file handlers and close them prior to manually rotating the file.
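A sketch of that programmatic setup (handler types and file names are illustrative):

import logging
from logging.handlers import TimedRotatingFileHandler

logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler())
logger.addHandler(TimedRotatingFileHandler("myapp.log", when="midnight"))

def rotate_manually(logger, new_filename):
    # Close every handler first so Windows releases the file lock.
    for handler in list(logger.handlers):
        handler.flush()
        handler.close()
        logger.removeHandler(handler)
    logger.addHandler(logging.StreamHandler())
    logger.addHandler(logging.FileHandler(new_filename))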
To me this looks like an obvious bug in the logging class, and if it's working on Unix, it really shouldn't.
Thanks everyone, especially @falsetru for your help.
| 0
| 1
| 0
| 0
|
2014-07-13T03:59:00.000
| 4
| 0.049958
| false
| 24,719,421
| 1
| 0
| 0
| 2
|
So I do logging.config.fileConfig to set up my logging from a file config that has console and file handlers. Then I do logging.getLogger(name) to get my logger and log. At certain times I want the file handler's filename to change, i.e. log rotation (I can't use the timed rotator because of some issues on the Windows platform), so to do that I call logger.handlers - it shows an empty list, so I can't close them!! However, when I step through the debugger it's clearly not empty (well of course, without it I wouldn't be able to log, right?).
Not sure what's going on here; any gotchas that I'm missing?
Appreciate any help. Thanks.
|
Logging Handlers Empty - Why Logging TimeRoatingFileHandler doesn't work
| 54,827,449
| 0
| 4
| 3,512
| 0
|
python,windows,logging,logrotate,log-rotation
|
Maybe there is no such name as 'TimeRoatingFileHandler' because you missed the 'd' in the word 'Timed' (and 'Roating' should be 'Rotating'). The correct name is 'TimedRotatingFileHandler'.
| 0
| 1
| 0
| 0
|
2014-07-13T03:59:00.000
| 4
| 0
| false
| 24,719,421
| 1
| 0
| 0
| 2
|
So I do logging.config.fileConfig to set up my logging from a file config that has console and file handlers. Then I do logging.getLogger(name) to get my logger and log. At certain times I want the file handler's filename to change, i.e. log rotation (I can't use the timed rotator because of some issues on the Windows platform), so to do that I call logger.handlers - it shows an empty list, so I can't close them!! However, when I step through the debugger it's clearly not empty (well of course, without it I wouldn't be able to log, right?).
Not sure what's going on here; any gotchas that I'm missing?
Appreciate any help. Thanks.
|
Received 'can't find '__main__' module in '' with python package
| 61,834,365
| 4
| 19
| 65,827
| 0
|
python,python-2.7,command-line,packaging
|
Just rename the __init__.py file to __main__.py
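For illustration, a hypothetical etlTest/__main__.py, which is what "python etlTest" (and "python -m etlTest") executes:

# etlTest/__main__.py
def main():
    print("running etlTest")  # hypothetical entry point

if __name__ == "__main__":
    main()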
| 0
| 1
| 0
| 0
|
2014-07-13T14:10:00.000
| 4
| 0.197375
| false
| 24,723,547
| 1
| 0
| 0
| 2
|
I'm trying to release my first Python package in the wild and I was successful in setting it up on PyPi and able to do a pip install. When I try to run the package via the command line ($ python etlTest), I receive the following error:
/usr/bin/python: can't find '__main__' module in 'etlTest'
When I run the code directly from my IDE, it works without issue. I am using Python 2.7 and have __init__.py scripts where required. What do I need to do to get this working?
|
Received 'can't find '__main__' module in '' with python package
| 67,251,553
| 1
| 19
| 65,827
| 0
|
python,python-2.7,command-line,packaging
|
I had the same problem and solved it by making sure I was in the directory containing the package I was trying to run.
On Windows, type dir in the console; on Linux/macOS, ls, to see your current directory.
| 0
| 1
| 0
| 0
|
2014-07-13T14:10:00.000
| 4
| 0.049958
| false
| 24,723,547
| 1
| 0
| 0
| 2
|
I'm trying to release my first Python package in the wild and I was successful in setting it up on PyPi and able to do a pip install. When I try to run the package via the command line ($ python etlTest), I receive the following error:
/usr/bin/python: can't find '__main__' module in 'etlTest'
When I run the code directly from my IDE, it works without issue. I am using Python 2.7 and have __init__.py scripts where required. What do I need to do to get this working?
|
could not save preference file google-apps-engine
| 24,765,014
| 0
| 1
| 53
| 0
|
python,google-app-engine
|
I think I have found the answer to my own question.
I have a small app I have written to back up my stuff to Google Drive; this app would appear to have an error in it that does not stop it from running but does cause it to make a file called
C:\Usera\myname\Google
Therefore GAE cannot create a directory called C:\Usera\myname/Google nor a file called C:\Usera\myname/Google\google_appengine_launcher.ini
I deleted the file Google, made a directory called Google, ran GAE, saved preferences, and everything is working.
| 0
| 1
| 0
| 0
|
2014-07-14T04:13:00.000
| 1
| 0
| false
| 24,729,427
| 0
| 0
| 1
| 1
|
Just installed Google App Engine and am getting "could not save" errors.
Specifically if I go in to preferences I get
Could not save into preference file
C:\Usera\myname/Google\google_appengine_launcher.ini:No such file or directory.
So somehow I have a weird path; I would like to know where and how to change this. I have searched but found nothing, and I have done a repair reinstall of GAE.
Can find nothing in the registry for google_appengine_launcher.ini
I first saw the error when I created my first Application
Called hellowd
Parent Directory: C:\Users\myname\workspace
Runtime 2.7 (PATH has this path)
Port 8080
Admin port 8080
click create
Error:
Could not save into project file
C:\Users\myname/Google\google_appengine_launcher.ini:No such file or directory.
Thanks
|
GStreamer timing in Python
| 24,807,444
| 0
| 0
| 739
| 0
|
python,gstreamer,playbin2
|
The best way to do it really synchronized with the video would be to use something like the cairooverlay element and do the rendering yourself directly inside the pipeline, based on the actual timestamps of the frames. Or alternatively write your own element for doing that.
The easiest solution if timing is not needed to be super accurate would be to use the pipeline clock. You can get it from the pipeline once it started, and then could create single shot (or periodic) clock ids for the time or interval you want. And then use the async_wait() method on the clock.
To get the clock time that corresponds to e.g. the position 1 second of the pipeline you would add 1 second (i.e. 1000000000) to the pipeline's base time. You can use that value then when creating the clock ids.
| 0
| 1
| 0
| 0
|
2014-07-14T14:10:00.000
| 1
| 1.2
| true
| 24,738,503
| 0
| 0
| 0
| 1
|
In my Python program I use GStreamer's playbin in combination with a textoverlay to play a video file and show some text on top of it.
This works fine: If I change the text property of the textoverlay then the new text is shown.
But now I want to set the text based on the video's current position/time (like subtitles).
I read about a pipeline's clock, buffer's timestamps, segment-events and external timers which query the current time every x millisecs. But what is the best practice to get informed about time-changes so that I can show the correct text as soon as possible?
|
iPython notebook won't open some files
| 27,664,732
| 2
| 2
| 1,542
| 0
|
ipython,tornado,ipython-notebook
|
Errno 5 is a low level error usually reported when your disk has bad sectors.
I don't think the error is related to the file or ipython, check your disk with an appropriate tool (fsck if you are using Linux).
| 0
| 1
| 0
| 0
|
2014-07-15T04:15:00.000
| 2
| 0.197375
| false
| 24,749,764
| 1
| 0
| 0
| 1
|
I have a git folder with several ipython notebook files in it. I've just got a new computer and installed ipython. When I open some files it works fine; others, however, display this error:
Error loading notebook, bad request.
The log looks like:
2014-07-16 00:20:11.523 [NotebookApp] WARNING | Unreadable Notebook: /nas-6000/wclab/Ahmed/Notebook/01 - Boundary Layer.ipynb [Errno 5] Input/output error
WARNING:tornado.access:400 GET /api/notebooks/01%20-%20Boundary%20Layer.ipynb?_=1405434011080 (127.0.0.1) 3.00ms referer=linktofile
The read/write and owner permissions are the same for each of the files. The files open fine on my other computers, it's just this new one. Any ideas?
Cheers,
James
|
MongoDB Not Running from Command Prompt
| 45,618,082
| 0
| 1
| 254
| 0
|
python,mongodb
|
You need to make sure you're running mongod in another terminal tab first.
| 0
| 1
| 0
| 0
|
2014-07-15T09:17:00.000
| 1
| 0
| false
| 24,754,321
| 0
| 0
| 0
| 1
|
I am a newbie in Python and have installed MongoDB, but each time I try to run mongo.exe from the command prompt (C:\Program Files\MongoDB 2.6 Standard\bin>mongo.exe), it issues the following:
MongoDB shell version: 2.6.3
connecting to: test
2014-07-15T10:02:02.670+0100 warning: Failed to connect to 127.0.0.1:27017, reas
on: errno:10061 No connection could be made because the target machine actively
refused it.
2014-07-15T10:02:02.672+0100 Error: couldn't connect to server 127.0.0.1:27017 (
127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
exception: connect failed
How can I resolve this? Thank you.
|
Why wouldn't I want to add Python.exe to my System Path at install time?
| 24,786,526
| 7
| 6
| 9,230
| 0
|
python,windows,path-variables,python-install
|
If you only have one version of Python installed, it won't matter.
If you have multiple versions installed, then the first one that appears in your system Path will be executed when you use the "python" command. Additionally, it can make older versions inaccessible without extra work. For example, I had a system with Python 2.7 installed and I added 3.2 on top of that, checking the option to add Python.exe to the path during installation. After doing that, entering both "python" and "python3" on the command line opened Python 3.2, so I needed to enter the full path to the 2.7 interpreter whenever I wanted to execute 2.x scripts.
| 0
| 1
| 0
| 0
|
2014-07-16T16:07:00.000
| 2
| 1.2
| true
| 24,785,562
| 1
| 0
| 0
| 2
|
I'm reinstalling Python, on Windows 7, and one of the first dialog boxes is the Customize Python screen.
The default setting for "Add Python.exe to Path" is "Entire feature will be unavailable."
I always change this to "Will be installed on local hard drive."
It's not an issue, changing the system environment variables is a snap, but is there any upside to leaving this un-ticked?
|
Why wouldn't I want to add Python.exe to my System Path at install time?
| 24,786,463
| 1
| 6
| 9,230
| 0
|
python,windows,path-variables,python-install
|
One upside I can think of is if you run multiple Python versions on Windows. If you have both c:\python34 and c:\python27 in the path, you'll get whichever comes first, leading to a possibly unexpected result.
| 0
| 1
| 0
| 0
|
2014-07-16T16:07:00.000
| 2
| 0.099668
| false
| 24,785,562
| 1
| 0
| 0
| 2
|
I'm reinstalling Python, on Windows 7, and one of the first dialog boxes is the Customize Python screen.
The default setting for "Add Python.exe to Path" is "Entire feature will be unavailable."
I always change this to "Will be installed on local hard drive."
It's not an issue, changing the system environment variables is a snap, but is there any upside to leaving this un-ticked?
|
Simplest pub-sub for golang <--> python communication, possibly across machines?
| 24,791,213
| 0
| 3
| 3,206
| 0
|
python,python-2.7,go,publish-subscribe
|
For your specific pattern, simply spawning the process from Go and reading the stdout is the most efficient; there's no point adding overhead.
It highly depends on what your Python script does: if it's one specific task, then simply spawning the process and checking the exit code is more than enough; if you have to keep the script in the background at all times and communicate with it, then Redis or ZeroMQ are good, and very mature on both Go and Python.
If it's on a different server, then ZeroMQ/RPC or just a plain HTTP server in Python should be fine; the overhead should be minimal.
| 0
| 1
| 0
| 0
|
2014-07-16T21:30:00.000
| 2
| 0
| false
| 24,791,113
| 0
| 0
| 0
| 1
|
I'm working on a web application written in Golang that needs to call a Python program/module to do some heavy work. Since that is very memory/CPU intensive, it may be on a separate machine. Since Golang and Python can't talk directly, there are 3 ways to achieve this:
Just execute the python program as an OS process from Go (if on same machine) (or RPC?)
Wrap Python process in a service and expose it for it to be called from Go (may be a simple CRUD like service - A Bottle/flask restful service)
Have a simple pub-sub system in place to achieve this (Redis or some MQ system) - Adding Redis based caching is on the radar so maybe a good reason to go this way. Not sure.
The main thing is that the python process that takes really long to finish must "inform" the web application that it has finished. The data could either be in a file/DB or 'returned' by the process.
What could be the simplest way to achieve this in a pub/sub like environment?
UPDATE
REST seems like one way, but it would incur the cost of implementing server-side push, which may or may not be easily doable with existing micro web frameworks. The pub/sub approach would nevertheless add an additional layer of complexity for maintainability and a learning curve. I'm not sure if an RPC-like invocation could be achieved across machines. What would be a good choice in this regard?
|
Google App Engine send batch email
| 24,814,266
| 1
| 0
| 88
| 0
|
python,google-app-engine,email
|
You can set up a CRON job to run every few minutes and process your email queue. It will require an endpoint where you can send a POST request, but you can use a secret token (like just any random guid) to verify the request is legitimate before you send the email.
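A sketch of such an endpoint on the GAE Python 2.7 runtime (the URL, token, and addresses are all placeholders):

import webapp2
from google.appengine.api import mail

SECRET_TOKEN = "hypothetical-random-guid"

class SendQueuedMail(webapp2.RequestHandler):
    def post(self):
        # Reject callers that don't know the shared secret.
        if self.request.get("token") != SECRET_TOKEN:
            self.abort(403)
        # Fetching the queued recipients from the datastore is not shown.
        mail.send_mail(sender="admin@example.com",
                       to="user@example.com",
                       subject="Queued message",
                       body="Hello from the mail queue.")

app = webapp2.WSGIApplication([("/tasks/send_mail", SendQueuedMail)])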
| 0
| 1
| 0
| 0
|
2014-07-17T14:45:00.000
| 1
| 1.2
| true
| 24,806,675
| 0
| 0
| 1
| 1
|
I was wondering how I would go about emailing the user addresses stored in the datastore from Python.
Should I create a sort of maintenance page where I can log in as an administrator and then send an email, or is there a way for me to execute a Python script without needing a handler pointing to a separate webpage, so I don't have to worry about the page being discovered and exploited?
|
Streaming audio from webserver
| 24,811,718
| 0
| 0
| 452
| 0
|
python,ios,audio,streaming
|
What happens when the playback is restarted? Print the HTTP URLs on the server. Does the player start from index=0, go to index=4000, then back to index=0 again?
| 0
| 1
| 0
| 0
|
2014-07-17T18:27:00.000
| 1
| 0
| false
| 24,810,929
| 0
| 0
| 1
| 1
|
I'm creating a simple app that can play audio files (currently only mp3 files) located on a webserver.
Currently, I'm using Python's SimpleHTTPServer server side, and the AVAudioPlayer for iOS.
It sort of works, since the file is streamed over HTTP instead of just being downloaded from the webserver. But I often experience that the playback of a file is suddenly restarted.
I'm considering using another method of streaming, e.g. RTMP, but on the other hand I want to keep things simple. I'm wondering if another HTTP server might do the trick? Any other experiences/suggestions?
|
Python: how to access 3.3 if 3.4 is the default?
| 24,814,742
| 2
| 1
| 238
| 0
|
python,python-3.x,path
|
If you want to make a file open with a specific version, you can start the file with #! python3.x, the x being the minor version you want. If you want to be able to right-click and edit with that version, you'll need to do some tweaking in the registry.
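For example (a sketch; requires the Windows py launcher):

#! python3.3
# The Windows "py" launcher reads this line and picks the 3.3 interpreter
# when the script is run as "py myscript.py" or double-clicked.
import sys
print(sys.version)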
| 0
| 1
| 0
| 0
|
2014-07-17T22:43:00.000
| 2
| 0.197375
| false
| 24,814,724
| 1
| 0
| 0
| 1
|
Running Windows 7. 2.7, 3.3 and 3.4 installed.
I just installed Python 3.3 for a recent project. In the command prompt, python launches 3.4, and py launches 3.3. I can access 3.3 using the 3.3 version of IDLE, but how can I access it via the command prompt?
Is there a shortcut like py that I can use? Do I need to define this on my own like an alias?
Or is the best route to somehow change the path to temporarily make 3.3 the default?
I just downloaded virtualenv; maybe that's part of the solution.
|
Using gpg --search-keys in --batch mode
| 24,832,638
| 0
| 1
| 3,077
| 0
|
python,batch-file,gnupg,pgp
|
Use --recv-keys to get the keys without prompting.
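Driven from Python, that might look like the following sketch (the keyserver and key id are placeholders):

import subprocess

# --recv-keys fetches keys by id without the interactive search prompt.
subprocess.check_call([
    "gpg", "--batch", "--keyserver", "hkp://keyserver.example.com",
    "--recv-keys", "0x12345678",
])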
| 0
| 1
| 0
| 0
|
2014-07-18T19:34:00.000
| 5
| 0
| false
| 24,832,498
| 0
| 0
| 0
| 1
|
I'm working on an application that will eventually graph the gpg signature connections between a predefined set of email addresses. I need it to programmatically collect the public keys from a key server. I have a working model that will use the --search-keys option to gpg. However, when run with the --batch flag, I get the error "gpg: Sorry, we are in batchmode - can't get input". When I run without the --batch flag, gpg expects input.
I'm hoping there is some flag to gpg that I've missed. Alternatively, a library (preferably python) that will interact with a key server would do.
|
C program performance depreciation after multiple runs
| 24,866,756
| 1
| 2
| 108
| 0
|
python,c,performance,testing,memory-leaks
|
this could be:
some leak in the python script
waiting on a resource in the c script/python script
writing to a file that gets bigger during the run
C software doesn't close properly
and so on. You could elaborate on what the C software does to get us more clues, also state whether other software also runs more slowly.
| 0
| 1
| 0
| 0
|
2014-07-21T13:06:00.000
| 1
| 0.197375
| false
| 24,865,899
| 0
| 0
| 0
| 1
|
To properly test a piece of software (written in C) I have been working on, I have to run a high volume of tests. I've been doing this with a Python script that executes my software a given number of times (generally in the range of 1000-10000 repetitions), one after the other. I am working on a Debian virtual machine (500 MB RAM). I've been noticing that over time the performance of the program degrades significantly. Usually I have to go so far as rebooting the VM to get back to normal performance levels.
My first thought was a memory leak, but valgrind did not discover any in my C program. Furthermore, I would have thought the OS would take care of that after program termination either way. When I run top or free -m, I see that free RAM is fairly low (20-70 MB), but it does not drop much while running my script, instead fluctuating around where it started.
Edit: A full rundown on what my files are doing is as follows:
C software
Many files, developed by various people
Features a loop that continues until given destination IP has been discovered
Constructs packets based off of given destination and information received from previously sent packets
Sends packets
Waits for packet replies
Python script emulating network topology
Stores fake networks
Intercepts outgoing packets and sends replies based off of said topology
Python testing script
For a given number of repetitions,
Launch network emulator
Launch C software (wait until terminated - the process launches are actually done with a bash script)
Exit network emulator
Output from the emulator and the C software are both dumped to log files, which are overwritten at each execution (so they should stay decently short).
Can anyone give me some pointers as to what this could be?
|
How to build python extension with Xcode
| 24,876,618
| 1
| 0
| 408
| 0
|
python,xcode,macos,python-extensions
|
The Python executable in current versions of Mac OS X is 64-bit only, so any extensions it loads must also be 64-bit. If your libraries are only available for 32-bit systems, you will be unable to link to it from a Python extension. One possible solution might be to have a separate 32-bit executable that loads the library, then communicate with that executable from your Python extension.
| 0
| 1
| 0
| 0
|
2014-07-21T23:32:00.000
| 1
| 0.197375
| false
| 24,876,555
| 1
| 0
| 0
| 1
|
Request: could someone post a recipe, from top to bottom, for creating an Xcode project that will compile C code to build a Python extension? I've seen several posts here that touch upon the subject, but they seem confusing and incomplete, and they disagree.
Specific questions:
Can Mac Python 2.7 load a .dylib? Mine coldly ignores them.
Can one really solve the problem by renaming a .dylib to a .so filename extension? Various posts disagree on this question.
Are .dylib and .so actually different formats? Are there settings I could make in Xcode to make it output a true .so format?
If Python is failing to load an extension file, are there tools to diagnose it? Is there any way to poke into the file, look at its format, and see if it does or does not match what is needed?
When I renamed my .dylib to .so, I got the following error message:
ImportError: dlopen(/Library/Python/2.7/site-packages/pypower.so, 2): no suitable image found. Did find:
/Library/Python/2.7/site-packages/pypower.so: mach-o, but wrong architecture
My project is targeted to "32-bit Intel" architecture. And I really need to use 32-bit, because of some of the old libraries I'm linking to. Is Python going to have a problem loading a 32-bit library? Is there a way to bridge the gap?
|
Redirecting stdout to stderr in Python's subprocess/Popen
| 24,885,123
| 3
| 3
| 1,523
| 0
|
python,subprocess,stdout,stderr
|
You can pass in any file descriptor or file object. So use sys.stderr.
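A minimal sketch of that:

import subprocess
import sys

# Send the child's stdout to our stderr, and merge its stderr in too
# (subprocess.STDOUT means "same destination as the child's stdout").
subprocess.call(["ls", "-l"], stdout=sys.stderr, stderr=subprocess.STDOUT)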
| 0
| 1
| 0
| 0
|
2014-07-22T10:17:00.000
| 1
| 1.2
| true
| 24,884,654
| 0
| 0
| 0
| 1
|
The subprocess module says that you can pass STDOUT to the stderr argument to get the standard error redirected to the standard out file handle. However, there is no STDERR constant. Is there a way to go the other way? I want everything on stderr and stdout to be redirected to the stderr of the parent process.
|
Catching error output from distutils using mingw
| 25,080,919
| 1
| 1
| 161
| 0
|
python,mingw,stdout,distutils,stderr
|
Woops.
It turns out this was something really simple: capturing stdout and stderr output was working just fine, but the particular error message I was looking to catch (which was windows specific) wasn't part of the printed output but the error message of the raised SystemExit exception.
Big waste of time :(
| 0
| 1
| 0
| 0
|
2014-07-23T00:54:00.000
| 1
| 1.2
| true
| 24,900,200
| 0
| 0
| 0
| 1
|
I'm using distutils to compile C code via a python script. If things go wrong, I want to be able to catch the error output. To this end, I've redirected stdout and stderr into temporary files before running the setup() command (you need to use os.dup2 for this).
On linux, it works fine. On windows + mingw I get some really weird behaviour:
Without trying to capture, stdout and stderr are both written to the command prompt.
When I try to capture, stdout works fine but the output to stderr disappears.
Does anybody understand what's going on here?
|
How to ignore SIGKILL or force a process into 'D' sleep state?
| 24,922,209
| 5
| 6
| 2,985
| 0
|
python,c++,linux
|
It is not possible to ignore SIGKILL or handle it in any way.
From man sigaction:
The sa_mask field specified in act is not allowed to block SIGKILL or SIGSTOP. Any attempt to do so will be silently ignored.
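The same restriction is visible from Python; a small sketch (the exception type varies across Python versions, hence the broad except clause):

import signal

try:
    signal.signal(signal.SIGKILL, lambda signum, frame: None)
except (OSError, RuntimeError, ValueError) as exc:
    # Python refuses: SIGKILL cannot be caught, blocked, or ignored.
    print("SIGKILL cannot be handled:", exc)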
| 0
| 1
| 0
| 0
|
2014-07-23T22:34:00.000
| 2
| 0.462117
| false
| 24,922,174
| 1
| 0
| 0
| 1
|
I am trying to figure out how to get a process to ignore SIGKILL. The way I understand it, this isn't normally possible. My idea is to get a process into the 'D' state permanently. I want to do this for testing purposes (the corner case isn't really reproducible). I'm not sure this is possible programmatically (I don't want to damage hardware). I'm working in C++ and Python, but any language should be fine. I have root access.
I don't have any code to show because I don't know how to get started with this, or if it's even possible. Could I possibly set up a bad NFS and try reading from it?
Apologies in advance if this is a duplicate question; I didn't find anyone else trying to induce the D state.
Many thanks.
|
Sending automated email using Pywin32 & outlook in a python script works but when automating it through windows task scheduler doesn't work
| 54,351,677
| 0
| 3
| 3,507
| 0
|
python,outlook,pywin32
|
My similar issue has been cleared up. I used Task Scheduler to call a Python script (via a batch file) that uses the win32com module from pywin32. The Python code opens Excel and calls a macro. It runs fine from Python, cmd, and the batch file, but wasn't working when run through Task Scheduler. It traced back to errors like:
"EnsureDispatch disp = win32com.client.Dispatch(prog_id)"
As noted on this thread, I changed the option to "Run only when user is logged on" and it ran successfully!
The only drawback is that I have to schedule the task for a time when I'm away from the computer. I suppose I just have to not log off and hope that the CPU doesn't go into sleep mode, but that's not really a big deal in this case.
| 0
| 1
| 0
| 1
|
2014-07-24T06:40:00.000
| 3
| 0
| false
| 24,926,733
| 0
| 0
| 0
| 1
|
I wrote a python script that uses win32com.client.Dispatch("Outlook.Application") to send automated emails through outlook.
If I run the script myself everything works perfectly fine. But if I run it through Window's task scheduler it doesn't send the emails.
Just to check if I am running the script properly I made the script output a random text file and that works but email doesn't. Why?
|
Having error queues in celery
| 24,946,459
| 0
| 4
| 1,656
| 0
|
python,celery,reliability
|
I had a similar problem and I solved it, maybe not in the most efficient way, but my solution is as follows:
I created a Django model to keep all my celery task ids; it is capable of checking the task state.
Then I created another celery task that runs in an infinite cycle and checks the actual state of all tasks marked 'RUNNING', and if the state is 'FAILED' it just reruns the task. I'm not actually changing the queue for the tasks I rerun, but I think you can implement some custom logic to decide where to put every task you rerun this way.
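A sketch of that checker (the task and the stored shape are hypothetical):

from celery.result import AsyncResult

from myapp.tasks import my_task  # hypothetical task to rerun

def rerun_failed(stored_tasks):
    # stored_tasks: iterable of (task_id, args, kwargs) tuples, e.g. rows
    # of the Django model described above.
    for task_id, args, kwargs in stored_tasks:
        if AsyncResult(task_id).state == "FAILURE":
            my_task.apply_async(args=args, kwargs=kwargs)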
| 0
| 1
| 0
| 0
|
2014-07-24T14:35:00.000
| 2
| 0
| false
| 24,936,671
| 0
| 0
| 1
| 1
|
Is there any way in celery by which, if a task execution fails, I can automatically put it into another queue?
For example, if the task is running in a queue x, on exception enqueue it to another queue named error_x.
Edit:
Currently I am using celery==3.0.13 along with Django 1.4 and RabbitMQ as the broker.
Sometimes the task fails. Is there a way in celery to add messages to an error queue and process them later?
The problem when a celery task fails is that I don't have access to the message queue name, so I can't use self.retry to put it on a different error queue.
|
Python - "IOError: [Errno 13] Permission denied" when running cron job but not when running from command line
| 29,074,961
| 0
| 0
| 1,112
| 0
|
python,permissions,cron
|
Maybe another program has opened the file which you want to overwrite?
| 0
| 1
| 0
| 0
|
2014-07-24T16:54:00.000
| 1
| 0
| false
| 24,939,676
| 0
| 0
| 0
| 1
|
I have SSHed from my local machine (a Mac) to a remote machine called “ten-thousand-dollar-bill” as the user “chilge”.
I want to run a Python script in the folder “/afs/athena.mit.edu/c/h/chilge/web_scripts” that generates and saves a .png image to the folder “/afs/athena.mit.edu/c/h/chilge/www/TAF_figures/KORD/1407”. When I run the script from the command line, the image is generated and saved without any issues. When I run the script as cron job, though (the crontab resides in “/afs/athena.mit.edu/c/h/chilge/cron_scripts”), I get the following error:
Traceback (most recent call last):
File "/afs/athena.mit.edu/user/c/h/chilge/web_scripts/generate_plots.py", line 15, in
save_taffig(taf,fig)
File "/afs/athena.mit.edu/user/c/h/chilge/web_scripts/plotting.py", line 928, in save_taffig
fig.savefig(os.getcwd()+'/'+savename+'.png')
File "/usr/lib64/python2.7/site-packages/matplotlib/figure.py", line 1084, in savefig
self.canvas.print_figure(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/matplotlib/backend_bases.py", line 1923, in print_figure
**kwargs)
File "/usr/lib64/python2.7/site-packages/matplotlib/backends/backend_agg.py", line 443, in print_png
filename_or_obj = file(filename_or_obj, 'wb')
IOError: [Errno 13] Permission denied: '/afs/athena.mit.edu/user/c/h/chilge/www/TAF_figures/KORD/1407/140723-1200_AMD_140723-1558.png'
I believe I've correctly changed the permissions of all of the necessary directories, but I'm still getting this error. I am not sure why the script runs fine from the command line but fails when I try to run it as a cron job.
(Also, I'm not sure if this is relevant, but I don't have sudo permissions on the remote machine.)
|
Run Python project from shell
| 24,947,778
| 0
| 0
| 880
| 0
|
python,eclipse,shell,python-3.x,project
|
Based on the current information, I would suggest you run it this way on OS X:
1) Bring up the Terminal app
2) cd to the location where bla lives
3) run python bla/blah/projMain.py
Show us the stack trace if the above fails.
| 0
| 1
| 0
| 1
|
2014-07-25T01:44:00.000
| 2
| 0
| false
| 24,946,778
| 0
| 0
| 0
| 1
|
Eclipse can run a python project rather than just one .py file. Is it possible to run an entire project from the Python 3.x shell? I looked into it a little, but I didn't really find a way. I tried just running the .py file with the main using exec(open('bla/blah/projMain.py')) like you would any python file. All of my modules (including the main) are in one package, but when I ran the main I got a "no module named 'blah'" error (the package it is in). Also, as a side note, there is in fact an __init__.py and even a __pycache__ directory.
Maybe I didn't structure it correctly with Eclipse (or rather maybe Eclipse didn't structure it properly), but Eclipse can run it, so how can I with a Python 3.4.1 shell? Do I have to put something in __init__.py, perhaps, and then run that file?
|
Has PyDev an interactive shell (during debugging) as in Komodo?
| 24,958,918
| 0
| 0
| 50
| 0
|
python,pydev
|
Yes, you can do that: just type into the console whatever commands you want :). I usually have to right click, then
Debug As >> Python Run
PyDev is a little bit quirky, but you get used to it.
| 0
| 1
| 0
| 0
|
2014-07-25T14:36:00.000
| 2
| 1.2
| true
| 24,958,237
| 0
| 0
| 0
| 1
|
so far I used the Komodo IDE for Python development, but I'm now testing Eclipse with PyDev. Everything works fine, but there is one Komodo feature that I'm missing.
In Komodo I can inspect the running application in a debugger shell. I.e. after hitting a breakpoint I can not only read the content of variables, but I can execute arbitrary Python code (e.g. changing the value of variables) and then continue program execution.
PyDev has also some interactive shell during debugging, but I can only read variables and not change their content. Is this feature not available in PyDev or am I missing something here?
Many thanks,
Axel
|
Emacs, bash, bashrc, functions and paths
| 24,980,728
| 1
| 1
| 594
| 0
|
bash,emacs,path,pythonpath,.bash-profile
|
The issue might be that emacs, like many other programs you run, reads your login shell rc files, such as ~/.bash_login or ~/.profile, but not ~/.bashrc, whereas your terminal also reads your user shell rc file, ~/.bashrc.
| 0
| 1
| 0
| 1
|
2014-07-26T14:51:00.000
| 1
| 0.197375
| false
| 24,972,238
| 0
| 0
| 0
| 1
|
Usually I use my .bashrc file to load some functions for my bash environment, and I call these functions (which I created based on some frameworks I use) as needed. So, I play around with variables such as PATH and PYTHONPATH when I use the functions, depending on the environment I'm working on.
So far so good with the terminal. The problem is that when I use emacs, these functions and the environment variables that I activate with my functions don't exist. .bashrc is not read by emacs, and therefore the functions loaded by .bashrc don't work. I would like them to work.
Any ideas?
|
Set Python IDLE as Default Program to Open .py Extensions
| 24,988,918
| -2
| 8
| 22,665
| 0
|
python,python-2.7,python-idle
|
Right click on any .py file
Click Open With...
Click Choose a default program...
If IDLE is on the list, click it.
else
Click Browse, and find the IDLE program
Click OK and voila!
| 0
| 1
| 0
| 0
|
2014-07-28T05:27:00.000
| 2
| -0.197375
| false
| 24,988,880
| 1
| 0
| 0
| 1
|
I am on Windows 7. I have Python 2.7.8 (64 bit) installed. Today, I changed the default program that opens .py files from IDLE to Windows Command Processor and stupidly selected the checkbox that said "always use the selected program to open this kind of file".
What I want to do is change my default program back to IDLE.
When I attempt to change it back to IDLE, I go to Control Panel\Programs\Default Programs\Set Associations and select the .py name and click Change Program. I do see python.exe but selecting that does nothing. I then use the "Browse" button to navigate to C:\Python27\Lib\idlelib but don't know if I should select idle.py, idle.pyw, idle.bat or some other IDLE program that will force the default program to be IDLE!
Nothing happens after I select one of these.
How do I make IDLE be the default program that opens .py files and now disassociate Windows Command Processor from being the default?
|
Run script using a certain version of Abaqus
| 25,001,330
| 1
| 0
| 1,939
| 0
|
python,abaqus
|
On Windows, using abaqus cae will call the most recent version of Abaqus installed. If you want to run a specific version, use this call instead: abq6121 cae nogui=scriptname.py
| 0
| 1
| 0
| 0
|
2014-07-28T08:52:00.000
| 1
| 0.197375
| false
| 24,991,571
| 1
| 0
| 0
| 1
|
I want to run a python script without Abaqus GUI and this already works well with the command:
abaqus cae nogui = scriptname.py
As I want to include a subroutine I have to run it in Abaqus version 12-1, but I also have version 13-1 installed (running the script in CAE, I always got an error while using 13-1 but not with 12-1).
With the command above I don't know which version will be used. Is there a way to specify the version to use in the cmd?
|
Use named pipes to send input to program based on output
| 25,047,902
| 1
| 0
| 289
| 0
|
python,linux,bash,gdb,named-pipes
|
My recommendation is not to do this. Instead there are two more supportable ways to go:
Write your code in Python directly in gdb. Gdb has been extensible in Python for several years now.
Use the gdb MI ("Machine Interface") approach. There are libraries available to parse this already (not sure if there is one in Python but I assume so). This is better than parsing gdb's command-line output because some pains are taken to avoid gratuitous breakage -- this is the preferred way for programs to interact with gdb.
| 0
| 1
| 0
| 0
|
2014-07-28T18:27:00.000
| 1
| 1.2
| true
| 25,001,824
| 0
| 0
| 0
| 1
|
Here's a general example of what I need to do:
For example, I would initiate a backtrace by sending the command "bt" to GDB from the program. Then I would search for a word such as "pardrivr" and get the line number associated with it by using regular expressions. Then I would input "f [line_number_of_pardrivr]" into GDB. This process would be repeated until the correct information is eventually extracted.
I want to use named pipes in bash or python to accomplish this.
Could someone please provide a simple example of how to do this?
|
pexpect to login to custom shell
| 25,003,819
| 0
| 0
| 203
| 0
|
python,ssh,expect,spawn,pexpect
|
The prompt is needed after writing a command and waiting for it to finish. You have to tell pexpect what prompt it should expect (in your case "testuser:").
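A sketch of that with pexpect directly (the host and command are placeholders):

import pexpect

child = pexpect.spawn("ssh testuser@example-host")
child.expect("testuser:")   # wait for the custom prompt
child.sendline("status")    # hypothetical command
child.expect("testuser:")   # wait for the prompt to come back
print(child.before)         # everything printed before the prompt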
| 0
| 1
| 0
| 0
|
2014-07-28T20:26:00.000
| 1
| 0
| false
| 25,003,730
| 0
| 0
| 0
| 1
|
I have a custom shell which looks like below.
testuser:
How do I set custom PROMPT attribute to login to a shell which look like
I'm reusing the hive.py code from samples section and set original_prompt to :.
original_prompt='[:]'
The result is it skips the host as it fails to connect with
ERROR could not synchronize with original prompt
What am I missing?
Thanking in anticipation.
|
Making python script executable from any directory
| 25,003,823
| 1
| 0
| 4,406
| 0
|
python,linux,shell,command,executable
|
You should add the folder that contains the script to your system's $PATH variable (I assume you're on Linux). This variable contains all of the directories that are searched looking for a specific command. You can add to it by typing PATH=/path/to/folder:$PATH. Alternately, you need to move the script into a folder that's already in the $PATH variable (which is generally a better idea than messing with system variables).
| 0
| 1
| 0
| 1
|
2014-07-28T20:27:00.000
| 2
| 0.099668
| false
| 25,003,748
| 1
| 0
| 0
| 2
|
I have a python script, which has multiple command line options. I want to make this script runnable without having to type "python myscript.py" and without having to be in the same directory as the script. For example, if one installs git on linux, regardless of which directory the user is in, the user can do "git add X, etc..". So, an example input I would like is "myscript -o a,b,c -i" instead of "python myscript.py -o a,b,c -i". I already added "#! /usr/bin/env python" to the top of my script's code, which makes it executable when I type "./myscript", however I don't want the ./, and I want this to work from any directory.
|
Making python script executable from any directory
| 25,003,818
| 0
| 0
| 4,406
| 0
|
python,linux,shell,command,executable
|
Your script needs to be in a location searchable via your PATH. On Unix/Linux systems, the generally accepted location for locally-produced programs and scripts that are not part of the system is /usr/local/bin. So, make sure your script is executable by running chmod +x myscript, then move it to the right place with sudo mv myscript /usr/local/bin (while in the directory containing myscript). You'll need to enter an admin's password, then you should be all set.
| 0
| 1
| 0
| 1
|
2014-07-28T20:27:00.000
| 2
| 0
| false
| 25,003,748
| 1
| 0
| 0
| 2
|
I have a python script, which has multiple command line options. I want to make this script runnable without having to type "python myscript.py" and without having to be in the same directory as the script. For example, if one installs git on linux, regardless of which directory the user is in, the user can do "git add X, etc..". So, an example input I would like is "myscript -o a,b,c -i" instead of "python myscript.py -o a,b,c -i". I already added "#! /usr/bin/env python" to the top of my script's code, which makes it executable when I type "./myscript", however I don't want the ./, and I want this to work from any directory.
|
How can I get the path to the calling python executable
| 25,026,388
| 2
| 3
| 313
| 0
|
python
|
The current python executable is always available as sys.executable, which should give full path (but you can ensure this using os.path functions).
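A minimal sketch (the helper script path is hypothetical):

import subprocess
import sys

# sys.executable is the interpreter running this test, virtualenv included,
# so shelled-out scripts run under the same environment.
subprocess.check_call([sys.executable, "scripts/helper.py"])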
| 0
| 1
| 0
| 1
|
2014-07-29T20:06:00.000
| 1
| 1.2
| true
| 25,024,010
| 0
| 0
| 0
| 1
|
I have a series of unit tests that are meant to run in two contexts:
1) On a buildbot server
2) in developer's home environments
In both our development procedure and in the buildbot server we use virtualenv. The tests run fine in the developer environments, but with buildbot the tests are being run from the python executable in the virtualenv without activating the virtualenv.
This works out for most tests, but there are a few that shell out to run scripts, and I want them to run the scripts with the virtualenv's python executable. Is there a way to pull the path to the current python executable inside the tests themselves to build the shell commands that way?
|
Force directory to be created in Python
| 25,025,400
| 1
| 1
| 147
| 0
|
python
|
Creating a new directory is effectively the same as writing a small amount of data: it adds an inode.
The only way mkdir (or os.makedirs) should fail is if the directory already exists; otherwise the directory will always be created. In terms of the data being buffered, it's unlikely that this would happen; even journaled filesystems sync out pretty regularly.
If you're having non-deterministic behavior, just wrap your directory creation / writing a file into that directory inside a try / except / finally that makes a few attempts. But really, the need for such code hints at something much more sinister and is likely a bigger issue.
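For the "equivalent of fsync for a directory" part, on POSIX you can open the parent directory and fsync that descriptor (a sketch; the path is hypothetical):

import os

path = "results/run_001"
os.makedirs(path)

# Flush the new directory entry in the parent directory to disk.
parent = os.path.dirname(path) or "."
fd = os.open(parent, os.O_RDONLY)
try:
    os.fsync(fd)
finally:
    os.close(fd)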
| 0
| 1
| 0
| 0
|
2014-07-29T21:27:00.000
| 1
| 0.197375
| false
| 25,025,299
| 1
| 0
| 0
| 1
|
I am running Python with MPI on a supercomputing cluster. I am getting strange nondeterministic behavior that I think is a result of I/O complications that are not present on the single machines I'm used to working with.
One of the things my code does is to create directories using os.makedirs somewhat frequently. I know also that I generally should not write small amounts of data to the filesystem-- this can end up with the data getting stuck in some buffer and not written for a long time. I suspect this may be happening with my directory creation calls, and then later code tries to write to files inside the directory before it exists. Two questions:
is creating a new directory effectively the same thing as writing a small amount of data?
When forcing data to be written, I use flush and os.fsync. These require a file object. Is there an equivalent to make sure the directory has been created?
|
"Cannot access setup.py: No such file or directory" - can't run any .py files?
| 39,276,446
| 0
| 23
| 102,215
| 0
|
python
|
You need to go into the directory that you are going to "setup". For example, if you are installing numpy and you have git-cloned it, then it is probably located at ~/numpy. So first cd into ~/numpy, and then type a command like "python setup.py build" there.
| 0
| 1
| 0
| 0
|
2014-07-30T12:21:00.000
| 3
| 0
| false
| 25,036,688
| 1
| 0
| 0
| 1
|
This problem started while I was installing pyswip and needed to run a setup.py file. Using the command "python setup.py", I'm greeted with the following message: "python: can't open file 'setup.py': [Errno 2] No such file or directory."
I know this question's been asked a lot before, so I've tried everything in previous answers. Including #!/usr/bin/env python or #!/usr/bin/env python-3.3.0 at the very top of the script and then trying "chmod +x setup.py"
gives the following: "chmod: cannot access `setup.py': No such file or directory".
Trying to run other .py files from the terminal gives the same result.
Running the file in the Python Shell from IDLE doesn't do anything.
Running the "ls -d */" command shows that the Python-3.3.0/ directory, where the .py files in question are, is definitely there.
Am I missing something really obvious? (If it helps, I have Elementary OS 0.2.)
|
How do I prevent the raw WSGI python file from being read?
| 25,056,888
| 0
| 1
| 77
| 0
|
python,apache,.htaccess,wsgi
|
You should not stick the WSGI file in the DocumentRoot directory in the first place. You have created the situation yourself. It doesn't need to be in that directory for WSGIScriptAlias to work.
| 0
| 1
| 0
| 1
|
2014-07-30T16:33:00.000
| 2
| 0
| false
| 25,042,205
| 0
| 0
| 1
| 2
|
I am using mod_wsgi with apache to serve the python application. I have a directive in the VirtualHost entry as follows WSGIScriptAlias /app /home/ubuntu/www/app.wsgi. I also have DocumentRoot /home/ubuntu/www/. Therefore, if the user attempts to read /app.wsgi it gets the raw file. If I try to block access to it via .htaccess, the application becomes unusable. How do I fix this? Is there a way to do so without moving the file out of the DocumentRoot?
|
How do I prevent the raw WSGI python file from being read?
| 25,045,724
| 0
| 1
| 77
| 0
|
python,apache,.htaccess,wsgi
|
This is far from the best option, but it does seem to work: I added WSGIScriptAlias /app.wsgi /home/ubuntu/www/app.wsgi to the VirtualHost as well so that it will run the app on that uri instead of returning the raw file.
| 0
| 1
| 0
| 1
|
2014-07-30T16:33:00.000
| 2
| 0
| false
| 25,042,205
| 0
| 0
| 1
| 2
|
I am using mod_wsgi with apache to serve the python application. I have a directive in the VirtualHost entry as follows WSGIScriptAlias /app /home/ubuntu/www/app.wsgi. I also have DocumentRoot /home/ubuntu/www/. Therefore, if the user attempts to read /app.wsgi it gets the raw file. If I try to block access to it via .htaccess, the application becomes unusable. How do I fix this? Is there a way to do so without moving the file out of the DocumentRoot?
|
Get entry point script file location in setuputils package?
| 25,168,476
| 4
| 13
| 5,445
| 0
|
python,setuptools
|
Have you tried os.path.abspath(__file__) in your entry point script? It'll return your entry point's absolute path.
Or call find_executable from distutils.spawn:
import distutils.spawn
distutils.spawn.find_executable('executable')
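Together, for example (the entry point name "my_console_script" is a placeholder for whatever your setup.py declares):

import os
import distutils.spawn

# Path of the currently running module/script:
here = os.path.abspath(__file__)

# Path of the installed console script, searched for on PATH:
script = distutils.spawn.find_executable("my_console_script")
print(here, script)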
| 0
| 1
| 0
| 0
|
2014-07-31T18:11:00.000
| 3
| 0.26052
| false
| 25,066,084
| 1
| 0
| 0
| 1
|
So I have an entry point defined in my setup.py [console_scripts] section. The command is properly installed and works fine, but I need a way to programmatically find out the path to the script (e.g. on windows it'll be something like C:/my/virtual/env/scripts/my_console_script.exe). I need this so I can pass that script path as an argument to other commands, regardless of where the package is installed. Setuptools provides pkg_resources, but that doesn't seem to expose any way of actually getting at the raw installed paths, only loadable objects.
Edit: To make the use case plain here's the setup.
I have a plugin-driven application that communicates with various local services. One of these plug-ins ties into the alerting interface of an NMS package. The only way this alerting package can get alerts out to an arbitrary handler is to call a script - the path to execute (the console_scripts entry point in this case) is registered as a complete path - and that's the path I need to get.
|
SQLAlchemy after_insert triggering celery tasks
| 25,086,833
| 0
| 0
| 209
| 1
|
python,sqlalchemy,celery
|
It wasn't so complicated: subclass Session, providing a list for appending tasks via after_insert, then run through the list in after_commit.
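A sketch of the same shape using SQLAlchemy's event hooks and the session's info dict rather than a Session subclass (model and task names are hypothetical):

from sqlalchemy import event
from sqlalchemy.orm import Session

from myapp.models import Record      # hypothetical mapped class
from myapp.tasks import process_row  # hypothetical celery task

@event.listens_for(Record, "after_insert")
def remember_task(mapper, connection, target):
    # Don't dispatch yet -- the row isn't committed. Park the id instead.
    session = Session.object_session(target)
    session.info.setdefault("pending_tasks", []).append(target.id)

@event.listens_for(Session, "after_commit")
def dispatch_tasks(session):
    # The rows are committed now, so celery is guaranteed to find them.
    for row_id in session.info.pop("pending_tasks", []):
        process_row.delay(row_id)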
| 0
| 1
| 0
| 0
|
2014-08-01T11:01:00.000
| 1
| 1.2
| true
| 25,078,815
| 0
| 0
| 0
| 1
|
I'm initiating celery tasks via after_insert events.
Some of the celery tasks end up updating the db and therefore need the id of the newly inserted row. This is quite error-prone because it appears that if the celery task starts running immediately sometimes sqlalchemy will not have finished committing to the db and celery won't find the row.
What are my other options?
I guess I could gather these celery tasks up somehow and only send them on "after_commit" but it feels unnecessarily complicated.
|
Script for changing one line of *.csv file for a whole directory of files?
| 25,089,847
| 0
| 0
| 115
| 0
|
python,datetime,batch-file,csv,ubuntu
|
So your options, roughly, are:
Python
Windows 'cmd' script
Transfer the files to a *nix environment and do it there with those tools if you are more familiar
If using Python, look at the following (a short sketch follows this list):
the os module, os.listdir(), os.path etc.
Regex replace using a function (re.sub taking a function rather than a string as a replacement)
datetime.datetime.strptime and datetime.datetime.strftime
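Putting those pieces together, a sketch (the directory name is assumed, and line 38 of each file is index 37):

import datetime
import os
import re

SRC = "raw_data"  # hypothetical directory of *.csv files

def to_uk(match):
    # Reparse M/D/YYYY and re-emit it as DD/MM/YYYY.
    stamp = datetime.datetime.strptime(match.group(0), "%m/%d/%Y")
    return stamp.strftime("%d/%m/%Y")

for name in os.listdir(SRC):
    if not name.lower().endswith(".csv"):
        continue
    path = os.path.join(SRC, name)
    with open(path) as fh:
        lines = fh.readlines()
    # Line 38 holds "Start Date/Time: M/D/YYYY HH:MM:SS".
    lines[37] = re.sub(r"\d{1,2}/\d{1,2}/\d{4}", to_uk, lines[37])
    with open(path, "w") as fh:
        fh.writelines(lines)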
| 0
| 1
| 0
| 0
|
2014-08-01T22:39:00.000
| 2
| 0
| false
| 25,089,702
| 0
| 0
| 0
| 1
|
I have a question about writing a command prompt script (DOS) in Windows 7.
The task I have:
I have a directory of raw data files (*.csv) where the 38th line is where the date and time are saved.
Example File cell A38:
Start Date/Time: 6/20/2014 13:26:16
However, this date format is M/DD/YYYY because it was saved using a sampling computer where the date of the computer was set-up as such.
I know there is a way to write a script that can be executed on a directory of these files so that none of the other information (text or actual time stamp) is changed,
but the Date format switched to the UK style of DD/MM/YYYY.
Intended product:
The file is unchanged in any way but line 38 reads
Start Date/Time: 20/06/2014 13:26:16
I really do not want to go through and do this to 800 plus files, and more coming, so any help would be very appreciated in helping do this format change
in a script format that could be executed on the entire directory of *.csv files.
I also think it is an important note that the entire text as well as the actual date and time are in one Cell in Excel (A38) (Start Date/Time: M/D/YYYY HH:MM:SS)
and that I DO want to keep the time as 24-hour time.
Any guidance/pointers would be great. I am very new to command line programming in Windows. I'm also happy to see if such a script is available for an Ubuntu environment, or a Python script, or anything really that would automate this tedious task of changing one part of one line close to 1000 times, as switching the changed directory back to the Windows computer is no big deal at all. It's just easier (and I'm sure possible) using cmd.exe.
Cheers,
Wal
|
Run my python3 program on remote Ubuntu server every 30 min
| 25,100,208
| 0
| 0
| 492
| 0
|
python,ubuntu,digital-ocean
|
First install and enable fcron. Then, sudo -s into root and run fcrontab -e. In the editor, enter */30 * * * * /path/to/script.py and save the file. Change 30 to 15 if every 15 minutes is what you're after.
| 0
| 1
| 0
| 1
|
2014-08-02T21:54:00.000
| 2
| 0
| false
| 25,099,749
| 0
| 0
| 0
| 1
|
I have a correct python3 program looking like *.py.
I have a Digital Ocean (DO) droplet with Ubuntu 14.04.
My program posts a message to my twitter account.
I just copy my *.py into some directory on the DO droplet and run it with ssh, and all works fine.
But I need to post the message (run my program) automatically every 15-30 min, for example.
I am a newbie with all this.
What should I do? Step-by-step please!
|
a proactive folder watcher in Linux
| 25,104,864
| 0
| 0
| 74
| 0
|
python,linux
|
There is no existing solution to my knowledge. But:
There is the inotify API, but that only gives out notifications of what just happened, i.e. you don't have any means to influence the result (a sketch of this reactive approach follows this list).
If that is an absolutely necessary requirement, intercepting operations on a filesystem level is the only universal choice, hacking either the kernel itself or using FUSE.
If you only want to monitor operations of a single process, you could intercept some calls using LD_PRELOAD to intercept some function calls like fopen() and fwrite().
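For completeness, a sketch of the reactive inotify route via the third-party pyinotify library (the watched path is a placeholder; note it can only report, not veto):

import pyinotify  # third-party: pip install pyinotify

class Handler(pyinotify.ProcessEvent):
    def process_IN_CLOSE_WRITE(self, event):
        # By the time this fires the file already exists; inotify can
        # only report the operation, not prevent it.
        print("new file:", event.pathname)

wm = pyinotify.WatchManager()
wm.add_watch("/tmp/watched", pyinotify.IN_CLOSE_WRITE)
pyinotify.Notifier(wm, Handler()).loop()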
| 0
| 1
| 0
| 0
|
2014-08-03T12:25:00.000
| 1
| 0
| false
| 25,104,606
| 0
| 0
| 0
| 1
|
A folder watcher functions in a way that after a file comes into a folder it does something (it reacts). Is there a method such that, before a file enters the folder, a check is made; only if the check succeeds does the file enter the folder, otherwise it does not?
|
Google App Engine, Datastore and Task Queues, Performance Bottlenecks?
| 25,110,135
| 1
| 2
| 158
| 0
|
python,json,google-app-engine,rest,google-cloud-datastore
|
This is a really good question, one that I've been asked in interviews and seen pop up in a lot of different situations as well. Your system essentially consists of two things:
Saving (or writing) models to the data store
Reading from the data store.
From my experience of this problem, when you view these two things separately you're able to come up with solid solutions to both. I typically use a cache, such as memcached, in order to keep data easily accessible for reading. At the same time, for writing, I try to have a main db and a few slave instances as well. All the writes will go to the slave instances (thereby not locking up the main db for reads that sync to the cache), and the writes to the slave db's can be distributed in a round-robin approach, thereby ensuring that your insert statements are not skewed by any of the model's attributes having a high occurrence.
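On App Engine specifically, the read-through-cache half of this might look roughly like the sketch below (the Record kind and the 60-second expiry are arbitrary choices for illustration):
from google.appengine.api import memcache
from google.appengine.ext import ndb

def get_record(record_id):
    cache_key = 'record:%s' % record_id
    record = memcache.get(cache_key)
    if record is None:
        # cache miss: fall back to the datastore, then repopulate the cache
        record = ndb.Key('Record', record_id).get()
        memcache.set(cache_key, record, time=60)
    return record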
| 0
| 1
| 0
| 0
|
2014-08-03T22:30:00.000
| 2
| 0.099668
| false
| 25,109,746
| 0
| 0
| 1
| 1
|
We're designing a system that will take thousands of rows at a time and send them via JSON to a REST API built on Google App Engine. Typically 3-300KB of data but let's say in extreme cases a few MB.
The REST API app will then adapt this data to models on the server and save them to the Datastore. Are we likely to (eventually if not immediately) encounter any performance bottlenecks here with Google App Engine, whether it's working with that many models or saving so many rows of data at a time to the datastore?
The client does a GET to get thousands of records, then a PUT with thousands of records. Is there any reason for this to take more than a few seconds, and necessitate the need for a Task queues API?
|
how to write a multi-command cronjob on a raspberry pi or any other unix system
| 25,110,706
| 1
| 0
| 822
| 0
|
python,linux,unix,cron,raspberry-pi
|
The likely culprit is the relative path: cron jobs start in your home directory, so cd usr/local/sbin/cronjobs needs a leading slash. Note that the . before activate is the shell's source builtin and should stay, since activate must run in the current shell to have any effect.
Try this:
cd /usr/local/sbin/cronjobs && . virtualenv/secret_ciphers/bin/activate
&& cd csgostatsbot && python3 CSGO_STATS_BOT_TASK.py && deactivate
Assuming that the virtualenv directory is in the cronjobs directory.
Also, you may want to skip the activate/deactivate, and simply run the python3 interpreter right out of the virtualenv. i.e.
/usr/local/sbin/cronjobs/virtualenv/secret_ciphers/bin/python3 /usr/local/sbin/cronjobs/csgostatsbot/CSGO_STATS_BOT_TASK.py
Edit in response to comments from OP:
The activate call is what activates the virtualenv; the leading . sources it into the current shell, without which it would run in a throwaway subshell and change nothing.
Both examples involve the use of the virtualenv. You don't need to explicitly call activate. As long as you invoke the interpreter out of the virtualenv's directory, you're using the virtualenv. activate is essentially a convenience method that tweaks your PATH to make python3 and other bin files refer to the virtualenv's directory instead of the system install.
2nd Edit in response to add'l comment from OP:
You should redirect stderr, i.e.:
/usr/local/sbin/cronjobs/virtualenv/secret_ciphers/bin/python3
/usr/local/sbin/cronjobs/csgostatsbot/CSGO_STATS_BOT_TASK.py >
/tmp/botlog.log 2>&1
And see if that yields any additional info.
Also, 5 asterisks in cron will run the script every minute 24/7/365. Is that really what you want?
3rd Edit in response to add'l comment from OP:
If you want it to always be running, I'm not sure you really want to use cron. Even with 5 asterisks, it will run it once per minute. That means it's not always running. It runs once per minute, and if it takes longer than a minute to run, you could get multiple copies running (which may or may not be an issue, depending on your code), and if it runs really quickly, say in a couple seconds, you'll have the rest of the minute to wait before it runs again.
It sounds like you want the script to essentially be a daemon. That is, just run the main script in a while (True) loop, and then just launch it once. Then you can quit it via <ctrl>+c, else it just perpetually runs.
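A minimal sketch of that daemon shape - do_one_update() stands in for whatever the bot does each cycle, and the 60-second pause is arbitrary:
import time

def main():
    while True:
        do_one_update()   # hypothetical: one pass of the bot's work
        time.sleep(60)    # wait before the next pass

if __name__ == '__main__':
    main()
Launched once with the virtualenv's interpreter (as above), it keeps running until you stop it.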
| 0
| 1
| 0
| 1
|
2014-08-04T01:23:00.000
| 2
| 1.2
| true
| 25,110,635
| 0
| 0
| 0
| 1
|
I am trying to run a cron script in python 3 so I had to setup a virtual environment (if there is an easier way, please let me know) and in order to run the script I need to be in the script's parent folder as it writes to text files there. So here is the long string of commands I have come up with and it works in console but does not work in cron (or I can't find the output..)
I can't type the 5 asterisks without it turning into bullet points.. but I have them in the cron tab.
cd usr/local/sbin/cronjobs && . virtualenv/secret_ciphers/bin/activate
&& cd csgostatsbot && python3 CSGO_STATS_BOT_TASK.py && deactivate
|
Adding PyDev eclipse pulgin manually
| 25,133,261
| 0
| 0
| 347
| 0
|
java,python,eclipse,plugins
|
I solved this problem:
There was no issue with either eclipse or with PyDev. It was about the Java version I had. PyDev works with JDK 7, but I had JDK 6. Due to this, even after I copied PyDev into dropins, nothing showed up in Preferences. Once I used JDK 7, it worked.
Thanks,
| 0
| 1
| 0
| 0
|
2014-08-04T12:04:00.000
| 2
| 0
| false
| 25,118,283
| 0
| 0
| 1
| 1
|
I am on an assignment to work with Jython. I tried to install the PyDev plugin to my Eclipse (Kepler service release 2 on a Linux 64-bit machine) manually (the dev machine does not have an internet connection). But when I do it manually by downloading the .zip file and adding it as follows:
Help -> Install new software -> Add -> Archive:
I am getting an error.
No repository found at file:/home/lhananth/eclipse/dropins/PyDev%203.6.0.zip!.
No repository found at file:/home/lhananth/eclipse/dropins/PyDev%203.6.0.zip!.
I tried to manually add the unzipped folders to the dropins folder of eclipse, but that's not working either - Python does not appear as a selection in Eclipse, Window, Preferences.
Can somebody help me out? (I tried all the replies to similar posts available on Stack Overflow)
|
Django manage.py runserver fails to respond
| 25,291,353
| 2
| 2
| 2,089
| 0
|
python,django,python-2.7
|
Okay, so to reiterate my last post.
There was a call to a Django service that was failing on application startup. No error was thrown, instead it was absorbed by Sentry. Those who were already using the VM on their local machines had worked around the issue.
The issue was identified by importing ipdb and calling its set_trace() function. From the console, I stepped through the application, testing likely variables and return values until it refused to continue. This narrowed it down to the misbehaving service and its unthrown error.
The code has been updated with proper try/catch blocks and the error is now handled gracefully.
So to summarise: Not a malfunctioning VM, but a problem with code.
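For reference, the breakpoint technique used here amounts to dropping this line (ipdb is a third-party package, pip install ipdb) at the point you want execution to stop, then stepping from the console:
import ipdb; ipdb.set_trace()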
| 0
| 1
| 0
| 0
|
2014-08-05T16:14:00.000
| 1
| 1.2
| true
| 25,143,621
| 0
| 0
| 1
| 1
|
I'm running a vagrant box on Mac OS X. The VM is running Ubuntu 12.04, with Python 2.7 and Django 1.4.5. When I start up manage.py, I call it like this:
./manage.py runserver 0.0.0.0:8000
And if I visit http://127.0.0.1:8000 from within the VM, the text browsers I've tried report that the HTTP request has been sent and then wait for a response until the request times out. No response ever comes.
I can telnet to the port like this:
telnet 127.0.0.1 8000
And enter random gibberish, which manage.py reports as the following:
127.0.0.1 - - [05/Aug/2014 17:06:26] code 400, message Bad request syntax ('asdfasdfadsfasd')
127.0.0.1 - - [05/Aug/2014 17:06:26] "asdfasdfadsfasd" 400 -
So manage.py is listening on that port. But a standard HTTP request generates no response from manage.py, either in the console or in the browser.
I've tried using different ports which hasn't had any effect. Does anyone have any ideas?
UPDATE
Some additional curl output.
Executing 'curl -v http://127.0.0.1:8000' returns
'* About to connect() to 127.0.0.1 port 8000 (#0)
* Trying 127.0.0.1... connected
GET / HTTP/1.1
User-Agent: curl/7.22.0 (i686-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
Host: 127.0.0.1:8000
Accept: /
'
Executing 'curl -v http://somefakedomain' results in
'* getaddrinfo(3) failed for somefakedomain:80
* Couldn't resolve host 'somefakedomain'
* Closing connection #0
curl: (6) Couldn't resolve host somefakedomain'
|
Using PYTHONPATH to use a virtualenv
| 25,150,279
| 0
| 0
| 103
| 0
|
python
|
The PYTHONPATH environment variable is not used to select the path of the Python executable - which executable is selected depends, as in all other cases, on the shell's PATH environment variable. PYTHONPATH is used to augment the search list of directories (sys.path in Python) in which Python will look for modules to satisfy imports.
The interpreter puts certain directories on sys.path before it actions PYTHONPATH, precisely to ensure that replacement modules with standard names do not shadow the standard library names. So any standard library module will be imported from the library associated with the interpreter it was installed with (unless you do some manual furkling, which I wouldn't recommend).
venv/bin/activate does a lot of stuff that needs to be handled in the calling shell's namespace, which can make tailoring code rather difficult if you can't find a way to source the script.
| 0
| 1
| 0
| 0
|
2014-08-05T23:08:00.000
| 3
| 0
| false
| 25,149,761
| 1
| 0
| 0
| 2
|
I have a virtualenv in a structure like this:
venv/
src/
project_files
I want to run a makefile (which calls out to Python) in the project_files, but I want to run it from a virtual environment. Because of the way my deployment orchestration works, I can't simply do a source venv/bin/activate.
Instead, I've tried to export PYTHONPATH={project_path}/venv/bin/python2.7. When I try to run the makefile, however, the python scripts aren't finding the dependencies installed in the virtualenv. Am I missing something obvious?
|
Using PYTHONPATH to use a virtualenv
| 25,150,471
| 0
| 0
| 103
| 0
|
python
|
You can actually just call the Python interpreter in your virtual environment. So, in your Makefile, instead of calling python, call venv/bin/python.
| 0
| 1
| 0
| 0
|
2014-08-05T23:08:00.000
| 3
| 0
| false
| 25,149,761
| 1
| 0
| 0
| 2
|
I have a virtualenv in a structure like this:
venv/
src/
project_files
I want to run a makefile (which calls out to Python) in the project_files, but I want to run it from a virtual environment. Because of the way my deployment orchestration works, I can't simply do a source venv/bin/activate.
Instead, I've tried to export PYTHONPATH={project_path}/venv/bin/python2.7. When I try to run the makefile, however, the python scripts aren't finding the dependencies installed in the virtualenv. Am I missing something obvious?
|
Install python module for non-default version on linux
| 25,162,261
| 3
| 3
| 2,428
| 0
|
python,linux,python-module
|
You should install Python libraries with the Python package installer, pip.
Create a virtualenv with the Python version you want to use, activate it, and do pip install NetfilterQueue. You'll still need to install the system dependencies (eg libnetfilter-queue-dev in this case) with apt-get.
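For example (the env name py34env and the interpreter path are placeholders - point -p at whichever version you want):
virtualenv -p /usr/bin/python3.4 py34env
source py34env/bin/activate
pip install NetfilterQueue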
| 0
| 1
| 0
| 0
|
2014-08-06T13:50:00.000
| 2
| 0.291313
| false
| 25,162,141
| 1
| 0
| 0
| 1
|
I have different python versions installed on my ubuntu machine. The default version is 2.7.
So when I install any new python module, for example using:
#apt-get install python-nfqueue
it will be installed just for the default version (2.7)
How can I install the new modules for the other versions?
Is there a way to do it using apt-get install?
Thank you!
|
How to pass collection data from Python to bash?
| 25,164,957
| 1
| 0
| 2,192
| 0
|
python,bash
|
Yes, that is the best and almost the only way to pass data from python to bash.
Alternatively, your function can write to a file, which would then be read by the bash script.
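A rough sketch of the STDOUT route - the production_script module and PROD_CONFIG dict are stand-ins for your real names:
# get_keys.py -- hypothetical wrapper around the production module
from production_script import PROD_CONFIG
print('\n'.join(sorted(PROD_CONFIG)))
On the bash side, something like keys=( $(python get_keys.py) ) then captures them into an array (assuming none of the keys contain whitespace).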
| 0
| 1
| 0
| 0
|
2014-08-06T15:51:00.000
| 3
| 0.066568
| false
| 25,164,837
| 0
| 0
| 0
| 1
|
I am writing a dev ops kind of a bash script that is used for running an application in a local development environment under configuration as similar to production as possible. To eliminate duplicating some code/data which is already in a Python script, I would like my bash script to invoke a Python call to retrieve data that is hard coded in that Python script. The data structure in Python is a dict but I really only care about the keys so I can just return an array of keys. The Python script is used in production and I want to use it and not duplicate the data in my shell script to avoid having to follow on any modification in the production script with parallel changes in the local environment shell script.
Is there any way I can invoke a Python function from bash and retrieve this collection of values? If not, should I just have the Python function print to STDOUT and have the shell script parse the result?
|
virtualenv not finding updated module
| 25,170,606
| 0
| 1
| 466
| 0
|
python,virtualenv,tornado
|
Could it be that the virtualenv is inheriting the global site-packages? I'm not sure if I added --no-site-packages when I set up the virtualenv. Is there an easy way to check this setting now, or to test this possibility?
no-global-site-packages.txt is present in the python2.7 directory,
and
orig-prefix.txt contains a parent directory, outside of the virtualenv, of the older version of tornado that is being loaded.
| 0
| 1
| 0
| 0
|
2014-08-06T20:55:00.000
| 2
| 0
| false
| 25,170,016
| 1
| 0
| 0
| 1
|
I am running python-2.7 with virtualenv on a unix server to which I do not have root access. I updated the module tornado using pip install tornado --upgrade because installing ipython required tornado >= 3.1.0 but only version 2.4 was installed by default on the server. However, when I try to open ipython, it still complains that I don't have the updated version.
I confirmed that ipython is correctly aliased to the virtualenv, and that the upgrade had indeed produced tornado version 4.0 in the site-packages of the virtualenv.
However if I open python (correctly aliased to the virtualenv) and import tornado, I find that it is importing the earlier version (2.4) and not the newer version from my virtualenv. Importing another package that was only installed on the virtualenv correctly imports it from the site-packages of the virtualenv.
Any idea how I should tell python to use the updated version of tornado by default instead of the earlier version that isn't on the virtualenv?
One really hacky thing that I tried was appending to my virtualenv activate file the following:
PYTHONPATH=path_to_standardVE/lib/python2.7/site-packages/tornado:$PYTHONPATH
If I check $PYTHONPATH upon startup, it indeed contains this path at the front. However, loading the module in python still loads the 2.4 version.
Thanks!
|
Is twistedweb with django recommeneded
| 25,198,042
| 2
| 3
| 244
| 0
|
python,django,twisted,event-driven,twisted.web
|
Nope - unless you heavily modify Django's db adapters and some core components, you will not get any advantage. There are some tools for simplifying the job, but you would be on the bleeding edge, trying to adapt something built on the blocking paradigm from the beginning to something completely different.
On the other side, performance should not be worse, as 99.9% of the time your app itself is the bottleneck, not your WSGI infrastructure.
Regarding async Django, a lot of people have had luck with gevent, but you need to carefully analyze your app to be sure all of the components are gevent-friendly (and this may not be an easy task, especially for db adapters).
Remember, even if your app is 99.9999999% non-blocking, you are still blocking.
| 0
| 1
| 0
| 0
|
2014-08-07T07:37:00.000
| 2
| 0.197375
| false
| 25,176,734
| 0
| 0
| 1
| 1
|
I have a Django application which I need to deploy in a WSGI container. I can either chose an event driven app server like TwistedWeb or a process driven server like uWSGI. I completely understand the difference between an event driven and a process driven server and I know Django framework is blocking in nature.
I came across TwistedWeb which lets us run a WSGI application in a simple fashion.
My questions are as follows:
1) Would I gain anything by running Twisted instead of uWSGI as Django is blocking in nature. Is TwistedWeb different from the standard twisted library ? I know people run Twisted with Django when they need support for async as well, for ex chat along with normal functionality and they still want to have just one app. I have no such use case and for me its just a website.
2) Would the performance be worse on TwistedWeb as its just a single process and my request would block as Django is synchronous in nature ? Or TwistedWeb runs something like uWSGI which launches multiple processes before hand and distributes requests in a roundrobin fashion among those ? If yes then is TwistedWeb any better than uWSGI ?
3) Is there any other protocol other than WSGI which can integrate Twisted with Django and still give me async behavior (trying my luck here :) )
|
Using Python pudb debugger with pytest
| 25,183,130
| 27
| 24
| 4,263
| 0
|
python,pytest,pudb
|
Simply add the -s flag: pytest will then not replace stdin and stdout, and the debugger becomes accessible, i.e. pytest -s my_file_test.py will do the trick.
The documentation provided by ambi also says that explicitly passing -s used to be required for regular pdb too; now the -s flag is implicitly applied along with the --pdb flag.
However, pytest does not implicitly support pudb, so setting -s is needed.
| 0
| 1
| 0
| 1
|
2014-08-07T12:39:00.000
| 2
| 1.2
| true
| 25,182,812
| 0
| 0
| 0
| 1
|
Before my testing library of choice was unittest. It was working with my favourite debugger - Pudb. Not Pdb!!!
To use Pudb with unittest, I paste import pudb;pudb.set_trace() between the lines of code.
I then executed python -m unittest my_file_test, where my_file_test is module representation of my_file_test.py file.
Simply using nosetests my_file_test.py won't work - AttributeError: StringIO instance has no attribute 'fileno' will be thrown.
With py.test neither works:
py.test my_file_test.py
nor
python -m pytest my_file_test.py
both throw ValueError: redirected Stdin is pseudofile, has no fileno()
Any ideas about how to use Pudb with py.test
|
"localhost" vs "127.0.0.1" performance
| 25,221,564
| 10
| 15
| 5,148
| 0
|
python,windows,ip,xml-rpc
|
Every domain name gets resolved. There is no exception to this rule, including with regards to a local site.
When you make a request to localhost, localhost's IP gets resolved by the hosts file every time it is requested. In Windows, the hosts file controls this. But if you make a request to 127.0.0.1, the IP address is already resolved, so any request goes directly to this IP.
| 0
| 1
| 1
| 0
|
2014-08-08T08:47:00.000
| 1
| 1
| false
| 25,199,405
| 0
| 0
| 0
| 1
|
I've set up an XML-RPC server/client communication under Windows. What I've noticed is that if the exchanged data volume becomes huge, there's a difference between starting the server listening on "localhost" vs. "127.0.0.1". If "127.0.0.1" is set, the communication is faster than using "localhost". Could somebody explain why? I thought it could be a matter of name resolution, but... locally too?
|
Transparent solution for bypassing local outgoing firewalls for python scripts
| 25,200,050
| 1
| 0
| 352
| 0
|
python,unit-testing,firewall-access
|
Is it possible to have the script itself run through these steps? By this I mean: have the setup phase of your unit tests probe for the firewall, and if it is detected, dynamically set up a proxy somehow, use it to run the unit tests, then tear the proxy down when done. That seems like it would achieve the transparency you're aiming for.
| 0
| 1
| 0
| 1
|
2014-08-08T09:03:00.000
| 1
| 0.197375
| false
| 25,199,709
| 0
| 0
| 0
| 1
|
Here is the problem: I do have several python packages that do have unittest that do require access to different online services in order to run, like connecting to a postgresql database or a LDAP/AD server.
In many cases these are not going to execute successfully because local network is fire-walled, allowing only basic outgoing traffic on ports like 22, 80, 443, 8080 and 8443.
I know that the first thing coming into your mind is: build a VPN. This is not the solution I am looking for, and that's due to an important issue: it will affect other software running on the same machine, probably breaking it.
Another solution I had in mind was SSH port forwarding, which I successfully used, but this too is very hard to configure and, worse, it requires me to re-configure the addresses the python script is trying to connect to, and I do not want to go this way.
I am looking for a solution that could work like this:
detect if there is a firewall preventing your access
setup the bypassing strategy (proxy?)
run the script
restore settings.
Is there a way to setup this in a transparent way, one that would not require me to make changes to the executed script/app ?
|
Vagrant and Google App Engine are not syncing files
| 26,824,688
| 4
| 2
| 298
| 0
|
python,google-app-engine,vagrant
|
Finally found the answer!
In the latest version of google app engine, there is a new parameter you can pass to dev_appserver.py.
using dev_appserver.py --use_mtime_file_watcher=True works!
Although a change takes 1-2 seconds to be detected, it still works!
| 0
| 1
| 0
| 0
|
2014-08-09T09:43:00.000
| 1
| 1.2
| true
| 25,217,223
| 0
| 0
| 1
| 1
|
I am currently using Vagrant to spin up a VM to run GAE's dev_appserver in the Virtual Machine.
The sync folder works and I can see all the files.
But, after I run the dev appserver, changes to python files by the host machine are not dynamically updated.
To see updates to my python files, I have to relaunch dev appserver in my Virtual Machine.
Also, I have grunt tasks that watch html/css files. These also do not sync properly when updated by editors outside the Virtual Machine.
I suspect that it's something to do with the way Vagrant syncs files changed on the host machine.
Has anyone found a solution to this problem?
|
sh.exe":mktemp:command not found ERROR: virtualenvwrapper could not create a temporary file name
| 25,244,240
| 0
| 1
| 946
| 0
|
python,windows-7,virtualenv,virtualenvwrapper
|
I found a solution, so I just hope this helps someone like me:
download mktemp.exe and put it into C:\msys\1.0\bin - then everything will be OK.
| 0
| 1
| 0
| 0
|
2014-08-09T13:36:00.000
| 1
| 1.2
| true
| 25,219,136
| 1
| 0
| 0
| 1
|
I am trying to set up virtualenvwrapper in Git Bash (Windows 7), but I get an error message.
When I run this command: " $ source /c/Python27/Scripts/virtualenvwrapper.sh"
I get the error: sh.exe":mktemp:command not found ERROR: virtualenvwrapper could not create a temporary file name.
Can somebody help me...
|
Python - How to update an entity in appengine?
| 25,226,221
| 4
| 1
| 40
| 0
|
python,google-app-engine
|
If you call .put() on an entity that you've previously retrieved from the datastore, it will update the existing entity. (Make sure you're not specifying a new key for the entity.)
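A minimal ndb sketch of an in-place update (the Greeting model and id 1234 are made up for illustration):
from google.appengine.ext import ndb

class Greeting(ndb.Model):
    content = ndb.StringProperty()

greeting = ndb.Key(Greeting, 1234).get()  # fetch the stored entity
greeting.content = 'updated text'
greeting.put()  # same key, so the datastore overwrites rather than adds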
| 0
| 1
| 0
| 0
|
2014-08-10T06:27:00.000
| 1
| 0.664037
| false
| 25,226,153
| 0
| 0
| 1
| 1
|
In appengine documentation, it says that the put() method replaces the previous entity. But when I do so it always adds a new entity to the datastore. How do I update an entity?
|
Will aborting a Python script corrupt file which is open for read?
| 25,229,783
| 4
| 2
| 662
| 0
|
python,file-io,abort
|
Probably not. It will release the file handle when the script stops running. Also you typically only have to worry about corrupting a file when you kill a script that is writing to the file, in case it is interrupted mid-write.
| 0
| 1
| 0
| 1
|
2014-08-10T14:43:00.000
| 1
| 1.2
| true
| 25,229,703
| 0
| 0
| 0
| 1
|
If I run a python script (with Linux) which reads in a file (e.g.: with open(inputfile) as infi:): Will the file be in danger when I abort the script by pressing Ctrl C?
|
Playback of at least 3 music files at once in python
| 25,243,720
| 1
| 0
| 57
| 0
|
python,audio
|
I finally got an answer/workaround.
I am using Python's multiprocessing class/functionality to run multiple pygame instances.
That way I can play more than one music file at a time, with full control over play mode and playback position.
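A rough sketch of that workaround - the file names and volumes are placeholders; each child process gets its own pygame mixer, so the tracks play independently:
import multiprocessing

def play(path, volume, start_pos):
    import pygame  # imported in the child so each process owns its own mixer
    pygame.mixer.init()
    pygame.mixer.music.load(path)
    pygame.mixer.music.set_volume(volume)
    pygame.mixer.music.play(start=start_pos)  # start offset in seconds
    while pygame.mixer.music.get_busy():
        pygame.time.wait(100)

if __name__ == '__main__':
    tracks = [('bg.mp3', 0.6, 0.0), ('rain.mp3', 0.3, 0.0), ('bell.mp3', 1.0, 0.0)]
    for args in tracks:
        multiprocessing.Process(target=play, args=args).start()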
| 0
| 1
| 0
| 0
|
2014-08-10T21:57:00.000
| 1
| 1.2
| true
| 25,233,396
| 0
| 0
| 1
| 1
|
I am searching for a way/framework to play at least 3 music files at one in a python application. It should run at least under ubuntu and mac as well as on the raspberry pi.
I need per channel/music file/"deck" that is played:
Control of the Volume of the Playback
Control of the start position of the playback
Play and Pause/Resume functionality
Should support mp3 (Not a must but would be great!)
Great would be built in repeat functionality really great would be a fade to the next iteration when the song is over.
If I can also play at least two video files with audio over the same framework, this would be great, but is not a must.
Has anyone an Idea? I already tried pygame but their player can play just one music file at once or has no control over the playback position.
I need this for a theatre, where a background sound should be played (and started simultaneously with the light) and, when it is over, the next file fades in. While that is happening there are some effects (e.g. a bell) on a third audio layer.
|
Simple Python TCP server not working on Amazon EC2 instance
| 25,234,530
| 10
| 5
| 3,240
| 0
|
python,amazon-web-services,tcp,amazon-ec2,firewall
|
Your TCP_IP is only listening locally because you set your listening IP to 127.0.0.1.
Set TCP_IP = "0.0.0.0" and it will listen on "all" interfaces including your externally-facing IP.
| 0
| 1
| 1
| 0
|
2014-08-11T00:30:00.000
| 1
| 1
| false
| 25,234,336
| 0
| 0
| 0
| 1
|
I am trying to run a simple Python TCP server on my EC2, listening on port 6666. I've created an inbound TCP firewall rule to open port 6666, and there are no restrictions on outgoing ports.
I cannot connect to my instance from the outside world however, testing with telnet or netcat can never make the connection. Things do work if I make a connection from localhost however.
Any ideas as to what could be wrong?
#!/usr/bin/env python
import socket

TCP_IP = '127.0.0.1'
TCP_PORT = 6666
BUFFER_SIZE = 20  # Normally 1024, but we want fast response

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((TCP_IP, TCP_PORT))
s.listen(1)
conn, addr = s.accept()
print 'Connection address:', addr
while 1:
    data = conn.recv(BUFFER_SIZE)
    if not data: break
    print "received data:", data
    conn.send(data)  # echo
conn.close()
|
Making a Shortcut for Running Python Files in a Folder with CMD.exe
| 25,253,872
| 0
| 0
| 5,375
| 0
|
python,windows,cmd
|
Create a shortcut to cmd.exe.
Get properties on that shortcut, and there's an entry for the current working directory. (They change the name of that field every other version of Windows, but I believe most recently it's called "Start In:".) Just set that to C:\Users\Name\FolderLocation\ProjectFolder.
Now, when you double-click that shortcut, it'll open a cmd.exe window, in your project directory.
| 0
| 1
| 0
| 0
|
2014-08-11T21:05:00.000
| 1
| 0
| false
| 25,252,413
| 1
| 0
| 0
| 1
|
To run Python files (.py) with Windows CMD.exe, I SHIFT + Right Click on my project folder which contains all of my Python code files. Doing this shows a menu containing the option Open command window here, which I click to open CMD.exe with the prompt C:\Users\Name\FolderLocation\ProjectFolder>. I then type the python command, and the file I want to run in my project folder (python MyFile.py) which runs the file, of course.
What I would like to know is if there is a way I can setup a shortcut to open CMD.exe with my project folder opened/ being accessed so then all I have to do is type python and the file name? Thanks
|
gcsfs is not writing files in the google bucket
| 57,971,532
| 0
| 1
| 554
| 1
|
python-3.x,google-cloud-platform,google-cloud-storage
|
Although this is quite an old topic, I will try to provide an answer, especially for people who might stumble on this in the course of their own work. I have experience using more recent versions of gcsfs and it works quite well. You can find the latest documentation at https://gcsfs.readthedocs.io/en/latest. To make it work you need to have the environment variable:
GOOGLE_APPLICATION_CREDENTIALS=SERVICE_ACCOUNT_KEY.json.
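A minimal write sketch under that assumption (the project id, bucket and object names are placeholders):
import gcsfs

fs = gcsfs.GCSFileSystem(project='my-project')
with fs.open('my-bucket/output/result.txt', 'w') as f:
    f.write('hello from gcsfs')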
| 0
| 1
| 0
| 0
|
2014-08-12T13:07:00.000
| 1
| 0
| false
| 25,265,110
| 0
| 0
| 0
| 1
|
When mounting and writing files in a google bucket using gcsfs, gcsfs creates the folders and files but does not write the files. Most of the time it shows an input/output error. It occurs even when we copy files from a local directory to the mounted gcsfs directory.
gcsfs version 0.15
|
installing cx_Freeze to python at windows
| 25,936,813
| 56
| 26
| 27,553
| 0
|
python,batch-file,python-3.x,cx-freeze
|
I faced a similar problem (Python 3.4 32-bit, on Windows 7 64-bit). After installation of cx_freeze, three files appeared in c:\Python34\Scripts\:
cxfreeze
cxfreeze-postinstall
cxfreeze-quickstart
These files have no file extensions, but appear to be Python scripts. When you run python.exe cxfreeze-postinstall from the command prompt, two batch files are being created in the Python scripts directory:
cxfreeze.bat
cxfreeze-quickstart.bat
From that moment on, you should be able to run cx_freeze.
cx_freeze was installed using the provided win32 installer (cx_Freeze-4.3.3.win32-py3.4.exe). Installing it using pip gave exactly the same result.
| 0
| 1
| 0
| 0
|
2014-08-12T17:48:00.000
| 3
| 1.2
| true
| 25,270,885
| 1
| 0
| 0
| 2
|
I am using python 3.4 on Windows 8. I want to obtain an .exe program from python code. I learned that this can be done with cx_Freeze.
In the MS-DOS command line, I wrote pip install cx_Freeze to set up cx_Freeze. It is installed, but it is not working.
(When I write cxfreeze on the command line, I get this warning: C:\Users\USER>cxfreeze
'cxfreeze' is not recognized as an internal or external command, operable program or batch file.)
(I also added the location of cxfreeze to "PATH" via environment variables)
Any help would be appreciated, thanks.
|
installing cx_Freeze to python at windows
| 33,244,651
| 0
| 26
| 27,553
| 0
|
python,batch-file,python-3.x,cx-freeze
|
Make sure the version of Python is correct. If you have more than one version on your computer, simply type "python" in the console to check which version runs. I just had this problem earlier.
| 0
| 1
| 0
| 0
|
2014-08-12T17:48:00.000
| 3
| 0
| false
| 25,270,885
| 1
| 0
| 0
| 2
|
I am using python 3.4 on Windows 8. I want to obtain an .exe program from python code. I learned that this can be done with cx_Freeze.
In the MS-DOS command line, I wrote pip install cx_Freeze to set up cx_Freeze. It is installed, but it is not working.
(When I write cxfreeze on the command line, I get this warning: C:\Users\USER>cxfreeze
'cxfreeze' is not recognized as an internal or external command, operable program or batch file.)
(I also added the location of cxfreeze to "PATH" via environment variables)
Any help would be appreciated, thanks.
|
How to launch a python process in Windows SYSTEM account
| 25,279,812
| 2
| 0
| 1,368
| 0
|
python,windows
|
Create a service that runs permanently.
Arrange for the service to have an IPC communications channel.
From your desktop python code, send messages to the service down that IPC channel. These messages specify the action to be taken by the service.
The service receives the message and performs the action. That is, executes the python code that the sender requests.
This allows you to decouple the service from the python code that it executes and so allows you to avoid repeatedly re-installing a service.
If you don't want to run in a service then you can use CreateProcessAsUser or similar APIs.
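For the IPC channel, one low-friction option is the standard multiprocessing.connection module; a rough sketch (the port and authkey are arbitrary, and perform() is a hypothetical dispatcher):
# inside the service process
from multiprocessing.connection import Listener

listener = Listener(('localhost', 6000), authkey=b'secret')
while True:
    conn = listener.accept()
    action = conn.recv()   # e.g. a string naming the python code to run
    perform(action)
    conn.close()
The desktop-side code then just connects with multiprocessing.connection.Client(('localhost', 6000), authkey=b'secret') and calls .send('run-test').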
| 0
| 1
| 0
| 0
|
2014-08-13T06:48:00.000
| 4
| 0.099668
| false
| 25,279,746
| 1
| 0
| 0
| 2
|
I am writing a test application in python and to test some particular scenario, I need to launch my python child process in windows SYSTEM account.
I can do this by creating an exe from my python script and then using that while creating a windows service. But this option is not good for me, because in the future if I change anything in my python script I will have to regenerate the exe every time.
If anybody have any better idea about how to do this then please let me know.
Bishnu
|
How to launch a python process in Windows SYSTEM account
| 25,281,143
| 0
| 0
| 1,368
| 0
|
python,windows
|
You could also use Windows Task Scheduler, it can run a script under SYSTEM account and its interface is easy (if you do not test too often :-) )
| 0
| 1
| 0
| 0
|
2014-08-13T06:48:00.000
| 4
| 0
| false
| 25,279,746
| 1
| 0
| 0
| 2
|
I am writing a test application in python and to test some particular scenario, I need to launch my python child process in windows SYSTEM account.
I can do this by creating an exe from my python script and then using that while creating a windows service. But this option is not good for me, because in the future if I change anything in my python script I will have to regenerate the exe every time.
If anybody have any better idea about how to do this then please let me know.
Bishnu
|
Install .desktop file with setup.py
| 25,284,972
| 1
| 8
| 1,893
| 0
|
python,linux,pip
|
This sounds to me like a good approach, but perhaps instead of placing the .desktop file in the system-wide /usr/share/applications/ folder, you could place the file in the user's applications folder at ~/.local/share/applications.
This would also not require elevated permissions to access the root-owned /usr directory and its sub-directories.
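Either way, the setup.py side of this might look like the sketch below (the package name and file are placeholders; relative paths in data_files are resolved against sys.prefix):
from setuptools import setup

setup(
    name='myapp',
    version='1.0',
    packages=['myapp'],
    data_files=[
        # lands in e.g. /usr/share/applications or <venv>/share/applications
        ('share/applications', ['myapp.desktop']),
    ],
)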
| 0
| 1
| 0
| 0
|
2014-08-13T11:25:00.000
| 2
| 1.2
| true
| 25,284,879
| 1
| 0
| 0
| 1
|
I have a python application that is supposed to be launchable via GUI so it has to have a .desktop file in /usr/share/applications/. The application only supports Linux. Normally, pip installs all files in one directory but it is possible to specify other locations (e.g. the .desktop file) in the setup.py using data_files=[].
Is this considered to be a good solution in this case, or is this something that should only happen in a distribution-specific package (like .rpm/.deb/.ebuild)?
|
Google App Engine 413 error (Request Entity Too Large)
| 25,311,367
| 4
| 3
| 5,558
| 0
|
python,google-app-engine,http-status-code-413
|
Looks like it was because I was making a GET request. Changing it to POST fixed it.
| 0
| 1
| 0
| 0
|
2014-08-14T03:40:00.000
| 1
| 1.2
| true
| 25,299,681
| 0
| 0
| 1
| 1
|
I've implemented an app engine server in Python for processing html documents sent to it. It's all well and good when I run it locally, but when running off the App engine, I get the following error:
"413. That’s an error. Your client issued a request that was too large. That’s all we know."
The request is only 155KB, and I thought the app engine request limit was 10MB. I've verified that I haven't exceeded any of the daily quotas, so anyone know what might be going on?
Thanks in advance!
-Saswat
|
python ftpclient limit connections
| 25,303,266
| 0
| 0
| 691
| 0
|
python,ftp,ftplib
|
It seems that it uses, per default, two connections (one for sending commands, one for data transfer?).
That's how ftp works. You have a control connection (usually port 21) for commands, and a data connection on a dynamic port for data transfer, file listings etc.
However my ftpserver only accepts one connection at any given time.
The ftp server might have a limit on multiple control connections, but it must still accept data connections. Could you please show, from tcpdump, wireshark, logfiles etc., why you think multiple connections are the problem?
In filezilla I'm able to "limit the maximum number of simultaneous connections"
This is for the number of control connections only. Does it work with filezilla? Because I doubt that ftplib opens multiple control connections.
| 0
| 1
| 1
| 0
|
2014-08-14T08:04:00.000
| 1
| 1.2
| true
| 25,302,979
| 0
| 0
| 0
| 1
|
I have a bit of a problem with ftplib from python. It seems that it uses, per default, two connections (one for sending commands, one for data transfer?). However my ftp server only accepts one connection at any given time. Since the only file that needs to be transferred is only about 1 MB large, the reasoning of being able to abort in-flight commands does not apply here.
Previously the same job was done by the windows commandline ftp client. So I could just call this client from python, but I would really prefer a complete python solution.
Is there a way to tell ftplib that it should limit itself to a single connection? In filezilla I'm able to "limit the maximum number of simultaneous connections"; ideally I would like to reproduce this functionality.
Thanks for your help.
|
Python RabbitMQ - consumer only seeing every second message
| 25,345,174
| 10
| 3
| 1,694
| 0
|
python,rabbitmq,amqp,pika
|
Your code is fine logically, and runs without issue on my machine. The behavior you're seeing suggests that you may have accidentally started two consumers, with each one grabbing a message off the queue, round-robin style. Try either killing the extra consumer (if you can find it), or rebooting.
| 0
| 1
| 0
| 1
|
2014-08-16T21:38:00.000
| 1
| 1.2
| true
| 25,344,239
| 0
| 0
| 1
| 1
|
I'm testing out a producer consumer example of RabbitMQ using Pika 0.98. My producer runs on my local PC, and the consumer runs on an EC2 instance at Amazon.
My producer sits in a loop and sends up some system properties every second. The problem is that I am only seeing the consumer read every 2nd message, it's as though every 2nd message is not being read. For example, my producer prints out this (timestamp, cpu pct used, RAM used):
2014-08-16 14:36:17.576000 -0700,16.0,8050806784
2014-08-16 14:36:18.578000 -0700,15.5,8064458752
2014-08-16 14:36:19.579000 -0700,15.0,8075313152
2014-08-16 14:36:20.580000 -0700,12.1,8074121216
2014-08-16 14:36:21.581000 -0700,16.0,8077778944
2014-08-16 14:36:22.582000 -0700,14.2,8075038720
but my consumer is printing out this:
Received '2014-08-16 14:36:17.576000 -0700,16.0,8050806784'
Received '2014-08-16 14:36:19.579000 -0700,15.0,8075313152'
Received '2014-08-16 14:36:21.581000 -0700,16.0,8077778944'
The code for the producer is:
import pika
import psutil
import time
import datetime
from dateutil.tz import tzlocal
import logging

logging.getLogger('pika').setLevel(logging.DEBUG)

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='54.191.161.213'))
channel = connection.channel()
channel.queue_declare(queue='ems.data')

while True:
    now = datetime.datetime.now(tzlocal())
    timestamp = now.strftime('%Y-%m-%d %H:%M:%S.%f %z')
    msg = "%s,%.1f,%d" % (timestamp, psutil.cpu_percent(), psutil.virtual_memory().used)
    channel.basic_publish(exchange='',
                          routing_key='ems.data',
                          body=msg)
    print msg
    time.sleep(1)

connection.close()
And the code for the consumer is:
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='0.0.0.0'))
channel = connection.channel()
channel.queue_declare(queue='hello')
print ' [*] Waiting for messages. To exit press CTRL+C'

def callback(ch, method, properties, body):
    print " [x] Received %r" % (body,)

channel.basic_consume(callback,
                      queue='hello',
                      no_ack=True)
channel.start_consuming()
|
sys.path vs. $PATH
| 64,768,294
| 1
| 6
| 1,684
| 0
|
python,bash,filesystems,sys
|
sys.path and PATH are two entirely different variables. The PATH environment variable specifies to your shell (or more precisely, the operating system's exec() family of system calls) where to look for binaries, whereas sys.path is a Python-internal variable which specifies where Python looks for installable modules.
The environment variable PYTHONPATH can be used to influence the value of sys.path if you set it before you start Python.
Conversely, os.environ['PATH'] can be used to examine the value of PATH from within Python (or any environment variable, really; just put its name inside the quotes instead of PATH).
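A quick sketch of both, plus the package-locating trick from the question (Package_X is of course the questioner's hypothetical package):
import os
import sys

print(os.environ['PATH'])  # the shell's executable search path
print(sys.path)            # Python's module search path - a different thing

import Package_X
print(os.path.dirname(Package_X.__file__))  # where the package actually lives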
| 0
| 1
| 0
| 1
|
2014-08-16T23:09:00.000
| 3
| 0.066568
| false
| 25,344,841
| 1
| 0
| 0
| 1
|
I would like to access the $PATH variable from inside a python program. My understanding so far is that sys.path gives the Python module search path, but what I want is $PATH the environment variable. Is there a way to access that from within Python?
To give a little more background, what I ultimately want to do is find out where a user has Package_X/ installed, so that I can find the absolute path of an html file in Package_X/. If this is a bad practice or if there is a better way to accomplish this, I would appreciate any suggestions. Thanks!
|
Call system command 'history' in Linux
| 25,350,907
| -1
| 0
| 227
| 0
|
python,linux
|
history is not an executable file, but a shell builtin. The sh that os.system spawns doesn't provide it, so you can't run it with os.system directly.
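If you just need the saved history rather than the builtin, one workaround is reading the history file directly - a sketch (note it only contains commands from already-ended bash sessions):
import os

with open(os.path.expanduser('~/.bash_history')) as f:
    for line in f:
        print(line.rstrip())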
| 0
| 1
| 0
| 0
|
2014-08-17T15:52:00.000
| 1
| -0.197375
| false
| 25,350,882
| 0
| 0
| 0
| 1
|
I am trying to use the os.system function to call the command 'history',
but stdout just shows 'sh: 1: history: not found'.
Other examples, e.g. os.system('ls'), work. Can anyone tell me why 'history' does not work, and how to call the 'history' command from a Python script?
|
Pip/Easy_install do not install desired package
| 25,352,852
| 1
| 0
| 97
| 0
|
python-2.7,module,pip,python-requests
|
Since the "python -m pip install -U pip" actually displayed something, on a hunch I tried:
"python -m pip install requests"
This worked! I don't know why none of the installation guides say to do this.
| 0
| 1
| 0
| 0
|
2014-08-17T19:24:00.000
| 1
| 1.2
| true
| 25,352,831
| 1
| 0
| 0
| 1
|
I am new to Python (2.7) but I am trying to run a program that requires the "requests" module. I have installed pip using the get-pip.py script and registered the Python27 and Python27/Scripts paths as environment variables.
When I run "python -m pip install -U pip" it says the package is already up-to-date.
Following installation guides, when I run "pip install requests" I get a new command prompt line. I tried "easy_install requests" and get the same thing. I tried "pip install --verbose requests" and have the same behavior (so much for being verbose!).
I am running on Windows Vista Ultimate, using the command prompt as administrator.
|
Pyinstaller add icon, launch without console for Mac
| 46,946,166
| 4
| 4
| 3,926
| 0
|
python,console,icons,pyinstaller
|
You must use a .icns icon file (e.g. group.icns) for an app on Mac OS.
| 0
| 1
| 0
| 0
|
2014-08-17T19:47:00.000
| 2
| 0.379949
| false
| 25,353,008
| 1
| 0
| 0
| 2
|
This is a really short question. I have created a package for Mac using Pyinstaller and I am mainly trying to add an icon to it. I am also trying to get the program to run without launching the terminal, as the user has no interaction with the terminal. Currently I am typing the following into cmd when running pyinstaller:
python pyinstaller.py --icon=group.ico --onefile --noconsole GESL_timetabler.py
I get the regular package (Unix executable) and an App. However, only the Unix executable works, and no processes run when I double-click the App.
Also, neither the App, nor the Unix Executable, has the icon image displayed. I am sure this is a trivial problem with my command to pyinstaller, but I am having difficulty figuring out the mistake. Could someone help me fix the instructions above? Thank you!
|
Pyinstaller add icon, launch without console for Mac
| 33,063,961
| 1
| 4
| 3,926
| 0
|
python,console,icons,pyinstaller
|
Try using --windowed instead. As far as I can tell they're the same thing, but it might do the trick.
As for icons, I've only gotten that to work on console windows. It just doesn't carry over to my main GUI window.
| 0
| 1
| 0
| 0
|
2014-08-17T19:47:00.000
| 2
| 0.099668
| false
| 25,353,008
| 1
| 0
| 0
| 2
|
This is a really short question. I have created a package for Mac using Pyinstaller and I am mainly trying to add an icon to it. I am also trying to get the program to run without launching the terminal, as the user has no interaction with the terminal. Currently I am typing the following into cmd when running pyinstaller:
python pyinstaller.py --icon=group.ico --onefile --noconsole GESL_timetabler.py
I get the regular package (Unix executable) and an App. However, only the Unix executable works, and no processes run when I double-click the App.
Also, neither the App, nor the Unix Executable, has the icon image displayed. I am sure this is a trivial problem with my command to pyinstaller, but I am having difficulty figuring out the mistake. Could someone help me fix the instructions above? Thank you!
|
print('\a') doesn't work on linux (no sound)
| 25,382,814
| 2
| 1
| 788
| 0
|
python,python-3.x
|
The code works, the problem is probably your terminal settings. Go there and find the settings for "bell" and make sure it's set to "audible" or whatever your system calls it (as opposed to "visual" or "disabled" etc.).
To prove that it isn't Python's fault, try pressing backspace at the terminal prompt when nothing has been typed on the line. This should make the bell ding on most systems where it is enabled.
| 0
| 1
| 0
| 0
|
2014-08-19T11:36:00.000
| 1
| 0.379949
| false
| 25,382,412
| 1
| 0
| 0
| 1
|
I have tried print('\a') just to produce a sound, but it didn't work. Why, and how can I make it work? I'm on a linux system.
|
BigQuery Api getQueryResults returning pageToken for 0 records
| 25,393,093
| 0
| 1
| 471
| 1
|
python,google-app-engine,google-bigquery
|
This is a known issue that has lingered for far far too long. It is fixed in this week's release, which should go live this afternoon or tomorrow.
| 0
| 1
| 0
| 0
|
2014-08-19T16:07:00.000
| 1
| 1.2
| true
| 25,388,124
| 0
| 0
| 1
| 1
|
We have a query which sometimes returns 0 records when called. When you call getQueryResults on the jobId, it returns a valid pageToken with 0 rows. This is a bit unexpected, since technically there is no data. What's worse, if you keep supplying the pageToken for subsequent data pulls it keeps giving zero rows, with valid tokens at each page.
If the query does return data initially with a pageToken and you keep using the pageToken for subsequent data pulls, it returns pageToken as None after the last page, giving a termination condition.
The behavior here seems inconsistent. Is this a bug?
Here is a sample job response:
{u'kind': u'bigquery#getQueryResultsResponse', u'jobReference': {u'projectId': u'xxx', u'jobId': u'job_aUAK1qlMkOhqPYxwj6p_HbIVhqY'}, u'cacheHit': True, u'jobComplete': True, u'totalRows': u'0', u'pageToken': u'CIDBB777777QOGQFBAABBAAE', u'etag': u'"vUqnlBof5LNyOIdb3TAcUeUweLc/6JrAdpn-kvulQHoSb7ImNUZ-NFM"', u'schema': {......}}
I am using python and running queries on GAE using the BQ api
|
can I use python to paste commands into LX terminal?
| 25,409,597
| 0
| 0
| 428
| 0
|
python,terminal,raspberry-pi,tesseract,raspbian
|
There are ways to do what you asked, but I think you lack some research of your own, as some of these answers are very "googlable".
You can print commands to LX terminal with python using "sys.stdout.write()"
For the boot question:
1 - sudo raspi-config
2 - change the Enable Boot to Desktop to Console
3 - there is more than one way to make your script auto-executable:
-you have the Crontab (which I think it will be the easiest, but probably not the best of the 3 ways)
-you can also make your own init.d script (best, not easiest)
-or you can use the rc.local
Also be careful when placing an infinite-loop script in auto-boot.
Make a quick google search and you will find everything you need.
Hope it helps.
D.Az
| 0
| 1
| 0
| 1
|
2014-08-20T02:22:00.000
| 1
| 0
| false
| 25,395,814
| 0
| 0
| 0
| 1
|
Okay, so for a school project I'm using a raspberry pi to make a device that basically holds both the functions of an OCR and a TTS. I heard that I need to use Google's tesseract through a terminal, but I am not willing to rewrite the commands each time I want to use it. So I was wondering if I could either:
A: Use python to print commands into the LX Terminal
B: Use a type of loop command on the LX terminal and save it as a script?
It would also be extremely helpful if I could find out how to make my RPI go straight to my script rather than the raspbian desktop when it first boots up.
Thanks in advance.
|
How to call methods of class tornado.httpserver.HTTPserver?
| 25,410,159
| 1
| 1
| 120
| 0
|
python,tornado
|
These methods are used internally; you shouldn't call them yourself.
| 0
| 1
| 0
| 0
|
2014-08-20T11:07:00.000
| 1
| 1.2
| true
| 25,403,160
| 0
| 0
| 0
| 1
|
I am learning the web framework Tornado. During the study of this framework, I found the class tornado.httpserver.HTTPserver. I know how to create a constructor of this class and create instance tornado.httpserver.HTTPserver in main() function. But this class tornado.httpserver.HTTPserver has 4 methods. I have not found how to use these methods.
1) def close_all_connections(self):
2) def handle_stream(self, stream, address):
3) def start_request(self, server_conn, request_conn):
4) def on_close(self, server_conn):
I know that 2-4 methods are inherited from the class tornado.tcpserver.TCPServer
Can someone illustrate how to use these methods of a class tornado.httpserver.HTTPserver?
|
In which design layer can I put jinja2?
| 25,407,418
| 1
| 0
| 54
| 0
|
python,google-app-engine,jinja2
|
These are very artificial distinctions, and it's a mistake to assume that all apps have each of these layers, or that any particular function will fit only into one of them.
Jinja2 is a template language. It's firmly in the presentation layer.
There isn't really any such thing as the data access layer. If you really need to put something here, one possibility would be whichever library you are using to access the data: ndb or the older db.
| 0
| 1
| 0
| 0
|
2014-08-20T14:20:00.000
| 1
| 1.2
| true
| 25,407,197
| 0
| 0
| 1
| 1
|
I'm new to Python/GAE and jinja2, and I want to present a schema of this architecture, displaying it in layers, like this:
Presentation Layer: HTML+CSS+JQUERY
Business Layer: webapp2
DAO Layer: (I don't know what to put here for Python; I found some examples for Java that put "JDO or low level API" here)
Data Layer: appengine DataStore
My questions:
Regarding jinja2, where can I put it?
What can I put in DAO layer for Python/GAE
Thanks
|
Prevent greenthread switch in eventlet
| 25,425,696
| 1
| 1
| 349
| 0
|
python,django,eventlet,green-threads
|
There is no such context manager, though you are welcome to contribute one.
You have monkey patched everything, but you do not want to monkey patch socket in memcache client. Your options:
monkey patch everything but socket, then patcher.import_patched particular modules. This is going to be very hard with Django/Tastypie.
modify your memcache client to use eventlet.patcher.original('socket') (sketched below)
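A rough sketch of the second option (the module and call sites on the memcache side are placeholders you'd adapt to your client's code):
import eventlet
eventlet.monkey_patch()  # everything green, as before

# grab the original, unpatched socket module for the memcache client
real_socket = eventlet.patcher.original('socket')
# then, in the (modified) memcache client:
#   sock = real_socket.socket(real_socket.AF_INET, real_socket.SOCK_STREAM)
# so its calls block natively instead of yielding to the hub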
| 0
| 1
| 0
| 0
|
2014-08-20T21:04:00.000
| 1
| 1.2
| true
| 25,414,394
| 0
| 0
| 1
| 1
|
I have a Django/Tastypie app where I've monkey patched everything with eventlet.
I analysed performance during load tests while using both sync and eventlet worker classes for gunicorn. I tested against sync workers to eliminate the effects of waiting for other greenthreads to switch back, and I found that the memcached calls in my throttling code only take about 1ms on their own. Rather than switch to another greenthread while waiting for this 1ms response, I'd rather just block at this one point. Is there some way to tell eventlet to not switch to another greenthread? Maybe a context manager or something?
|
Kill Python Multiprocessing Pool
| 25,415,676
| 24
| 22
| 10,314
| 0
|
python,linux,multiprocessing
|
SIGQUIT (Ctrl + \) will kill all processes even under Python 2.x.
You can also update to Python 3.x, where this behavior (only child gets the signal) seems to have been fixed.
| 0
| 1
| 0
| 0
|
2014-08-20T21:57:00.000
| 4
| 1.2
| true
| 25,415,104
| 1
| 0
| 0
| 2
|
I am running a Python program which uses the multiprocessing module to spawn some worker threads. Using Pool.map these digest a list of files.
At some point, I would like to stop everything and have the script die.
Normally Ctrl+C from the command line accomplishes this. But, in this instance, I think that just interrupts one of the workers and that a new worker is spawned.
So, I end up running ps aux | grep -i python and using kill -9 on the process ids in question.
Is there a better way to have the interrupt signal bring everything to a grinding halt?
|
Kill Python Multiprocessing Pool
| 25,415,725
| 0
| 22
| 10,314
| 0
|
python,linux,multiprocessing
|
I found that using the python signal library works pretty well in this case. When you initialize the pool, you can pass a signal handler to each thread to set a default behavior when the main thread gets a keyboard interrupt.
If you really just want everything to die, catch the keyboard interrupt exception in the main thread, and call pool.terminate().
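A sketch of that pattern (the file list and worker body are placeholders); the map_async(...).get(timeout) dance is the usual workaround because, on Python 2, a bare pool.map swallows KeyboardInterrupt:
import signal
import multiprocessing

def init_worker():
    # children ignore Ctrl+C; only the parent reacts to it
    signal.signal(signal.SIGINT, signal.SIG_IGN)

def digest(path):
    return len(open(path).read())  # stand-in for the real work

if __name__ == '__main__':
    pool = multiprocessing.Pool(4, initializer=init_worker)
    try:
        results = pool.map_async(digest, ['a.txt', 'b.txt']).get(999999)
    except KeyboardInterrupt:
        pool.terminate()  # kill every worker immediately
        pool.join()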
| 0
| 1
| 0
| 0
|
2014-08-20T21:57:00.000
| 4
| 0
| false
| 25,415,104
| 1
| 0
| 0
| 2
|
I am running a Python program which uses the multiprocessing module to spawn some worker threads. Using Pool.map these digest a list of files.
At some point, I would like to stop everything and have the script die.
Normally Ctrl+C from the command line accomplishes this. But, in this instance, I think that just interrupts one of the workers and that a new worker is spawned.
So, I end up running ps aux | grep -i python and using kill -9 on the process ids in question.
Is there a better way to have the interrupt signal bring everything to a grinding halt?
|
How to connect to kivy-remote-shell?
| 41,788,451
| 0
| 1
| 1,540
| 0
|
android,python,ssh,kivy
|
I don't know whether you found the answer or not, but what I have understood is that you are trying to connect to an android device from Ubuntu. If I am right then (go on reading) you are following the wrong steps.
First: Your Ubuntu does not have an ssh server by default, so you get this error message.
Second: You are using the 127.0.0.1 address, i.e. your Ubuntu machine itself.
The method to do this should be:
Give your android machine a static address, or if it gets a dynamic one that's OK too.
Find out the IP address of the android device, then from Ubuntu type ssh -p8000 admin@IP_of_android_device - this should solve the issue.
| 1
| 1
| 0
| 0
|
2014-08-21T06:23:00.000
| 3
| 0
| false
| 25,419,510
| 0
| 0
| 0
| 3
|
This seems to be a dumb question, but how do I ssh into the kivy-remote-shell?
I'm trying to use buildozer and seem to be able to get the application built and deployed with the command, buildozer -v android debug deploy run, which ends with the application being pushed, and displayed on my android phone, connected via USB.
However, when I try ssh -p8000 admin@127.0.0.1 from a terminal on the ubuntu machine I pushed the app from I get Connection Refused.
It seems to me that there should be a process on the host (ubuntu) machine in order to proxy the connection, or maybe I just don't see how this works?
Am I missing something simple, or do I need to dig in a debug a bit more?
|
How to connect to kivy-remote-shell?
| 25,423,631
| 1
| 1
| 1,540
| 0
|
android,python,ssh,kivy
|
127.0.0.1
This indicates something has gone wrong - 127.0.0.1 is a standard loopback address that simply refers to localhost, i.e. it's trying to ssh into your current computer.
If this is the ip address suggested by kivy-remote-shell then there must be some other problem, though I don't know what - does it work on another device?
| 1
| 1
| 0
| 0
|
2014-08-21T06:23:00.000
| 3
| 0.066568
| false
| 25,419,510
| 0
| 0
| 0
| 3
|
This seems to be a dumb question, but how do I ssh into the kivy-remote-shell?
I'm trying to use buildozer and seem to be able to get the application built and deployed with the command, buildozer -v android debug deploy run, which ends with the application being pushed, and displayed on my android phone, connected via USB.
However, when I try ssh -p8000 admin@127.0.0.1 from a terminal on the ubuntu machine I pushed the app from I get Connection Refused.
It seems to me that there should be a process on the host (ubuntu) machine in order to proxy the connection, or maybe I just don't see how this works?
Am I missing something simple, or do I need to dig in a debug a bit more?
|
How to connect to kivy-remote-shell?
| 25,426,085
| 2
| 1
| 1,540
| 0
|
android,python,ssh,kivy
|
When the app is running, the GUI will tell you what IP address and port to connect to.
| 1
| 1
| 0
| 0
|
2014-08-21T06:23:00.000
| 3
| 0.132549
| false
| 25,419,510
| 0
| 0
| 0
| 3
|
This seems to be a dumb question, but how do I ssh into the kivy-remote-shell?
I'm trying to use buildozer and seem to be able to get the application built and deployed with the command, buildozer -v android debug deploy run, which ends with the application being pushed, and displayed on my android phone, connected via USB.
However, when I try ssh -p8000 admin@127.0.0.1 from a terminal on the ubuntu machine I pushed the app from I get Connection Refused.
It seems to me that there should be a process on the host (ubuntu) machine in order to proxy the connection, or maybe I just don't see how this works?
Am I missing something simple, or do I need to dig in a debug a bit more?
|
Running an .exe script from a VB script by passing arguments during runtime
| 25,442,205
| 1
| 1
| 185
| 0
|
python,vb.net,arguments
|
If you don't have the source to the Python-to-exe converter, and if the arguments don't need to change on each execution, you could probably open the exe in a debugger like OllyDbg, search for ShellExecute or CreateProcess, then create a string in a code cave and use that for the arguments. I think that's your only option.
Another idea: make your own extractor that bundles the Python script, the VBScript, and the Python interpreter. You could just use a 7-Zip SFX or something similar.
| 0
| 1
| 0
| 0
|
2014-08-22T06:22:00.000
| 2
| 1.2
| true
| 25,440,747
| 1
| 0
| 0
| 1
|
I have converted a Python script to an .exe file. I just want to run the exe from a VB script. The problem is that the Python script accepts arguments at run time (e.g. serial port number, baud rate, etc.), and I cannot do the same with the .exe file. Can someone help me figure out how to proceed?
|
anaconda launcher links don't work
| 25,669,586
| 0
| 2
| 14,710
| 0
|
python,python-2.7,osx-mavericks,ipython-notebook,anaconda
|
Use the full Anaconda suite, which includes all the tools and necessary packages; it works fine for me. I didn't use the Launcher!
| 0
| 1
| 0
| 0
|
2014-08-24T14:42:00.000
| 4
| 0
| false
| 25,472,840
| 1
| 0
| 0
| 1
|
I've installed Anaconda on Mavericks OS X. When I try to install IPython Notebook from the Launcher app, it shows a message that the app is installing, but nothing happens afterwards. The links in the Launcher don't work either, yet I can easily start IPython Notebook from the terminal. So I guess something is wrong with the Launcher itself.
How can I fix it?
|
set `ulimit -c` from outside shell
| 25,475,976
| 1
| 7
| 7,676
| 0
|
python,bash,shell,ulimit
|
I'm guessing your problem is that you haven't realized that rlimits are set per process. If you use os.system in Python to call ulimit, that only sets the ulimit in the newly spawned shell process, which then immediately exits, after which nothing has changed.
What you need to do, instead, is to run ulimit in the shell that starts your program. The process your program is running in will then inherit that rlimit from the shell.
I do not think there is any way to alter the rlimit of process X from process Y, where X != Y.
EDIT: I'll have to take that last back, at least in case you're running in Linux. There is a Linux-specific syscall prlimit that allows you to change the rlimits of a different process, and it also does appear to be available in Python's resource module, though it is undocumented there. See the manpage prlimit(2) instead; I'd assume that the function available in Python uses the same arguments.
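As a minimal sketch of the per-process approach, assuming the limit should be raised from inside the program itself at startup (child processes inherit it):

import resource

# raise the soft core-dump limit up to the current hard limit;
# raising the hard limit itself would need extra privileges
soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))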
| 0
| 1
| 0
| 0
|
2014-08-24T20:15:00.000
| 3
| 0.066568
| false
| 25,475,906
| 0
| 0
| 0
| 1
|
I have a program that's running automatically on boot, and sporadically causing a coredump.
I'd like to record the output, but I can't seem to set ulimit -c programmatically (it defaults to 0 and resets every time).
I've tried using a bash script, as well as python's sh, os.system and subprocess, but I can't get it to work.
|
Writing a line to CMD in python
| 25,490,166
| 2
| 0
| 727
| 0
|
python,cmd
|
Each call to os.system is a separate instance of the shell. The cd you issued only had effect in the first instance of the shell. The second call to os.system was a new shell instance that started in the Python program's current working directory, which was not affected by the first cd invocation.
Some ways to do what you want:
1 -- put all the relevant commands in a single bash file and execute that via os.system
2 -- skip the cd call; just invoke your tesseract command using a full path to the file
3 -- change the directory for the Python program as a whole using os.chdir but this is probably not the right way -- your Python program as a whole (especially if running in a web app framework like Django or web2py) may have strong feelings about the current working directory.
The main takeaway is, os.system calls don't change the execution environment of the current Python program. It's equivalent to what would happen if you created a sub-shell at the command line, issued one command then exited. Some commands (like creating files or directories) have permanent effect. Others (like changing directories or setting environment variables) don't.
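As a sketch of option 2 using the subprocess module instead (assuming tesseract is on your PATH and using the data folder path from the question), the cwd argument avoids the cd entirely:

import subprocess

# run tesseract with the data folder as the child's working directory;
# cwd affects only this child process, not the Python program itself
workdir = "C:\\Users\\User\\Desktop\\Folder\\data"
subprocess.call(["tesseract", "a.png", "out"], cwd=workdir)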
| 0
| 1
| 0
| 0
|
2014-08-25T15:35:00.000
| 1
| 1.2
| true
| 25,489,450
| 1
| 0
| 0
| 1
|
I am very new to Python and I have been trying to find a way to write in cmd with python.
I tried os.system and subprocess too. But I am not sure how to use subprocess.
While using os.system(), I got an error saying that the file specified cannot be found.
This is what I am trying to write in cmd os.system('cd '+path+'tesseract '+'a.png out')
I have tried searching Google but still I don't understand how to use subprocess.
EDIT:
It's not a problem with Python anymore; I have figured that part out. Here is my code now.
os.system("cd C:\\Users\\User\\Desktop\\Folder\\data\\")
os.system("tesseract a.png out")
Now it says the file cannot be opened. But if I open cmd separately and run the same commands, it successfully creates a file in Folder\data.
|
Popen() does not redirect output from Task Scheduler task
| 25,511,407
| 0
| 0
| 95
| 0
|
windows,python-2.7,popen,py2exe
|
The error was caused by registering the task to run as a different user. The task was registered as the "SYSTEM" user, which could not access the remote files used by the Popen() call. The solution was to run the task as another user, or to run it as the "SYSTEM" user on the machine where the remote files are located.
| 0
| 1
| 0
| 0
|
2014-08-25T18:55:00.000
| 1
| 0
| false
| 25,492,551
| 0
| 0
| 0
| 1
|
I am writing a wrapper program to add features to a Windows command-line tool. This program is made to run on a Windows server with py2exe. In it, there are several lines which look like:
job = subprocess.Popen(syncCommand, stdout=myLog, stderr=myLog)
When I call this program from the command line, everything works fine; however, I wish to automate this script using Windows Task Scheduler (per a request from my manager). When I register the executable and attempt to run it as a scheduled task, the logs are touched but never populated with the output that would normally be returned. I am unsure where precisely the error occurs, as running it from the Task Scheduler does not open a terminal window in which to view debugging messages.
Note that in previous versions, which used os.system() instead of subprocess.Popen(), everything worked as it should have.
Is there an obvious obstacle/problem with this method? Is there a compelling reason to use Popen() instead of system()?
|
Display a constantly updated text file in a web user interface using Python flask framework
| 25,496,797
| 0
| 0
| 268
| 0
|
python,shell,flask
|
The one way I can think of doing this is to refresh the page: you could set the page to refresh itself every X seconds.
You would hope that the file you are reading is not large, though, or it will impact performance; it is better to keep the output in memory.
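A minimal sketch of that approach with Flask, assuming the shell output is redirected to a hypothetical log file at LOG_PATH; the meta-refresh tag reloads the page every 5 seconds:

import cgi
from flask import Flask

app = Flask(__name__)
LOG_PATH = "/tmp/script_output.log"  # hypothetical: wherever you redirect the script's output

@app.route("/log")
def show_log():
    try:
        with open(LOG_PATH) as f:
            text = f.read()
    except IOError:
        text = "(no output yet)"
    # escape the raw log text and auto-refresh the page every 5 seconds
    return ("<html><head><meta http-equiv='refresh' content='5'></head>"
            "<body><pre>" + cgi.escape(text) + "</pre></body></html>")

if __name__ == "__main__":
    app.run()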
| 0
| 1
| 0
| 0
|
2014-08-25T22:40:00.000
| 1
| 0
| false
| 25,495,410
| 0
| 0
| 1
| 1
|
In my project workflow, I am invoking a sh script from a Python script. I am planning to introduce a web user interface for this, hence I opted for the Flask framework. I have yet to figure out how to display the terminal output of the shell script invoked by my Python script in a component like a text area or label. The output is a log file which is constantly updated until the script run is completed.
The solution I thought of was to redirect the terminal output to a text file, read the text file every X seconds, and display its content. I could also do it the Ajax way from my web application. Is there any other prescribed way to achieve this?
Thanks
|
How do I install Python 2.7 modules on Windows 64-Bit?
| 25,495,917
| 2
| 0
| 1,791
| 0
|
python,windows,python-2.7,module,pip
|
It doesn't matter what kind of machine you have. You can run 32-bit Windows on a 64-bit machine. And you can run 32-bit Python on 64-bit Windows.
If you have 32-bit Python, you need to install 32-bit pip. (Or you need to switch to 64-bit Python.)
From your description, you most likely have 32-bit Python on 64-bit Windows, and tried to use a 64-bit pip.
PS, if you want to install it manually instead of using Gohlke's installer, nobody can help you debug your problem based on "it says it failed to install". It produces a lot more output than that, and without that output, it's impossible to know which of the billion things that could possibly go wrong actually did.
PPS, just installing pip is sufficient to install any pure-Python packages. But if you want to install packages that include C extensions, you will need to set up a compiler (either MSVC, or MinGW/gcc), as explained in the pip documentation.
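As an aside, the official installer is normally run as python get-pip.py (not get-pip.py install), and you can confirm whether your interpreter is 32- or 64-bit with this snippet:

import struct

# pointer size in bits: prints 32 for a 32-bit Python, 64 for a 64-bit one
print(struct.calcsize("P") * 8)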
| 0
| 1
| 0
| 0
|
2014-08-25T23:13:00.000
| 1
| 0.379949
| false
| 25,495,712
| 1
| 0
| 0
| 1
|
I've been looking around the internet for days now and cannot find a solution to my problem. I've learned all the basics of programming in Python 2.7 and I want to add pip to my copy of 2.7. I found the link to download the unofficial 64-bit installer (www.lfd.uci.edu/~gohlke/pythonlibs/), but when I downloaded and ran it, it said I needed Python 2.7 (which I do have) and that it couldn't find it in the registry. I went to pip's website, downloaded the official Windows installer, and unpacked it using WinRAR.
I then tried opening Command Prompt, changing the directory to where get-pip.py is located, and running get-pip.py install, but it says it failed to install.
I am completely lost and really need detailed and clear help. Please answer!
|
Debug C-library from Python (ctypes)
| 25,719,883
| 6
| 13
| 5,237
| 0
|
python,eclipse,debugging,shared-libraries,ctypes
|
Actually, it is a fairly simple thing to do using the CDT and PyDev environments in Eclipse.
I assume here that you have already configured the projects correctly, so you can build and debug each one separately.
Basically, you simply need to start the Python project in Debug mode and then to attach the CDT debugger to the running python process. To make it easier I'll try to describe it step by step:
Run your Python project in debug mode. Put a breakpoint somewhere after the loading of the dll using ctypes. Make note of the pid of the Python process created (you should see a first line in the console view stating the pid, something like: pydev debugger: starting (pid: 1234))
Create a Debug configuration for your CDT project, choosing the type "C/C++ Attach to Application". You can use the default configuration.
Debug your project using the configuration you've created. A window should appear, asking you which process you want to attach to. Choose the python process having the right pid.
You can now add breakpoints to you C code.
You'll have two debuggers in the debug perspective, as if they were two different processes. You should always make sure the C/C++ debugging session is running when you work with the python debugger - as long as the C/C++ debugging session is suspended, the python debugger will be unresponsive.
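For context, a minimal sketch of the Python side, assuming a hypothetical libmylib.so with an exported compute() function (on Windows this would be a .dll loaded the same way):

import ctypes

# load the shared library built by the CDT project (hypothetical name)
lib = ctypes.CDLL("./libmylib.so")
lib.compute.argtypes = [ctypes.c_int]
lib.compute.restype = ctypes.c_int

# put the PyDev breakpoint here: the DLL is now loaded, so you can
# attach the CDT debugger to this process before the C call below
result = lib.compute(42)
print(result)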
| 0
| 1
| 0
| 0
|
2014-08-26T14:09:00.000
| 2
| 1.2
| true
| 25,507,911
| 1
| 0
| 0
| 1
|
I have a Python program that uses ctypes and a C shared library (a dll file). As an IDE I am using Eclipse, where both projects are developed (the C shared library and the Python program that uses it).
My idea is: when I start the Python program in debug mode, can I somehow debug the shared library, which is written in C, too? Meaning: can I set breakpoints, and when the Python program reaches a breakpoint in the shared library, execution stops and I can change variable values, etc.?
|