| Title (string, length 15 to 150) | A_Id (int64, 2.98k to 72.4M) | Users Score (int64, -17 to 470) | Q_Score (int64, 0 to 5.69k) | ViewCount (int64, 18 to 4.06M) | Database and SQL (int64, 0 to 1) | Tags (string, length 6 to 105) | Answer (string, length 11 to 6.38k) | GUI and Desktop Applications (int64, 0 to 1) | System Administration and DevOps (int64, 1 to 1) | Networking and APIs (int64, 0 to 1) | Other (int64, 0 to 1) | CreationDate (string, length 23 to 23) | AnswerCount (int64, 1 to 64) | Score (float64, -1 to 1.2) | is_accepted (bool, 2 classes) | Q_Id (int64, 1.85k to 44.1M) | Python Basics and Environment (int64, 0 to 1) | Data Science and Machine Learning (int64, 0 to 1) | Web Development (int64, 0 to 1) | Available Count (int64, 1 to 17) | Question (string, length 41 to 29k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
search sequence in genome with mismatches
| 16,360,951
| 0
| 1
| 979
| 0
|
python,perl,awk,biopython,bioperl
|
I think you should consider using an alignment tool designed for this data for a couple of reasons:
Those tools will also find reverse complemented matches (though, you could also implement this).
Aligners will properly handle paired-end reads and multiple matches.
Most aligners are written in C and use data structures and algorithms designed for this amount of data.
For those reasons, and others, any script you come up with will likely not be near as fast and complete as the tools that already exist. If you want to specify the number of mismatches to keep, instead of aligning all your reads and then parsing the output, you could use Vmatch if you have access to it (this tool is very fast and good for many matching tasks).
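To make the trade-off concrete, the naive scan the question asks for is easy to sketch in Python; this is a hedged baseline (function and names are mine, not from the question), and it will be orders of magnitude slower on 100 million reads than the aligners mentioned above:

```python
# Naive baseline for the matching described in the question: slide the query
# along the genome and count character mismatches (Hamming distance) at each
# offset, keeping hits with at most 3 mismatches.

def search_with_mismatches(genome, query, max_mismatches=3):
    """Yield (position, mismatch_count) for every window of `genome`
    within `max_mismatches` of `query`."""
    k = len(query)
    for i in range(len(genome) - k + 1):
        mismatches = sum(1 for a, b in zip(genome[i:i + k], query) if a != b)
        if mismatches <= max_mismatches:
            yield i, mismatches

# quick check on a toy genome
print(list(search_with_mismatches("ACGTACGTAA", "ACGT", max_mismatches=0)))
# → [(0, 0), (4, 0)]
```

Reading the fastq with `awk 'NR%4==2'` as shown below and feeding each line through this function gives the requested output format, but a C aligner (or Vmatch) remains the right tool at this scale.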
| 0
| 1
| 0
| 0
|
2013-05-02T17:17:00.000
| 2
| 0
| false
| 16,343,985
| 0
| 0
| 0
| 1
|
I have a fastq file with more than 100 million reads in it and a genome sequence 10000 in length.
I want to take the sequences out of the fastq file and search the genome sequence, allowing 3 mismatches.
I tried this way using awk and got the sequences from the fastq file:
1.fq(few lines)
@DH1DQQN1:269:C1UKCACXX:1:1101:1207:2171 1:N:0:TTAGGC
NATCCCCATCCTCTGCTTGCTTTTCGGGATATGTTGTAGGATTCTCAGC
+
1=ADBDDHD;F>GF@FFEFGGGIAEEI?D9DDHHIGAAF:BG39?BB
@DH1DQQN1:269:C1UKCACXX:1:1101:1095:2217 1:N:0:TTAGGC
TAGGATTTCAAATGGGTCGAGGTGGTCCGTTAGGTATAGGGGCAACAGG
+
??AABDD4C:DDDI+C:C3@:C):1?*):?)?################
$ awk 'NR%4==2' 1.fq
NATCCCCATCCTCTGCTTGCTTTTCGGGATATGTTGTAGGATTCTCAGC
TAGGATTTCAAATGGGTCGAGGTGGTCCGTTAGGTATAGGGGCAACAGG
I have all the sequences in a file; now I want to take each line of sequence, search the genome sequence allowing 3 mismatches, and print the sequences that are found.
example:
genome sequence file:
GGGGAGGAATATGATTTACAGTTTATTTTTCAACTGTGCAAAATAACCTTAACTGCAGACGTTATGACATACATACATTCTATGAATTCCACTATTTTGGAGGACTGGAATTTTGGTCTACAACCTCCCCCAGGAGGCACACTAGAAGATACTTATAGGTTTGTAACCCAGGCAATTGCTTGTCAAAAACATACA
search sequence file:
GGGGAGGAATATGAT
GGGGAGGAATATGAA
GGGGAGGAATATGCC
TCAAAAACATAGG
TCAAAAACATGGG
OUTPUT FILE:
GGGGAGGAATATGAT 0 # 0 mismatch exact sequence
GGGGAGGAATATGAA 1 # 1 mismatch
GGGGAGGAATATGCC 2 # 2 mismatch
TCAAAAACATAGG 2 # 2 mismatch
TCAAAAACATGGG 3 # 3 mismatch
|
How do I receive and manage multiple TCP connections on the same port?
| 16,352,065
| 0
| 0
| 1,496
| 0
|
python,networking,tcp,client-server
|
Use select.select() to detect events on multiple sockets, like incoming connections, incoming data, outgoing buffer capacity and connection errors. You can use this on multiple listening sockets and on established connections from a single thread. Using a websearch, you can surely find example code.
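A minimal sketch of that approach, under stated assumptions: the port 10800 comes from the question, and the echo behavior is a placeholder for real per-client logic.

```python
# One listening socket plus any number of accepted connections, all
# multiplexed from a single thread with select.select().

import select
import socket

def serve_once(listener, connections, timeout=1.0):
    """One select() pass: accept new clients, echo incoming data,
    drop closed connections. Returns the updated connection list."""
    readable, _, errored = select.select(
        [listener] + connections, [], connections, timeout)
    for sock in readable:
        if sock is listener:
            conn, _addr = listener.accept()      # new incoming connection
            connections.append(conn)
        else:
            data = sock.recv(4096)
            if data:
                sock.sendall(data)               # placeholder: echo it back
            else:                                # peer closed the connection
                connections.remove(sock)
                sock.close()
    for sock in errored:                         # connection errors
        if sock in connections:
            connections.remove(sock)
            sock.close()
    return connections

def run_server(port=10800):
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", port))
    listener.listen(5)
    connections = []
    while True:
        connections = serve_once(listener, connections)
```

The third argument to select() (the exceptional list) is where connection errors surface; everything else is ordinary readiness handling.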
| 0
| 1
| 1
| 0
|
2013-05-03T03:50:00.000
| 2
| 0
| false
| 16,351,298
| 0
| 0
| 0
| 2
|
I have a number of clients who need to connect to a server and maintain the connection for some time (around 4 hours). I don't want to specify a different connection port for each client (as there are potentially many of them) I would like them just to be able to connect to the server on a specific predetermined port e.g., 10800 and have the server accept and maintain the connection but still be able to receive other connections from new clients. Is there a way to do this in Python or do I need to re-think the architecture.
EXTRA CREDIT: A Python snippet of the server code doing this would be amazing!
|
How do I receive and manage multiple TCP connections on the same port?
| 16,354,390
| 1
| 0
| 1,496
| 0
|
python,networking,tcp,client-server
|
I don't want to specify a different connection port for each client (as there are potentially many of them)
You don't need that.
I would like them just to be able to connect to the server on a specific predetermined port e.g., 10800 and have the server accept and maintain the connection but still be able to receive other connections from new clients
That's how TCP already works.
Just create a socket listening to port 10800 and accept connections from it.
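Since the question explicitly asks for a snippet, here is a hedged minimal version of exactly that: one listening socket on the predetermined port, an accept loop, and one thread per client (the echo handler is a stand-in for whatever the clients actually do):

```python
import socket
import threading

def handle(conn, addr):
    """Serve one client until it disconnects."""
    try:
        while True:
            data = conn.recv(4096)
            if not data:            # client closed the connection
                break
            conn.sendall(data)      # placeholder: echo back whatever arrived
    finally:
        conn.close()

def accept_forever(listener):
    """Accept clients on the single listening port, one thread each."""
    while True:
        conn, addr = listener.accept()
        t = threading.Thread(target=handle, args=(conn, addr))
        t.daemon = True             # don't block interpreter shutdown
        t.start()

def run_server(port=10800):
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", port))       # one predetermined port for all clients
    listener.listen(5)
    accept_forever(listener)
```

Each accepted connection gets its own socket object, so many clients can stay connected for hours while the listener keeps accepting new ones.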
| 0
| 1
| 1
| 0
|
2013-05-03T03:50:00.000
| 2
| 1.2
| true
| 16,351,298
| 0
| 0
| 0
| 2
|
I have a number of clients who need to connect to a server and maintain the connection for some time (around 4 hours). I don't want to specify a different connection port for each client (as there are potentially many of them) I would like them just to be able to connect to the server on a specific predetermined port e.g., 10800 and have the server accept and maintain the connection but still be able to receive other connections from new clients. Is there a way to do this in Python or do I need to re-think the architecture.
EXTRA CREDIT: A Python snippet of the server code doing this would be amazing!
|
Consuming from queues based upon external event (event queues)
| 18,451,136
| 0
| 0
| 689
| 0
|
python,erlang,rabbitmq,celery,eventqueue
|
Two more exotic options to consider: (1) define a custom exchange type in the Rabbit layer; this allows you to create routing rules that control which tasks are sent to which queues. (2) define a custom Celery mediator; this allows you to control which tasks move, and when, from queues to worker pools.
| 0
| 1
| 0
| 0
|
2013-05-03T21:40:00.000
| 2
| 0
| false
| 16,367,953
| 0
| 0
| 1
| 1
|
I am running into a use case where I would like to have control over how and when celery workers dequeue a task for processing from rabbitmq. Dequeuing will be synchronized with an external event that happens out of celery context, but my concern is whether celery gives me any flexibility to control dequeueing of tasks? I tried to investigate and below are a few possibilities:
Make use of basic.get instead of basic.consume, where basic.get is triggered based upon external event. However, I see celery defaults to basic.consume (push) semantics. Can I override this behavior without modifying the core directly?
Custom remote control the workers as and when the external event is triggered. However, from the docs it isn't very clear to me how remote control commands can help me to control dequeueing of the tasks.
I am very much inclined to continue using celery and possibly keep away from writing a custom queue processing solution on top of AMQP.
|
GAE: Logs from tasks do not appear in dashboard
| 16,476,151
| 0
| 0
| 63
| 0
|
python,google-app-engine
|
Is your application running on the appspot.com domain or your own custom domain? In the former case it should work without you specifying the target. In the case of a custom domain, we are aware of problems with this scenario. Please file a bug in either case.
| 0
| 1
| 0
| 0
|
2013-05-04T01:23:00.000
| 2
| 0
| false
| 16,369,685
| 0
| 0
| 1
| 1
|
I'm working with the Google App Engine Tasks Queue feature (Push).
Locally, with the dev server, everything is working fine, but once deployed my task fails.
I have put logs in it (the logging Python module) but they do not appear in my dashboard logs.
Is there anything to do to make it work?
Thanks for your help.
|
How to setup Git to deploy python app files into Ubuntu Server?
| 16,375,343
| 0
| 1
| 1,205
| 0
|
python,windows,git,ubuntu
|
Create a bare repository on your server.
Configure your local repository to use the repository on the server as a remote.
When working on your local workstation, commit your changes and push them to the repository on your server.
Create a post-receive hook in the server repository that calls "git archive" and thus transfers your files to some other directory on the server.
| 0
| 1
| 0
| 1
|
2013-05-04T03:28:00.000
| 2
| 0
| false
| 16,370,283
| 0
| 0
| 0
| 1
|
I set up a new Ubuntu 12.10 server on VPN hosting. I have installed all the required software like Nginx, Python, MySQL etc. I am configuring this to deploy a Flask + Python app using uWSGI. It's working fine.
But to create a basic app I used the PuTTY tool (from Windows) and created the required app .py files.
But I want to set up Git so that I can push my code to the required directory, say /var/www/mysite.com/app_data, so that I don't have to use SSH or FileZilla etc. every time I make some changes to my website.
Since I use both Ubuntu & Windows for development of the app, setting up Git-style functionality would help me push or change my data easily on my cloud server.
How can I set up this Git functionality in Ubuntu? And how could I access it and deploy data using tools like Git Bash etc.?
Please suggest.
|
Using Python3 with Pymongo in Eclipse Pydev on Ubuntu
| 16,379,374
| 1
| 0
| 1,303
| 0
|
python,ubuntu,pydev,pymongo
|
You can install packages for a specific version of Python; all you need to do is invoke the version of Python you want from the command line, e.g. python2.7 or python3.
Examples:
python3 -m pip install your_package
python3 -m easy_install your_package
| 0
| 1
| 0
| 1
|
2013-05-04T21:52:00.000
| 1
| 1.2
| true
| 16,379,321
| 1
| 0
| 0
| 1
|
I am currently trying to run PyDev with PyMongo on a Python 3.3 interpreter.
My problem is, I am not able to get it working :-/
First of all I installed Eclipse with Pydev.
Afterwards I tried installing pip to download my Pymongo-Module.
Problem is: it always installs pip for the default 2.7 Version.
I read that you shouldn't change the default system interpreter (running on Lubuntu 13.04 32-bit), so I tried to install a second Python 3.3 and run it in a virtual environment, but I can't find any detailed information on how to apply everything to my specific problem.
Maybe there is someone out there, that uses a similar configuration and can help me out to get everything running (in a simple way) ?
Thanks in advance,
Eric
|
Choosing Python3.3 interpreter in Eclipse problems
| 19,039,512
| 0
| 1
| 986
| 0
|
eclipse,python-3.x,settings,pydev,interpreter
|
Use the auto-config option. It will automatically find the libraries.
| 0
| 1
| 0
| 1
|
2013-05-05T08:30:00.000
| 2
| 0
| false
| 16,382,769
| 0
| 0
| 0
| 2
|
I am new to Eclipse & PyDev (on Ubuntu 13.04) and want to try Python3.3 programming.
But I cannot choose the python3.3 interpreter. I try to choose it in /usr/lib/python3.3, but:
- when I try to choose PYTHONPATH by clicking "New folder", the window doesn't open (I can do it only after choosing auto-config, which will add python2.7 paths);
- I don't know which file in /usr/lib/python3.3 I need to choose as the python3.3 interpreter (auto-config returns me only 2.7 objects).
Can you advise me how to choose the python3.3 interpreter (maybe the main thing is the file/path I need to choose in /usr/lib/python3.3 as the interpreter file; in Eclipse on Windows I see python3.3.exe, and I need to find its equivalent in Ubuntu, I think)?
Thanks!
|
Choosing Python3.3 interpreter in Eclipse problems
| 19,208,342
| 0
| 1
| 986
| 0
|
eclipse,python-3.x,settings,pydev,interpreter
|
You set the path usr\lib\python3.3 by typing it directly into the 'Interpreter Executable' field; you don't have to search for the interpreter file. This will do the auto-config for you. Afterwards you declare a name and you're done.
| 0
| 1
| 0
| 1
|
2013-05-05T08:30:00.000
| 2
| 0
| false
| 16,382,769
| 0
| 0
| 0
| 2
|
I am new to Eclipse & PyDev (on Ubuntu 13.04) and want to try Python3.3 programming.
But I cannot choose the python3.3 interpreter. I try to choose it in /usr/lib/python3.3, but:
- when I try to choose PYTHONPATH by clicking "New folder", the window doesn't open (I can do it only after choosing auto-config, which will add python2.7 paths);
- I don't know which file in /usr/lib/python3.3 I need to choose as the python3.3 interpreter (auto-config returns me only 2.7 objects).
Can you advise me how to choose the python3.3 interpreter (maybe the main thing is the file/path I need to choose in /usr/lib/python3.3 as the interpreter file; in Eclipse on Windows I see python3.3.exe, and I need to find its equivalent in Ubuntu, I think)?
Thanks!
|
Python & Django on a Mac: Illegal hardware instruction
| 16,386,760
| 0
| 4
| 3,921
| 0
|
python,django,homebrew
|
That kind of problem smells like an architecture mess. You may be trying to execute a 64-bit library from a 32-bit interpreter, or vice versa… As you're using Homebrew, you should be really careful about which interpreter you're using, what your path is, etc. Maybe you should trace your program to find out more exactly where it fails, so you can pinpoint what is actually failing. It is very unlikely that Django itself fails; more likely it is something that Django uses. For someone to help you, you need to dig more closely into your failing point and give more context about what is failing beyond Django.
| 0
| 1
| 0
| 0
|
2013-05-05T16:36:00.000
| 2
| 1.2
| true
| 16,386,707
| 0
| 0
| 1
| 2
|
Here is my issue:
I installed Python and Django on my mac. When I run "django-admin.py startproject test1" I get this error:
1 11436 illegal hardware instruction django-admin.py startproject
test1 (the number is always different)
I've tested with multiple Django versions, and this only happens with version 1.4 and higher...1.3 works fine.
I've been searching the web like crazy for the past week and couldn't find anything regarding this issue with Django, so I assume the problem is not Django itself but something else. This is only on my Mac at home; at work, where I use Ubuntu, it works fine.
I tried to reinstall my entire system, and these are the only things I have installed right now:
- Command line tools
- Homebrew
- Python & pip (w/ Homebrew)
- Git (w/ Homebrew)
- zsh (.oh-my-zsh shell)
I set up my virtualenv and install django 1.5.1 -- the same issue still appears.
I'm out of options for now since nothing I found resolves my problem, I'm hoping someone has some knowledge about this error.
I appreciate all the help, and thanks.
This is the python crash log:
Process:             Python [2597]
Path:                /usr/local/Cellar/python/2.7.4/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
Identifier:          Python
Version:             2.7.4 (2.7.4)
Code Type:           X86-64 (Native)
Parent Process:      zsh [2245]
User ID:             501
Date/Time:           2013-05-05 20:53:19.899 +0200
OS Version:          Mac OS X 10.8.3 (12D78)
Report Version:      10
Interval Since Last Report:         16409 sec
Crashes Since Last Report:          2
Per-App Crashes Since Last Report:  1
Anonymous UUID:      D859C141-544F-3473-1A13-F984DB2F8CBE
Crashed Thread:      0  Dispatch queue: com.apple.main-thread
Exception Type:      EXC_BAD_INSTRUCTION (SIGILL)
Exception Codes:     0x0000000000000001, 0x0000000000000000
|
Python & Django on a Mac: Illegal hardware instruction
| 68,527,402
| 0
| 4
| 3,921
| 0
|
python,django,homebrew
|
I had the same issue, but worked around it by using Docker/docker-compose.
| 0
| 1
| 0
| 0
|
2013-05-05T16:36:00.000
| 2
| 0
| false
| 16,386,707
| 0
| 0
| 1
| 2
|
Here is my issue:
I installed Python and Django on my mac. When I run "django-admin.py startproject test1" I get this error:
1 11436 illegal hardware instruction django-admin.py startproject
test1 (the number is always different)
I've tested with multiple Django versions, and this only happens with version 1.4 and higher...1.3 works fine.
I've been searching the web like crazy for the past week and couldn't find anything regarding this issue with Django, so I assume the problem is not Django itself but something else. This is only on my Mac at home; at work, where I use Ubuntu, it works fine.
I tried to reinstall my entire system, and these are the only things I have installed right now:
- Command line tools
- Homebrew
- Python & pip (w/ Homebrew)
- Git (w/ Homebrew)
- zsh (.oh-my-zsh shell)
I set up my virtualenv and install django 1.5.1 -- the same issue still appears.
I'm out of options for now since nothing I found resolves my problem, I'm hoping someone has some knowledge about this error.
I appreciate all the help, and thanks.
This is the python crash log:
Process:             Python [2597]
Path:                /usr/local/Cellar/python/2.7.4/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
Identifier:          Python
Version:             2.7.4 (2.7.4)
Code Type:           X86-64 (Native)
Parent Process:      zsh [2245]
User ID:             501
Date/Time:           2013-05-05 20:53:19.899 +0200
OS Version:          Mac OS X 10.8.3 (12D78)
Report Version:      10
Interval Since Last Report:         16409 sec
Crashes Since Last Report:          2
Per-App Crashes Since Last Report:  1
Anonymous UUID:      D859C141-544F-3473-1A13-F984DB2F8CBE
Crashed Thread:      0  Dispatch queue: com.apple.main-thread
Exception Type:      EXC_BAD_INSTRUCTION (SIGILL)
Exception Codes:     0x0000000000000001, 0x0000000000000000
|
Gedit plugin for python autocomplete: how to install?
| 45,171,475
| 0
| 2
| 1,116
| 0
|
python,plugins,autocomplete,installation,gedit
|
If you have gedit3, have you checked that this is a plugin for gedit3, not gedit2?
Take a look at the *.plugin file. It should say IAge=3.
| 0
| 1
| 0
| 0
|
2013-05-06T11:03:00.000
| 1
| 0
| false
| 16,397,384
| 0
| 0
| 0
| 1
|
Of course I tried to copy the files in the .gnome2/gedit/plugins and in the .local/share/gedit/plugins directories.
But it doesn't work at all. How do I install the plugin?
I'm on Fedora 18. Lxde desktop manager.
|
Google App Engine urlfetch loop
| 16,406,284
| 1
| 0
| 81
| 0
|
python,google-app-engine,loops
|
Take a look at the GAE cron functionality.
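For reference, the cron approach is configured with a cron.yaml file deployed alongside app.yaml. A sketch matching the question's 20-minute interval; the /fetch URL is a hypothetical handler name for the asker's urlfetch code, not an existing one:

```yaml
cron:
- description: fetch information from the site
  url: /fetch                 # hypothetical handler running the fetch code
  schedule: every 20 minutes
```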
| 0
| 1
| 0
| 0
|
2013-05-06T19:46:00.000
| 2
| 0.099668
| false
| 16,406,080
| 0
| 0
| 1
| 1
|
Can I make a loop in Google App Engine that fetches information from a site?
I have made a small piece of code that already gets the information I want from the site, but I don't know how to make this code run every, let's say, 20 minutes.
Is there a way to do this?
P.S.: I have looked at TaskQueue, but I'm not sure if it is meant for things like this.
|
Running binary WSGI app
| 16,407,922
| 0
| 1
| 717
| 0
|
python,nginx,wsgi,uwsgi,pyinstaller
|
The binary you get is just your program and a Python interpreter stuffed together into an executable with all dependencies, so that it can be distributed more easily. It won't give you any speed boost, and it's not really 'compiled' into a binary.
By using a binary of this kind you would lose all the advantages that WSGI provides, so it's a bad idea. Just deploy your WSGI application as documented.
| 0
| 1
| 0
| 0
|
2013-05-06T21:20:00.000
| 1
| 0
| false
| 16,407,471
| 1
| 0
| 0
| 1
|
I have a working WSGI app developed in Python. I know that PyInstaller will compile it and get me a binary of this application. I have nginx and uWSGI running. Can I use this binary instead of the Python script to run the whole thing from uWSGI, to boost the speed?
|
data bridge between Java and Python daemons
| 16,444,115
| 1
| 1
| 368
| 0
|
java,python,database,distributed-computing,in-memory-database
|
Given the relatively low volume of data you need, I would say the easiest way would be to use a TCP socket to communicate between the two processes. The data speed on the loopback interface is more than enough for your needs.
| 0
| 1
| 0
| 0
|
2013-05-08T05:05:00.000
| 3
| 0.066568
| false
| 16,433,047
| 0
| 0
| 1
| 1
|
I have two background processes running on a Linux machine. One is Java and the second one is in Python. What would be the most efficient way to exchange data between these two apps? I am talking about text/image data below ~10 MB, approximately every 5 minutes (not streamed). Due to the high cost of refactoring we cannot migrate fully to Python (or Java).
A natural choice is the filesystem or local networking, but what about an in-memory database (sqlite/redis/...)? Filesystem or network handling is sometimes painful, I guess.
Do you think an in-memory DB would be a good option for such a task? Jython is not an option there, as not all Python libraries are compatible...
Environment : ubuntu server 12.04 64bit, Python 2.7, Java 7
|
How to directly access a resource in a Py2app (or Py2exe) program?
| 17,084,259
| 4
| 5
| 1,524
| 0
|
python,resources,py2app
|
By default the 'Resource' folder is current working directory for applications started by py2app.
Furthermore, the environment variable "RESOURCEPATH" is set to the absolute path of the resource folder.
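Putting the two facts above together, a resource-locating helper might look like this sketch. RESOURCEPATH is what py2app sets; sys.frozen is what py2exe is commonly reported to set (treat that as an assumption); "Resources-test" is the development fallback from the question:

```python
import os
import sys

def resource_dir():
    """Directory holding bundled data files, in an app bundle or in dev."""
    if "RESOURCEPATH" in os.environ:       # running inside a py2app bundle
        return os.environ["RESOURCEPATH"]
    if getattr(sys, "frozen", False):      # running from a py2exe executable
        return os.path.dirname(os.path.abspath(sys.executable))
    return os.path.abspath("Resources-test")
```

Classes can then prepend `resource_dir()` to any file they need to open, on both platforms.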
| 1
| 1
| 0
| 0
|
2013-05-08T07:05:00.000
| 1
| 0.664037
| false
| 16,434,632
| 0
| 0
| 0
| 1
|
This is mostly for Py2app, but I plan to also port to Windows so Py2exe is also applicable.
For Mac: How can I access the Resources folder of my app bundle from Python code? The ideal way for me would be to get the path to this folder into a variable that my classes prepend to any file they need to access. Given the portable nature of OSX app bundles this Resources folder can move, so it's obviously not acceptable to assume it'll always be at /Applications/MyApp.app/Contents/Resources.
For development I can preset this variable to something like "./Resources-test" but for the final distribution I would need to be able to locate the Resources folder to access files therein as file objects.
For Windows: If I use py2exe, what's the correct way to get the path to where the application is running from? (Think portable app - the app might be running from Program files, or a directory on someone's flash drive, or in a temp directory!) On Windows it'd be suitable to simply know where the .exe file is and just have a Resources folder there. (I plan to make cross-platform apps using wxwidgets.)
Thanks
|
openCV install Error using brew
| 16,495,361
| 0
| 0
| 447
| 0
|
python,macos,opencv
|
Try using MacPorts; it builds OpenCV, including the Python bindings, without any issue.
I have used this on OS X 10.8.
| 0
| 1
| 0
| 0
|
2013-05-08T08:45:00.000
| 1
| 0
| false
| 16,436,260
| 0
| 1
| 0
| 1
|
I am trying to install OpenCV on my MacBook Pro, OS X 10.6.8 (Snow Leopard),
and the Xcode version is 3.2.6,
and the result of "which python" is:
Hong-Jun-Choiui-MacBook-Pro:~ teemo$ which python
/Library/Frameworks/Python.framework/Versions/2.7/bin/python
and I am suffering from the below:
Linking CXX shared library ../../lib/libopencv_contrib.dylib
[ 57%] Built target opencv_contrib
make: * [all] Error 2
Full log is here link by "brew install -v opencv"
54 248 246 33:7700/log.txt
Any advice for me?
I just need the OpenCV lib for Python.
|
Any advantage of using node.js for task queue worker instead of other languages?
| 16,471,242
| 0
| 2
| 1,154
| 0
|
php,python,ruby,node.js,redis
|
I have used Node.js as a task worker for jobs that call runnable web pages written in PHP or run commands on certain hosts. In both these instances Node is just initializing (triggering) the job, then waiting for and evaluating the result. The heavy lifting / CPU-intensive work is done by another system / program.
Hope this helps!
| 0
| 1
| 0
| 1
|
2013-05-09T07:31:00.000
| 2
| 0
| false
| 16,456,682
| 0
| 0
| 1
| 1
|
Will I have any advantage in using Node.js for a task queue worker instead of any other language, like PHP/Python/Ruby?
I want to learn Redis for simple task queue jobs like sending big amounts of email, and I do not want to keep users waiting for establishing connections etc.
So the question is: does the async nature of node.js help in this scenario, or is it useless?
P.S. I know that node is faster than any of these languages in memory consumption and computation because of the efficient V8 engine; maybe it's possible to win on this field?
|
Can't connect to localhost:8080 when trying to run Google App Engine program
| 18,885,291
| 1
| 1
| 10,838
| 0
|
google-app-engine,python-2.7
|
I have to manually start Python and make it point to my app folder. For instance, in a command-line window on Windows: I installed Python in C:\Python27 and my sample app is in c:\GoogleApps\guestbook, so I run
C:\Python27>dev_appserver.py c:\GoogleApps\guestbook
and then I can start my app in the Google App Engine Launcher and hit localhost:8080.
| 0
| 1
| 0
| 0
|
2013-05-10T02:00:00.000
| 5
| 0.039979
| false
| 16,474,027
| 0
| 0
| 1
| 2
|
I'm trying to run the Google App Engine Python 2.7 Hello World program and view it in a browser via Google App Engine Launcher. I followed the install and program instructions to the letter. I copied and pasted the code in the instructions to the helloworld.py file and app.yaml and verified that they are correct and in the directory listed as the application directory. I hit run on the launcher and it runs with no errors, although I get no sign that it has completed (orange clock symbol next to app name). I get the following from the logs:
Running dev_appserver with the following flags: --skip_sdk_update_check=yes --port=8080 --admin_port=8000 Python command: /opt/local/bin/python2.7
When I try to open in the browser via the GAE Launcher, the 'browse' icon is grayed out and the browser won't open. I tried opening localhost:8080 in Firefox and Chrome as the tutorial suggests, but I get unable to connect errors from both.
How can I view Hello World in a browser? Is there some configuration I need to make on my machine?
|
Can't connect to localhost:8080 when trying to run Google App Engine program
| 18,226,152
| 1
| 1
| 10,838
| 0
|
google-app-engine,python-2.7
|
I had the same problem. This seemed to fix it:
cd to google_appengine, run
python dev_appserver.py --port=8080 --host=127.0.0.1 /path/to/application
at this point there is a prompt to allow updates on running, I said Yes.
At this point the app was running as it should, also when I quit this and went in using the launcher again, that worked too.
| 0
| 1
| 0
| 0
|
2013-05-10T02:00:00.000
| 5
| 0.039979
| false
| 16,474,027
| 0
| 0
| 1
| 2
|
I'm trying to run the Google App Engine Python 2.7 Hello World program and view it in a browser via Google App Engine Launcher. I followed the install and program instructions to the letter. I copied and pasted the code in the instructions to the helloworld.py file and app.yaml and verified that they are correct and in the directory listed as the application directory. I hit run on the launcher and it runs with no errors, although I get no sign that it has completed (orange clock symbol next to app name). I get the following from the logs:
Running dev_appserver with the following flags: --skip_sdk_update_check=yes --port=8080 --admin_port=8000 Python command: /opt/local/bin/python2.7
When I try to open in the browser via the GAE Launcher, the 'browse' icon is grayed out and the browser won't open. I tried opening localhost:8080 in Firefox and Chrome as the tutorial suggests, but I get unable to connect errors from both.
How can I view Hello World in a browser? Is there some configuration I need to make on my machine?
|
How to count the threads and process of WSGI?
| 16,480,852
| 2
| 3
| 1,298
| 0
|
multithreading,wsgi,python-multithreading
|
You mean why do you have 3 extra per mod_wsgi daemon process.
For your configuration, 15 new threads will be created for handling the requests. The other 3 in a process are due to:
The main thread which the process was started as. It will wait until the appropriate signal is received to shutdown the process.
A monitor thread which checks for certain events to occur and which will signal the process to shutdown.
A deadlock thread which checks to see if a deadlock has occurred in the Python interpreter. If it does occur, it will send an event which thread (2) will detect. Thread (2) would then send a signal to the process to quit. That signal would be detected by thread (1), which would then gracefully exit the process and try to clean up properly.
So the extra threads are all about ensuring that the whole system is very robust in the event of various things that can occur. Plus ensuring that when the process is being shut down, the Python sub-interpreters are destroyed properly to allow atexit-registered Python code to run to do its own cleanup.
| 0
| 1
| 0
| 0
|
2013-05-10T09:35:00.000
| 1
| 1.2
| true
| 16,479,249
| 0
| 0
| 1
| 1
|
I have deployed a wsgi application on the apache and I have configured it like this:
WSGIDaemonProcess wsgi-pcapi user= group= processes=2 threads=15
After I restart the apache I am counting the number of threads:
ps -efL | grep | grep -c httpd
The local apache is running only one wsgi app but the number I get back is 36 and I cannot understand why. I know that there are 2 processes and 15 threads which means:
15*2+2=32
So why do I have 4 more?
|
Get-WmiObject without PowerShell
| 16,491,345
| 0
| 1
| 1,863
| 0
|
python,windows,wmi,wmic
|
From CMD.EXE, I think the command I need is wmic path Win32_USBControllerDevice get *
So most likely the general pattern is:
PowerShell: gwmi MYCLASSNAME
translates into:
CMD.EXE: wmic path MYCLASSNAME get *
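Driving that pattern from Python with the subprocess module, as the question plans to do, might look like this sketch (Windows-only at call time, since it shells out to wmic; the function names are mine):

```python
import subprocess

def wmi_command(wmi_class):
    """Build the wmic argument list equivalent to `gwmi <class>`."""
    return ["wmic", "path", wmi_class, "get", "*"]

def wmi_get(wmi_class):
    """Return the raw text output of wmic for the given WMI class."""
    return subprocess.check_output(wmi_command(wmi_class),
                                   universal_newlines=True)

# usage (on Windows): print(wmi_get("Win32_USBControllerDevice"))
```

Passing the arguments as a list avoids shell quoting issues, and universal_newlines gives back text rather than bytes.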
| 0
| 1
| 0
| 1
|
2013-05-10T21:27:00.000
| 2
| 0
| false
| 16,491,077
| 0
| 0
| 0
| 1
|
I am writing a Windows python program that needs to query WMI. I am planning to do this by using the subprocess module to call WMIC with the arguments I need.
I see a lot of examples online of using WMI via PowerShell, usually using the "commandlet" Get-WmiObject or the equivalent gwmi.
How do you do the equivalent of Get-WmiObject without using PowerShell, but rather with WMIC?
Specifically, from within CMD.EXE, I want to do the equivalent of powershell gwmi Win32_USBControllerDevice, but without using PowerShell; rather, I want to invoke WMIC directly.
Thanks, and sorry for the beginner question!
|
Is there any ipdb print pager?
| 16,565,699
| 3
| 4
| 725
| 0
|
python,debugging,printing,pager,pdb
|
You might want to create a function which accepts a text, puts this text into a temporary file, and calls os.system('less %s' % temporary_file_name).
To make it easier for everyday use: Put the function into a file (e.g: ~/.pythonrc) and specify it in your PYTHONSTARTUP.
Alternatively you can just install bpython (pip install bpython), and start the bpython shell using bpython. This shell has a "pager" functionality which executes less with your last output.
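The temp-file-plus-less helper described above might look like this sketch (using subprocess rather than os.system, and taking the pager command as a parameter so it can be swapped out):

```python
import os
import subprocess
import tempfile

def page(text, pager="less"):
    """Show `text` in a pager; returns the pager's exit status."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(text)
        path = f.name
    try:
        return subprocess.call([pager, path])
    finally:
        os.unlink(path)             # clean up the temporary file
```

Inside ipdb you could then run `page(repr(my_long_variable))`.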
| 0
| 1
| 0
| 1
|
2013-05-14T11:20:00.000
| 1
| 1.2
| true
| 16,541,847
| 0
| 0
| 0
| 1
|
I am using ipdb to debug a python script.
I want to print a very long variable. Is there any ipdb pager like more or less used in shells?
Thanks
|
Where should I put my Python scripts in Linux?
| 16,565,499
| 13
| 12
| 14,785
| 0
|
python,linux,open-source
|
For sure, if this program is to be available only to root, then the main execution Python script has to go into /usr/sbin/.
Config files ought to go to /etc/, and log files to /var/log/.
Other python files should be deployed to /usr/share/pyshared/.
Executable scripts of other languages will go either in /usr/bin/ or /usr/sbin/ depending on whether they should be available to all users, or for root only.
| 0
| 1
| 0
| 1
|
2013-05-15T12:45:00.000
| 2
| 1.2
| true
| 16,565,363
| 1
| 0
| 0
| 2
|
My python program consists of several files:
the main execution python script
python modules in *.py files
config file
log files
executable scripts of other languages.
All these files should be available only to root. The main script should run on startup, e.g. via upstart.
Where should I put all these files in the Linux filesystem?
What's the best way to distribute my program? pip, easy_install, deb, ...? I haven't worked with any of these tools, so I want something easy for me.
The minimum supported Linux distributive should be Ubuntu.
|
Where should I put my Python scripts in Linux?
| 16,565,490
| 1
| 12
| 14,785
| 0
|
python,linux,open-source
|
If only root should access the scripts, why not put them in /root/?
Secondly, if you're going to distribute your application you'll probably need easy_install or something similar; otherwise just tar.gz the stuff if only a few people will access it.
It all depends on your scale.
Pyglet, wxPython and similar have a huge user base, same for BeautifulSoup, but they still tar.gz the stuff and you just use setuptools to deploy it (which is another option).
| 0
| 1
| 0
| 1
|
2013-05-15T12:45:00.000
| 2
| 0.099668
| false
| 16,565,363
| 1
| 0
| 0
| 2
|
My python program consists of several files:
the main execution python script
python modules in *.py files
config file
log files
executables scripts of other languages.
All these files should be available only to root. The main script should run on startup, e.g. via upstart.
Where I should put all this files in Linux filesystem?
What's the better way for distribution my program? pip, easy_install, deb, ...? I haven't worked with any of these tool, so I want something easy for me.
The minimum supported Linux distribution should be Ubuntu.
|
How to query and manage Debian package repositories in Python?
| 20,988,427
| 1
| 5
| 1,224
| 0
|
python,debian,apt
|
A very effective way is to create local apt caches for all the relevant distributions. The tool chdist from the devscripts package allows you to create a number of these caches without the need to use root privileges. You can then use the tools you are used to (e.g. apt-rdepends) to query those caches by wrapping them up in chdist. You can even point python-apt at your local cache using the rootdir keyword argument to apt.cache.Cache where you can then resolve dependencies.
| 0
| 1
| 0
| 0
|
2013-05-15T15:04:00.000
| 3
| 0.066568
| false
| 16,568,621
| 1
| 0
| 0
| 1
|
I want to be able to look at local .deb files and at remote repositories and deduce dependencies etc so that I can build my own repositories and partial mirrors (probably by creating config files for reprepro).
The challenge is that many of the command-line tools to help with this (apt-rdepends etc) assume that you're running on the target system and make use of your local apt cache, whereas I'll often be handling stuff for different Ubuntu and Debian distributions from the one I'm currently running on, so I'd like to do this a bit more at arm's length.
The capable but very poorly-documented python-apt packages let me examine .deb files on the local filesystem and pull out dependencies. I'm now wondering if there are similar tools to parse the Packages.gz files from repositories? (It's not too tricky, but I don't want to reinvent the wheel!)
The overall goal is to create and maintain two repositories: one with our own packages in, and a partial mirror of an Ubuntu distribution with some known required packages plus anything that they, or our own ones, depend upon.
|
Is it possible to queue sendkey commands in Windows?
| 16,609,920
| 0
| 1
| 248
| 0
|
python,clipboard,timing,sendkeys,queuing
|
No, I don't think so. You're talking about separate message queues here. Alt+Esc is a global hotkey, presumably handled by windows explorer. Ctrl+A and Ctrl+C are handled by the source app, and should be processed in order. However, there will be a lag after the Ctrl+C, as the clipboard must be locked, cleared, and updated, and then clipboard notification messages are sent to all applications registered on the clipboard notification chain, as well as the newer clipboard notification API. After all of those applications have had a chance to react to the data, THEN it is safe to paste with Ctrl+V.
Note that if you're running any sort of remote desktop software, you also have to wait for OTHER SYSTEMS to react to the clipboard notification, which will include syncing clipboard data across the network.
Now you see why this is hard. Sorry for the bad news.
| 0
| 1
| 0
| 0
|
2013-05-16T15:12:00.000
| 1
| 0
| false
| 16,591,244
| 0
| 0
| 0
| 1
|
We are writing a Python application that relies on copying and pasting content from the top windows.
To do that we issue sendkey commands:
Ctrl-Esc for going to the previous windows
Ctrl-A followed by Ctrl-C to copy all text from the window
And Ctrl-V to paste the clipboard content into the top window.
Unfortunately at times we run into timing problems.
Is there some way to queue the SendKey commands so that
Ctrl-A waits for Alt-Esc, and then
Ctrl-C waits until Ctrl-A is done?
Or perhaps there is a way to know when each command has finished before sending the next one?
Thank you in advance for your help.
|
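The answer above boils down to "wait until the clipboard has actually updated before pasting". A generic poll-with-timeout sketch of that idea (pure Python; on Windows one would pass a predicate that compares the Win32 clipboard sequence number before and after the Ctrl+C, but the predicate shown here is just a placeholder):

```python
import time

def wait_until(predicate, timeout=2.0, interval=0.05):
    """Poll `predicate` until it returns True or `timeout` seconds pass.
    Returns True on success, False on timeout. This replaces fixed
    sleeps between SendKey commands with an explicit readiness check."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False
```

Usage would be: send Ctrl+C, then wait_until(clipboard_changed) before sending Ctrl+V.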
Azure - Running an http server on an VM
| 16,604,662
| 0
| 1
| 453
| 0
|
python,azure,azure-virtual-machine
|
Could it be that port 81 is blocked by a firewall in Ubuntu?
| 0
| 1
| 0
| 0
|
2013-05-17T08:10:00.000
| 1
| 0
| false
| 16,604,372
| 0
| 0
| 1
| 1
|
I have created a VM on Windows Azure with Ubuntu 12.04 running on it.
I have two end-points
End-point1
public port: 50348
private port: 22
End-point2
public port: 81
private port: 81
Now, I have a simple python HTTP server running on the Virtual Machine, which is listening on port 81.
When I try to connect to localhost:81 from within the Virtual machine, I am able to connect, so I know that the server is up and running.
Say, the DNS name assigned to my VM be blah-blah.cloudapp.net
But, when I try to connect to http://blah-blah.cloudapp.net:81 from somewhere outside, I always get a Server Not Found error.
So, how can I connect to my server?
|
Trying to install wxpython on Mac OSX
| 20,097,953
| 8
| 6
| 7,577
| 0
|
python,macos,wxpython
|
Go to System preferences --> Security and privacy --> Allow applications downloaded from..select 'Anywhere'
| 1
| 1
| 0
| 0
|
2013-05-19T20:08:00.000
| 3
| 1
| false
| 16,638,977
| 0
| 0
| 0
| 1
|
I am trying to install wxpython onto my Mac OSX 10.8.3. I download the disk images from their downloads page and mount it. When I try to install the package I get an error that saying that the package is damaged and can't be opened. Any suggestions on how I can fix this?
I have also tried opening the package through the terminal but no luck.
Thanks in advance.
|
Python: How can I test my package if it runs on Linux, Mac and Windows
| 63,363,093
| 0
| 2
| 808
| 0
|
python,linux,windows,macos,testing
|
You can use travis to run tests on linux, mac and windows. Travis supports these platforms. This is the most convenient option. If the repo is open source, travis is free.
| 0
| 1
| 0
| 0
|
2013-05-20T14:01:00.000
| 3
| 0
| false
| 16,651,259
| 1
| 0
| 0
| 2
|
I need to test my Python package if it works properly on different systems. I found Tox for different Python versions, but what about different operating systems like Windows, Linux and Mac.
Can you recommend me a convenient way to test if my code works on all systems?
|
Python: How can I test my package if it runs on Linux, Mac and Windows
| 68,606,338
| 0
| 2
| 808
| 0
|
python,linux,windows,macos,testing
|
Just assuming you use Windows...
I use Ubuntu on WSL2 (Windows Subsystem for Linux 2). It is basically a virtual machine, but is much faster than Hyper-V or Virtual box. It doesn't come with a GUI unless you're in the Windows Insiders Dev Channel, but that is likely not needed just to test code, and you can install GWSL (an X server designed for WSL and SSH) to provide a GUI. On my laptop, Hyper-V and VirtualBox VMs crash within seconds of starting, but WSL2 runs smoothly for hours of intense usage. For an IDE, I would recommend installing Visual Studio Code (on Windows,not on the WSL2 VM), then use the Remote - WSL extension. And I would recommend installing Windows Terminal to replace the ugly Windows Console Host. And for MacOS, I guess you just have to use a regular VM.
| 0
| 1
| 0
| 0
|
2013-05-20T14:01:00.000
| 3
| 0
| false
| 16,651,259
| 1
| 0
| 0
| 2
|
I need to test my Python package if it works properly on different systems. I found Tox for different Python versions, but what about different operating systems like Windows, Linux and Mac.
Can you recommend me a convenient way to test if my code works on all systems?
|
How to install various files other than Python code using Python packages?
| 16,656,024
| 1
| 2
| 78
| 0
|
python,pip,software-distribution
|
I'm sorry, but Python knows nothing about bash, or man, or other things you might take for granted. For instance, Windows, a widely deployed platform supported by Python, has neither. Other platforms, even Unix-like, may not have bash, too (e.g. using busybox) and would rather not spend storage space on man pages. Some users don't even have bash installed on capable systems (and use zsh for interactive work and ash for scripts).
So please limit your egg archive to things that only require Python, or Python extensions.
If you want to install other files, you have a few options.
Publish an archive that contains both a setup.py for your package and whatever optional files you might want to include, possibly with an installation script for them, too.
Create proper packages for the OSes you target. The rest will use option 1.
Run the extra installation steps from your setup script (not recommended).
Also, you don't have to provide a man page, just support --help well. E.g. easy_install on Debian does not have a man page, and I'm fine with it.
| 0
| 1
| 0
| 0
|
2013-05-20T17:44:00.000
| 1
| 1.2
| true
| 16,655,156
| 1
| 0
| 0
| 1
|
My Python project includes some manpages and bash completion script. I want those to be installed when user installs a package with, for example, pip install mypackage. How do I do that? I only came across a very barbaric way of doing so by calling an external script (for example an .sh script) in the setup.py. Is there a more elegant approach?
|
How to get google ID from email
| 16,673,012
| 2
| 1
| 1,883
| 0
|
python,google-app-engine,authentication,google-plus
|
First of all, you should always store the email property in lowercase, since case is not significant. If you also want to take dots or plus symbols into account and still be able to query on them, store a stripped-down version of the email in another (hidden) property and run your queries against that one.
| 0
| 1
| 0
| 0
|
2013-05-21T14:43:00.000
| 2
| 1.2
| true
| 16,672,846
| 0
| 0
| 1
| 1
|
I'm using google ID as the datastore id for my user objects.
Sometimes I want to find a user by email. The gmail address can appear with dots or without, capital letters and other variations. How can I retrieve the user id from the given email?
|
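A sketch of the normalization the answer suggests storing in the hidden property. The rules below (lowercase, dots in the local part ignored, '+tag' stripped) are Gmail-specific conventions; other providers treat dots as significant:

```python
def normalize_gmail(email):
    """Return a canonical form of a Gmail address for lookups:
    lowercase, dots in the local part removed, any '+tag' stripped."""
    local, _, domain = email.strip().lower().partition("@")
    local = local.split("+", 1)[0].replace(".", "")  # drop +tag, then dots
    return "%s@%s" % (local, domain)
```

Store normalize_gmail(email) alongside the original address and query on the normalized property.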
Windows: get default microphone name
| 16,702,788
| 1
| 2
| 2,638
| 0
|
python,windows,audio,portaudio
|
Apparently I can get the full string from ffmpeg, as follows:
ffmpeg -list_devices true -f dshow -i dummy
And then the name of the mic will be on the line after "DirectShow audio devices"
| 0
| 1
| 0
| 0
|
2013-05-22T06:29:00.000
| 2
| 0.099668
| false
| 16,684,894
| 0
| 0
| 0
| 1
|
In python2.7 on Windows, I need to get the name of the default microphone, which will be a string such as "Microphone (2- High Definition Audio Device)".
My first attempt was to query WMI using subprocess: wmic path Win32_SoundDevice get * /format:list. Unfortunately, this seems to return speakers as well as mics, and I can't see any property that would let me distinguish the two. Also, the name of the correct device is not in the right format, e.g. it appears as simply "High Definition Audio Device" instead of the full correct string "Microphone (2- High Definition Audio Device)".
My second attempt was to use PyAudio (the python bindings to PortAudio). Calling PyAudio().get_default_input_device_info()["name"] gets me pretty close, but unfortunately the name is getting truncated for some reason! The return value is "Microphone (2- High Definition " (truncated to 31 characters length). If I could only get a non-truncated version of this string, it would be perfect.
Any ideas for what is the simplest, most self-contained way to get the default microphone name? Thanks!
|
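A sketch of parsing the ffmpeg listing described in the answer above. ffmpeg prints the device list on stderr; the sample used in this example is modeled on typical -list_devices output, so the exact formatting may differ between ffmpeg versions:

```python
import re

def parse_dshow_audio_devices(ffmpeg_stderr):
    """Extract DirectShow audio device names from ffmpeg's
    -list_devices output. Names appear quoted on the lines after
    'DirectShow audio devices'; 'Alternative name' lines are skipped."""
    devices, in_audio = [], False
    for line in ffmpeg_stderr.splitlines():
        if "DirectShow audio devices" in line:
            in_audio = True
            continue
        if "DirectShow video devices" in line:
            in_audio = False
            continue
        if in_audio and "Alternative name" not in line:
            m = re.search(r'"([^"]+)"', line)
            if m:
                devices.append(m.group(1))
    return devices
```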
Google App Engine import NLTK error
| 16,700,974
| 0
| 0
| 423
| 0
|
google-app-engine,python-2.7,nltk
|
Where do you have nltk installed?
GAE libraries need to be available in your app folder. If you have nltk elsewhere in your pythonpath it won't work.
| 0
| 1
| 0
| 0
|
2013-05-22T17:38:00.000
| 1
| 0
| false
| 16,698,260
| 0
| 0
| 1
| 1
|
I am trying to import NLTK library in Google App Engine it gives error, I created another module "testx.py" and this module works without error but I dont know why NLTK does not work.
My code
nltk_test.py
import webapp2
import path_changer
import testx
import nltk

class MainPage(webapp2.RequestHandler):
    def get(self):
        #self.response.headers['Content-Type'] = 'text/plain'
        self.response.write("TEST")

class nltkTestPage(webapp2.RequestHandler):
    def get(self):
        text = nltk.word_tokenize("And now for something completely different")
        self.response.write(testx.test("Hellooooo"))

application = webapp2.WSGIApplication([
    ('/', MainPage), ('/nltk', nltkTestPage),
], debug=True)
testx.py code
def test(txt):
    return len(txt)
path_changer.py code
import os
import sys
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'nltk'))
sys.path.insert(1, os.path.join(os.path.dirname(__file__), 'new'))
app.yaml
application: nltkforappengine
version: 0-1
runtime: python27
api_version: 1
threadsafe: true
handlers:
- url: /.*
  script: nltk_test.application
- url: /nltk.*
  script: nltk_test.application
libraries:
- name: numpy
  version: "1.6.1"
This code works fine when I comment out the nltk import and the nltk-related code, so I think NLTK is not being imported properly. Please help me sort out this problem. Thanks.
|
Use Python to Access Battery Status in Ubuntu
| 56,511,789
| 0
| 5
| 5,179
| 0
|
python,linux,ubuntu
|
You do not need any extra module for this.
Simply navigate to /sys/class/power_supply/BAT0.
There you will find a number of files with information about your battery.
The current charge is in the charge_now file and the total charge in the charge_full file.
From those you can calculate the battery percentage with some simple math.
Note: you may need root access for this. You can use the sudo nautilus command to open directories in root mode.
| 0
| 1
| 0
| 1
|
2013-05-22T19:15:00.000
| 4
| 0
| false
| 16,699,883
| 0
| 0
| 0
| 2
|
I am trying to come out with a small python script to monitor the battery state of my ubuntu laptop and sound alerts if it's not charging as well as do other stuff (such as suspend etc).
I really don't know where to start, and would like to know if there is any library for python i can use.
Any help would be greatly appreciated.
Thanks
|
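A sketch of the sysfs approach from the answer above. The split into a pure math helper is my own choice; note that some systems expose energy_now/energy_full instead of charge_now/charge_full, and on most systems these files are world-readable, so root is usually not required just to read them:

```python
def percent(charge_now, charge_full):
    """Battery level as a percentage from the two sysfs counters."""
    return 100.0 * charge_now / charge_full

def battery_percent(base="/sys/class/power_supply/BAT0"):
    """Read charge_now and charge_full from sysfs and return the
    current battery level as a percentage."""
    def read_int(name):
        with open(base + "/" + name) as f:
            return int(f.read().strip())
    return percent(read_int("charge_now"), read_int("charge_full"))
```

A monitoring script could poll battery_percent() and also check the status file ("Charging"/"Discharging") to decide when to alert.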
Use Python to Access Battery Status in Ubuntu
| 39,884,293
| 0
| 5
| 5,179
| 0
|
python,linux,ubuntu
|
The "power" library on PyPI is a good bet; it's cross-platform too.
| 0
| 1
| 0
| 1
|
2013-05-22T19:15:00.000
| 4
| 0
| false
| 16,699,883
| 0
| 0
| 0
| 2
|
I am trying to come out with a small python script to monitor the battery state of my ubuntu laptop and sound alerts if it's not charging as well as do other stuff (such as suspend etc).
I really don't know where to start, and would like to know if there is any library for python i can use.
Any help would be greatly appreciated.
Thanks
|
optimizing google protocol buffer
| 19,603,923
| 0
| 2
| 1,004
| 0
|
java,python,protocol-buffers
|
Unfortunately the Python protobuf deserialization is just pretty slow (as of 2013) compared to the other languages.
| 0
| 1
| 0
| 0
|
2013-05-22T20:01:00.000
| 2
| 0
| false
| 16,700,600
| 0
| 0
| 1
| 1
|
I'm new to Google's protocol buffers and am looking for some insight. I have a large object that is serialized in Java which I am deserializing in Python. Upstream tells me that the file is serialized in about 4 to 5 seconds, whereas it takes me 37 seconds to deserialize it. Any ideas on why there is such a huge difference, besides hardware? Are there ways I can speed up the deserialization? Does Java perform better at this? I'm simply grabbing a serialized data file and using ParseFromString.
Thanks
UPDATE: So I just got back to this after a while and tried to deserialize the file using Java. It took 4 seconds to deserialize a bigger file (56 MB). This solves my performance problem; however, I am really confused about the huge difference between Python and Java. Any insights?
|
What's the Google App Engine equivalent of ASP.NET's Server.Transfer?
| 16,705,889
| 0
| 0
| 210
| 0
|
python,google-app-engine,webapp2
|
Usually, you just have to call the corresponding method.
To be more specific: which flavour of App Engine are you using? Java, Python, Go, or PHP?
| 0
| 1
| 0
| 0
|
2013-05-23T04:29:00.000
| 3
| 0
| false
| 16,705,684
| 0
| 0
| 1
| 1
|
Server.Transfer is sort of like a Redirect except instead of requesting the browser to do another page fetch, it triggers an internal request that makes the request handler "go to" another request handler.
Is there a Python equivalent to this in Google App Engine?
Edit: webapp2
|
Does twisted epollreactor use non-blocking dns lookup?
| 16,717,175
| 1
| 2
| 591
| 0
|
python,dns,twisted
|
I'm not massively familiar with Twisted; I only recently started using it. It looks like it doesn't block, though, but only on platforms that support threading.
In twisted.internet.base, ReactorBase does the resolving through its resolve method, which returns a deferred from self.resolver.getHostByName.
self.resolver is an instance of BlockingResolver by default, which does block, but it looks like that if the platform supports threading, the resolver instance is replaced by a ThreadedResolver in the ReactorBase._initThreads method.
| 0
| 1
| 0
| 0
|
2013-05-23T14:04:00.000
| 2
| 0.099668
| false
| 16,716,049
| 0
| 0
| 0
| 1
|
It seems obvious that it would use the twisted names api and not any blocking way to resolve host names.
However digging in the source code, I have been unable to find the place where the name resolution occurs. Could someone point me to the relevant source code where the host resolution occurs ( when trying to do a connectTCP, for example).
I really need to be sure that connectTCP wont use blocking DNS resolution.
|
Why when use sys.platform on Mac os it print "darwin"?
| 16,722,274
| 9
| 21
| 12,731
| 0
|
python,macos
|
To expand on the other answers: Darwin is the part of OS X that is the actual operating system, in a stricter sense of that term.
To give an analogy, Darwin would be the equivalent of Linux - or Linux and the GNU utilities - while Mac OS X would be the equivalent of Ubuntu or another distribution. I.e. a kernel, the basic userspace utilities, and a GUI layer and a bunch of "built-in" applications.
| 0
| 1
| 0
| 0
|
2013-05-23T19:05:00.000
| 2
| 1
| false
| 16,721,940
| 1
| 0
| 0
| 1
|
In Python, when I type sys.platform on Mac OS X, the output is "darwin". Why is this so?
|
Does GoogleAppEngine(Python SDK) disturb GoogleAppEngine(PHP SDK)?
| 17,073,068
| 0
| 0
| 209
| 0
|
php,python,google-app-engine
|
Thanks very much, hakre. I know what happened. The problem is that I also have the Python version of Google App Engine, so I needed to specify the development server for the GAE PHP SDK, and it works well now! Thanks again; I will pass such kindness on to others in the future. – moshaholo May 26 at 12:16
Can anyone tell me how to change or specify the development server for the GAE PHP SDK? I just started using it and don't know too much about this stuff.
P.S. Sorry for posting this as an answer; I wasn't able to see a reply option on top.
| 0
| 1
| 0
| 0
|
2013-05-24T14:25:00.000
| 1
| 0
| false
| 16,737,308
| 0
| 0
| 1
| 1
|
The new launched GoodleAppEngine(PHP Version) does not work on my computer.
Every time I type in "localhost:8080", the running server returns me a "GET / HTTP/1.1" 500".
And it give me a fatal ERROR:
Fatal error: require_once(): Failed opening required
'google/appengine/runtime/ApiProxy.php'
(include_path='/Users/xxxxx/Job_work/helloworld:/usr/local/bin/php/sdk')
in
/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/php/setup.php
Does that mean my Python GAE disturbs my PHP version SDK?
|
Beautiful Soup "not supported" Google App Engine
| 16,737,623
| 0
| 0
| 1,031
| 0
|
python,google-app-engine,beautifulsoup
|
It seems that uploading the whole directory in which the bs4 module resides to the GAE app folder works.
| 0
| 1
| 0
| 0
|
2013-05-24T14:29:00.000
| 2
| 0
| false
| 16,737,386
| 0
| 0
| 1
| 1
|
I am working in python on a GAE app. Beautiful soup, which the app uses, works fine on my dev server locally. When I try and upload it to google's servers however, I get the following error: "Error parsing yaml file: the library "bs4" is not supported".
I am not sure how to fix this. Does anyone have any idea?
Thank you.
File Structure:
app.yaml
main.py
static(DIR)
templates(DIR)
bs4(DIR)
|
celery.chord gives IndexError: list index out of range error in celery version 3.0.19
| 18,938,559
| 1
| 2
| 859
| 0
|
python,runtime-error,celery
|
This is an error that occurs when a chord header has no tasks in it. Celery tries to access the tasks in the header using self.tasks[0] which results in an index error since there are no tasks in the list.
| 0
| 1
| 0
| 0
|
2013-05-25T01:12:00.000
| 1
| 0.197375
| false
| 16,745,487
| 0
| 1
| 0
| 1
|
Has anyone seen this error in celery (a distribute task worker in Python) before?
Traceback (most recent call last):
  File "/home/mcapp/.virtualenv/lister/local/lib/python2.7/site-packages/celery/task/trace.py", line 228, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/home/mcapp/.virtualenv/lister/local/lib/python2.7/site-packages/celery/task/trace.py", line 415, in __protected_call__
    return self.run(*args, **kwargs)
  File "/home/mcapp/lister/lister/tasks/__init__.py", line 69, in update_playlist_db
    video_update(videos)
  File "/home/mcapp/lister/lister/tasks/__init__.py", line 55, in video_update
    chord(tasks)(update_complete.s(update_id=update_id, update_type='db', complete=True))
  File "/home/mcapp/.virtualenv/lister/local/lib/python2.7/site-packages/celery/canvas.py", line 464, in __call__
    _chord = self.type
  File "/home/mcapp/.virtualenv/lister/local/lib/python2.7/site-packages/celery/canvas.py", line 461, in type
    return self._type or self.tasks[0].type.app.tasks['celery.chord']
IndexError: list index out of range
This particular version of celery is 3.0.19, and happens when the celery chord feature is used. We don't think there is any error in our application, as 99% of the time our code works correctly, but under heavier loads this error would happen. We are trying to find out if this is an actual bug in our application or a celery bug, any help would be greatly appreciated.
|
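Since the error occurs when a chord is built with an empty header, one workaround on the application side is to short-circuit the empty case before constructing the chord. A generic sketch of that guard (chord_factory stands in for celery.chord here so the guard logic can be shown without a running broker; the function name is my own):

```python
def apply_chord_safely(tasks, callback, chord_factory):
    """Avoid celery's IndexError on empty chord headers: chord accesses
    self.tasks[0], so an empty task list must be handled by the caller."""
    tasks = list(tasks)
    if not tasks:
        # Nothing to fan out; invoke the callback directly with no results.
        return callback([])
    return chord_factory(tasks)(callback)
```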
IPython command not found Terminal OSX. Pip installed
| 65,625,644
| 0
| 22
| 34,234
| 0
|
macos,bash,command-line,terminal,ipython
|
For me the only thing that helped was:
python -m pip install --upgrade pip
Upgrading pip did the trick and all the installations started working properly!
Give it a try.
| 0
| 1
| 0
| 0
|
2013-05-25T02:45:00.000
| 7
| 0
| false
| 16,745,923
| 1
| 0
| 0
| 4
|
Using Python 2.7 installed via homebrew. I then used pip to install IPython. So, IPython seems to be installed under:
/usr/local/lib/python2.7/site-packages/
I think this is true because there is a IPython directory and ipython egg.
However, when I type ipython in the terminal I get:
-bash: ipython: command not found
I do not understand why this ONLY happens with IPython and not with python? Also, how do I fix this? What path should I add in .bashrc? And how should I add?
Currently, my .bashrc reads:
PATH=$PATH:/usr/local/bin/
Thanks!
|
IPython command not found Terminal OSX. Pip installed
| 22,583,681
| 25
| 22
| 34,234
| 0
|
macos,bash,command-line,terminal,ipython
|
I had this issue too, the following worked for me and seems like a clean simple solution:
pip uninstall ipython
pip install ipython
I'm running mavericks and latest pip
| 0
| 1
| 0
| 0
|
2013-05-25T02:45:00.000
| 7
| 1.2
| true
| 16,745,923
| 1
| 0
| 0
| 4
|
Using Python 2.7 installed via homebrew. I then used pip to install IPython. So, IPython seems to be installed under:
/usr/local/lib/python2.7/site-packages/
I think this is true because there is a IPython directory and ipython egg.
However, when I type ipython in the terminal I get:
-bash: ipython: command not found
I do not understand why this ONLY happens with IPython and not with python? Also, how do I fix this? What path should I add in .bashrc? And how should I add?
Currently, my .bashrc reads:
PATH=$PATH:/usr/local/bin/
Thanks!
|
IPython command not found Terminal OSX. Pip installed
| 59,742,054
| 0
| 22
| 34,234
| 0
|
macos,bash,command-line,terminal,ipython
|
After trying a number of solutions like the above without joy, the ipython command launched once I restarted my terminal. Don't forget to restart your terminal after all the fiddling!
P.S. I think brew install ipython did it ... but I can't be sure.
| 0
| 1
| 0
| 0
|
2013-05-25T02:45:00.000
| 7
| 0
| false
| 16,745,923
| 1
| 0
| 0
| 4
|
Using Python 2.7 installed via homebrew. I then used pip to install IPython. So, IPython seems to be installed under:
/usr/local/lib/python2.7/site-packages/
I think this is true because there is a IPython directory and ipython egg.
However, when I type ipython in the terminal I get:
-bash: ipython: command not found
I do not understand why this ONLY happens with IPython and not with python? Also, how do I fix this? What path should I add in .bashrc? And how should I add?
Currently, my .bashrc reads:
PATH=$PATH:/usr/local/bin/
Thanks!
|
IPython command not found Terminal OSX. Pip installed
| 59,297,333
| 2
| 22
| 34,234
| 0
|
macos,bash,command-line,terminal,ipython
|
Using pip3 install ipython worked for me.
Maybe ipython relies on Python 3.
| 0
| 1
| 0
| 0
|
2013-05-25T02:45:00.000
| 7
| 0.057081
| false
| 16,745,923
| 1
| 0
| 0
| 4
|
Using Python 2.7 installed via homebrew. I then used pip to install IPython. So, IPython seems to be installed under:
/usr/local/lib/python2.7/site-packages/
I think this is true because there is a IPython directory and ipython egg.
However, when I type ipython in the terminal I get:
-bash: ipython: command not found
I do not understand why this ONLY happens with IPython and not with python? Also, how do I fix this? What path should I add in .bashrc? And how should I add?
Currently, my .bashrc reads:
PATH=$PATH:/usr/local/bin/
Thanks!
|
Does Apache really "fork" in mod_php/python way for request handling?
| 21,819,195
| 0
| 4
| 459
| 0
|
apache,webserver,cgi,mod-python,mod-php
|
With a modern version of Apache, unless you configure it in prefork mode, it should run threaded (and not fork). mod_python is threadsafe, and doesn't require that each instance of it is forked into its own space.
| 0
| 1
| 0
| 1
|
2013-05-25T07:06:00.000
| 1
| 0
| false
| 16,747,301
| 0
| 0
| 0
| 1
|
I am a dummy in web apps. I have a doubt regaring the functioning of apache web server. My question is mainly centered on "how apache handles each incoming request"
Q: When Apache is running in mod_python/mod_php mode, does a "fork" happen for each incoming request?
If it forks in the mod_php/mod_python way, then where is the advantage over CGI mode, apart from the fact that the forked process in the mod_php way already contains an interpreter instance?
If it doesn't fork each time, how does it actually handle each incoming request in the mod_php/mod_python way? Does it use threads?
PS: Where does FastCGI stands in the above comparison?
|
Which platform as a service/infrastructure as a server provider gives the most backend resources for their free tier?
| 16,754,484
| 1
| 1
| 184
| 0
|
java,python,node.js,paas,iaas
|
Redhat OpenShift - 3 container instances ("gears") that can each run multiple items. Max 40,000 files, 1 GB of storage, 512 MB memory, 250 threads per small gear. Appears to be a hybrid of PaaS and IaaS.
Amazon EC2 - A single Linux micro instance: a 64-bit, 640 MB server with 30 GB block storage, 5 GB "standard" storage, and 100 MB NoSQL storage. Strictly IaaS.
Amazon Elastic Beanstalk - PaaS billed based on the underlying EC2 usage consumed. The free tier has the same resources as the free EC2 tier.
Google App Engine - No backend instance provided for free, only frontend instances that run only for the duration of a web request.
| 0
| 1
| 0
| 0
|
2013-05-25T22:30:00.000
| 1
| 0.197375
| false
| 16,754,483
| 0
| 0
| 0
| 1
|
I realize there is quite a difference between IaaS and PaaS, but there is some overlap. I'm particularly interested in getting the most number of "backend" server instances at the free tier (or for cheap). In particular for testing the scalability of an app I'm writing.
|
: bad interpreter: No such file or directory in python
| 16,758,687
| 13
| 21
| 70,614
| 0
|
python
|
You're probably using the #!python hashbang convention that's inexplicably popular among Windows users. Linux expects a full path there. Use either #!/usr/bin/python or (preferably) #!/usr/bin/env python instead.
| 0
| 1
| 0
| 0
|
2013-05-26T08:07:00.000
| 2
| 1
| false
| 16,757,349
| 1
| 0
| 0
| 1
|
I originally coded in a Python IDE on Windows. Now I have pasted my code into a file on a Linux server, and when I run the script it gives me this error:
bad interpreter: No such file or directory
Please tell me how to resolve this error.
|
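For reference, a minimal script header following the answer's advice; the env form looks the interpreter up on PATH, so it works regardless of where Python is installed:

```python
#!/usr/bin/env python
# The 'env' lookup searches PATH for 'python', so this script runs no
# matter where the interpreter lives (/usr/bin, /usr/local/bin, ...).
import sys

interpreter = sys.executable  # path of the interpreter actually running this script
```

Also make sure the file uses Unix line endings; a Windows carriage return after the shebang produces the same "bad interpreter" error.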
Using fabric with namespaces, is there a way to specify per-file env.shell
| 16,776,049
| 1
| 2
| 181
| 0
|
python,fabric
|
I ended up overriding the @task decorator like this:

from functools import wraps
from fabric.api import settings, task as real_task

def task(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        with settings(shell='/path/to/my/shell'):
            return func(*args, **kwargs)
    return real_task(wrapper)

I can't use alias and other kwargs in this form, but it suits me.
| 0
| 1
| 0
| 0
|
2013-05-27T12:41:00.000
| 1
| 1.2
| true
| 16,773,454
| 0
| 0
| 0
| 1
|
I use a fabric with namespaces to separate commands for dev and production servers
the structure is
fabfile/
__init__.py
dev.py
prod.py
dev.py and prod.py both define different env.shell and one of them overrides another.
Is there a way to use per-file env configuration for fabric?
|
A way to optimize reading from a datastore which updates once a day
| 16,775,062
| 1
| 1
| 53
| 1
|
python,django,google-app-engine
|
Your total amount of data is very small and looks like a dict. Why not save this object as a single entry in the datastore or the blobstore? You can then cache that entry.
| 0
| 1
| 0
| 0
|
2013-05-27T13:08:00.000
| 1
| 1.2
| true
| 16,773,961
| 0
| 0
| 1
| 1
|
I am running my Django site on appengine. In the datastore, there is an entity kind / table X which is only updated once every 24 hours.
X has around 15K entries and each entry is of form ("unique string of length <20", integer).
In some context, a user request involves fetching an average of 200 entries from X, which is quite costly if done individually.
What is an efficient way I can adopt in this situation?
Here are some ways I thought about, but have some doubts in them due to inexperience
Using the Batch query supported by db.get() where a list of keys may be passed as argument and the get() will try to fetch them all in one walk. This will reduce the time quite significantly, but still there will be noticeable overhead and cost. Also, I am using Django models and have no idea about how to relate these two.
Manually copying the whole database into memory (like storing it in a map) after each update job which occurs every 24 hour. This will work really well and also save me lots of datastore reads but I have other doubts. Will it remain persistent across instances? What other factors do I need to be aware of which might interfere? This or something like this seems perfect for my situation.
The above are just what I could come up with in first thought. There must be ways I am unaware/missing.
Thanks.
|
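The "save it as one blob and cache it" suggestion from the answer above can be sketched generically as a per-instance cache with a 24-hour time-to-live. The class below is my own illustration in plain Python; on App Engine the cached value lives for the lifetime of the instance, and a memcache layer could back it for cross-instance sharing:

```python
import time

class DailyCache(object):
    """Cache one expensive-to-build object, rebuilding it only after
    `ttl` seconds (the underlying data changes once every 24 hours)."""
    def __init__(self, loader, ttl=24 * 3600):
        self.loader = loader      # e.g. one datastore/blobstore read
        self.ttl = ttl
        self.value = None
        self.loaded_at = None

    def get(self):
        now = time.time()
        if self.loaded_at is None or now - self.loaded_at > self.ttl:
            self.value = self.loader()
            self.loaded_at = now
        return self.value
```

With ~15K small entries stored as a single dict-like blob, each user request then does 200 in-memory lookups instead of 200 datastore reads.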
"writing a python binding" vs "using command-line directly"
| 16,788,200
| 3
| 3
| 619
| 0
|
python,python-bindings
|
I can hardly imagine a case where one would prefer wrapping a library's command line interface over wrapping the library itself. (Unless there is a library that comes with a neat command line interface while being a total mess internally; but the OP indicates that the same functionality available via the command line is easily accessible in terms of library function calls).
The biggest advantage of writing a Python binding is a clearly defined data interface between the library and Python. Ideally, the library can operate directly on memory managed by Python, without any data copying involved.
To illustrate this, let's assume a library function does something more complicated than printing the current time, i.e., it obtains a significant amount of data as an input, performs some operation, and returns a significant amount of data as an output. If the input data is expected as an input file, Python would need to generate this file first. It must make sure that the OS has finished writing the file before calling the library via the command line (I have seen several C libraries where sleep(1) calls were used as a band-aid for this issue...). And Python must get the output back in some way.
If the command line interface does not rely on files but obtains all arguments on the command line and prints the output on stdout, Python probably needs to convert between binary data and string format, not always with the expected results. It also needs to pipe stdout back and parse it. Not a problem, but getting all this right is a lot of work.
What about error handling? Well, the command line interface will probably handle errors by printing error messages on stderr. So Python needs to capture, parse and process these as well. OTOH, the corresponding library function will almost certainly make a success flag accessible to the calling program. This is much more directly usable for Python.
All of this is obviously affecting performance, which you already mentioned.
As another point, if you are developing the library yourself, you will probably find after some time that the Python workflow has made the whole command line interface obsolete, so you can drop supporting it altogether and save yourself a lot of time.
So I think there is a clear case to be made for the Python bindings. To me, one of the biggest strengths of Python is the ease with which such wrappers can be created and maintained. Unfortunately, there are about 7 or 8 equally easy ways to do this. To get started, I recommend ctypes, since it does not require a compiler and will work with PyPy. For best performance use the native C-Python API, which I also found very easy to learn.
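To illustrate how little code a ctypes binding needs, here is a minimal sketch calling a libc function on a POSIX system. The library lookup and the explicit signature declaration are the parts that generalize to any shared library:

```python
import ctypes
import ctypes.util

# Locate and load the C library; on POSIX, CDLL(None) exposes the
# symbols already linked into the running process as a fallback.
libc_path = ctypes.util.find_library("c")
libc = ctypes.CDLL(libc_path) if libc_path else ctypes.CDLL(None)

# Declare the signature so ctypes converts arguments and the
# result correctly instead of guessing.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # -> 42
```

For your own library you would load its `.so`/`.dll` by path and declare each exported function the same way; no compiler is required, and the same code runs under PyPy.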
| 0
| 1
| 0
| 0
|
2013-05-28T07:27:00.000
| 1
| 1.2
| true
| 16,786,183
| 1
| 0
| 0
| 1
|
I have a question regarding python bindings.
I have a command-line which exposes some functionality and code is re-factored to provide the functionality through a shared library. I wanted to know what the real advantage that I get from "writing a python binding for the shared library" vs "calling the command line directly".
One obvious advantage I think will be performance, the shared library will link to the same process and the functionality can called within the same process. It will avoid spawning a new process through the command line.
Any other advantages I can get from writing a python binding for such a case ?
Thanks.
|
pyparsing not working on windows text file but works on linux text file
| 16,822,978
| 1
| 1
| 147
| 0
|
python,pyparsing
|
Try Suppress("\r\n") instead of Suppress(LineEnd())
| 0
| 1
| 0
| 0
|
2013-05-29T18:03:00.000
| 1
| 0.197375
| false
| 16,820,895
| 1
| 0
| 0
| 1
|
I have a simple pyparsing construct for extracting parts of a log message. It looks like this
log_line = timestamp + task_info + Suppress(LineEnd())
This construct parses a log file generated in Linux very well but doesn't parse a similar file generated in windows. I am pretty sure it is because of the new line representation difference. I was wondering if LineEnd() takes care of that? If it doesn't how do I take care of it?
|
When does Python write a file to disk?
| 17,035,837
| 2
| 4
| 1,521
| 0
|
python,file,unix,file-io,operating-system
|
It is almost certainly not python's fault. If python closes the file, OR exits cleanly (rather than killed by a signal), then the OS will have the new contents for the file. Any subsequent open should return the new contents. There must be something more complicated going on. Here are some thoughts.
What you describe sounds more likely to be a filesystem bug than a Python bug, and a filesystem bug is pretty unlikely.
Filesystem bugs are far more likely if your files actually reside in a remote filesystem. Do they?
Do all the processes use the same file? Do "ls -li" on the file to see its inode number, and see if it ever changes. In your scenario, it should not. Is it possible that something is moving files, or moving directories, or deleting directories and recreating them? Are there symlinks involved?
Are you sure that there is no overlap in the running of your programs? Are any of them run from a shell with "&" at the end (i.e. in the background)? That could easily mean that a second one is started before the first one is finished.
Are there any other programs writing to the same file?
This isn't your question, but if you need atomic changes (so that any program running in parallel only sees either the old version or the new one, never the empty file), the way to achieve it is to write the new content to another file (e.g. "foo.tmp"), then do os.rename("foo.tmp", "foo"). Rename is atomic.
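A hedged sketch of that atomic-rename pattern (function and file names are illustrative, not from the original library). The temp file must live in the same directory as the target, because `os.rename` is only atomic within one filesystem:

```python
import os
import tempfile

def atomic_write(path, data):
    # Write to a temp file next to the target, flush and fsync so the
    # bytes are on disk, then rename into place. Readers always see
    # either the complete old contents or the complete new contents.
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname, prefix=".tmp-")
    try:
        with os.fdopen(fd, "w") as fp:
            fp.write(data)
            fp.flush()
            os.fsync(fp.fileno())
        os.rename(tmp, path)   # atomic on POSIX
    except BaseException:
        os.unlink(tmp)
        raise
```

This also removes the "open in write mode erases the file" window from the suspected race, since the target file is never truncated in place.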
| 0
| 1
| 0
| 0
|
2013-05-29T20:21:00.000
| 1
| 1.2
| true
| 16,823,109
| 1
| 0
| 0
| 1
|
I have a library that interacts with a configuration file. When the library is imported, the initialization code reads the configuration file, possibly updates it, and then writes the updated contents back to the file (even if nothing was changed).
Very occasionally, I encounter a problem where the contents of the configuration file simply disappear. Specifically, this happens when I run many invocations of a short script (using the library), back-to-back, thousands of times. It never occurs in the same directories, which leads me to believe it's a somewhat random problem--specifically a race condition with IO.
This is a pain to debug, since I can never reliably reproduce the problem and it only happens on some systems. I have a suspicion about what might happen, but I wanted to see if my picture of file I/O in Python is correct.
So the question is, when does a Python program actually write file contents to a disk? I thought that the contents would make it to disk by the time that the file closed, but then I can't explain this error. When python closes a file, does it flush the contents to the disk itself, or simply queue it up to the filesystem? Is it possible that file contents can be written to disk after Python terminates? And can I avoid this issue by using fp.flush(); os.fsync(fp.fileno()) (where fp is the file handle)?
If it matters, I'm programming on a Unix system (Mac OS X, specifically). Edit: Also, keep in mind that the processes are not running concurrently.
Appendix: Here is the specific race condition that I suspect:
Process #1 is invoked.
Process #1 opens the configuration file in read mode and closes it when finished.
Process #1 opens the configuration file in write mode, erasing all of its contents. The erasing of the contents is synced to the disk.
Process #1 writes the new contents to the file handle and closes it.
Process #1: Upon closing the file, Python tells the OS to queue writing these contents to disk.
Process #1 closes and exits
Process #2 is invoked
Process #2 opens the configuration file in read mode, but new contents aren't synced yet. Process #2 sees an empty file.
The OS finally finishes writing the contents to disk, after process 2 reads the file
Process #2, thinking the file is empty, sets defaults for the configuration file.
Process #2 writes its version of the configuration file to disk, overwriting the last version.
|
How do I install Python modules on OSX if I don't have admin privs?
| 16,824,528
| 2
| 0
| 709
| 0
|
python,beautifulsoup,ipython
|
Just install virtualenv once, then work on local environments. It's a good practice too.
| 0
| 1
| 0
| 0
|
2013-05-29T21:46:00.000
| 3
| 0.132549
| false
| 16,824,419
| 0
| 0
| 0
| 1
|
I don't have admin privileges on my work computer (OSX) and do some light Python scripting (mostly web scraping). I don't have admin privileges at work and don't really want to learn OSX but I also don't want to lug my Ubuntu laptop around everyday just to write scrapers.
Is there a straightforward way for me to install modules without admin privileges? It seems like I need sudo to run easy_install. I can ask to have things installed, but I'd rather not have to ask every time I want to see if a module does what I need. FWIW, right now I just need BeautifulSoup and csv
|
pydev eclipse, jython scripting , syspath
| 18,384,815
| 0
| 2
| 205
| 0
|
python,eclipse,pydev,jython
|
The version that PyDev uses internally is Jython 2.1, so, you can't add newer libraries to that version unless they're compatible...
If you need to use a different version, you'd need to first update the version used inside PyDev itself (it wasn't updated so far because the current Jython size is too big -- PyDev has currently 7.5 MB and just the newer Jython jar is 10 MB -- with libs it goes to almost 16 MB, so making PyDev have 22 MB just for this upgrade is something I'm trying to avoid... now, I think there's probably too much bloat there in Jython, so, if that can be removed, it's something that may be worth revisiting...).
| 0
| 1
| 0
| 1
|
2013-05-29T22:26:00.000
| 1
| 0
| false
| 16,824,942
| 1
| 0
| 0
| 1
|
PyDev has its own jython interpreter, inside pydev.jython.VERSION
that jython has its own python libraries i.e. pydev.jython.VERSION/LIB/zipfile.py
Now if I write a jython script for pydev-jython-scripting, it will load only its internal Lib pydev.jython.VERSION/LIB/
How do I have this pydev-jython recognize PYTHONPATH, I tried appending to sys.path but there is some python version problem some invalid syntax
My system python installation has all the .py source, my pydev interpreter configuration has python interpreter setup and NOT jython and NOT ironpython
pydev-jython script does not recognize many of regular system python modules, why?
|
Is it possible to return a value from one Python file to another?
| 16,846,327
| 2
| 6
| 1,474
| 0
|
python
|
Not with os.startfile(), no; it provides no way of communicating with the launched process. You could use the subprocess module, though; this will allow you to send data to and receive data from the launched process through standard in/out. Or, since the thing you want to call is another Python script, simply import the other file and call its functions directly, or use execfile().
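A sketch of the subprocess approach: the "return value" travels back over stdout. The inline `-c` script below stands in for a hypothetical `File2.py`; in practice you would run `[sys.executable, "File2.py"]`:

```python
import subprocess
import sys

# Hypothetical File2: a script that prints its result to stdout.
child = [sys.executable, "-c", "print(6 * 7)"]

# File1 captures the child's stdout and parses the value back out.
out = subprocess.check_output(child)
value = int(out.decode().strip())
print(value)  # -> 42
```

If both files are under your control, importing File2 and calling its functions directly is simpler still and avoids the string round-trip entirely.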
| 0
| 1
| 0
| 0
|
2013-05-30T21:36:00.000
| 2
| 0.197375
| false
| 16,846,264
| 1
| 0
| 0
| 1
|
I'm wondering if it is possible to run another file:
os.startfile('File.py')
and have that file return a value to the file that called the other file.
For example, you have File1. Is it possible for File1 to call and run File2 and have File2 return a value to File1?
|
Test if string is valid key prior to memcache.get()
| 16,866,096
| 0
| 1
| 581
| 0
|
python,google-app-engine,python-memcached
|
Any object is a valid key, provided that the object can be serialized using pickle. If pickle.dumps(key) succeeds, then you shouldn't get a BadKeyError.
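A sketch of that check (the helper name is mine, not a GAE API): wrap `pickle.dumps` in a try/except and treat any serialization failure as an invalid key.

```python
import pickle

def is_picklable_key(key):
    # Per the reasoning above: if pickle.dumps succeeds, the object
    # can be used as a cache key without raising BadKeyError.
    try:
        pickle.dumps(key)
        return True
    except Exception:
        return False

print(is_picklable_key("user:123"))    # -> True
print(is_picklable_key(lambda x: x))   # -> False (functions don't pickle)
```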
| 0
| 1
| 0
| 0
|
2013-05-31T14:27:00.000
| 3
| 0
| false
| 16,859,674
| 0
| 0
| 1
| 2
|
Is there a function in Google App Engine to test if a string is valid 'string key' prior to calling memcache.get(key) without using db.get() or db.get_by_key_name() first?
In my case the key is being passed from the user's get request:
obj = memcache.get(self.request.get("obj"))
Somehow I'd like to know if that string is a valid key string without calling the db first, which would defeat the purpose of using memcache.
|
Test if string is valid key prior to memcache.get()
| 16,867,410
| 1
| 1
| 581
| 0
|
python,google-app-engine,python-memcached
|
A db module key sent to a client should pass through str(the_key) which gives you an URL safe encoded key. Your templating environment etc.. will do this for you just by rendering the key into a template.
On passing the key back from a client, you should recreate the key with
key = db.Key(encoded=self.request.get("obj"))
At this point it could fail with something like
BadKeyError: Invalid string key "thebadkeystring"=.
If not you have a valid key
obj = memcache.get(self.request.get("obj")) won't actually raise BadKeyError because at that point you are just working with a string, and you just get None returned or a value.
So at that point all you know is you have a key missing.
However you need to use the memcache.get(self.request.get("obj")) to get the object from memcache, as a db.Key instance is not a valid memcache key.
So you will be constructing a key to validate the key string at this point. Of course if the memcache get fails then you can use the just created key to fetch the object with db.get(key)
| 0
| 1
| 0
| 0
|
2013-05-31T14:27:00.000
| 3
| 0.066568
| false
| 16,859,674
| 0
| 0
| 1
| 2
|
Is there a function in Google App Engine to test if a string is valid 'string key' prior to calling memcache.get(key) without using db.get() or db.get_by_key_name() first?
In my case the key is being passed from the user's get request:
obj = memcache.get(self.request.get("obj"))
Somehow I'd like to know if that string is a valid key string without calling the db first, which would defeat the purpose of using memcache.
|
Enthought Canopy Encoding Error
| 16,913,698
| 1
| 0
| 763
| 0
|
python,unicode,encoding,enthought,canopy
|
Are you running this code from an unsaved buffer? If yes, this is a known issue in Canopy 1.0 and should be fixed in the next update. If no, can you provide a minimal example to reproduce your problem, so this can be fixed? Thanks!
| 0
| 1
| 0
| 0
|
2013-06-01T01:54:00.000
| 1
| 1.2
| true
| 16,868,320
| 1
| 0
| 0
| 1
|
I've been using python from time to time for some small projects and just started using it again after quite a while. I'm using Enthoughts Canopy IDE and get the following error:
UnicodeEncodeError: 'ascii' codec can't encode character u'\xfc' in position 1: ordinal not in range(128)
I know that I have to define the encoding first and I do so in the 2nd Line:
" # -- coding: utf-8 -- "
but when I run the script within Canopy I keep getting the error whenever I enter one of the following letters: ä,ö,ü as user input
When I start my script via console ("python XXX.py" or "ipython XXX.py") it works like a charm .
I'm just a little confused since I thought Canopy uses the ipython interpreter so there shouldn't be any differences whether I start it from console as ipython or via canopy
best regards
|
Virtualenv is installing with wrong version of Python
| 54,860,786
| 2
| 2
| 3,193
| 0
|
python,virtualenv
|
virtualenv --python=python3 mynewenv
| 0
| 1
| 0
| 0
|
2013-06-01T20:30:00.000
| 2
| 0.197375
| false
| 16,876,974
| 1
| 0
| 0
| 1
|
I recently have started learning python and have run into an issue.
When I run python on my mac without virtualenv, the version number is Python 2.7.5. Unfortunately, when I go into my virtualenv, and run Python, the version number is Python 2.6.1.
I tried, creating another virtualenv using:
virtualenv -p /usr/bin/python2.7 newdev
but got: The executable /usr/bin/python2.7 (from --python=/usr/bin/python2.7) does not exist
|
how to clone a repo to a virtualenv on mac Mountain Lion
| 16,885,929
| 1
| 3
| 2,468
| 0
|
python,github,virtualenv,github-for-mac
|
Open terminal. cd into your virtualenv directory, and run git clone from there.
| 0
| 1
| 0
| 0
|
2013-06-02T18:25:00.000
| 2
| 0.099668
| false
| 16,885,902
| 1
| 0
| 0
| 1
|
I could be missing something fundamental here, but I'm struggling to work out what.
I'm using Github for Mac for the first time. I've found a repo i want to look at, so I've logged into GH and installed and configured the mac app.
I've created a virtualenv that I want to work in. added python 2.7.
I've forked the repo on GH. Then I've hit the clone to mac button. this works fine and asks me where i want to put it. the problem is here, the only action is to overwrite the whole directory. which isnt good.
I've checked the help for virtualenv and there's no convert-to-virtualenv option, which would allow me to download the project and then make it a virtualenv.
I'm aware that I can probably get by with a copy and paste operation and two different folders, but this seems silly. Is there an easier way to accomplish this?
|
/usr/local/lib/python2.6/dist-packages/joblib/parallel.py:40: UserWarning: [Errno 30] Read-only file system. joblib will operate in serial mode
| 16,894,057
| 0
| 0
| 776
| 0
|
python,python-module
|
You need to change the permissions for the directory the process wants to write to. Find out what directory joblib wants to put things in and change its permissions or use a different directory with the needed permissions for this. In order to be able to give the permissions and to allow Python to write to the filesystem, it must be mounted in a way that allows writing.
| 0
| 1
| 0
| 0
|
2013-06-03T09:43:00.000
| 2
| 0
| false
| 16,893,882
| 1
| 0
| 0
| 1
|
Anyone knows what this error is and how to fix it?
I've already tried to
chmod -R 777 /usr/local/python2.6/dist-packages/joblib but with no luck.
|
How to shutdown all dynamic instances in Google App Engine without re-deploying the app?
| 23,557,544
| 1
| 1
| 1,179
| 0
|
java,python,google-app-engine,load-testing
|
We had a similar problem - I found that disabling the app in Application Settings and then re-enabling it terminated all 88 instances we had running, without any other adverse effects.
| 0
| 1
| 0
| 0
|
2013-06-03T20:29:00.000
| 2
| 0.099668
| false
| 16,905,303
| 0
| 0
| 1
| 2
|
We are running multiple load tests every day against one of our GAE apps. We use the following pattern:
Start a load test and let it run for a few hours.
Look at graphs.
Optionally deploy a new version of our app with performance improvements.
Go back to 1.
Each load test creates a couple hundred front end instances. We would like to terminate those between individual load tests even when we are not deploying a new version of our app.
Is there a way to terminate all dynamic instances? Right now we either have to deploy a new version or terminate all instances by hand.
|
How to shutdown all dynamic instances in Google App Engine without re-deploying the app?
| 16,909,149
| 0
| 1
| 1,179
| 0
|
java,python,google-app-engine,load-testing
|
Maybe have them all periodically probe the datastore (or memcache) for a kill value?
| 0
| 1
| 0
| 0
|
2013-06-03T20:29:00.000
| 2
| 0
| false
| 16,905,303
| 0
| 0
| 1
| 2
|
We are running multiple load tests every day against one of our GAE apps. We use the following pattern:
Start a load test and let it run for a few hours.
Look at graphs.
Optionally deploy a new version of our app with performance improvements.
Go back to 1.
Each load test creates a couple hundred front end instances. We would like to terminate those between individual load tests even when we are not deploying a new version of our app.
Is there a way to terminate all dynamic instances? Right now we either have to deploy a new version or terminate all instances by hand.
|
Cannot start Canopy's IPython from Windows command shell
| 28,218,590
| 0
| 2
| 1,361
| 0
|
ipython,enthought
|
If you want to launch the web interface, the command
ipython notebook works in either the Windows shell or the Canopy shell.
| 0
| 1
| 0
| 0
|
2013-06-05T19:24:00.000
| 1
| 0
| false
| 16,948,154
| 0
| 0
| 0
| 1
|
I have been using EPD for some time and recently started using Canopy. So now I have both EPD and Canopy installed on my machine, which runs Windows 7 Pro x64. But I just realized I cannot launch Canopy's IPython interactive session (located in the directory C:\Users\User\AppData\Local\Enthought\Canopy\User\Scripts) in a Windows command prompt. I already added this directory to my Path before the EPD's python directory.
I checked out those files in the directory .../Canopy/User/Scripts/, I believe that problem is not with the file "ipython-script.py" there, but with the file "ipython.exe", which is what will be run when I simply type "ipython" in a Windows command shell (I set the path already).
In a Windows command shell, if I changed to the directory .../Canopy/User/Scripts/ and type up "python ipython-script.py", then I can correctly start the IPython session in the command shell. So, it looks like that "ipython.exe" does not run the script "ipython-script.py"...
Has anyone run into this same problem? Is there an easy fix?
P.S. I already had the latest Canopy (version 1.0.1.1160) installed.
Thanks for any help.
|
Reading out version of cc1plus (SCons script-based)
| 16,966,985
| 1
| 0
| 229
| 0
|
python,gcc,scons
|
You might want to give it something to compile. Maybe by redirecting input from the null device (not sure of the exact syntax for Windows). Though if so, that looks like a moderately strange compiler.
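For the SCons tool, the same idea can be sketched in Python with `subprocess`: redirect stdin from the null device so the compiler can't sit waiting for input, and add a timeout as a safety net. The cc1plus path is site-specific, so the Python interpreter itself stands in for the demonstration:

```python
import subprocess
import sys

def tool_version(cmd, timeout=10):
    # Run the tool with stdin redirected from the null device so it
    # cannot block on input, merge stderr into stdout (some tools
    # print version info there), and kill it if it still hangs.
    result = subprocess.run(
        cmd,
        stdin=subprocess.DEVNULL,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        timeout=timeout,
    )
    return result.stdout.decode(errors="replace").strip()

# In the real tool this would be something like
# tool_version([r"C:\DevTools\...\cc1plus", "-version"]).
print(tool_version([sys.executable, "--version"]))
```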
| 0
| 1
| 0
| 0
|
2013-06-06T13:50:00.000
| 1
| 1.2
| true
| 16,963,865
| 0
| 0
| 0
| 1
|
Actually I'm trying to read out the version of my cc1plus executable in windows. This is a rather simple job:
cc1plus -version
I need this for a scons script (Tool), to integrate an ARM cross compiler. Because of that I directly call cc1plus instead of using some compiler driver. There is no useful compiler driver available.
Back to my problem: when I call "cc1plus -version" from cmd I get a version string back, but cc1plus doesn't terminate; it keeps running. I have to kill cc1plus with CTRL+C. For my script this is a problem.
In the following a snippet of my cmd:
C:\DevTools\CrossWorks_for_ARM_2.3\bin>cc1plus -version
GNU C++ (GCC) version 4.7.3 20121207 (release) [ARM/embedded-4_7-branch revision 194305] (arm-unknown-eabi)
compiled by GNU C version 3.4.4 (mingw special), GMP version 4.3.2, MPFR version 2.4.2, MPC version 0.8.1
GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
^C
C:\DevTools\CrossWorks_for_ARM_2.3\bin>
Is there any trick to terminate cc1plus after retrieving the version? For me it is rather incomprehensible why cc1plus isn't terminating.
|
automated build of python eggs
| 16,966,255
| 0
| 0
| 97
| 0
|
python,setuptools,distutils,setup.py,distribute
|
I would use subprocess. I believe setup.py command line arguments should be your interface.
Check setup.py clean --all
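A sketch of such a driver script (the directory layout and cleanup rules follow the question; this is not tested against any particular package):

```python
import os
import shutil
import subprocess
import sys

def find_setup_dirs(root):
    # Immediate subdirectories of `root` that contain a setup.py.
    return sorted(
        os.path.join(root, d)
        for d in os.listdir(root)
        if os.path.isfile(os.path.join(root, d, "setup.py"))
    )

def build_egg(pkg_dir, dist_dir):
    # Shelling out to setup.py is the supported interface; afterwards
    # remove the build/ and *.egg-info leftovers.
    subprocess.check_call(
        [sys.executable, "setup.py", "bdist_egg", "--dist-dir", dist_dir],
        cwd=pkg_dir,
    )
    for name in os.listdir(pkg_dir):
        if name == "build" or name.endswith(".egg-info"):
            shutil.rmtree(os.path.join(pkg_dir, name))

if __name__ == "__main__":
    for pkg in find_setup_dirs("."):
        build_egg(pkg, os.path.abspath("somedir"))
```

Using `sys.executable` ensures the eggs are built with the same interpreter that runs the driver.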
| 0
| 1
| 0
| 1
|
2013-06-06T15:26:00.000
| 2
| 0
| false
| 16,966,095
| 1
| 0
| 0
| 2
|
I have a directory containing N subdirectories each of which contains setup.py file. I want to write a python script that iterates through all subdirectories, issues python setup.py bdist_egg --dist-dir=somedir, and finally removes build and *.egg-info from each subdirectory and I have two questions:
Can I invoke bdist_egg without using os.system? Some python interface would be nicer.
Can I tell bdist_egg not to generate build and *.egg-info or is there any complementary command for setup.py that cleans this for me?
|
automated build of python eggs
| 17,346,341
| 0
| 0
| 97
| 0
|
python,setuptools,distutils,setup.py,distribute
|
It turned out that Fabric is the right way!
| 0
| 1
| 0
| 1
|
2013-06-06T15:26:00.000
| 2
| 1.2
| true
| 16,966,095
| 1
| 0
| 0
| 2
|
I have a directory containing N subdirectories each of which contains setup.py file. I want to write a python script that iterates through all subdirectories, issues python setup.py bdist_egg --dist-dir=somedir, and finally removes build and *.egg-info from each subdirectory and I have two questions:
Can I invoke bdist_egg without using os.system? Some python interface would be nicer.
Can I tell bdist_egg not to generate build and *.egg-info or is there any complementary command for setup.py that cleans this for me?
|
Configuring MySQL with python on OS X lion
| 16,985,650
| 2
| 1
| 141
| 1
|
python,mysql,macos
|
You probably need Xcode's Command Line Tools.
Download the lastest version of Xcode, then go to "Preferences", select "Download" tab, then install Command Line Tools.
| 0
| 1
| 0
| 0
|
2013-06-07T13:41:00.000
| 1
| 1.2
| true
| 16,985,604
| 0
| 0
| 0
| 1
|
MySQL is installed at /usr/local/mysql
In site.cfg the path for mysql_config is /usr/local/mysql/bin/mysql_config
but when i try to build in the terminal im getting this error:
hammads-imac-2:MySQL-python-1.2.4b4 syedhammad$ sudo python setup.py build
running build
running build_py
copying MySQLdb/release.py -> build/lib.macosx-10.8-intel-2.7/MySQLdb
running build_ext
building '_mysql' extension
clang -fno-strict-aliasing -fno-common -dynamic -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -pipe -Dversion_info=(1,2,4,'beta',4) -D_version_=1.2.4b4 -I/usr/local/mysql/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _mysql.c -o build/temp.macosx-10.8-intel-2.7/_mysql.o -Wno-null-conversion -Os -g -fno-strict-aliasing -arch x86_64
unable to execute clang: No such file or directory
error: command 'clang' failed with exit status 1
Help Please
|
Twistedmatrix, rotate log on a weekly basis and customizing name of the log
| 17,023,178
| 2
| 1
| 113
| 0
|
python,logging,twisted
|
You can definitely do this with Twisted's logging system. You're on the right track by looking at DailyLogFile.
However, consider that the best solution might involve idiomatically integrating with the target deployment platform. If the convention on the platform is for applications to manage their own log files, then I'd say you're on the right track.
If, instead, the convention is for applications to run under a manager like launchd, then you may want to consider that approach instead. If all deployed software follows the same local conventions, then the system admin has an easier time managing everything correctly.
| 0
| 1
| 0
| 0
|
2013-06-10T10:55:00.000
| 1
| 1.2
| true
| 17,022,237
| 0
| 0
| 0
| 1
|
what should be the better way to use Python twistedmatrix log file, and customize it so that it can be :
- rotating on a weekly basis (sunday)
- with a custom naming convention (replace the current date _underscore glued that can be seen in the DailyLogFile with something like myfile.yyyymmdd.log or so)
shoud it be by writing my own/subclassing in the same way as class DailyLogFile(BaseLogFile): ?
i have seen that some consider logrotate from linux command, but i wanted to go with a python twistedmatrix solution. (but maybe are there some trouble that i dot have guessed ?)
best regards
|
Implementing UDP traceroute in Python without root
| 17,045,881
| 0
| 2
| 1,380
| 0
|
python,sockets,permissions,udp,traceroute
|
The conclusion I've come to is that I'm restricted to parsing the output of the traceroute using subprocess. traceroute is able to overcome the root-requirement by using setuid for portions of the code effectively allowing that portion of the code to run as root. Since I cannot establish those rights without root privileges I'm forced to rely on the existence of traceroute since that is the more probable of the two situations.
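If you do end up parsing traceroute's output, the parsing half can be kept as a pure function that is easy to test; the regex below is a sketch, since real traceroute formats vary slightly by platform. In practice you would feed it `subprocess.check_output(["traceroute", host]).decode()`:

```python
import re

def parse_hops(traceroute_output):
    # Pull (hop number, IP address) pairs out of traceroute output
    # lines such as " 2  isp-router (10.0.0.1)  8.2 ms ...".
    hops = []
    for line in traceroute_output.splitlines():
        m = re.match(r"\s*(\d+)\s+\S+\s+\((\d+\.\d+\.\d+\.\d+)\)", line)
        if m:
            hops.append((int(m.group(1)), m.group(2)))
    return hops

sample = """traceroute to example.com (93.184.216.34), 30 hops max
 1  gateway (192.168.1.1)  0.5 ms  0.4 ms  0.4 ms
 2  isp-router (10.0.0.1)  8.2 ms  8.1 ms  8.0 ms
"""
print(parse_hops(sample))  # -> [(1, '192.168.1.1'), (2, '10.0.0.1')]
```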
| 0
| 1
| 1
| 0
|
2013-06-10T15:54:00.000
| 1
| 1.2
| true
| 17,027,970
| 0
| 0
| 0
| 1
|
I'm trying to implement a UDP traceroute solution in Python 2.6, but I'm having trouble understanding why I need root privileges to perform the same-ish action as the traceroute utility that comes with my operating system.
The environment that this code will run in will very doubtfully have root privileges, so is it more likely that I will have to forego a python implementation and write something to parse the output of the OS traceroute in UDP mode? Or is there something I'm missing about opening a socket configured like self.rx = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_UDP). It seems that socket.SOCK_RAW is inaccessible without root privileges which is effectively preventing me from consuming the data I need to implement this in python.
|
Ipython - Notebook error: Tornado.application : "Module" object has no attribute 'XREQ'
| 17,628,850
| 1
| 1
| 494
| 0
|
ipython-notebook
|
For those who end up on this page, here's the solution. This is happening because your OS package manager (in my case Ubuntu 12.04) lags behind PyPI for Python packages - but not for core libraries (like zeromq).
To solve this, my recommended solution is to install python-pandas using your package manager, but also install a system-wide "pip", and then run "sudo pip install --upgrade ipython pandas".
This should get everything back in sync.
| 0
| 1
| 0
| 0
|
2013-06-10T18:41:00.000
| 1
| 0.197375
| false
| 17,030,589
| 1
| 0
| 0
| 1
|
I am using python 2.6, ipython 0.12.1, tornado 3.02, pyzmq 13.1 , I am getting this error when I start ipython notebook.
"Websocket connection cannot be made"
In the ipython console window I get torado.application error , in line 183 in create_shell_stream
shell_stream = self.create_connected_stream(ip.....,zmq.XREQ)
error is "module" object has no attribute 'XREQ'
Do you know what's wrong? and how can I fix this error?
I installed ipython, tornado and pyzmq separately, not via easy_install or pip.
|
Set break points in Tornado app
| 20,555,024
| 2
| 3
| 1,963
| 0
|
debugging,python-2.7,tornado
|
If you are running your app using foreman you would set your environment variable in the .env file in the root project folder.
Setting the below env variable in my .env file did the trick for me.
PYTHONUNBUFFERED=true
Now I can set code breakpoints in my app, and also print output to server logs while running the app using foreman.
| 0
| 1
| 0
| 0
|
2013-06-10T18:46:00.000
| 2
| 1.2
| true
| 17,030,677
| 0
| 0
| 1
| 1
|
How could I set a break point in my tornado app?
I tried pdb, but Tornado app seams to be ignoring my pdb.set_trace() command in my app.
|
How to implement unsubscribe usecase for website
| 17,033,151
| 1
| 0
| 138
| 0
|
google-app-engine,email,python-2.7,webapp2,unsubscribe
|
Each new class implies a new query, which adds to the total cost. Pack as much information that is practical into the User class. A simple boolean in the User class should work for active/inactive or subscribe/unsubscribe. Your app needs to accept emails to receive the Unsubscribe request and set the associated boolean to False.
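For the unsubscribe link itself, one common pattern (plain stdlib, not a GAE API) is to embed an HMAC-signed token alongside the address, so a link can only unsubscribe the address it was sent to. The secret name below is a placeholder:

```python
import hashlib
import hmac

SECRET = b"replace-with-a-real-app-secret"   # hypothetical app secret

def unsubscribe_token(email):
    # Signed token to embed in the mail as e.g.
    # /unsubscribe?email=...&token=...; forging it for another
    # address requires knowing SECRET.
    return hmac.new(SECRET, email.encode(), hashlib.sha256).hexdigest()

def verify_unsubscribe(email, token):
    # Constant-time comparison avoids timing attacks on the token.
    return hmac.compare_digest(unsubscribe_token(email), token)

tok = unsubscribe_token("alice@example.com")
print(verify_unsubscribe("alice@example.com", tok))   # -> True
print(verify_unsubscribe("bob@example.com", tok))     # -> False
```

The handler behind that URL then just flips the boolean on the User entity, as suggested above.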
| 0
| 1
| 0
| 0
|
2013-06-10T19:12:00.000
| 1
| 1.2
| true
| 17,031,075
| 0
| 0
| 1
| 1
|
I'm sending automated emails and hence I should deliver an unsubscribe function. I have a User entity that is not used much, only when a user registers and the emails can be send to users who are not registered as Users. So when I send an email and I must include an unsubscribe link, should I keep a whole separate entity / class for class Unsubscriptions or include them as a variable in the User class whether or not a user is registered to receive emails?
Did you use any method for unsubscribe that you can recommend? Are there any frameworks for unsubscriptions? GAE that I'm using has a very primitive framework for sending and receiving emails and I understand that Amazon has a much more developed API for managing large email lists, but I suppose I can still do it all in GAE without Amazon though that would take longer time so I'm considering managing large email lists from Amazon. I have > 10 000 registered users that I never emailed and I'd like to email them a reminder that they are welcome to use my application and that they can unsubscribe from future mailings.
|
celery beat HA - using pacemaker?
| 18,737,345
| 1
| 2
| 1,055
| 0
|
python,celery,pacemaker
|
The short answer is "yes." Pacemaker will do what you want, here.
The longer answer is that your architecture is tricky due to the requirement to restart in the middle of a sequence.
You have two solutions available here. The first is to use some sort of database (or a DRBD file system) to record the fact that 25 of the 50 calls have been completed. The problem with this isn't the 24 completed calls, or the 25 yet-to-be-completed, it's the one that the system was doing, when it crashed. Call #25, say. If C25 wasn't yet started then you're OK. The slave will fire up under Pacemaker control, the DRBD file system will fail over, and the new master will execute #25 through #50. What happens though if #25 was called but the old master hadn't yet marked it as such?
You can architect it so that it marks the call as complete before it actually executes it, in which case, C25 won't get called on this particular occasion or you can mark it as complete after the call in which case C25 will get called twice.
Ideally, you would make the calls idempotent. This is your second option. In that case, it doesn't matter if C1 -> C25 get called again because there's no repeat effect, and C26 -> C50 only get called a single time. I don't know enough about your architecture to say which would work, but hopefully this helps.
Pacemaker will certainly handle failing over. Add DRBD and you can save state between the two systems. However, you will need to address the partial-call issue yourself.
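The restart-in-the-middle behaviour can be sketched in a few lines. Here the persisted progress set stands in for the DRBD-backed file or database row described above; the names are illustrative, and the calls must be idempotent since the one in flight at crash time may repeat on failover.

```python
# Sketch of resuming a half-finished batch after failover. The 'state'
# dict stands in for shared persistent state (DRBD file or database row);
# it is an assumption for illustration, not part of Pacemaker itself.

def run_batch(state, calls):
    """Run every call whose index is not yet recorded as complete."""
    for i, call in enumerate(calls):
        if i in state["done"]:
            continue               # finished before the failover
        call(i)                    # may repeat once if we crashed mid-call
        state["done"].add(i)       # marked complete *after* the call

executed = []
state = {"done": set(range(25))}   # pretend the old master finished calls 0..24
run_batch(state, [executed.append] * 50)
print(executed[0], executed[-1], len(executed))  # 25 49 25
```

Marking completion after the call (as above) risks one duplicate execution; marking before the call risks one skipped execution. Idempotent calls make the first choice safe.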
| 0
| 1
| 0
| 0
|
2013-06-10T20:55:00.000
| 1
| 0.197375
| false
| 17,032,676
| 0
| 0
| 0
| 1
|
As far as I know about celery, celery beat is a scheduler that is considered a SPOF (single point of failure). That means if the service crashes, nothing will be scheduled or run.
My case is that, I will need a HA set up with two schedulers: master/slave, master is making some calls periodically(let's say every 30 mins) while slave can be idle.
When master crashes, the slave needs to become the master and pick up the left over from the dead master, and carry on the periodic tasks. (leader election)
The requirements here are:
the task is scheduled every 30mins (this can be achieved by celery beat)
the task is not atomic, it's not just a call every 30 mins which either fails or succeeds. Let's say, every 30 mins, the task makes 50 different calls. If master finished 25 and crashed, the slave is expected to come up and finish the remaining 25, instead of going through all 50 calls again.
when the dead master is rebooted from failure, it needs to realize there is a master running already. By all means, it doesn't need to come up as master and just needs to stay idle til the running master crashes again.
Is pacemaker a right tool to achieve this combined with celery?
|
How to track the functions used by a python command?
| 17,218,519
| 0
| 2
| 135
| 0
|
python,cloud,openstack,openstack-nova
|
Most of the Python clients for OpenStack have a not-so-well-documented --debug flag; it will show the API requests as they occur, in verbose detail that can include sensitive data, so treat the output as unsafe to share.
| 0
| 1
| 0
| 0
|
2013-06-11T06:16:00.000
| 1
| 1.2
| true
| 17,037,621
| 0
| 0
| 0
| 1
|
I want to trace the functions used by a particular command, specifically for OpenStack. Now, I have a command, let's say 'nova image-list', which shows the images available in the repository. I want to know which functions this command is calling.
I tried with strace, but the most I could get was the files that the command opens (and it's a lot of them!). I also tried Python's trace module, but when I try
tracer.run('nova image-list')
it gives a syntax error. Now, is there a tool/mechanism that can help me get the flow of this command?
|
Alternative IDE supporting debugging for Google App Engine in Python (Eclipse + PyDev no debug support on SDK 1.7.6+)
| 18,138,699
| 0
| 2
| 927
| 0
|
python,google-app-engine,debugging,ide,breakpoints
|
The latest version of PyDev (2.8.1) supports GAE debugging. However, "Edit and Continue Debugging or Interactive Debugging" feature seems to have stopped working.
| 0
| 1
| 0
| 0
|
2013-06-11T09:01:00.000
| 2
| 0
| false
| 17,040,209
| 0
| 0
| 0
| 1
|
I'm developing on GAE-Python 2.7 using Eclipse+PyDev as IDE. Since GAE SDK 1.7.6 (March 2013), where Google "broke" support for breakpoints*, I've been using the old dev server to continue debugging the application I'm working on.
However, Google will drop support of the old dev server as of July 2013 and, since I do not expect a prompt solution for this on PyDev (I've seen no activity so far about this), I would like to look for an alternative IDE to still being able to do debugging.
I know that one of the possible options is to go for PyCharm (an initial license of 89€+VAT, and 59€+VAT each year to continue receiving upgrades), but I would like to know how other people are (or will be) addressing this problem and what the current alternatives to PyCharm are.
*I would like to clarify the sentence "Google broke support for breakpoints": In SDK 1.7.6+, Google started using stdin/stdout in the new dev server for doing IPC and this leaves no chances to even do debugging with pdb. Google claims that they have created the hooks for tool vendors to support debugging (as PyCharm did) but, in my opinion, they "broke" debugging by forcing people to move away from the IDE they were initially recommending due to an architectural decision (I'm not an expert, but they could have used the native IPC mechanisms included in Python instead of using stdin/stdout).
EDIT:
I forgot to mention that I'm running Eclipse+Pydev for MacOSX, so please, also mention your OS compatibility in your alternatives/solutions.
|
Why am I getting IOError: [Errno 13] Permission denied?
| 52,801,091
| 0
| 6
| 27,894
| 0
|
python,python-2.7,permissions,permission-denied,ioerror
|
Although your code seems correct, I think it's better to assign an absolute path. If you develop on your local machine and the app runs on another server, there are probably some differences, such as which user calls the process. Logs are conventionally written to /var/log/app_name.
| 0
| 1
| 0
| 0
|
2013-06-11T12:13:00.000
| 3
| 0
| false
| 17,043,814
| 0
| 0
| 0
| 2
|
I am creating a log file for the code but I am getting the following error:
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] import mainLCF
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] File "/home/ai/Desktop/home/ubuntu/LCF/GA-LCF/mainLCF.py", line 10, in
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] logging.basicConfig(filename='genetic.log',level=logging.DEBUG,format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] File "/usr/lib/python2.7/logging/__init__.py", line 1528, in basicConfig
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] hdlr = FileHandler(filename, mode)
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] File "/usr/lib/python2.7/logging/__init__.py", line 901, in __init__
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] StreamHandler.__init__(self, self._open())
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] File "/usr/lib/python2.7/logging/__init__.py", line 924, in _open
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] stream = open(self.baseFilename, self.mode)
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] IOError: [Errno 13] Permission denied: '/genetic.log'
I have checked the permissions on the particular folder where I want to create the log, but I'm still getting the error.
My code is (the file name is mainLCF.py):
import logging
import sys
logging.basicConfig(filename='genetic.log',level=logging.DEBUG,format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')
logging.debug("starting of Genetic Algorithm")
sys.path.append("/home/ai/Desktop/home/ubuntu/LCF/ws_code")
import blackboard
from pyevolve import *
def eval_func(chromosome):
some function here
My system's file structure is :
/
  home
    ai
      Desktop
        home
          ubuntu
            LCF
              ws_code
                blackboard.py
              GA-LCF
                main-LCF.py
I am calling mainLCF.py from another script, lcf.py, which is in ws_code.
|
Why am I getting IOError: [Errno 13] Permission denied?
| 17,044,205
| 0
| 6
| 27,894
| 0
|
python,python-2.7,permissions,permission-denied,ioerror
|
Looks like logging tried to open the logfile as /genetic.log. If you pass filename as a keyword argument to logging.basicConfig it creates a FileHandler which passes it to os.path.abspath which expands the filename to an absolute path based on your current working dir. So you're either in your root dir or your code changes your current working dir.
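A quick way to see this expansion in action (a minimal sketch; the os.chdir simulates whatever working directory the process was actually started from):

```python
import os

# logging.basicConfig(filename=...) hands the name to os.path.abspath,
# which resolves a relative filename against the current working directory.
os.chdir("/")  # simulate a process launched with cwd set to the root
resolved = os.path.abspath("genetic.log")
print(resolved)  # /genetic.log -- exactly the path in the traceback
```

Passing an absolute path to basicConfig sidesteps the dependency on whoever launched the process.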
| 0
| 1
| 0
| 0
|
2013-06-11T12:13:00.000
| 3
| 0
| false
| 17,043,814
| 0
| 0
| 0
| 2
|
I am creating a log file for the code but I am getting the following error:
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] import mainLCF
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] File "/home/ai/Desktop/home/ubuntu/LCF/GA-LCF/mainLCF.py", line 10, in
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] logging.basicConfig(filename='genetic.log',level=logging.DEBUG,format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] File "/usr/lib/python2.7/logging/__init__.py", line 1528, in basicConfig
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] hdlr = FileHandler(filename, mode)
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] File "/usr/lib/python2.7/logging/__init__.py", line 901, in __init__
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] StreamHandler.__init__(self, self._open())
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] File "/usr/lib/python2.7/logging/__init__.py", line 924, in _open
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] stream = open(self.baseFilename, self.mode)
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] IOError: [Errno 13] Permission denied: '/genetic.log'
I have checked the permissions on the particular folder where I want to create the log, but I'm still getting the error.
My code is (the file name is mainLCF.py):
import logging
import sys
logging.basicConfig(filename='genetic.log',level=logging.DEBUG,format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')
logging.debug("starting of Genetic Algorithm")
sys.path.append("/home/ai/Desktop/home/ubuntu/LCF/ws_code")
import blackboard
from pyevolve import *
def eval_func(chromosome):
some function here
My system's file structure is :
/
  home
    ai
      Desktop
        home
          ubuntu
            LCF
              ws_code
                blackboard.py
              GA-LCF
                main-LCF.py
I am calling mainLCF.py from another script, lcf.py, which is in ws_code.
|
Install package on multiple python versions using easy_install
| 17,045,827
| 0
| 0
| 103
| 0
|
python,easy-install
|
It's hard to say without knowing which operating system you use. For example on OSX, easy_install has its own per-version commands; I just type easy_install[tab][tab] to get all available versions of easy_install.
On OSX, Debian and Red Hat I've got these:
easy_install
easy_install-2.5
easy_install-2.6
easy_install-2.7
Each Python version has its own copy of the installer. For example, for pip there are these packages on OSX:
py27-pip
py24-pip
py31-pip
easy_install is tied to the Python installation it came with, so there is one per Python version, and the plain easy_install belongs to whichever Python version is set as the default in your environment.
| 0
| 1
| 0
| 0
|
2013-06-11T13:24:00.000
| 2
| 0
| false
| 17,045,229
| 1
| 0
| 0
| 1
|
I'm trying to install a python package using easy_install. There are several python versions installed.
This causes the package to be installed on python2.7, whereas I want it to be installed on python2.4.
Suggestions?
Thanks
Edit:
I already have tried easy_install-2.4. I get -bash: easy_install-2.4: command not found
|
How to Detect File Rename / Move in Windows
| 17,046,562
| 3
| 2
| 1,402
| 0
|
java,python,file,rename,move
|
If you use Java 7, you can simply use WatchService and WatchKey. This is an observer that watches a directory; each time something is changed, created or deleted, you can react with your own action/file handling.
| 0
| 1
| 0
| 0
|
2013-06-11T14:21:00.000
| 3
| 1.2
| true
| 17,046,461
| 0
| 0
| 0
| 1
|
I'm trying to detect when a file is being moved or renamed in windows and I want to then use that change to update a database.
When I say file move: I mean moving from one directory to another from ".../A/foo.txt" to ".../B/foo.txt".
When I say file rename: I mean renaming but staying in the same directory ".../A/foo.txt" to ".../A/bar.txt"
I know that linux and most people treat them as the same thing, and for my purposes they are the same thing. I just want to know the actual file path after and be able to match it to the original file path even in circumstances where there is a batch move.
I am using python for the parent program, but I am willing to use any coding language though it preferably is Java/Python/some form of C.
|
Command history in interpreters in emacs
| 17,047,535
| 0
| 12
| 1,865
| 0
|
python,emacs,command,interpreter
|
AFAICS the keys are the same as in M-x shell: M-p and M-n cycle through the input history in any comint-derived buffer. See the In/Out menu for the available keys/commands.
| 0
| 1
| 0
| 0
|
2013-06-11T14:41:00.000
| 2
| 0
| false
| 17,046,929
| 0
| 0
| 0
| 1
|
Inside emacs I am running interpreters for several different languages (python, R, lisp, ...). When I run the interpreters through the terminal in most cases I can use the up arrow to see the last command or line of code that I entered. I no longer have this functionality when I am running the interpreters in emacs. How can I achieve this functionality?
How can I access the command history from the interpreter inside emacs?
Can I do this generally for language X?
At the moment I need to use python, so if anyone knows how to do this specifically with the python interpreter in emacs please let me know!
|
can't get stderr value from a subprocess
| 17,054,413
| 0
| 0
| 221
| 0
|
python,subprocess,pipe
|
It sounds like you might be confusing stderr and the process's return code (available in proc.returncode after you've called proc.communicate()). stderr is the second output stream available to the process. It's generally used for printing error messages that shouldn't be mixed with the process's normal ("standard") output, but there's no rule that says it MUST be used for that purpose, or indeed that it MUST be used at all. If you pass an invalid command to the cmd argument of Popen(), stderr will never be used, since no command actually gets run. If you're trying to get the error code (a numeric value) from the process, then proc.returncode is what you want.
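A small demonstration of the two channels (a sketch that spawns a Python child so it runs anywhere):

```python
import subprocess
import sys

# A child that writes to stderr and exits non-zero: the error *text* and
# the error *code* arrive through two different channels.
proc = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stderr.write('boom\\n'); sys.exit(3)"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = proc.communicate()

print(stderr)           # b'boom\n' -- the error text written by the child
print(proc.returncode)  # 3 -- the numeric exit status
```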
| 0
| 1
| 0
| 0
|
2013-06-11T21:38:00.000
| 2
| 0
| false
| 17,054,291
| 0
| 0
| 0
| 1
|
I have code
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = proc.communicate()
I tried assigning invalid commands to cmd, but stderr is always null.
An invalid command like 'ls fds' prints 'ls: cannot access fds: No such file or directory' in a terminal,
but that message appears in neither stdout nor stderr.
|
Why won't PySys_SetPath() work?
| 44,001,898
| 0
| 1
| 3,054
| 0
|
c++,python,c,api
|
I had the same problem, but when I fixed all the \ to / and added a . at the beginning of the path, it worked, i.e. the path should look something like PySys_SetPath("./Python/") or PySys_SetPath("C:/full/path/Python/").
| 0
| 1
| 0
| 1
|
2013-06-11T23:25:00.000
| 2
| 0
| false
| 17,055,472
| 0
| 0
| 0
| 1
|
I'm using the Python C API, and numerous times now I've tried using PySys_SetPath() to redirect the interpreter to a path where I've stored all of my scripts. Yet, every time I try it, I get the following error:
Unhandled exception at 0x1e028482 in app.exe: 0xC0000005: Access violation reading location 0x00000004.
I use it in the following syntax: PySys_SetPath("/Python/"). Is that incorrect? Why does it keep crashing? Thanks in advance.
|
is it possible to unzip a .apk file(or generally any zipped file) into memory instead of writing it to fs
| 64,447,721
| 0
| 2
| 14,330
| 0
|
android,python,unzip
|
A late entry...but all I had to do was rename the file to a .ZIP extension.
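The in-memory part the question asks about works with the standard library alone; a minimal sketch (the fake payload stands in for real .apk bytes, since an .apk is an ordinary zip archive):

```python
import io
import zipfile

# Build a tiny archive in memory to stand in for a real .apk, then read
# it back entirely from bytes -- no extraction to the filesystem needed.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("classes.dex", b"fake dex payload")
apk_bytes = buf.getvalue()   # normally: open("app.apk", "rb").read()

with zipfile.ZipFile(io.BytesIO(apk_bytes)) as zf:
    names = zf.namelist()
    data = zf.read("classes.dex")
print(names, data)  # ['classes.dex'] b'fake dex payload'
```

This reads each member into memory on demand, so the only filesystem IO is the single read of the .apk itself.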
| 1
| 1
| 0
| 0
|
2013-06-11T23:28:00.000
| 3
| 0
| false
| 17,055,496
| 0
| 0
| 0
| 1
|
I am doing research with mobile apps and need to analyze their code after unzipping the .apk file. However, the process of unzipping naturally involves lots of IO, which doesn't scale. I am wondering whether it's possible to hold the unzipped data in memory, with a few variables representing it, thus saving the trouble of writing to the filesystem. I have thousands of apps to analyze, so being able to do something like this would significantly speed up my process. Can anyone suggest a way out? I am using Python.
Thanks in advance
|
How to use Jenkins Environment variables in python script
| 64,924,366
| 0
| 8
| 30,579
| 0
|
python,bash,jenkins,environment-variables
|
import os
os.environ.get("variable_name")
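Applied to the QUALIFIER example from the question (the BUILD_ID value below is a made-up stand-in; on Jenkins it is already present in the environment):

```python
import os
import re

# The bash pipeline  QUALIFIER=$(echo $BUILD_ID | sed "s/[-_]//g" | cut -c1-12)
# maps to os.environ plus ordinary string handling:
os.environ["BUILD_ID"] = "2013-06-12_17-24-00"  # stand-in for Jenkins' value
qualifier = re.sub(r"[-_]", "", os.environ.get("BUILD_ID", ""))[:12]
print(qualifier)  # 201306121724
```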
| 0
| 1
| 0
| 0
|
2013-06-12T17:24:00.000
| 2
| 0
| false
| 17,071,584
| 0
| 0
| 0
| 1
|
so I have a bash script in which I use the environment variables from Jenkins
for example:
QUALIFIER=$(echo $BUILD_ID | sed "s/[-_]//g" | cut -c1-12)
Essentially I'm taking the build id, along with the job name, to determine which script to call from my main script. I want to use Python instead, so I was wondering whether I can use these variables without the Jenkins Python API.
I hope the question makes sense. Thanks
|
Google App Engine Launcher not starting
| 17,086,966
| 1
| 1
| 291
| 0
|
python,windows,google-app-engine
|
I was having the same problem with Google App Engine 1.8.0; then I installed the latest 1.8.1 and the issue was fixed!
| 0
| 1
| 0
| 0
|
2013-06-13T04:38:00.000
| 1
| 0.197375
| false
| 17,079,358
| 0
| 0
| 1
| 1
|
I installed Google App Engine on my laptop, and when I click the Google App Engine Launcher icon, the mouse changes to a loading icon and then nothing runs, nothing is displayed and no error is reported; just nothing.
My laptop is running Windows 7 64-bit, with Python 2.7 installed.
Please help.
|
Error in devstack script. nova-api did not start?
| 17,218,560
| 0
| 0
| 3,196
| 0
|
python,openstack,openstack-nova
|
General rule of thumb for devstack.
Always run unstack.sh before re-running stack.sh, or before pulling from the repositories and re-running stack.sh.
| 0
| 1
| 0
| 0
|
2013-06-13T05:39:00.000
| 2
| 0
| false
| 17,079,919
| 1
| 0
| 0
| 1
|
I have installed OpenStack on Ubuntu 12.04 (single node) using devstack. It was running smoothly until yesterday, but when I ran ./stack.sh today, it showed an error:
./stack.sh:672 nova-api did not start
I have python-paste and python-pastedeploy installed. How do I fix this error?
|
Sending out IMs to Lync/OCS programmatically
| 17,125,543
| 1
| 2
| 3,151
| 0
|
java,python,sip,lync,office-communicator
|
Well, if you are on Lync 2013, you can have a look at UCWA (ucwa.lync.com). It's a web service that allows you to log in to Lync and use IM, presence, etc.
You can then use any language you want. I played with it using Node on Mac OS X, for example.
| 0
| 1
| 0
| 1
|
2013-06-14T01:00:00.000
| 2
| 0.099668
| false
| 17,099,581
| 0
| 0
| 0
| 1
|
I need to send out Instant Messages to a Lync/OCS server from Linux programmatically as an alerting mechanism.
I've looked into using python dbus and pidgin-sipe with finch or pidgin, but they aren't really good for sending one-off instant messages (finch and pidgin need to be running all the time).
Ideally, I'd have a python script or java class that could spit out Instant Messages to users when needed.
|
how to access my 127.0.0.1:8000 from android tablet
| 17,116,746
| 1
| 31
| 33,438
| 0
|
android,python,django,web,localhost
|
If both are connected to the same network, all you need to do is provide the IP address of your server (in your network) in your Android app.
| 0
| 1
| 0
| 0
|
2013-06-14T20:24:00.000
| 9
| 0.022219
| false
| 17,116,718
| 0
| 0
| 1
| 6
|
I am developing a webpage in Django (on my PC with Windows 7) and now I need to test some pages on tablet PCs. It occurred to me to wonder whether I can access the localhost on my Windows machine from an Android tablet. Is that possible? Both devices are on the same WiFi connection at home.
I have read a lot of questions and answers regarding this issue on Stack Overflow and elsewhere, but none of them gave me a concrete solution.
I have a Samsung Tab 2 10.1 with Android 4.0.4.
I appreciate any help, thanks.
|
how to access my 127.0.0.1:8000 from android tablet
| 61,816,349
| 0
| 31
| 33,438
| 0
|
android,python,django,web,localhost
|
Try running the development server bound to your machine's LAN address, e.g.:
python manage.py runserver 192.168.0.100:8000
Then connect both the tablet and the PC to the same WiFi, and on the tablet type that URL into the address bar.
| 0
| 1
| 0
| 0
|
2013-06-14T20:24:00.000
| 9
| 0
| false
| 17,116,718
| 0
| 0
| 1
| 6
|
I am developing a webpage in Django (on my PC with Windows 7) and now I need to test some pages on tablet PCs. It occurred to me to wonder whether I can access the localhost on my Windows machine from an Android tablet. Is that possible? Both devices are on the same WiFi connection at home.
I have read a lot of questions and answers regarding this issue on Stack Overflow and elsewhere, but none of them gave me a concrete solution.
I have a Samsung Tab 2 10.1 with Android 4.0.4.
I appreciate any help, thanks.
|
how to access my 127.0.0.1:8000 from android tablet
| 17,116,796
| 1
| 31
| 33,438
| 0
|
android,python,django,web,localhost
|
You need to know the IP address of your machine, and make sure both of your devices (tablet and computer) are connected to the same network.
Say your machine's address is 192.168.0.22. Then from your tablet, browse to:
192.168.0.22:8000
That is it!
| 0
| 1
| 0
| 0
|
2013-06-14T20:24:00.000
| 9
| 0.022219
| false
| 17,116,718
| 0
| 0
| 1
| 6
|
I am developing a webpage in Django (on my PC with Windows 7) and now I need to test some pages on tablet PCs. It occurred to me to wonder whether I can access the localhost on my Windows machine from an Android tablet. Is that possible? Both devices are on the same WiFi connection at home.
I have read a lot of questions and answers regarding this issue on Stack Overflow and elsewhere, but none of them gave me a concrete solution.
I have a Samsung Tab 2 10.1 with Android 4.0.4.
I appreciate any help, thanks.
|
how to access my 127.0.0.1:8000 from android tablet
| 17,116,791
| 16
| 31
| 33,438
| 0
|
android,python,django,web,localhost
|
You can find out the IP address of your PC with the ipconfig command in a Windows command prompt. Since you mentioned them being connected over WiFi, look for the IP address of the wireless adapter.
Since the tablet is on the same WiFi network, you can just type that address into your tablet's browser with :8000 appended, and it should pull up the page.
| 0
| 1
| 0
| 0
|
2013-06-14T20:24:00.000
| 9
| 1.2
| true
| 17,116,718
| 0
| 0
| 1
| 6
|
I am developing a webpage in Django (on my PC with Windows 7) and now I need to test some pages on tablet PCs. It occurred to me to wonder whether I can access the localhost on my Windows machine from an Android tablet. Is that possible? Both devices are on the same WiFi connection at home.
I have read a lot of questions and answers regarding this issue on Stack Overflow and elsewhere, but none of them gave me a concrete solution.
I have a Samsung Tab 2 10.1 with Android 4.0.4.
I appreciate any help, thanks.
|
how to access my 127.0.0.1:8000 from android tablet
| 17,116,785
| 6
| 31
| 33,438
| 0
|
android,python,django,web,localhost
|
127.0.0.1 is a loopback address that means, roughly, "this device"; your PC and your android tablet are separate devices, so each of them has its own 127.0.0.1. In other words, if you try to go to 127.0.0.1 on your Android tab, it's trying to connect to a webserver on the Android device, which is not what you want.
However, you should be able to connect over the WiFi. On your Windows box, open a command prompt and execute ipconfig. Somewhere in the output should be your Windows box's address, probably 192.168.1.100 or something similar. Your tablet should be able to see the Django server at that address.
| 0
| 1
| 0
| 0
|
2013-06-14T20:24:00.000
| 9
| 1
| false
| 17,116,718
| 0
| 0
| 1
| 6
|
I am developing a webpage in Django (on my PC with Windows 7) and now I need to test some pages on tablet PCs. It occurred to me to wonder whether I can access the localhost on my Windows machine from an Android tablet. Is that possible? Both devices are on the same WiFi connection at home.
I have read a lot of questions and answers regarding this issue on Stack Overflow and elsewhere, but none of them gave me a concrete solution.
I have a Samsung Tab 2 10.1 with Android 4.0.4.
I appreciate any help, thanks.
|
how to access my 127.0.0.1:8000 from android tablet
| 48,594,665
| 23
| 31
| 33,438
| 0
|
android,python,django,web,localhost
|
Though this thread was active quite a long time ago, this is what worked for me on Windows 10. Posting it in detail; it might be helpful for newbies like me.
Add ALLOWED_HOSTS = ['*'] in the Django settings.py file.
Run the Django server with python manage.py runserver 0.0.0.0:YOUR_PORT. I used 9595 as my port.
Allow access on that port through the firewall:
Navigate to Control Panel -> System and Security -> Windows Defender Firewall.
Open Advanced Settings, select Inbound Rules, then right-click on it and select New Rule.
Select Port, hit Next, input the port you used (in my case 9595), hit Next, select Allow the connection,
hit Next again, then give it a name and hit Next for the last time.
Now find the IP address of your PC.
Open Command Prompt as administrator and run the ipconfig command.
You may find more than one IP address. As I'm connected through WiFi I took the one under Wireless LAN adapter WiFi. In my case it was 192.168.0.100.
Note that this IP may change when you reconnect to the network, so you need to check it again then.
Now from another device (PC, mobile, tablet etc.) connected to the same network, go to ip_address:YOUR_PORT (in my case 192.168.0.100:9595).
Hopefully you'll be good to go!
| 0
| 1
| 0
| 0
|
2013-06-14T20:24:00.000
| 9
| 1
| false
| 17,116,718
| 0
| 0
| 1
| 6
|
I am developing a webpage in Django (on my PC with Windows 7) and now I need to test some pages on tablet PCs. It occurred to me to wonder whether I can access the localhost on my Windows machine from an Android tablet. Is that possible? Both devices are on the same WiFi connection at home.
I have read a lot of questions and answers regarding this issue on Stack Overflow and elsewhere, but none of them gave me a concrete solution.
I have a Samsung Tab 2 10.1 with Android 4.0.4.
I appreciate any help, thanks.
|
Inter-process communication for python
| 17,118,788
| 3
| 2
| 1,821
| 0
|
python,process
|
Python has good support for ZeroMQ, which is much easier and more robust than using raw sockets.
The ZeroMQ site treats Python as one of its primary languages and offers copious Python examples in its documentation. Indeed, the example in "Learn the Basics" is written in Python.
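If pulling in ZeroMQ isn't an option, the standard library's multiprocessing.connection offers a similar request/reply shape for purely local use. A minimal sketch (all names illustrative; the two halves run as threads here only to keep the example self-contained, but would normally be your two separate processes):

```python
import threading
from multiprocessing.connection import Listener, Client

# Process A holds the big data and answers queries; process B connects and asks.
listener = Listener(("localhost", 0), authkey=b"secret")  # 0 = any free port

def serve():
    data = {"hello": "world"}        # stands in for the huge in-memory file
    with listener.accept() as conn:
        conn.send(data.get(conn.recv()))

t = threading.Thread(target=serve)
t.start()

with Client(listener.address, authkey=b"secret") as conn:
    conn.send("hello")
    reply = conn.recv()
print(reply)  # world
t.join()
listener.close()
```

In a real deployment, A would loop over listener.accept() so that many B processes can query it over time.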
| 0
| 1
| 0
| 0
|
2013-06-14T23:35:00.000
| 1
| 1.2
| true
| 17,118,747
| 1
| 0
| 0
| 1
|
I'm having a problem creating inter-process communication for my Python application. I have two Python scripts at hand, let's say A and B. A is used to open a huge file, keep it in memory and do some processing that MySQL can't do, and B is a process used to query A very often.
Since the file A needs to read is really large, I hope to read it once and have A stay resident, waiting for my B processes to query it.
What I do now is use cherrypy to build an HTTP server. However, that feels awkward, since what I'm trying to do is entirely local. So I'm wondering: are there other, more natural ways to achieve this goal?
I don't know much about TCP/sockets etc. If possible, toy examples would be appreciated (please include the part that reads the file).
|
Python Request Module - Google App Engine
| 35,530,496
| 0
| 4
| 4,323
| 0
|
python,google-app-engine
|
You need to add the requests/requests sub-folder to your project. From your script's location (.), you should see a file at ./requests/__init__.py.
This applies to all modules you include for Google App Engine. If it doesn't have a __init__.py directly under that location, it will not work.
You do not need to add the module to app.yaml.
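A quick way to verify the layout described above (a sketch; the temporary directory stands in for your app directory):

```python
import os
import tempfile

# Sanity check for vendoring: App Engine can only import a package that
# sits inside the app directory and contains an __init__.py.
def looks_vendored(app_dir, package):
    return os.path.isfile(os.path.join(app_dir, package, "__init__.py"))

app = tempfile.mkdtemp()
os.makedirs(os.path.join(app, "requests"))
open(os.path.join(app, "requests", "__init__.py"), "w").close()
print(looks_vendored(app, "requests"))  # True
print(looks_vendored(app, "missing"))   # False
```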
| 0
| 1
| 0
| 0
|
2013-06-15T21:36:00.000
| 2
| 0
| false
| 17,128,130
| 0
| 0
| 1
| 1
|
I'm trying to import the requests module for my app which I want to view locally on Google App Engine. I am getting a log console error telling me that "no such module exists".
I've installed it in the command line (using pip) and even tried to install it in my project directory. When I do that the shell tells me:
"Requirement already satisfied (use --upgrade to upgrade): requests in /Library/Python/2.7/site-packages".
App Engine is telling me that the module doesn't exist and the shell says it's already installed it.
I don't know if this is a path problem. If so, the only App Engine related application I can find in my mac is the launcher?
|