Q_Id
int64
337
49.3M
CreationDate
stringlengths
23
23
Users Score
int64
-42
1.15k
Other
int64
0
1
Python Basics and Environment
int64
0
1
System Administration and DevOps
int64
0
1
Tags
stringlengths
6
105
A_Id
int64
518
72.5M
AnswerCount
int64
1
64
is_accepted
bool
2 classes
Web Development
int64
0
1
GUI and Desktop Applications
int64
0
1
Answer
stringlengths
6
11.6k
Available Count
int64
1
31
Q_Score
int64
0
6.79k
Data Science and Machine Learning
int64
0
1
Question
stringlengths
15
29k
Title
stringlengths
11
150
Score
float64
-1
1.2
Database and SQL
int64
0
1
Networking and APIs
int64
0
1
ViewCount
int64
8
6.81M
3,008,509
2010-06-09T18:01:00.000
21
0
1
0
python,installation
8,712,435
9
false
1
0
For me this happens on a 32-bit system with ActivePython installed. It seems that the registry keys are not in HKEY_CURRENT_USER, so here is what I do to fix that: export the "Python" section under HKEY_LOCAL_MACHINE -> Software, open the export in Notepad, and replace "LOCAL_MACHINE" with "CURRENT_USER". Since I have 2.7 installed I also had to replace "2.7" with "2.6" (make sure that you do not affect the path which points to the Python installation). Overwrite the reg backup and run it. Now if you run the installation of whatever package you needed, it will find Python. This helped in my case, but be aware that it might not work for you.
3
57
0
I can't download and install any Python Windows modules. I wanted to experiment with the Scrapy framework and Stackless but am unable to install them due to the error "Python version 2.6 required, which was not found in the registry". I am trying to install on a Windows 7, 64-bit machine.
Python version 2.6 required, which was not found in the registry
1
0
0
72,470
3,008,509
2010-06-09T18:01:00.000
80
0
1
0
python,installation
7,170,483
9
false
1
0
I realize this question is a year old - but I thought I would contribute one additional bit of info in case anyone else is Googling for this answer. The issue only crops up on Win7 64-bit when you install Python "for all users". If you install it "for just me", you should not receive these errors. It seems that a lot of installers only look under HKEY_CURRENT_USER for the required registry settings, and not under HKEY_LOCAL_MACHINE. The page linked by APC gives details on how to manually copy the settings to HKEY_CURRENT_USER. Or here's the PowerShell command to do this: cp -rec HKLM:\SOFTWARE\Python\ HKCU:\SOFTWARE
3
57
0
I can't download and install any Python Windows modules. I wanted to experiment with the Scrapy framework and Stackless but am unable to install them due to the error "Python version 2.6 required, which was not found in the registry". I am trying to install on a Windows 7, 64-bit machine.
Python version 2.6 required, which was not found in the registry
1
0
0
72,470
3,009,634
2010-06-09T20:22:00.000
0
0
0
1
python,linux,embedded-linux,xorg
9,260,878
5
false
0
0
I realize this is an old question, but I use Openbox on my system. I have created a custom config file that disables all mouse and keyboard shortcuts and removes borders etc. on the applications. In the Openbox config I even created some secret shortcuts that can run, e.g., an xterm for debugging live on the box. The Openbox documentation was very helpful in figuring everything out; I did the config in about 30 minutes.
3
2
0
I am developing an application for Linux that will run fullscreen all the time (no menus, trays or anything else will be visible). The application is going to be developed in Python, not that that matters as far as the window manager goes, but what I am having a hard time with is choosing a window manager. I need something with the smallest possible footprint that will let me run a graphical Python app and an mplayer window at the same time, at widescreen resolutions (16:10, 16:9, etc.). Other than that, it doesn't need a lot of features, but the footprint is the most important thing I'll be looking at. What window manager would you recommend? EDIT: There won't be any interaction with the application needed.
Which display manager for a non interactive Python app and mplayer?
0
0
0
1,163
3,009,634
2010-06-09T20:22:00.000
1
0
0
1
python,linux,embedded-linux,xorg
3,012,123
5
false
0
0
I am doing something similar on my "set-top box" and I don't use any window manager. It boots Debian, and from inittab I auto-login the user that runs the display. That user's .profile starts X, which runs .xinitrc, which starts my Python app that runs as a network server in front of mplayer (running mplayer in -slave mode). My Python app does not have a GUI element - only mplayer runs on the X display. But in your case, it should be no different. As I mentioned in a comment to another answer, you may want to look into how you can reparent mplayer's window to give you greater control over its placement and/or movement/size. Doing it this way avoids a display manager and a window manager, which simplifies the solution, boots faster and uses a smaller footprint (it runs off an SD card, with heaps of room to spare).
3
2
0
I am developing an application for Linux that will run fullscreen all the time (no menus, trays or anything else will be visible). The application is going to be developed in Python, not that that matters as far as the window manager goes, but what I am having a hard time with is choosing a window manager. I need something with the smallest possible footprint that will let me run a graphical Python app and an mplayer window at the same time, at widescreen resolutions (16:10, 16:9, etc.). Other than that, it doesn't need a lot of features, but the footprint is the most important thing I'll be looking at. What window manager would you recommend? EDIT: There won't be any interaction with the application needed.
Which display manager for a non interactive Python app and mplayer?
0.039979
0
0
1,163
3,009,634
2010-06-09T20:22:00.000
1
0
0
1
python,linux,embedded-linux,xorg
3,009,693
5
false
0
0
You probably meant window manager. Display managers are KDM, GDM and the like. Window managers include GNOME, Xfce, KDE, ratpoison, fvwm, twm and Blackbox, to name a few. ratpoison gives the full screen to the application in the foreground but demands heavy keyboard interaction (hence the name ratpoison) and no mouse interaction at all.
3
2
0
I am developing an application for Linux that will run fullscreen all the time (no menus, trays or anything else will be visible). The application is going to be developed in Python, not that that matters as far as the window manager goes, but what I am having a hard time with is choosing a window manager. I need something with the smallest possible footprint that will let me run a graphical Python app and an mplayer window at the same time, at widescreen resolutions (16:10, 16:9, etc.). Other than that, it doesn't need a lot of features, but the footprint is the most important thing I'll be looking at. What window manager would you recommend? EDIT: There won't be any interaction with the application needed.
Which display manager for a non interactive Python app and mplayer?
0.039979
0
0
1,163
3,010,030
2010-06-09T21:19:00.000
0
0
1
1
python,python-idle
3,011,669
1
true
0
0
Honestly, I would advise you to stop using IDLE; the fact that it runs program code in the same process as itself caused me a lot of problems when I used it, including things like not refreshing imported modules that were modified. Personally I switched to Emacs, but you might like to try something like Notepad++.
1
0
0
I was using it as my primary text editor for quite some time. However, one day it just stopped working. This had happened to me several times before, so I simply tried to end all processes using the Windows task manager. However, that didn't work. I've recently tried getting it to work again. Whenever I try to reopen it, it informs me that its subprocess couldn't connect. I tried uninstalling and reinstalling it, yet the problem persists. Does anyone have any other solutions? Important facts: Windows 7, Python 2.6.5
IDLE won't start Python 2.6.5
1.2
0
0
625
3,010,674
2010-06-09T23:18:00.000
2
0
0
1
python,networking
3,010,921
3
true
0
0
What you want technically isn't a problem of the language you're using - how much data is being transferred on your network interfaces is something you need to get from your operating system or network device driver. The way that you acquire these statistics will vary based on the OS, so that's what you need to nail down first.
1
1
0
How would I go about writing a python script that shows how much bandwidth is being used and how much data is being transferred on a Windows 7 machine?
How to calculate network usage with Python on Windows 7?
1.2
0
0
2,541
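The answer above points out that the counters must come from the OS or driver. One way to read them portably from Python is the third-party psutil library (an assumption on my part - the answer names no specific tool); a minimal sketch:

```python
import time

import psutil  # third-party; assumed here, not named in the original answer

def sample_bandwidth(interval=1.0):
    """Return (bytes sent/s, bytes received/s) averaged over `interval` seconds."""
    before = psutil.net_io_counters()
    time.sleep(interval)
    after = psutil.net_io_counters()
    return ((after.bytes_sent - before.bytes_sent) / interval,
            (after.bytes_recv - before.bytes_recv) / interval)

sent_rate, recv_rate = sample_bandwidth(0.5)
```

psutil hides the per-OS work the answer describes (it reads /proc/net/dev on Linux and the iphlpapi counters on Windows), so the same script runs on both.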
3,010,864
2010-06-10T00:13:00.000
0
0
1
0
python
3,011,234
2
false
0
0
I'd like to see someone write a semantic file browser, i.e. one that auto-generates tags for files according to their content and then allows views and searching accordingly. Think about it... take an MP3, look up the lyrics, run it through Zemanta, bam! A PDF file, an OpenOffice file, etc. - that'd be pretty kick-butt! Probably fairly intensive too, but it'd be pretty dang cool! Cheers, -C
1
2
0
I am creating a sort of "command line" in Python. I already added a few functions, such as changing login/password, executing, etc. But is it possible to browse the files in the directory that the main file is in with a command/module, or will I have to make the module myself and use the import command? Same thing with changing directories to view, too.
viewing files in python?
0
0
0
210
3,011,686
2010-06-10T04:44:00.000
2
0
1
0
python,performance,io
3,011,711
3
false
0
0
Unless you know, or can figure out, the offset of line n in your file (for example, if every line were of a fixed width), you will have to read lines until you get to the nth one. Regarding your examples: xrange is faster than range, since range has to generate a list whereas xrange uses a generator. And if you only need line n, why are you storing all of the lines in a dictionary?
2
0
0
I have to read a file from a particular line number, and I know the line number, say "n". I have been thinking of two ways: 1. for i in range(n): fname.readline(), then k = fname.readline(); print k 2. i = 0, then for line in fname: dictionary[i] = line; i = i + 1 But I want a faster alternative, as I might have to perform this on different files 20000 times. Are there any better alternatives? Also, are there other performance enhancements for simple looping, as my code has nested loops.
Any faster alternative to reading nth line of a file
0.132549
0
0
649
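The "read lines until you get to the nth one" approach from the answer above is usually written with itertools.islice, which skips the leading lines without building a list or a dictionary; a minimal sketch (file contents here are illustrative):

```python
import os
import tempfile
from itertools import islice

def read_nth_line(path, n):
    """Return line n (0-based) of the file, or None if the file is shorter."""
    with open(path) as f:
        return next(islice(f, n, n + 1), None)

# demo on a throwaway three-line file
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("alpha\nbeta\ngamma\n")
    path = tmp.name

line = read_nth_line(path, 2)
os.unlink(path)
```

islice consumes the file lazily, so memory use stays constant no matter how large the file is.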
3,011,686
2010-06-10T04:44:00.000
0
0
1
0
python,performance,io
3,011,871
3
false
0
0
Caching a list of offsets of every end-of-line character in the file would cost a lot of memory, but caching roughly one per memory page (generally 4KB) gives mostly the same reduction in I/O, and the cost of scanning a couple KB from a known offset is negligible. So, if your average line length is 40 characters, you only have to cache a list of every 100th end-of-line in the file. Exactly where you draw the line depends on how much memory you have and how fast your I/O is. You might even be able to get away with caching a list of the offsets of every 1000th end-of-line character without a noticeable difference in performance from indexing every single one.
2
0
0
I have to read a file from a particular line number, and I know the line number, say "n". I have been thinking of two ways: 1. for i in range(n): fname.readline(), then k = fname.readline(); print k 2. i = 0, then for line in fname: dictionary[i] = line; i = i + 1 But I want a faster alternative, as I might have to perform this on different files 20000 times. Are there any better alternatives? Also, are there other performance enhancements for simple looping, as my code has nested loops.
Any faster alternative to reading nth line of a file
0
0
0
649
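The sparse offset cache described in the answer above can be sketched as follows: index the byte offset of every 100th line start, then a later read seeks to the nearest cached offset and scans at most 99 lines (function names here are illustrative):

```python
import os
import tempfile

def build_line_index(path, every=100):
    """Cache the byte offset of every `every`-th line start (line 0 included)."""
    offsets = [0]
    with open(path, "rb") as f:
        for i, _ in enumerate(f, 1):
            if i % every == 0:
                offsets.append(f.tell())
    return offsets

def read_line(path, n, offsets, every=100):
    """Seek to the nearest cached offset, then scan forward to line n (0-based)."""
    with open(path, "rb") as f:
        f.seek(offsets[n // every])
        for _ in range(n % every):
            f.readline()
        return f.readline().decode()

# demo on a throwaway 1000-line file
with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
    tmp.write("".join(f"line {i}\n" for i in range(1000)))
    path = tmp.name

idx = build_line_index(path)          # 11 cached offsets instead of 1000
line_537 = read_line(path, 537, idx)  # one seek plus a 37-line scan
os.unlink(path)
```

Tuning `every` trades memory for scan length, exactly the knob the answer describes.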
3,012,157
2010-06-10T06:34:00.000
0
0
0
0
python,large-data-volumes,scientific-computing
9,964,718
4
false
0
0
The main assumptions are about the amount of CPU/cache/RAM/storage/bandwidth you can have in a single machine at an acceptable price. There are lots of answers here at Stack Overflow still based on the old assumptions of a 32-bit machine with 4 GB of RAM, about a terabyte of storage and a 1 Gb network. With 16 GB DDR3 RAM modules at 220 EUR, 512 GB RAM, 48-core machines can be built at reasonable prices. The switch from hard disks to SSDs is another important change.
3
21
1
I just took my first baby step into real scientific computing today when I was shown a data set where the smallest file is 48000 fields by 1600 rows (haplotypes for several people, for chromosome 22). And this is considered tiny. I write Python, so I've spent the last few hours reading about HDF5, and Numpy, and PyTable, but I still feel like I'm not really grokking what a terabyte-sized data set actually means for me as a programmer. For example, someone pointed out that with larger data sets, it becomes impossible to read the whole thing into memory, not because the machine has insufficient RAM, but because the architecture has insufficient address space! It blew my mind. What other assumptions have I been relying on in the classroom that just don't work with input this big? What kinds of things do I need to start doing or thinking about differently? (This doesn't have to be Python specific.)
what changes when your input is giga/terabyte sized?
0
0
0
1,855
3,012,157
2010-06-10T06:34:00.000
1
0
0
0
python,large-data-volumes,scientific-computing
3,012,350
4
false
0
0
While some languages have naturally lower memory overhead in their types than others, that really doesn't matter for data this size - you're not holding your entire data set in memory regardless of the language you're using, so the "expense" of Python is irrelevant here. As you pointed out, there simply isn't enough address space to even reference all this data, let alone hold onto it. What this normally means is either a) storing your data in a database, or b) adding resources in the form of additional computers, thus adding to your available address space and memory. Realistically you're going to end up doing both of these things. One key thing to keep in mind when using a database is that a database isn't just a place to put your data while you're not using it - you can do WORK in the database, and you should try to do so. The database technology you use has a large impact on the kind of work you can do, but an SQL database, for example, is well suited to do a lot of set math and do it efficiently (of course, this means that schema design becomes a very important part of your overall architecture). Don't just suck data out and manipulate it only in memory - try to leverage the computational query capabilities of your database to do as much work as possible before you ever put the data in memory in your process.
3
21
1
I just took my first baby step into real scientific computing today when I was shown a data set where the smallest file is 48000 fields by 1600 rows (haplotypes for several people, for chromosome 22). And this is considered tiny. I write Python, so I've spent the last few hours reading about HDF5, and Numpy, and PyTable, but I still feel like I'm not really grokking what a terabyte-sized data set actually means for me as a programmer. For example, someone pointed out that with larger data sets, it becomes impossible to read the whole thing into memory, not because the machine has insufficient RAM, but because the architecture has insufficient address space! It blew my mind. What other assumptions have I been relying on in the classroom that just don't work with input this big? What kinds of things do I need to start doing or thinking about differently? (This doesn't have to be Python specific.)
what changes when your input is giga/terabyte sized?
0.049958
0
0
1,855
3,012,157
2010-06-10T06:34:00.000
18
0
0
0
python,large-data-volumes,scientific-computing
3,012,599
4
true
0
0
I'm currently engaged in high-performance computing in a small corner of the oil industry and regularly work with datasets of the orders of magnitude you are concerned about. Here are some points to consider: Databases don't have a lot of traction in this domain. Almost all our data is kept in files, some of those files are based on tape file formats designed in the 70s. I think that part of the reason for the non-use of databases is historic; 10, even 5, years ago I think that Oracle and its kin just weren't up to the task of managing single datasets of O(TB) let alone a database of 1000s of such datasets. Another reason is a conceptual mismatch between the normalisation rules for effective database analysis and design and the nature of scientific data sets. I think (though I'm not sure) that the performance reason(s) are much less persuasive today. And the concept-mismatch reason is probably also less pressing now that most of the major databases available can cope with spatial data sets which are generally a much closer conceptual fit to other scientific datasets. I have seen an increasing use of databases for storing meta-data, with some sort of reference, then, to the file(s) containing the sensor data. However, I'd still be looking at, in fact am looking at, HDF5. It has a couple of attractions for me (a) it's just another file format so I don't have to install a DBMS and wrestle with its complexities, and (b) with the right hardware I can read/write an HDF5 file in parallel. (Yes, I know that I can read and write databases in parallel too). Which takes me to the second point: when dealing with very large datasets you really need to be thinking of using parallel computation. I work mostly in Fortran, one of its strengths is its array syntax which fits very well onto a lot of scientific computing; another is the good support for parallelisation available. 
I believe that Python has all sorts of parallelisation support too, so it's probably not a bad choice for you. Sure, you can add parallelism on to sequential systems, but it's much better to start out designing for parallelism. To take just one example: the best sequential algorithm for a problem is very often not the best candidate for parallelisation; you might be better off using a different algorithm, one which scales better on multiple processors. Which leads neatly to the next point. I think also that you may have to come to terms with surrendering any attachments you have (if you have them) to lots of clever algorithms and data structures which work well when all your data is resident in memory. Very often, trying to adapt them to the situation where you can't get the data into memory all at once is much harder (and less performant) than brute force and regarding the entire file as one large array. Performance starts to matter in a serious way, both the execution performance of programs and developer performance. It's not that a 1 TB dataset requires 10 times as much code as a 1 GB dataset so you have to work faster; it's that some of the ideas that you will need to implement will be crazily complex, and probably have to be written by domain specialists, i.e. the scientists you are working with. Here the domain specialists write in Matlab. But this is going on too long; I'd better get back to work.
3
21
1
I just took my first baby step into real scientific computing today when I was shown a data set where the smallest file is 48000 fields by 1600 rows (haplotypes for several people, for chromosome 22). And this is considered tiny. I write Python, so I've spent the last few hours reading about HDF5, and Numpy, and PyTable, but I still feel like I'm not really grokking what a terabyte-sized data set actually means for me as a programmer. For example, someone pointed out that with larger data sets, it becomes impossible to read the whole thing into memory, not because the machine has insufficient RAM, but because the architecture has insufficient address space! It blew my mind. What other assumptions have I been relying on in the classroom that just don't work with input this big? What kinds of things do I need to start doing or thinking about differently? (This doesn't have to be Python specific.)
what changes when your input is giga/terabyte sized?
1.2
0
0
1,855
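The "design for parallelism from the start" advice in the answer above can be sketched in Python by splitting the input into independent chunks and reducing the partial results. This toy example uses threads for brevity (CPU-bound work on real data would want processes, e.g. multiprocessing); the chunk size and workload are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # stand-in for real per-chunk work (e.g. statistics over one file slice)
    return sum(x * x for x in chunk)

data = list(range(10_000))
# split the input into independent chunks up front...
chunks = [data[i:i + 1000] for i in range(0, len(data), 1000)]

# ...so the chunks can be processed in parallel and reduced at the end
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(process_chunk, chunks))
total = sum(partials)
```

Because each chunk is self-contained, the same structure scales out to processes or multiple machines without changing the algorithm.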
3,012,488
2010-06-10T07:35:00.000
5
0
1
0
python,language-features,with-statement
3,012,826
11
false
0
0
Points 1, 2, and 3 are reasonably well covered. As for 4: it is relatively new, only available in Python 2.6+ (or Python 2.5 using from __future__ import with_statement).
1
490
0
I came across the Python with statement for the first time today. I've been using Python lightly for several months and didn't even know of its existence! Given its somewhat obscure status, I thought it would be worth asking: What is the Python with statement designed to be used for? What do you use it for? Are there any gotchas I need to be aware of, or common anti-patterns associated with its use? Any cases where it is better use try..finally than with? Why isn't it used more widely? Which standard library classes are compatible with it?
What is the python "with" statement designed for?
0.090659
0
0
114,920
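A small sketch of what the question's try..finally comparison is getting at: a context manager's cleanup half runs whether or not the body raises (the `managed` helper below is illustrative, built with the standard contextlib.contextmanager decorator):

```python
from contextlib import contextmanager

events = []

@contextmanager
def managed(name):
    events.append(f"enter {name}")
    try:
        yield name
    finally:
        events.append(f"exit {name}")  # runs even if the body raises

with managed("resource") as r:
    events.append(f"use {r}")

try:
    with managed("risky"):
        raise ValueError("boom")
except ValueError:
    pass  # "exit risky" was still recorded before the exception escaped
```

This is why file objects, locks, and database transactions are the canonical `with` targets: the release step cannot be skipped.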
3,012,661
2010-06-10T08:07:00.000
0
0
0
1
python,google-app-engine,audio,chat
3,020,384
4
false
1
0
You'll need two things: A browser plugin to get audio. You could build this on top of e.g. libjingle (http://code.google.com/p/libjingle/), which has the advantage of being cross-platform and allowing P2P communication, not to mention being able to talk to arbitrary other XMPP endpoints. Or you could use Flash to grab the audio and bounce the stream off a server you build (I think trying to do STUN in Flash for P2P would be impossible), but this would be very tricky to do on App Engine because you'd need it to be long-running. A way to get signaling messages between your clients. You'll have to poll until the Channel API is released (soon). This is a big hairy problem, to put it mildly, but it would be awesome if you did it.
3
0
0
I want to make a chat room (audio chat) on GAE. Is there any framework to do this? Thanks
How to make a chat room on GAE - is there any audio Python framework to do this?
0
0
0
749
3,012,661
2010-06-10T08:07:00.000
1
0
0
1
python,google-app-engine,audio,chat
3,080,160
4
false
1
0
Try Adobe Stratus (it works with p2p connections) and you could use Google App Engine only for exchanging peer ids.
3
0
0
I want to make a chat room (audio chat) on GAE. Is there any framework to do this? Thanks
How to make a chat room on GAE - is there any audio Python framework to do this?
0.049958
0
0
749
3,012,661
2010-06-10T08:07:00.000
1
0
0
1
python,google-app-engine,audio,chat
3,013,054
4
true
1
0
App Engine doesn't directly support audio chat of any sort, and since it's based around a request-response system with (primarily) HTTP requests, you can't implement it yourself.
3
0
0
I want to make a chat room (audio chat) on GAE. Is there any framework to do this? Thanks
How to make a chat room on GAE - is there any audio Python framework to do this?
1.2
0
0
749
3,012,863
2010-06-10T08:40:00.000
18
0
0
0
python,django,signals
3,012,925
2
true
1
0
Django signals are synchronous. The handlers are executed as soon as the signal is fired, and control returns only when all appropriate handlers have finished.
1
5
0
How does Django's event routing system work?
How do Django signals work?
1.2
0
0
2,048
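The synchronous dispatch described in the answer above can be illustrated with a minimal stand-in signal class. This is not Django's implementation, just the same semantics: send() invokes every connected receiver in order and only returns once they have all finished:

```python
class Signal:
    """Minimal stand-in (not Django's code) with synchronous dispatch."""
    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)

    def send(self, sender, **kwargs):
        # every receiver runs to completion, in order, before send() returns
        return [(r, r(sender=sender, **kwargs)) for r in self._receivers]

order = []
post_save = Signal()
post_save.connect(lambda sender, **kw: order.append("handler ran"))
post_save.send(sender=None)
order.append("send returned")  # only reached after all handlers finished
```

The practical consequence: a slow signal handler blocks the request that fired it, which is why long-running work is usually pushed to a task queue instead.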
3,013,134
2010-06-10T09:21:00.000
2
0
0
0
python,linux,pdf,merge,jpeg
3,013,162
1
false
1
0
Not knowing exactly what you mean by sequence - ImageMagick, especially its 'montage' tool, is probably what you need. IM has a Python interface too, although I have never used it. EDIT: As after your edit I no longer get the point of this, I cannot recommend anything either. :(
1
1
0
I'm doing a project as part of an academic programme, on the Linux platform. I want to create an application which retrieves some information from some PDF files. For example, I have PDFs for subject1 and subject2; each PDF is divided into 4 modules, and I want to get the data of module 1 from the PDF. For this purpose my tutor told me to use the pdftohtml application and convert the PDF files to HTML and JPEG images. Now I want to create a Python script which will combine the pages (which have been converted into JPEG images) under module 1, merge them into a single file, and then convert it back to PDF. How can I do this? If anyone can provide a Python script which does anything similar, it would be very helpful. Thanks in advance.
Sequence and merge jpeg images using Python?
0.379949
0
0
634
3,014,686
2010-06-10T13:19:00.000
0
1
0
1
python,testing,sockets,wrapper
3,019,494
2
false
0
0
Another option is to mock the socket module before importing the asyncore module. Of course, then you have to make sure that the mock works properly first.
1
0
0
I am looking for a way of programmatically testing a script written with the asyncore Python module. My test consists of launching the script in question -- if a TCP listen socket is opened, the test passes. Otherwise, if the script dies before getting to that point, the test fails. The purpose of this is knowing if a nightly build works (at least up to a point) or not. I was thinking the best way to test would be to launch the script in some kind of sandbox wrapper which waits for a socket request. I don't care about actually listening for anything on that port, just intercepting the request and using that as an indication that my test passed. I think it would be preferable to intercept the open socket request, rather than polling at set intervals (I hate polling!). But I'm a bit out of my depths as far as how exactly to do this. Can I do this with a shell script? Or perhaps I need to override the asyncore module at the Python level? Thanks in advance, - B
How can I build a wrapper to wait for listening on a port?
0
0
1
838
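The mocking idea in the answer above can be sketched with the standard unittest.mock module: patch socket.socket so the test observes the bind/listen calls instead of opening a real port. The `start_server` function below is a hypothetical stand-in for the asyncore script under test:

```python
import socket
from unittest import mock

def start_server():
    # hypothetical stand-in for the script whose startup we want to verify
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 8888))
    s.listen(5)
    return s

# patch socket.socket so no real port is opened; the test just records
# whether the listen socket was set up
with mock.patch("socket.socket") as fake_socket:
    start_server()
    bind_args = fake_socket.return_value.bind.call_args
    listened = fake_socket.return_value.listen.called
```

This intercepts the open-socket request directly, so no polling is needed: if the script dies before bind()/listen(), the recorded calls are simply absent and the test fails.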
3,015,825
2010-06-10T15:24:00.000
3
0
0
0
python,django
3,015,914
2
false
1
0
Providing an editing interface is one half of the battle, but it's pretty straightforward. There are already apps out there that provide editing of templates and media files, so it's pretty much just an extension of that. The hardest part is restarting the server, which has to happen for the new code to be compiled. I don't think there's a way to do this from within the server, so here's how I would do it: when you make an edit, create a new file in the project root, e.g. an empty file called restart. Write a bash script that looks for that file and, if it exists, restarts the site and deletes the file. Cron the script to run once every 10 seconds; it shouldn't use any meaningful resources. One serious issue is if you introduce bugs. You could test-compile (i.e. run the dev server before you restart the site and check the output), but that's not very robust and you could easily end up in a situation where you lose access to the site. As that's the case, it might be wise to set up the editor as a completely separate site so you're never locked out...
1
1
0
Just wondering if it would be possible, in some experimental way, to edit Django app code within Django safely and then refresh the compiled files. It would be great if someone has tried something similar already or has some ideas. I would like to be able to edit small bits of code from a web interface, so I can easily maintain a couple of experimental projects. Help would be great! Thanks.
Editing django code within django - Django
0.291313
0
0
135
3,015,874
2010-06-10T15:31:00.000
0
0
1
0
python,stage,fast-esp
3,421,127
2
true
1
0
The FAST documentation (ESP Document Processor Integration Guide) has a pretty good example of how to write a custom document processor. FAST does not provide the source code to any of its software, but the AttributeFilter stage functionality should be very straightforward.
2
0
0
I am working on Enterprise Search and we are using FAST ESP. For now I have 4 projects, but I have no information about stages and Python. I realize that I have to learn custom stage development, because we have a lot of difficulties with document processing. I want to know how I can develop a custom stage, and especially how I can find the AttributeFilter stage source code. I am waiting for your answers.
Fast Esp Custom Stage Development
1.2
0
0
816
3,015,874
2010-06-10T15:31:00.000
0
0
1
0
python,stage,fast-esp
47,027,134
2
false
1
0
I have worked with FAST ESP for document processing, and we used to modify the Python files. You can modify them, but you need to restart the document processor each time you modify any file. Search for document processing in the admin UI, go to the pipelines, and you can create a custom pipeline based on the standard pipelines included in FAST ESP. Once you create your pipeline, you can select the stage (Python program) that you want to modify, and the UI shows you the path of each script. I highly recommend creating your own custom stages for each pipeline you modify.
2
0
0
I am working on Enterprise Search and we are using FAST ESP. For now I have 4 projects, but I have no information about stages and Python. I realize that I have to learn custom stage development, because we have a lot of difficulties with document processing. I want to know how I can develop a custom stage, and especially how I can find the AttributeFilter stage source code. I am waiting for your answers.
Fast Esp Custom Stage Development
0
0
0
816
3,016,116
2010-06-10T15:58:00.000
0
1
1
0
python,serialization
27,096,538
5
false
0
0
I take issue with the statement that the saving of variables in Matlab is an environment function. The "save" statement in Matlab is a function and part of the Matlab language, not just a command. It is a very useful function, as you don't have to worry about the trivial minutiae of file I/O, and it handles all sorts of variables: scalars, matrices, objects, structures.
2
3
1
I cannot understand it. Very simple, obvious functionality: you have code in some programming language and you run it. In this code you generate variables, then you save them (the values, the names, everything) to a file with one command. Once saved, you can open such a file in your code, also with a simple command. It works perfectly in Matlab (save workspace, load workspace). In Python there's some weird "pickle" protocol, which produces errors all the time, while all I want to do is save a variable and load it again in another session. E.g. you cannot save a class with variables (in Matlab there's no problem), and you cannot load arrays in cPickle (but you can save them?). Why not make it easier? Is there a way to save the current variables with values, and then load them?
Save Workspace - save all variables to a file (Python doesn't have it)
0
0
0
8,265
3,016,116
2010-06-10T15:58:00.000
2
1
1
0
python,serialization
3,016,188
5
false
0
0
What you are describing is a Matlab environment feature, not a programming-language one. What you need is a way to store the serialized state of some objects, which can easily be done in almost any programming language. In the Python world, pickle is the easiest way to achieve it, and if you could provide more details about the errors it produces for you, people would probably be able to give you more details on that. In general, for object-oriented languages (including Python) it is always a good approach to encapsulate your state in a single object that can be serialized and de-serialized, and then store/load an instance of such a class. Pickling and unpickling such objects works perfectly for many developers, so this must be something specific to your implementation.
2
3
1
I cannot understand it. Very simple, obvious functionality: you have code in some programming language and you run it. In this code you generate variables, then you save them (the values, the names, everything) to a file with one command. Once saved, you can open such a file in your code, also with a simple command. It works perfectly in Matlab (save workspace, load workspace). In Python there's some weird "pickle" protocol, which produces errors all the time, while all I want to do is save a variable and load it again in another session. E.g. you cannot save a class with variables (in Matlab there's no problem), and you cannot load arrays in cPickle (but you can save them?). Why not make it easier? Is there a way to save the current variables with values, and then load them?
Save Workspace - save all variables to a file. Python doesn't have it)
0.07983
0
0
8,265
3,018,122
2010-06-10T20:14:00.000
2
1
0
0
python,swig,segmentation-fault
3,018,504
2
true
0
0
You could, on starting Test.py, copy the Example.* files to a temp folder unique to that instance (take a look at tempfile.mkdtemp, it can create safe, unique folders), add that to sys.path and then import Example; and on Test.py shutdown remove that folder (shutil.rmtree) at the cleanup stage. This would mean that each instance of Test.py would run on its own copy of the Example module, not interfering with the others, and would update to the new one only upon relaunch. You would need the Example.* files not to be in the same folder as Test.py for this, otherwise the import would get those first. Just storing them in a subfolder should be fine.
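A minimal sketch of that approach (the `modules` subfolder name and the helper function are illustrative assumptions, not part of the original setup):

```python
import os
import shutil
import sys
import tempfile

# Hypothetical layout: the real Example.py / _Example.so live in ./modules,
# NOT next to Test.py (otherwise `import Example` would pick them up first)
SOURCE_DIR = "modules"

def load_private_copy(source_dir):
    """Copy the module files into a unique temp dir and import from there."""
    private_dir = tempfile.mkdtemp(prefix="example_")
    for name in os.listdir(source_dir):
        shutil.copy(os.path.join(source_dir, name), private_dir)
    sys.path.insert(0, private_dir)  # our private copy shadows the originals
    return private_dir

# private_dir = load_private_copy(SOURCE_DIR)
# import Example              # imported from the private copy
# ... run, and at the cleanup stage:
# shutil.rmtree(private_dir)
```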
1
3
0
I'm trying to solve the following problem: Say I have a Python script (let's call it Test.py) which uses a C++ extension module (made via SWIG, let's call the module "Example"). I have Test.py, Example.py, and _Example.so in the same directory. Now, in the middle of running Test.py, I want to make a change to my Example module, recompile (which will overwrite the existing .so), and use a command to gracefully stop Test.py which is still using the old version of the module (Test.py has some cleaning up to do, which uses some stuff which is defined in the Example module), then start it up again, using the new version of the module. Gracefully stopping Test.py and THEN recompiling the module is not an option in my case. The problem is, as soon as _Example.so is overwritten and Test.py tries to access anything defined in the Example module (while gracefully stopping), I get a segmentation fault. One solution to this is to explicitly name the Example module by appending a version number at the end, but I was wondering if there was a better solution (I don't want to be importing Example_1_0)?
Problems replacing a Python extension module while Python script is executing
1.2
0
0
133
3,018,690
2010-06-10T21:32:00.000
5
0
0
0
python,django,pipeline,request-pipeline
3,018,777
2
true
1
0
With a scripting language like python (or php), things are not compiled down to bytecode like in .net or java. Wrong: everything you import in Python gets compiled to bytecode (and saved as .pyc files if you can write to the directory containing the source you're importing -- standard libraries &c are generally pre-compiled, depending on the installation choices of course). Just keep the main script short and simple (importing some module and calling a function in it) and you'll be using compiled bytecode throughout. (Python's compiler is designed to be extremely fast -- with implications including that it doesn't do a lot of otherwise reasonable optimizations -- but avoiding it altogether is still faster;-).
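The compile step the answer mentions can be observed directly with the stdlib py_compile module (Python 3 shown, where compile() returns the path of the generated .pyc; the module name is made up):

```python
import os
import py_compile
import tempfile

# Write a tiny module, then compile it the way `import` would
src = os.path.join(tempfile.mkdtemp(), "mymod.py")
with open(src, "w") as f:
    f.write("def greet():\n    return 'hello'\n")

pyc_path = py_compile.compile(src)  # path of the cached bytecode file
```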
2
2
0
With a scripting language like python (or php), things are not compiled down to bytecode like in .net or java. So does this mean that on every request, it has to go through the entire application and parse/compile it? Or at least all the code required for the given call stack?
How exactly does a python (django) request happen? does it have to reparse all the codebase?
1.2
0
0
207
3,018,690
2010-06-10T21:32:00.000
3
0
0
0
python,django,pipeline,request-pipeline
3,018,704
2
false
1
0
When running as CGI, yes, the entire project needs to be loaded for each request. FastCGI and mod_wsgi keep the project in memory and talk to it over a socket.
2
2
0
With a scripting language like python (or php), things are not compiled down to bytecode like in .net or java. So does this mean that on every request, it has to go through the entire application and parse/compile it? Or at least all the code required for the given call stack?
How exactly does a python (django) request happen? does it have to reparse all the codebase?
0.291313
0
0
207
3,019,742
2010-06-11T01:59:00.000
0
1
0
1
python,android
3,019,817
1
false
0
0
You should check your Python installation, as the repo command is a Python script made by Google to interact with git repositories. If you do have Python installed, it is possible that it is not in your shell path, or that you are using a different version than required by repo, i.e. you have version 3 while repo requires version 2.5 (just an example; I'm not sure what version repo uses).
1
2
0
I'm trying to build Android from source on Ubuntu 10.04. When I enter the repo command: repo init -u git://android.git.kernel.org/platform/manifest.git -b eclair I get this error back: exec: 23: python: not found Any ideas?
exec: 23: python: not found error?
0
0
0
3,027
3,020,267
2010-06-11T04:53:00.000
2
0
0
1
python,perl,scripting,cross-platform,shebang
3,020,285
4
false
0
0
The shebang line will be interpreted as a comment by Perl or Python. The only thing that assigns it a special meaning is the UNIX/Linux shell; it gets ignored on Windows. The way Windows knows which interpreter to use to run the file is through the file associations in the registry, a different mechanism altogether.
2
11
0
I was wondering how to make a python script portable to both linux and windows? One problem I see is shebang. How to write the shebang so that the script can be run on both windows and linux? Are there other problems besides shebang that I should know? Is the solution same for perl script? Thanks and regards!
how to make a python or perl script portable to both linux and windows?
0.099668
0
0
5,770
3,020,267
2010-06-11T04:53:00.000
14
0
0
1
python,perl,scripting,cross-platform,shebang
3,020,286
4
true
0
0
Windows will just ignore the shebang (which is, after all, a comment); in Windows you need to associate the .py extension to the Python executable in the registry, but you can perfectly well leave the shebang on, it will be perfectly innocuous there. There are many bits and pieces which are platform-specific (many only exist on Unix, msvcrt only on Windows) so if you want to be portable you should abstain from those; some are subtly different (such as the detailed precise behavior of subprocess.Popen or mmap) -- it's all pretty advanced stuff and the docs will guide you there. If you're executing (via subprocess or otherwise) external commands you'd better make sure they exist on both platforms, of course, or check what platform you're in and use different external commands in each case. Remember to always use /, not \, as path separator (forward slash works in both platforms, backwards slash is windows-only), and be meticulous as to whether each file you're opening is binary or text. I think that's about it...
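A few of these portability points, sketched in a short illustrative snippet (the file name is a placeholder):

```python
import os
import sys

# Forward slashes work on both platforms; os.path.join is safest of all
config_path = os.path.join("data", "settings.txt")

# Branch on the platform when an external command differs per OS
on_windows = sys.platform.startswith("win")
external_cmd = ["dir"] if on_windows else ["ls"]

# Be explicit about binary vs. text mode when opening files:
#   open(config_path, "rb")  -> raw bytes, no newline translation
#   open(config_path, "r")   -> text, platform newline handling applied
```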
2
11
0
I was wondering how to make a python script portable to both linux and windows? One problem I see is shebang. How to write the shebang so that the script can be run on both windows and linux? Are there other problems besides shebang that I should know? Is the solution same for perl script? Thanks and regards!
how to make a python or perl script portable to both linux and windows?
1.2
0
0
5,770
3,020,979
2010-06-11T07:35:00.000
1
0
0
0
python,xml,http
3,021,000
2
false
0
0
You can achieve that with a standard HTTP POST request.
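For example, a minimal sketch with Python 3's urllib.request (the URL is a placeholder; on Python 2 the equivalent lives in urllib2). Passing a data argument turns the request into a POST:

```python
import urllib.request

xml_bytes = b"<?xml version='1.0'?><note><body>hello</body></note>"

req = urllib.request.Request(
    "http://example.com/upload",            # placeholder URL
    data=xml_bytes,                          # a data payload makes it a POST
    headers={"Content-Type": "application/xml"},
)
# response = urllib.request.urlopen(req)    # uncomment to actually send
# print(response.read())
```

To send a file from disk, read it in binary mode and pass the bytes as data.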
1
7
0
How can I send an XML file on my system to an HTTP server using the Python standard library?
send xml file to http using python
0.099668
0
1
5,264
3,021,046
2010-06-11T07:45:00.000
0
0
0
0
python,wav
3,021,073
3
false
1
1
NumPy can load the data into arrays for easy manipulation. Or SciPy. I forget which.
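Whether or not you use NumPy, the conversion from raw frames to integers can be sketched with just the stdlib wave and struct modules (this assumes 16-bit samples; with NumPy, numpy.frombuffer(raw, dtype='<i2') does the same job):

```python
import struct
import wave

def frames_as_ints(path):
    """Read a 16-bit WAV file and return its samples as a list of ints."""
    w = wave.open(path, "rb")
    try:
        raw = w.readframes(w.getnframes())   # raw bytes like b'\x00\x7f...'
        count = len(raw) // w.getsampwidth()
        return list(struct.unpack("<%dh" % count, raw))  # little-endian int16
    finally:
        w.close()
```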
1
1
0
I want to get the details of the wave, such as its frames, into an array of integers. Using fname.getframes we can get the properties of the frames and save them in a list or anything for writing into another wav, but fname.getframes gives information not as integers but something like "/xt/x4/0w" etc. I want them as integers, which would be helpful for manipulation and a smooth join of 2 wav files.
wav file manipulation
0
0
0
660
3,021,264
2010-06-11T08:22:00.000
4
1
1
0
python,optimization,memory-management
3,021,350
7
false
0
0
For a web application you should use a database, the way you're doing it you are creating one copy of your dict for each apache process, which is extremely wasteful. If you have enough memory on the server the database table will be cached in memory (if you don't have enough for one copy of your table, put more RAM into the server). Just remember to put correct indices on your database table or you will get bad performance.
2
13
0
I need to optimize the RAM usage of my application. PLEASE spare me the lectures telling me I shouldn't care about memory when coding Python. I have a memory problem because I use very large default-dictionaries (yes, I also want to be fast). My current memory consumption is 350MB and growing. I already cannot use shared hosting and if my Apache opens more processes the memory doubles and triples... and it is expensive. I have done extensive profiling and I know exactly where my problems are. I have several large (>100K entries) dictionaries with Unicode keys. A dictionary starts at 140 bytes and grows fast, but the bigger problem is the keys. Python optimizes strings in memory (or so I've read) so that lookups can be ID comparisons ('interning' them). Not sure this is also true for unicode strings (I was not able to 'intern' them). The objects stored in the dictionary are lists of tuples (an_object, an int, an int). my_big_dict[some_unicode_string].append((my_object, an_int, another_int)) I already found that it is worthwhile to split into several dictionaries because the tuples take a lot of space... I found that I could save RAM by hashing the strings before using them as keys! But then, sadly, I ran into birthday collisions on my 32-bit system. (side question: is there a 64-bit key dictionary I can use on a 32-bit system?) Python 2.6.5 on both Linux (production) and Windows. Any tips on optimizing memory usage of dictionaries / lists / tuples? I even thought of using C - I don't care if this very small piece of code is ugly. It is just a singular location. Thanks in advance!
Python tips for memory optimization
0.113791
0
0
12,150
3,021,264
2010-06-11T08:22:00.000
2
1
1
0
python,optimization,memory-management
3,021,346
7
false
0
0
I've had situations where I've had a collection of large objects that I've needed to sort and filter by different methods based on several metadata properties. I didn't need the larger parts of them, so I dumped them to disk. As your data is so simple in type, a quick SQLite database might solve all your problems, and even speed things up a little.
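A quick sketch of that idea with the stdlib sqlite3 module (the table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")           # use a file path in production
conn.execute("CREATE TABLE entries (key TEXT, obj_id INTEGER, a INTEGER, b INTEGER)")
conn.execute("CREATE INDEX idx_entries_key ON entries (key)")  # the crucial index

conn.executemany(
    "INSERT INTO entries VALUES (?, ?, ?, ?)",
    [("alpha", 1, 10, 20), ("alpha", 2, 30, 40), ("beta", 3, 50, 60)],
)
rows = conn.execute(
    "SELECT obj_id, a, b FROM entries WHERE key = ? ORDER BY obj_id", ("alpha",)
).fetchall()
```

With the index in place, lookups by key stay fast while the data itself lives on disk rather than in per-process dictionaries.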
2
13
0
I need to optimize the RAM usage of my application. PLEASE spare me the lectures telling me I shouldn't care about memory when coding Python. I have a memory problem because I use very large default-dictionaries (yes, I also want to be fast). My current memory consumption is 350MB and growing. I already cannot use shared hosting and if my Apache opens more processes the memory doubles and triples... and it is expensive. I have done extensive profiling and I know exactly where my problems are. I have several large (>100K entries) dictionaries with Unicode keys. A dictionary starts at 140 bytes and grows fast, but the bigger problem is the keys. Python optimizes strings in memory (or so I've read) so that lookups can be ID comparisons ('interning' them). Not sure this is also true for unicode strings (I was not able to 'intern' them). The objects stored in the dictionary are lists of tuples (an_object, an int, an int). my_big_dict[some_unicode_string].append((my_object, an_int, another_int)) I already found that it is worthwhile to split into several dictionaries because the tuples take a lot of space... I found that I could save RAM by hashing the strings before using them as keys! But then, sadly, I ran into birthday collisions on my 32-bit system. (side question: is there a 64-bit key dictionary I can use on a 32-bit system?) Python 2.6.5 on both Linux (production) and Windows. Any tips on optimizing memory usage of dictionaries / lists / tuples? I even thought of using C - I don't care if this very small piece of code is ugly. It is just a singular location. Thanks in advance!
Python tips for memory optimization
0.057081
0
0
12,150
3,021,514
2010-06-11T09:00:00.000
3
0
0
0
javascript,python,graphics,canvas,svg
3,022,580
5
false
0
0
PyGame can do all of those things. OTOH, I don't think it embeds into a GUI too well.
3
2
0
I need a simple graphics library that supports the following functionality: Ability to draw polygons (not just rectangles!) with RGBA colors (i.e., partially transparent), Ability to load bitmap images, Ability to read current color of pixel in a given coordinate. Ideally using JavaScript or Python. Seems like HTML 5 Canvas can handle #2 and #3 but not #1, whereas SVG can handle #1 and #2 but not #3. Am I missing something (about either of these two)? Or are there other alternatives?
Simple graphics API with transparency, polygons, reading image pixels?
0.119427
0
1
1,047
3,021,514
2010-06-11T09:00:00.000
0
0
0
0
javascript,python,graphics,canvas,svg
3,023,182
5
false
0
0
I voted for PyGame, but I would also like to point out that the new Qt graphics library seems quite capable. I have not used PyQt with Qt4 yet, but I really like PyQt development with Qt3.
3
2
0
I need a simple graphics library that supports the following functionality: Ability to draw polygons (not just rectangles!) with RGBA colors (i.e., partially transparent), Ability to load bitmap images, Ability to read current color of pixel in a given coordinate. Ideally using JavaScript or Python. Seems like HTML 5 Canvas can handle #2 and #3 but not #1, whereas SVG can handle #1 and #2 but not #3. Am I missing something (about either of these two)? Or are there other alternatives?
Simple graphics API with transparency, polygons, reading image pixels?
0
0
1
1,047
3,021,514
2010-06-11T09:00:00.000
2
0
0
0
javascript,python,graphics,canvas,svg
3,027,643
5
false
0
0
I ended up going with Canvas. The "secret" of polygons is using paths. Thanks, "tur1ng"!
3
2
0
I need a simple graphics library that supports the following functionality: Ability to draw polygons (not just rectangles!) with RGBA colors (i.e., partially transparent), Ability to load bitmap images, Ability to read current color of pixel in a given coordinate. Ideally using JavaScript or Python. Seems like HTML 5 Canvas can handle #2 and #3 but not #1, whereas SVG can handle #1 and #2 but not #3. Am I missing something (about either of these two)? Or are there other alternatives?
Simple graphics API with transparency, polygons, reading image pixels?
0.07983
0
1
1,047
3,021,652
2010-06-11T09:26:00.000
1
0
0
1
python,c,ruby,programming-languages
3,021,757
6
false
0
0
If I assume this is your central question: Where is the line between language functions and system API? Then imagine if you will this analogy: OS API system calls are like lego bricks and lego components. Programming 'functions' are merely an arrangement of many lego bricks. Such that the combination results in a tool. Thus different languages may 'arrange' and create the tool in different ways. If I asked you to create a car with lego's, you could come up with many different designs.
5
3
0
So I asked here a few days ago about C# and its principles. Now, if I may, I have some additional general questions about some languages, because for a novice like me it seems a bit confusing. To be exact, I want to ask more about languages' functional capabilities than syntax and so on. To be honest, it's just these special functions that bother me and make me so confused. For example, C has its printf(), Pascal has writeln(), and so on. I know that basically the assembler output of these functions would be similar; every language has more or less its special functions. For console output, for file manipulation, etc. But all these functions are de facto part of its OS API, so why, for example, in C is there a distinction between C standard library functions and (on Windows) WinAPI functions, when even printf() has to use some Windows feature and call some of its functions to actually show the desired text in the console window, because the actual "showing" is done by the OS? Where is the line between language functions and the system API? Now, languages I don't quite understand - Python, Ruby and similar. To be more specific, I know they are similar to Java and C# in that they are compiled into bytecode. But I do not understand what their capabilities are in terms of building GUI applications. I saw a tutorial for using Ruby to program GUI applications on Linux and Windows. But isn't that just some kind of upgrade? I mean, from other tutorials it seemed like these languages were first intended for small scripts rather than for building big applications. I hope you understand why I am confused. If you do, please help me sort it out a bit; I have no one to ask.
Help me sort programming languages a bit
0.033321
0
0
333
3,021,652
2010-06-11T09:26:00.000
1
0
0
1
python,c,ruby,programming-languages
3,021,731
6
false
0
0
So. For your first question, the interface between the C API and the OS API is the C runtime. On Windows this is some incarnation of MSVCRT.DLL, whereas on Linux this is glibc. For the second, the native language for most GUI toolkits is either C or C++. Higher-level languages seeking to use them require bindings which translate back and forth between the language and the C/C++ API. For the third, these high-level languages only appear to be used for "small scripts". The simple fact is that they are far more expressive than C or C++, which means that they have equal or more capabilities than a C or C++ program while being written in fewer lines of code.
5
3
0
So I asked here a few days ago about C# and its principles. Now, if I may, I have some additional general questions about some languages, because for a novice like me it seems a bit confusing. To be exact, I want to ask more about languages' functional capabilities than syntax and so on. To be honest, it's just these special functions that bother me and make me so confused. For example, C has its printf(), Pascal has writeln(), and so on. I know that basically the assembler output of these functions would be similar; every language has more or less its special functions. For console output, for file manipulation, etc. But all these functions are de facto part of its OS API, so why, for example, in C is there a distinction between C standard library functions and (on Windows) WinAPI functions, when even printf() has to use some Windows feature and call some of its functions to actually show the desired text in the console window, because the actual "showing" is done by the OS? Where is the line between language functions and the system API? Now, languages I don't quite understand - Python, Ruby and similar. To be more specific, I know they are similar to Java and C# in that they are compiled into bytecode. But I do not understand what their capabilities are in terms of building GUI applications. I saw a tutorial for using Ruby to program GUI applications on Linux and Windows. But isn't that just some kind of upgrade? I mean, from other tutorials it seemed like these languages were first intended for small scripts rather than for building big applications. I hope you understand why I am confused. If you do, please help me sort it out a bit; I have no one to ask.
Help me sort programming languages a bit
0.033321
0
0
333
3,021,652
2010-06-11T09:26:00.000
2
0
0
1
python,c,ruby,programming-languages
3,021,849
6
true
0
0
At the bottom you have the OS kernel itself - code that runs in a special CPU mode that allows direct access to otherwise protected resources. You will never have to deal with this unless you're an OS developer. Then comes a do-not-cross line separating this "kernel space" from "user space". Everything you do as a "normal" developer is done in user space. The OS kernel exports a limited number of very basic functions into user space, dubbed "system calls": opening a file, reading / writing a number of bytes, closing the file, for example. Because these system calls usually require some assembler code developers don't want to be bothered with, they are "wrapped" in (usually) C code functions: open(), read(), write(), close(). Now come two sets of APIs available to the developer: the OS API, and the standard language API. The standard language API provides functions that can be used on any platform supporting the language: fopen(), fputc(), fgetc(), fclose(). It will also provide higher-level functions that make life easier: fprintf(), for example. The OS API provides its own set of functions. These are not portable to a different operating system, but might be more intuitive to use, or more powerful, or merely different: OpenFile(), ReadFile(), WriteFile(), CloseFile(). Again, higher-level functions might be available, i.e. PrintLn(). The two sets of functions might partially rely on each other, or make system calls directly, but that shouldn't bother you too much. You should, however, decide beforehand which set of functions you will want to use for your project, because mixing the two sets - while not a mistake in itself - opens a whole new can of worms (i.e., potential errors).
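That layering is visible even inside Python's standard library: os.open / os.read are thin wrappers over the open / read system calls, while the built-in open() is the higher-level, portable API built on top of them. A small sketch:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:      # high-level, buffered standard-library layer
    f.write("hello")

# Low-level layer: thin wrappers around the open/read/close system calls
fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)         # raw bytes, no buffering, no decoding
os.close(fd)

# The high-level layer is built on top of those same calls
with open(path) as f:
    text = f.read()
```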
5
3
0
So I asked here a few days ago about C# and its principles. Now, if I may, I have some additional general questions about some languages, because for a novice like me it seems a bit confusing. To be exact, I want to ask more about languages' functional capabilities than syntax and so on. To be honest, it's just these special functions that bother me and make me so confused. For example, C has its printf(), Pascal has writeln(), and so on. I know that basically the assembler output of these functions would be similar; every language has more or less its special functions. For console output, for file manipulation, etc. But all these functions are de facto part of its OS API, so why, for example, in C is there a distinction between C standard library functions and (on Windows) WinAPI functions, when even printf() has to use some Windows feature and call some of its functions to actually show the desired text in the console window, because the actual "showing" is done by the OS? Where is the line between language functions and the system API? Now, languages I don't quite understand - Python, Ruby and similar. To be more specific, I know they are similar to Java and C# in that they are compiled into bytecode. But I do not understand what their capabilities are in terms of building GUI applications. I saw a tutorial for using Ruby to program GUI applications on Linux and Windows. But isn't that just some kind of upgrade? I mean, from other tutorials it seemed like these languages were first intended for small scripts rather than for building big applications. I hope you understand why I am confused. If you do, please help me sort it out a bit; I have no one to ask.
Help me sort programming languages a bit
1.2
0
0
333
3,021,652
2010-06-11T09:26:00.000
2
0
0
1
python,c,ruby,programming-languages
3,021,738
6
false
0
0
C is portable. That means that on different systems the assembler output for printf will be different... this is something the compiler does based on what your target system is. Write C code and compile as a Linux app and the output will be different than as a Win32 app, and also different than if you compile the exact same code for an iPhone or something like that. Internally, the C standard libraries might wrap a call to Win32 API when you call printf, but that's not really your concern in most cases. The C standard library (like printf and other I/O for files and stuff) wraps the low-level OS or hardware code needed to do what you want. It's worth noting the same effect happens in Java, but in a different way. At a broad level: In Java, the code you write always compiles to the same byte-code. But then when the JVM runs this byte-code, the JRE translates it to machine-specific instructions at run-time, rather than at compile-time on C.
5
3
0
So I asked here a few days ago about C# and its principles. Now, if I may, I have some additional general questions about some languages, because for a novice like me it seems a bit confusing. To be exact, I want to ask more about languages' functional capabilities than syntax and so on. To be honest, it's just these special functions that bother me and make me so confused. For example, C has its printf(), Pascal has writeln(), and so on. I know that basically the assembler output of these functions would be similar; every language has more or less its special functions. For console output, for file manipulation, etc. But all these functions are de facto part of its OS API, so why, for example, in C is there a distinction between C standard library functions and (on Windows) WinAPI functions, when even printf() has to use some Windows feature and call some of its functions to actually show the desired text in the console window, because the actual "showing" is done by the OS? Where is the line between language functions and the system API? Now, languages I don't quite understand - Python, Ruby and similar. To be more specific, I know they are similar to Java and C# in that they are compiled into bytecode. But I do not understand what their capabilities are in terms of building GUI applications. I saw a tutorial for using Ruby to program GUI applications on Linux and Windows. But isn't that just some kind of upgrade? I mean, from other tutorials it seemed like these languages were first intended for small scripts rather than for building big applications. I hope you understand why I am confused. If you do, please help me sort it out a bit; I have no one to ask.
Help me sort programming languages a bit
0.066568
0
0
333
3,021,652
2010-06-11T09:26:00.000
0
0
0
1
python,c,ruby,programming-languages
3,021,872
6
false
0
0
C's printf() is a wrapper. You can use it and compile your code under any OS, but the resulting machine code will be different. In Windows, it might call some function inside the Windows API. In Linux, it will use the Linux API. You ask why is the Windows API distinguished. That's because, if you're programming for Windows, you can use it to do some OS-specific things like create GUIs, manipulate console text instead of just printing, asking for OS resources, and stuff like that. An API like that exists for Linux and Mac (and I guess all the other OS's) too, and they let you do more or less the same things. Unlike printf(), though, they are not portable. You ask what is the line between language functions and the system API. The language functions simply call the OS's API. You can call these yourself, but then you won't be able to compile your code on different systems. Python and Ruby (and some others) are interpreted. They are compiled to bytecode behind the scenes, but all the user sees is that double-clicking the source file will run it. No need to compile. That means, obviously, that they're slower than compiled languages. However, their dynamic nature makes for faster development, since you usually need less code to do the same thing (I said usually). That doesn't mean these languages can't be used for "big" applications: There are GUI libraries for them. That's because these are general purpose languages, unlike some others like Bash.
5
3
0
So I asked here a few days ago about C# and its principles. Now, if I may, I have some additional general questions about some languages, because for a novice like me it seems a bit confusing. To be exact, I want to ask more about languages' functional capabilities than syntax and so on. To be honest, it's just these special functions that bother me and make me so confused. For example, C has its printf(), Pascal has writeln(), and so on. I know that basically the assembler output of these functions would be similar; every language has more or less its special functions. For console output, for file manipulation, etc. But all these functions are de facto part of its OS API, so why, for example, in C is there a distinction between C standard library functions and (on Windows) WinAPI functions, when even printf() has to use some Windows feature and call some of its functions to actually show the desired text in the console window, because the actual "showing" is done by the OS? Where is the line between language functions and the system API? Now, languages I don't quite understand - Python, Ruby and similar. To be more specific, I know they are similar to Java and C# in that they are compiled into bytecode. But I do not understand what their capabilities are in terms of building GUI applications. I saw a tutorial for using Ruby to program GUI applications on Linux and Windows. But isn't that just some kind of upgrade? I mean, from other tutorials it seemed like these languages were first intended for small scripts rather than for building big applications. I hope you understand why I am confused. If you do, please help me sort it out a bit; I have no one to ask.
Help me sort programming languages a bit
0
0
0
333
3,021,921
2010-06-11T10:12:00.000
2
0
0
0
php,python,model-view-controller,cakephp,application-server
3,022,395
2
true
1
0
How do I go about implementing this? Too big a question for an answer here. Certainly you don't want 2 sets of code for the scraping (1 for scheduled, 1 for on-demand); and in addition to the added complication, you really don't want to be running a job which will take an indefinite time to complete within the thread generated by a request to your webserver - user requests for a scrape should be run via the scheduling mechanism and reported back to users (although if necessary you could use Ajax polling to give the illusion that it's happening in the same thread). What framework(s) should I use? Frameworks are not magic bullets. And you shouldn't be choosing a framework based primarily on the nature of the application you are writing. Certainly if specific, critical functionality is precluded by a specific framework, then you are using the wrong framework - but in my experience that has never been the case - you just need to write some code yourself. Using something more complex than a cron job? Yes, a cron job is probably not the right way to go, for lots of reasons. If it were me I'd look at writing a daemon which would schedule scrapes (and accept connections from web page scripts to enqueue additional scrapes). But I'd run the scrapes as separate processes. Is MVC a good architecture for this? (I'm new to MVC, architectures etc.) No. Don't start by thinking about whether a pattern fits the application - patterns are a useful tool for teaching, but they describe what code is, not what it will be. (Your application might include some MVC patterns - but it should also include lots of other ones.) C.
1
3
0
I'm building a web application, and I need to use an architecture that allows me to run it over two servers. The application scrapes information from other sites periodically, and on input from the end user. To do this I'm using PHP+curl to scrape the information, and PHP or Python to parse it and store the results in a MySQL DB. Then I will use Python to run some algorithms on the data; this will happen both periodically and on input from the end user. I'm going to cache some of the results in the MySQL DB, and sometimes, if it is specific to the user, skip storing the data and serve it to the user. I'm thinking of using PHP for the website front end on a separate web server, running the PHP spider, MySQL DB and Python on another server. What framework(s) should I use for this kind of job? Is MVC and CakePHP a good solution? If so, will I be able to control and monitor the Python code using it? Thanks
Web application architecture, and application servers?
1.2
1
0
1,161
3,022,232
2010-06-11T11:13:00.000
4
0
0
0
php,python,ruby,performance,math
3,022,304
2
false
0
0
The best option is probably the language you're most familiar with. My second consideration would be if you need to use any special maths libraries and whether they're supported in each of the languages.
2
4
0
Criteria for 'better': fast in math and simple (few fields, many records) db transactions, convenient to develop/read/extend, flexible, connectible. The task is to use a common web development scripting language to process and calculate long time series and multidimensional surfaces (mostly selecting/inserting sets of floats and doing maths with them). The choice is Ruby 1.9, Python 2, Python 3, PHP 5.3, Perl 5.12, or JavaScript (node.js). All the data is to be stored in a relational database (due to its heavily multidimensional nature); all the communication with outer world is to be done by means of web services.
What's a better choice for SQL-backed number crunching - Ruby 1.9, Python 2, Python 3, or PHP 5.3?
0.379949
1
0
451
3,022,232
2010-06-11T11:13:00.000
10
0
0
0
php,python,ruby,performance,math
3,022,242
2
true
0
0
I would suggest Python with its great scientific/mathematical libraries (SciPy, NumPy). Otherwise the languages don't differ that much, although I doubt that Ruby, PHP or JS can keep up with the speed of Python or Perl. And as the comments below say: at this moment, go for the latest Python 2 (which is Python 2.7). It has mature versions of all the needed libraries, and if you follow the coding guidelines, transferring some day to Python 3 will be only a small pain.
2
4
0
Criteria for 'better': fast in math and simple (few fields, many records) db transactions, convenient to develop/read/extend, flexible, connectible. The task is to use a common web development scripting language to process and calculate long time series and multidimensional surfaces (mostly selecting/inserting sets of floats and doing maths with them). The choice is Ruby 1.9, Python 2, Python 3, PHP 5.3, Perl 5.12, or JavaScript (node.js). All the data is to be stored in a relational database (due to its heavily multidimensional nature); all the communication with outer world is to be done by means of web services.
What's a better choice for SQL-backed number crunching - Ruby 1.9, Python 2, Python 3, or PHP 5.3?
1.2
1
0
451
3,023,136
2010-06-11T13:31:00.000
2
0
0
0
python
4,815,094
3
false
0
0
Not strictly answering what you asked, but if you are running on a Windows platform you could spawn a process to do it for you. From Wikipedia: Microsoft Windows provides two command-line tools for creation and extraction of CAB files: MAKECAB.EXE (included within Windows packages such as 'ie501sp2.exe' and 'orktools.msi'; also available from the SDK) and EXTRACT.EXE (included on the installation CD). Windows XP also provides the EXPAND.EXE command.
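A minimal sketch of spawning expand.exe from Python (an assumption-laden illustration: it presumes Windows, that expand.exe is on the PATH, and the documented `expand -F:files source destination` argument order; the `dry_run` flag is mine, for inspecting the command without running it):

```python
import subprocess

def extract_cab(cab_path, dest_dir, dry_run=False):
    # expand.exe syntax: expand -F:files source destination
    # "-F:*" asks for every file in the cabinet.
    cmd = ["expand", "-F:*", cab_path, dest_dir]
    if dry_run:
        return cmd  # let callers inspect the command without spawning it
    subprocess.check_call(cmd)
    return cmd
```

On older systems you could swap in extract.exe with its own switch set instead.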
1
7
0
Is it somehow possible to extract .cab files in python?
How to extract a windows cabinet file in python
0.132549
0
0
3,924
3,024,191
2010-06-11T15:44:00.000
1
0
0
0
python,session,e-commerce,web.py
3,024,470
1
false
1
0
It very much depends on your system, of course. But personally I always try to merge the data and immediately store it the same way it would be stored for a logged-in user. So if you keep it in a session for an anonymous user and in the database for any authenticated user, just merge all the data as soon as they log in.
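As a sketch of the merge itself, framework-agnostic (the favourites example comes from the question; the ordering rule - user's existing items first, then unseen anonymous ones - is an illustrative choice):

```python
def merge_favourites(user_favs, session_favs):
    # Keep the authenticated user's existing favourites first,
    # then append any anonymous-session favourites they don't already have.
    merged = list(user_favs)
    for item in session_favs:
        if item not in merged:
            merged.append(item)
    return merged
```

After a successful login you'd call this once, persist the result to the user's record, and clear the anonymous session.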
1
0
0
This is a general question, or perhaps a request for pointers to other open source projects to look at: I'm wondering how people merge an anonymous user's session data into the authenticated user data when a user logs in. For example, someone is browsing around your websites saving various items as favourites. He's not logged in, so they're saved to an anonymous user's data. Then he logs in, and we need to merge all that data into their (possibly existing) user data. Is this done different ways in an ad-hoc fashion for different applications? Or are there some best practices or other projects people can direct me to?
How to merge or copy anonymous session data into user data when user logs in?
0.197375
0
0
140
3,024,344
2010-06-11T16:06:00.000
0
1
0
0
python,call,modem,gsm
3,028,528
1
false
0
0
I don't know about this specific model, but GSM modems are generally handled as a communication port (mapped as COMxx under Windows; I don't know the naming under Linux). The documentation of the modem will give you a set of AT commands which allow you to configure the modem so that it notifies the port of incoming calls. Just open the port, send the configuration commands and listen for incoming events. (You should also be able to receive SMS this way.)
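For example, after sending AT+CLIP=1 (caller-line identification, supported by most GSM modems), the modem emits unsolicited RING and +CLIP: lines on the port. A sketch of classifying those notifications - the actual port would be opened with something like pyserial, which is not shown here, and the regex only assumes the standard `+CLIP: "number",type` result format:

```python
import re

# Example unsolicited result: +CLIP: "+441234567890",145
CLIP_RE = re.compile(r'\+CLIP:\s*"(?P<number>[^"]*)"')

def parse_modem_line(line):
    """Classify one line read from the modem's serial port."""
    line = line.strip()
    if line == "RING":
        return ("ring", None)
    match = CLIP_RE.match(line)
    if match:
        return ("caller", match.group("number"))
    return ("other", line)
```

You'd feed this each line from the serial read loop; the ("caller", number) tuple gives you the caller number without ever answering the call.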
1
0
0
Ideally I'd like to find a library for Python. All I need is the caller number, I do not need to answer the call.
Is it possible to detect an incoming call to a GSM modem (HUAWEI E160) plugged into the USB port?
0
0
0
1,371
3,024,663
2010-06-11T16:55:00.000
1
0
0
1
java,python,google-app-engine,google-cloud-datastore
3,025,435
1
false
1
0
I followed what was suggested in the error logs and that worked for me: Empty the index.yaml file (create a backup first) Run vacuum_indexes again Look at your app's admin console and don't go to the next step till all your indexes are deleted. Specify the indexes you want to be created in index.yaml Run update_indexes Look at your app's admin console and it should show that your indexes are now building. Enjoy the fruits of your labor :) Cheers, Keyur
1
1
0
I have a Java app deployed on app engine and I use appcfg.py of the Python SDK to vacuum and update my indexes. Yesterday I first ran vacuum_indexes and that completed successfully - i.e. it en-queued tasks to delete my existing indexes. The next step was probably a mistake on my part - I then ran update_indexes even though my previous indexes weren't yet deleted. Needless to say that my update_indexes call errored out. So much so that now when I look at my app engine console, it shows the status of all my indexes as "Error". A day has passed an it still shows the status on my indexes as "Error". Can someone help my out of my fix?! Thanks, Keyur P.S.: I have posted this on the GAE forums as well but hoping SO users have faced and resolved this issue as well.
Google App Engine - update_indexes error
0.197375
0
0
221
3,025,378
2010-06-11T18:46:00.000
1
1
0
0
python,django,apache,mod-wsgi
3,027,902
2
false
1
0
I guess you had a value of 1 for MaxClients / MaxRequestsPerChild and/or ThreadsPerChild in your Apache settings, so Apache had to start up Django for every mod_python call. That's why it took so long. If you have a wsgi daemon, then a restart only takes place when you "touch" the wsgi script.
2
2
0
I have a variable in the __init__ of a module which gets loaded from the database and takes about 15 seconds. With the Django development server everything works fine, but it looks like with apache2 and mod_wsgi the module is loaded on every request (taking 15 seconds each time). Any idea about this behavior? Update: I have enabled daemon mode in mod_wsgi and it looks like it's not reloading the modules now! Needs more testing and I will update.
Python module being reloaded for each request with django and mod_wsgi
0.099668
0
0
629
3,025,378
2010-06-11T18:46:00.000
3
1
0
0
python,django,apache,mod-wsgi
3,032,332
2
true
1
0
You were likely ignoring the fact that in embedded mode of mod_wsgi, or with mod_python, the application is multiprocess. Requests may go to different processes, so you will see a delay the first time a process that hasn't been hit before handles one. In mod_wsgi daemon mode the default is a single process. That, or as someone else mentioned, you had MaxRequestsPerChild set to 1, which is a really bad idea.
2
2
0
I have a variable in the __init__ of a module which gets loaded from the database and takes about 15 seconds. With the Django development server everything works fine, but it looks like with apache2 and mod_wsgi the module is loaded on every request (taking 15 seconds each time). Any idea about this behavior? Update: I have enabled daemon mode in mod_wsgi and it looks like it's not reloading the modules now! Needs more testing and I will update.
Python module being reloaded for each request with django and mod_wsgi
1.2
0
0
629
3,026,731
2010-06-11T22:48:00.000
1
0
0
0
python,html,templates
3,026,766
5
false
1
0
I would highly recommend using templates. Templates help to encourage a good MVC structure to your application. Python code that emits HTML, IMHO, is wrong. The reason I say that is because Python code should be responsible for doing logic and not have to worry about presentation. Template syntax is usually restrictive enough that you can't really do much logic within the template, but you can do any presentation specific type logic that you may need. ymmv.
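To illustrate the point about presentation-only templates, here is the idea at its smallest, using the stdlib's string.Template as a stand-in for a real engine like Jinja2 (the page layout is invented for the example):

```python
from string import Template

# The template holds presentation only; all logic stays in Python code.
PAGE = Template("<h1>$title</h1>\n<p>$body</p>")

def render(title, body):
    return PAGE.substitute(title=title, body=body)
```

The view code decides *what* to show (and does any escaping, filtering, etc.); the template only decides *where* it appears.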
1
5
0
I have a web app consisting of some HTML forms for maintaining some tables (SQLite, with CherryPy for the web-server stuff). First I did it entirely 'the Python way', and generated HTML strings via code, with common headers, footers, etc. defined as functions in a separate module. I also like the idea of templates, so I tried Jinja2, which I find quite developer-friendly. In the beginning I thought templates were the way to go, but that was when pages were simple. Once .css and .js files were introduced (not necessarily in the same folder as the .html files), and an ever-increasing number of {{...}} variables and {%...%} commands were introduced, things started getting messy at design time, even though they looked great at run time. Things got even more difficult when I needed additional JavaScript in the <head> or <body> sections. As far as I can see, the main advantages of using templates are: Non-dynamic elements of the page can easily be viewed in a browser during design. Except for {} placeholders, HTML is kept separate from Python code. If your company has a web-page designer, they can still design without knowing Python. while some disadvantages are: {{}} delimiters are visible when viewed at design time in a browser. Associated .css and .js files have to be in the same folder to see their effects in a browser at design time. Data, variables, lists, etc., must be prepared in advance and either declared globally or passed as parameters to the render() function. So - when should one use 'hard-coded' HTML, and when templates? I am not sure of the best way to go, so I would be interested to hear other developers' views. TIA, Alan
Templates vs. coded HTML
0.039979
0
0
2,377
3,026,786
2010-06-11T23:02:00.000
0
1
1
0
python,syntax,editor,text-editor
3,026,797
2
false
0
0
Not sure what .odt has to do with any of this. I could see some sort of BNF being able to describe (almost) any syntax: Just run the text and the BNF through a parser, and apply a color scheme to the terminals. You could even get a bit more fancy, since you'd have the syntax tree. In reality, I think most syntax files take an easier approach, such as regular expressions. This would put then somewhere above regular expressions but not really quite context-free in terms of power. As for file formats, if you re-use something that exists, then you can just loot and pillage (subject to license agreements) their syntax file data.
1
0
0
Hey, as a project to improve my programming skills I've begun programming a nice code editor in Python to teach myself project management, version control, and GUI programming. I was wanting to utilize syntax files made for other programs so I could have a large collection already. I was wondering if there is any kind of universal syntax file format, much in the same sense as .odt files. I heard of one once in a forum - it had a website - but I can't remember it now. If not, I may just try to use gedit or Geany syntax files. Thanks
Universal syntax file format?
0
0
0
252
3,026,881
2010-06-11T23:34:00.000
0
0
0
0
python,api,youtube,gdata
3,027,001
1
false
0
0
Setting an environment variable (e.g. import os; os.environ['BLAH'] = 'BLUH') once at the start of your program "seems cumbersome"?! What does count as "non-cumbersome" for you, pray?
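For the record, the "cumbersome" approach is two lines, done once at startup - a sketch assuming the gdata/YouTube HTTP layer honours the standard http_proxy/https_proxy variables (which is how such clients are usually wired up; verify against your gdata version):

```python
import os

def use_proxy(proxy_url):
    # Set once, before creating the YouTube/gdata service object;
    # the underlying HTTP client reads these variables at request time.
    os.environ["http_proxy"] = proxy_url
    os.environ["https_proxy"] = proxy_url
```

Call use_proxy("http://user:pass@host:port") at the top of the script and every account's uploads go through the proxy.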
1
0
0
I'm working on a script that will upload videos to YouTube with different accounts. Is there a way to use HTTPS or SOCKS proxies to filter all the requests? My client doesn't want to leave any footprints for Google. The only way I found was to set the proxy environment variable beforehand, but this seems cumbersome. Is there some way I'm missing? Thanks :)
How to use a Proxy with Youtube API? (Python)
0
0
1
1,129
3,027,973
2010-06-12T08:12:00.000
1
0
0
0
python,django,apache2
3,033,505
4
false
1
0
You need to persist the info server-side, integrity isn't critical, throughput and latency are important. That means you should use some sort of key-value store. Memcached and redis have keys that expire. You probably have memcached already installed, so use that. You can reset expiry time of the user:last-seen:$username key every visit, or you can use mawimawi's cookie technique and have expiry = 4 * cookie-lifetime.
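A sketch of the key scheme above - the cache object stands in for any memcached/redis client exposing set(key, value, ttl), and the 4 * cookie-lifetime expiry is the one suggested in the answer (the concrete COOKIE_LIFETIME value is illustrative):

```python
COOKIE_LIFETIME = 600  # seconds; illustrative value

def last_seen_key(username):
    return "user:last-seen:%s" % username

def touch_last_seen(cache, username, timestamp):
    # Reset the key's expiry on every page view; if the user goes away,
    # the key simply falls out of the cache and they count as offline.
    cache.set(last_seen_key(username), timestamp, 4 * COOKIE_LIFETIME)
```

A user is then "online" exactly when their key is still present in the cache - no database write at all.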
4
7
0
I have a running django/apache2 + memcached app (ubuntu) and would like to keep track of logged in users that are online. What would be the best way to track this? I would prefer not writing to the database each time a logged in user loads a page; but what other options are there?
What is the best way to implement a 'last seen' function in a django web app?
0.049958
0
0
1,321
3,027,973
2010-06-12T08:12:00.000
0
0
0
0
python,django,apache2
3,028,869
4
false
1
0
You can't do that in Django without using a database or other persistent storage, for the same reason Django sessions are stored in the database: there can be multiple instances of your application running, and they must synchronize their state and data through a single persistence source. Alternatively, you could write this information to a file (in some folder, named with the user id) and then check its created/modified date to find the required information.
4
7
0
I have a running django/apache2 + memcached app (ubuntu) and would like to keep track of logged in users that are online. What would be the best way to track this? I would prefer not writing to the database each time a logged in user loads a page; but what other options are there?
What is the best way to implement a 'last seen' function in a django web app?
0
0
0
1,321
3,027,973
2010-06-12T08:12:00.000
1
0
0
0
python,django,apache2
3,028,094
4
false
1
0
A hashmap or a queue in memory with a task running every hour or so to persist it.
4
7
0
I have a running django/apache2 + memcached app (ubuntu) and would like to keep track of logged in users that are online. What would be the best way to track this? I would prefer not writing to the database each time a logged in user loads a page; but what other options are there?
What is the best way to implement a 'last seen' function in a django web app?
0.049958
0
0
1,321
3,027,973
2010-06-12T08:12:00.000
4
0
0
0
python,django,apache2
3,028,086
4
false
1
0
An approach might be: create a middleware that, on process_response (only if the user is authenticated), checks for a cookie called 'online'; if the cookie is not there, it sets a cookie called 'online' with value '1' and a lifespan of 10 minutes, and updates the 'last_login' field of auth.User for this user with the current datetime. Now you have all currently logged-in users in your auth.User table: all users whose last_login is newer than datetime.now() - interval(15 minutes) can be considered "online". The database will be written to for every logged-in user about every 10 minutes. Adjust the values 10 and 15 to your needs. The advantage here is that database writes are rare (according to your two numeric settings, 10/15). And for speed optimization, make sure that last_login is indexed, so a filter on this field (including a Count) is really fast. Hope this helps.
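The decision logic of that middleware, stripped of the Django specifics so the timings are easy to see (the 10/15-minute values are the ones from the answer; cookie handling and the actual last_login write would live in the real middleware):

```python
from datetime import datetime, timedelta

COOKIE_LIFESPAN = timedelta(minutes=10)
ONLINE_WINDOW = timedelta(minutes=15)

def should_touch_last_login(cookies, is_authenticated):
    # Only hit the database when the short-lived 'online' cookie is absent.
    return is_authenticated and "online" not in cookies

def is_online(last_login, now):
    # A user counts as online if seen within the last ONLINE_WINDOW.
    return last_login is not None and now - last_login < ONLINE_WINDOW
```

Because the cookie outlives the write interval, each logged-in user triggers at most one UPDATE per COOKIE_LIFESPAN.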
4
7
0
I have a running django/apache2 + memcached app (ubuntu) and would like to keep track of logged in users that are online. What would be the best way to track this? I would prefer not writing to the database each time a logged in user loads a page; but what other options are there?
What is the best way to implement a 'last seen' function in a django web app?
0.197375
0
0
1,321
3,028,561
2010-06-12T11:52:00.000
1
0
1
0
python
3,028,597
1
true
0
0
Yes, just do setup.py install again.
1
2
0
I have downloaded and installed a Python library via setup.py (python2.5 setup.py install). Now the version has changed at the source - a newer library is available. I originally cloned it via Mercurial and installed it; right now I have updated the repository. How do I use the newer version? Overwrite the installation by simply doing setup.py install again?
Newbie : installing and upgrading python module
1.2
0
0
122
3,029,824
2010-06-12T19:18:00.000
-2
1
0
0
php,python,perl,download
3,029,877
2
false
0
0
There are scripts out there that output the file in chunks, recording how many bytes they've echoed out, but those are completely unreliable and you can't accurately ascertain whether or not the user successfully received the complete file. The short answer is no, really, unless you write your own download manager (in Java) that runs a callback to your server when the download completes.
1
0
0
Is there a way I can programmatically determine the status of a download in Chrome or Mozilla Firefox? I would like to know if the download was aborted or completed successfully. For writing the code I'd be using either Perl, PHP or Python. Please help. Thank You.
Programmatically determining the status of a file download
-0.197375
0
1
676
3,030,277
2010-06-12T22:11:00.000
2
1
0
0
python,django,logging
3,031,000
7
true
0
0
The only way I know of is to edit manage.py itself... not very elegant, of course, but at least it should get you to where you need to be.
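Editing manage.py amounts to a couple of lines before the command is dispatched. Factored into a helper for clarity (the helper name is mine, not Django's):

```python
import logging
import sys

def quiet_logging_for_tests(argv=None):
    """Call from manage.py before execute_manager()/execute_from_command_line()."""
    argv = sys.argv if argv is None else argv
    if "test" in argv:
        # logging.disable(CRITICAL) suppresses every record at or below
        # CRITICAL, i.e. all logging, for the remainder of the run.
        logging.disable(logging.CRITICAL)
        return True
    return False
```

Since logging.disable is process-wide, one call at the top of manage.py silences every logger the tests would otherwise spam.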
1
7
0
I utilize the standard python logging module. When I call python manage.py test I'd like to disable logging before all the tests are ran. Is there a signal or some other kind of hook I could use to call logging.disable? Or is there some other way to disable logging when python manage.py test is ran?
Disable logging during manage.py test?
1.2
0
0
5,944
3,030,585
2010-06-13T00:14:00.000
60
0
0
1
python,eclipse,google-app-engine,pydev
3,030,645
3
true
1
0
/usr/local/google_appengine - that's a symlink that links to the SDK.
2
31
0
I need to know for creating a Pydev Google App Engine Project in Eclipse.
Where is the Google App Engine SDK path on OSX?
1.2
0
0
17,766
3,030,585
2010-06-13T00:14:00.000
28
0
0
1
python,eclipse,google-app-engine,pydev
5,189,889
3
false
1
0
/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine
2
31
0
I need to know for creating a Pydev Google App Engine Project in Eclipse.
Where is the Google App Engine SDK path on OSX?
1
0
0
17,766
3,030,593
2010-06-13T00:22:00.000
1
0
0
1
python,django,google-app-engine
3,031,594
2
false
1
0
It's possible, as Alex demonstrates, but it's not really a good idea: The performance characteristics of the development server are not the same as those of the production environment, so something that executes quickly locally may not be nearly as quick in production, and vice versa. Also, your user facing tasks should definitely not be so slow as to approach the 30 second limit.
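If you still want a local guard rail, a decorator can at least flag handlers that overrun - a sketch only: unlike the real production deadline it cannot interrupt the handler mid-flight, it merely reports the overrun after the view returns (names are mine, not App Engine's):

```python
import functools
import time

class DeadlineExceededError(Exception):
    pass

def enforce_deadline(limit_seconds=30.0):
    def decorator(view):
        @functools.wraps(view)
        def wrapper(*args, **kwargs):
            start = time.time()
            result = view(*args, **kwargs)
            elapsed = time.time() - start
            if elapsed > limit_seconds:
                raise DeadlineExceededError(
                    "handler took %.1fs (limit %.1fs)" % (elapsed, limit_seconds))
            return result
        return wrapper
    return decorator
```

And as noted above, treat any red flags it raises as conservative: the dev server's timings won't match production's.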
1
3
0
Hey, i was wondering if there is a way to enforce the 30 seconds limit that is being enforced online at the appengine production servers to the local dev server? its impossible to test if i reach the limit before going production. maybe some django middlware?
is there any way to enforce the 30 seconds limit on local appengine dev server?
0.099668
0
0
138
3,030,936
2010-06-13T03:16:00.000
1
1
0
1
python,wsgi,cherokee,uwsgi
5,033,390
3
false
1
0
There seems to be an issue with the 'make' method of installation on the uwsgi docs. Use 'python uwsgiconfig.py --build' instead. That worked for me. Cherokee, Django running on Ubuntu 10.10.
1
2
0
Has anyone tried using uWSGI with Cherokee? Can you share your experiences and what documents you relied upon the most? I am trying to get started from the documentation on both (uWSGI and Cherokee) websites. Nothing works yet. I am using Ubuntu 10.04. Edit: To clarify, Cherokee has been working fine. I am getting the error message: uWSGI Error, wsgi application not found So something must be wrong with my configurations. Or maybe my application.
uWSGI with Cherokee: first steps
0.066568
0
0
2,309
3,031,225
2010-06-13T05:45:00.000
5
1
0
0
java,python,trading
3,031,234
7
false
1
0
Pick the language you are most familiar with. If you know them all equally and speed is a real concern, pick C.
6
5
0
I am building a trading portfolio management system that is responsible for production, optimization, and simulation of non-high-frequency trading portfolios (dealing with 1-min or 3-min bars of data, not tick data). I plan on employing Amazon Web Services to take on the entire load of the application. I have four choices that I am considering as the language: Java, C++, C#, Python. Here is the scope at the extremes of the project's requirements. This isn't how it will be, maybe ever, but it's within the scope of the requirements: Weekly simulation of 10,000,000 trading systems (each trading system is expected to have its own data-mining methods, including feature-selection algorithms which are extremely computationally expensive - imagine 500-5000 features using wrappers; these are not run often by any means, but they're still a consideration). Real-time production of a portfolio with 100,000 trading strategies. Taking in 1-min or 3-min data from every stock/futures market around the globe (approx 100,000). Portfolio optimization of portfolios with up to 100,000 strategies (a rather intensive algorithm). Speed is a concern, but I believe that Java can handle the load; I just want to make sure that Java CAN handle the above requirements comfortably. I don't want to do the project in C++, but I will if it's required. The reason C# is on there is because I thought it was a good alternative to Java, even though I don't like Windows at all and would prefer Java if all things are equal. Python - I've read some things about PyPy and Psyco that claim Python can be optimized with JIT compiling to run at near C-like speeds... That's pretty much the only reason it is on this list, besides the fact that Python is a great language and would probably be the most enjoyable language to code in - which is not a factor at all for this project, but a perk.
To sum up: real-time production; weekly simulations of a large number of systems; weekly/monthly optimizations of portfolios; large numbers of connections to collect data from. There is no dealing with millisecond or even second-based trades. The only consideration is whether Java can deal with this kind of load when spread out over the necessary number of EC2 servers. Thank you guys so much for your wisdom.
Which programming language for compute-intensive trading portfolio simulation?
0.141893
0
0
995
3,031,225
2010-06-13T05:45:00.000
4
1
0
0
java,python,trading
3,031,266
7
false
1
0
Write it in your preferred language; to me that sounds like Python. When you start running the system you can profile it and see where the bottlenecks are. Once you've done some basic optimisations, if it's still not acceptable you can rewrite portions in C. A consideration could be writing this in IronPython to take advantage of the CLR and DLR in .NET. Then you can leverage .NET 4 and the Parallel Extensions. If anything will give you performance increases it'll be some flavour of threading, which .NET does extremely well. Edit: just wanted to make this part clear - from the description, it sounds like parallel processing / multithreading is where the majority of the performance gains will come from.
6
5
0
I am building a trading portfolio management system that is responsible for production, optimization, and simulation of non-high frequency trading portfolios (dealing with 1min or 3min bars of data, not tick data). I plan on employing Amazon web services to take on the entire load of the application. I have four choices that I am considering as language. Java C++ C# Python Here is the scope of the extremes of the project scope. This isn't how it will be, maybe ever, but it's within the scope of the requirements: Weekly simulation of 10,000,000 trading systems. (Each trading system is expected to have its own data mining methods, including feature selection algorithms which are extremely computationally-expensive. Imagine 500-5000 features using wrappers. These are not run often by any means, but it's still a consideration) Real-time production of portfolio w/ 100,000 trading strategies Taking in 1 min or 3 min data from every stock/futures market around the globe (approx 100,000) Portfolio optimization of portfolios with up to 100,000 strategies. (rather intensive algorithm) Speed is a concern, but I believe that Java can handle the load. I just want to make sure that Java CAN handle the above requirements comfortably. I don't want to do the project in C++, but I will if it's required. The reason C# is on there is because I thought it was a good alternative to Java, even though I don't like Windows at all and would prefer Java if all things are the same. Python - I've read somethings on PyPy and pyscho that claim python can be optimized with JIT compiling to run at near C-like speeds... That's pretty much the only reason it is on this list, besides that fact that Python is a great language and would probably be the most enjoyable language to code in, which is not a factor at all for this project, but a perk. 
To sum up: real time production weekly simulations of a large number of systems weekly/monthly optimizations of portfolios large numbers of connections to collect data from There is no dealing with millisecond or even second based trades. The only consideration is if Java can possibly deal with this kind of load when spread out of a necessary amount of EC2 servers. Thank you guys so much for your wisdom.
Which programming language for compute-intensive trading portfolio simulation?
0.113791
0
0
995
3,031,225
2010-06-13T05:45:00.000
4
1
0
0
java,python,trading
3,031,544
7
true
1
0
I would pick Java for this task. In terms of RAM, the difference between Java and C++ is that in Java, each Object has an overhead of 8 Bytes (using the Sun 32-bit JVM or the Sun 64-bit JVM with compressed pointers). So if you have millions of objects flying around, this can make a difference. In terms of speed, Java and C++ are almost equal at that scale. So the more important thing for me is the development time. If you make a mistake in C++, you get a segmentation fault (and sometimes you don't even get that), while in Java you get a nice Exception with a stack trace. I have always preferred this. In C++ you can have collections of primitive types, which Java hasn't. You would have to use external libraries to get them. If you have real-time requirements, the Java garbage collector may be a nuisance, since it takes some minutes to collect a 20 GB heap, even on machines with 24 cores. But if you don't create too many temporary objects during runtime, that should be fine, too. It's just that your program can make that garbage collection pause whenever you don't expect it.
6
5
0
I am building a trading portfolio management system that is responsible for production, optimization, and simulation of non-high frequency trading portfolios (dealing with 1min or 3min bars of data, not tick data). I plan on employing Amazon web services to take on the entire load of the application. I have four choices that I am considering as language. Java C++ C# Python Here is the scope of the extremes of the project scope. This isn't how it will be, maybe ever, but it's within the scope of the requirements: Weekly simulation of 10,000,000 trading systems. (Each trading system is expected to have its own data mining methods, including feature selection algorithms which are extremely computationally-expensive. Imagine 500-5000 features using wrappers. These are not run often by any means, but it's still a consideration) Real-time production of portfolio w/ 100,000 trading strategies Taking in 1 min or 3 min data from every stock/futures market around the globe (approx 100,000) Portfolio optimization of portfolios with up to 100,000 strategies. (rather intensive algorithm) Speed is a concern, but I believe that Java can handle the load. I just want to make sure that Java CAN handle the above requirements comfortably. I don't want to do the project in C++, but I will if it's required. The reason C# is on there is because I thought it was a good alternative to Java, even though I don't like Windows at all and would prefer Java if all things are the same. Python - I've read somethings on PyPy and pyscho that claim python can be optimized with JIT compiling to run at near C-like speeds... That's pretty much the only reason it is on this list, besides that fact that Python is a great language and would probably be the most enjoyable language to code in, which is not a factor at all for this project, but a perk. 
To sum up: real time production weekly simulations of a large number of systems weekly/monthly optimizations of portfolios large numbers of connections to collect data from There is no dealing with millisecond or even second based trades. The only consideration is if Java can possibly deal with this kind of load when spread out of a necessary amount of EC2 servers. Thank you guys so much for your wisdom.
Which programming language for compute-intensive trading portfolio simulation?
1.2
0
0
995
3,031,225
2010-06-13T05:45:00.000
3
1
0
0
java,python,trading
3,031,844
7
false
1
0
Why only one language for your system? If I were you, I would build the entire system in Python, with C or C++ used for performance-critical components. That way you get a very flexible and extensible system with fast-enough performance. You can even find tools to generate the wrappers automatically (e.g. SWIG, Cython). Python and C/C++/Java/Fortran are not competing with each other; they are complementary.
6
5
0
I am building a trading portfolio management system that is responsible for production, optimization, and simulation of non-high frequency trading portfolios (dealing with 1min or 3min bars of data, not tick data). I plan on employing Amazon web services to take on the entire load of the application. I have four choices that I am considering as language. Java C++ C# Python Here is the scope of the extremes of the project scope. This isn't how it will be, maybe ever, but it's within the scope of the requirements: Weekly simulation of 10,000,000 trading systems. (Each trading system is expected to have its own data mining methods, including feature selection algorithms which are extremely computationally-expensive. Imagine 500-5000 features using wrappers. These are not run often by any means, but it's still a consideration) Real-time production of portfolio w/ 100,000 trading strategies Taking in 1 min or 3 min data from every stock/futures market around the globe (approx 100,000) Portfolio optimization of portfolios with up to 100,000 strategies. (rather intensive algorithm) Speed is a concern, but I believe that Java can handle the load. I just want to make sure that Java CAN handle the above requirements comfortably. I don't want to do the project in C++, but I will if it's required. The reason C# is on there is because I thought it was a good alternative to Java, even though I don't like Windows at all and would prefer Java if all things are the same. Python - I've read somethings on PyPy and pyscho that claim python can be optimized with JIT compiling to run at near C-like speeds... That's pretty much the only reason it is on this list, besides that fact that Python is a great language and would probably be the most enjoyable language to code in, which is not a factor at all for this project, but a perk. 
To sum up: real time production weekly simulations of a large number of systems weekly/monthly optimizations of portfolios large numbers of connections to collect data from There is no dealing with millisecond or even second based trades. The only consideration is if Java can possibly deal with this kind of load when spread out of a necessary amount of EC2 servers. Thank you guys so much for your wisdom.
Which programming language for compute-intensive trading portfolio simulation?
0.085505
0
0
995
3,031,225
2010-06-13T05:45:00.000
5
1
0
0
java,python,trading
3,035,998
7
false
1
0
While I am a huge fan of Python, and personally I'm not a great lover of Java, in this case I have to concede that Java is the right way to go. For many projects Python's performance just isn't a problem, but in your case even minor performance penalties will add up extremely quickly. I know this isn't a real-time simulation, but even for batch processing it's still a factor to take into consideration. If it turns out the load is too big for one virtual server, an implementation that's twice as fast will halve your virtual-server costs. For many projects I'd also argue that Python will let you develop a solution faster, but here I'm not sure that would be the case. Java has world-class development tools and top-drawer, enterprise-grade frameworks for parallel processing and cross-server deployment, and while Python has solutions in this area, Java clearly has the edge. You also have architectural options with Java that Python can't match, such as JavaSpaces. I would argue that C and C++ impose too much development overhead for a project like this. They're viable in that if you are very familiar with those languages I'm sure it would be doable, but other than the potential for higher performance, they bring nothing else to the table. C# is essentially a rewrite of Java. That's not a bad thing if you're a Windows developer, and if you prefer Windows I'd use C# rather than Java, but if you don't care about Windows there's no reason to care about C#.
6
5
0
I am building a trading portfolio management system that is responsible for production, optimization, and simulation of non-high frequency trading portfolios (dealing with 1min or 3min bars of data, not tick data). I plan on employing Amazon web services to take on the entire load of the application. I have four choices that I am considering as language. Java C++ C# Python Here are the extremes of the project scope. This isn't how it will be, maybe ever, but it's within the scope of the requirements: Weekly simulation of 10,000,000 trading systems. (Each trading system is expected to have its own data mining methods, including feature selection algorithms which are extremely computationally-expensive. Imagine 500-5000 features using wrappers. These are not run often by any means, but it's still a consideration) Real-time production of portfolio w/ 100,000 trading strategies Taking in 1 min or 3 min data from every stock/futures market around the globe (approx 100,000) Portfolio optimization of portfolios with up to 100,000 strategies. (rather intensive algorithm) Speed is a concern, but I believe that Java can handle the load. I just want to make sure that Java CAN handle the above requirements comfortably. I don't want to do the project in C++, but I will if it's required. The reason C# is on there is because I thought it was a good alternative to Java, even though I don't like Windows at all and would prefer Java if all things are the same. Python - I've read some things on PyPy and Psyco that claim Python can be optimized with JIT compiling to run at near C-like speeds... That's pretty much the only reason it is on this list, besides the fact that Python is a great language and would probably be the most enjoyable language to code in, which is not a factor at all for this project, but a perk. 
To sum up: real time production, weekly simulations of a large number of systems, weekly/monthly optimizations of portfolios, large numbers of connections to collect data from. There is no dealing with millisecond or even second based trades. The only consideration is whether Java can deal with this kind of load when spread across the necessary number of EC2 servers. Thank you guys so much for your wisdom.
Which programming language for compute-intensive trading portfolio simulation?
0.141893
0
0
995
3,031,225
2010-06-13T05:45:00.000
0
1
0
0
java,python,trading
3,036,242
7
false
1
0
It is useful to look at the inner loop of your numerical code. After all, you will spend most of your CPU time inside this loop. If the inner loop is a matrix operation, then I suggest python and scipy, but if the inner loop is not a matrix operation, then I would worry about python being slow. (Or maybe I would wrap C++ in python using SWIG or boost::python.) The benefit of python is that it is easy to debug, and you save a lot of time by not having to compile all the time. This is especially useful for a project where you spend a lot of time programming deep internals.
6
5
0
I am building a trading portfolio management system that is responsible for production, optimization, and simulation of non-high frequency trading portfolios (dealing with 1min or 3min bars of data, not tick data). I plan on employing Amazon web services to take on the entire load of the application. I have four choices that I am considering as language. Java C++ C# Python Here are the extremes of the project scope. This isn't how it will be, maybe ever, but it's within the scope of the requirements: Weekly simulation of 10,000,000 trading systems. (Each trading system is expected to have its own data mining methods, including feature selection algorithms which are extremely computationally-expensive. Imagine 500-5000 features using wrappers. These are not run often by any means, but it's still a consideration) Real-time production of portfolio w/ 100,000 trading strategies Taking in 1 min or 3 min data from every stock/futures market around the globe (approx 100,000) Portfolio optimization of portfolios with up to 100,000 strategies. (rather intensive algorithm) Speed is a concern, but I believe that Java can handle the load. I just want to make sure that Java CAN handle the above requirements comfortably. I don't want to do the project in C++, but I will if it's required. The reason C# is on there is because I thought it was a good alternative to Java, even though I don't like Windows at all and would prefer Java if all things are the same. Python - I've read some things on PyPy and Psyco that claim Python can be optimized with JIT compiling to run at near C-like speeds... That's pretty much the only reason it is on this list, besides the fact that Python is a great language and would probably be the most enjoyable language to code in, which is not a factor at all for this project, but a perk. 
To sum up: real time production, weekly simulations of a large number of systems, weekly/monthly optimizations of portfolios, large numbers of connections to collect data from. There is no dealing with millisecond or even second based trades. The only consideration is whether Java can deal with this kind of load when spread across the necessary number of EC2 servers. Thank you guys so much for your wisdom.
Which programming language for compute-intensive trading portfolio simulation?
0
0
0
995
3,031,358
2010-06-13T07:02:00.000
1
1
0
0
c++,python,c,hashtable,hash
3,031,578
3
false
0
0
When you want to learn, I suggest you look at the Java implementation of java.util.HashMap. It's clear code, well-documented and comparably short. Admitted, it's neither C, nor C++, nor Python, but you probably don't want to read the GNU libc++'s upcoming implementation of a hashtable, which above all consists of the complexity of the C++ standard template library. To begin with, you should read the definition of the java.util.Map interface. Then you can jump directly into the details of the java.util.HashMap. And everything that's missing you will find in java.util.AbstractMap. The implementation of a good hash function is independent of the programming language. The basic task of it is to map an arbitrarily large value set onto a small value set (usually some kind of integer type), so that the resulting values are evenly distributed.
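The "map an arbitrarily large value set evenly onto integers" idea can be illustrated with a tiny, classic string hash (djb2, attributed to Dan Bernstein). This is a teaching sketch in Python, not code from the Java sources the answer recommends:

```python
# djb2: h = h * 33 + byte, a classic simple string hash. The modulo at the
# end turns the large hash value into a bucket index for a fixed-size table.

def djb2(s, table_size=1024):
    h = 5381
    for ch in s:
        h = (h * 33 + ord(ch)) & 0xFFFFFFFF  # keep the value in 32 bits
    return h % table_size  # bucket index in a table with table_size slots

print(djb2("hash"), djb2("table"))
```

The `& 0xFFFFFFFF` mirrors the fixed-width integer overflow that a C or Java implementation gets for free; without it, Python's arbitrary-precision ints would grow unbounded.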
1
0
0
Looking for good source code either in C or C++ or Python to understand how a hash function is implemented and also how a hash table is implemented using it. Very good material on how hash fn and hash table implementation works. Thanks in advance.
Looking for production quality Hash table/ unordered map implementation to learn?
0.066568
0
0
571
3,031,383
2010-06-13T07:09:00.000
2
1
0
0
python,django,build-process,build,bytecode
3,031,660
1
false
1
0
You shouldn't ever need to 'compile' your .pyc files manually. This is always done automatically at runtime by the Python interpreter. In rare instances, such as when you delete an entire .py module, you may need to manually delete the corresponding .pyc. But there's no need to do any other manual compiling. What makes you think you need to do this?
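If you ever do want to byte-compile ahead of time (for deployment convenience rather than correctness), the standard library already provides it via `py_compile` (and `compileall` for whole trees). A minimal sketch, using a throwaway temp file:

```python
import os
import py_compile
import tempfile

# Write a trivial module to a temp directory, then byte-compile it manually.
src = os.path.join(tempfile.mkdtemp(), "m.py")
with open(src, "w") as f:
    f.write("x = 1\n")

path = py_compile.compile(src)  # writes the .pyc and returns its path
print(os.path.exists(path))
```

This is exactly what the interpreter does for you automatically on import, which is why the manual step is almost never needed.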
1
2
0
Whenever I change my python source files in my Django project, the .pyc files become out of date. Of course that's because I need to recompile them in order to test them through my local Apache web server. I would like to get around this manual process by employing some automatic means of compiling them on save, or on build through Eclipse, or something like that. What's the best and proper way to do this?
Eclipse + Django: How to get bytecode output when python source files change?
0.379949
0
0
463
3,031,483
2010-06-13T07:47:00.000
2
0
0
0
python,django
3,031,565
2
true
1
0
Something else to remember: You need to maintain a browser session with the remote site so that site knows which CAPTCHA you're trying to solve. Lots of webclients allow you to store your cookies and I'd suggest you dump them in the Django Session of the user you're doing the screen scraping for. Then load them back up when you submit the CAPTCHA. Here's how I see the full turn of events: User places search request Query remote site If not CAPTCHA, GOTO #10 Save remote cookies in local session Download image captcha (perhaps to session too?) Present CAPTCHA to your user and a form User Submits CAPTCHA You load up cookies from #4 and submit the form as a POST GOTO #3 Process the data off the page, present to user, high-five yourself.
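The cookie handoff in steps 4, 8 and 9 can be sketched with plain dicts standing in for Django's `request.session` and the real HTTP client; every name below is illustrative, not an actual API:

```python
# session stands in for Django's request.session; the cookie dict stands in
# for whatever your web client (urllib2, mechanize, ...) exposes.
session = {}

def save_remote_cookies(cookies):
    # step 4: park the remote site's cookies in the local Django session
    session["remote_cookies"] = cookies

def resubmit_with_captcha(answer):
    # steps 8-9: restore the same remote browser session, then POST the
    # CAPTCHA answer so the remote site can match it to the challenge
    cookies = session["remote_cookies"]
    return {"cookies": cookies, "captcha": answer}

save_remote_cookies({"PHPSESSID": "abc123"})  # hypothetical cookie name/value
print(resubmit_with_captcha("XK4P"))
```

The essential point is that the cookies saved in step 4 and the ones sent in step 9 must be the same object graph, keyed off the user's Django session.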
2
1
0
I have a tricky Django problem which didn't occur to me when I was developing it. My Django application allows a user to sign up and store his login credentials for another site. The Django application basically allows the user to search this other site (by scraping content off it) and returns the result to the user. For each query, it does a couple of queries of the other site. This seemed to work fine, but sometimes the other site slaps me with a CAPTCHA. I've written the code to get the CAPTCHA image and I need to return this to the user so he can type it in, but I don't know how. My search request (the query, the username and the password) in my Django application gets passed to a view which in turn calls the backend that does the scraping/search. When a CAPTCHA is detected, I'd like to raise a client side event or something along those lines, display the CAPTCHA to the user, and wait for the user's input so that I can resume my search. I would somehow need to persist my backend object between calls. I've tried pickling it but it doesn't work because I get the Can't pickle 'lock' object error. I don't know how to implement this, though. Any help/ideas? Thanks a ton.
Raising events and object persistence in Django
1.2
0
0
267
3,031,483
2010-06-13T07:47:00.000
0
0
0
0
python,django
6,689,964
2
false
1
0
request.session['name'] = variable will store it; then variable = request.session['name'] will retrieve it. Remember though, it's not a database, just a simple session store, and shouldn't be relied on for anything critical
2
1
0
I have a tricky Django problem which didn't occur to me when I was developing it. My Django application allows a user to sign up and store his login credentials for another site. The Django application basically allows the user to search this other site (by scraping content off it) and returns the result to the user. For each query, it does a couple of queries of the other site. This seemed to work fine, but sometimes the other site slaps me with a CAPTCHA. I've written the code to get the CAPTCHA image and I need to return this to the user so he can type it in, but I don't know how. My search request (the query, the username and the password) in my Django application gets passed to a view which in turn calls the backend that does the scraping/search. When a CAPTCHA is detected, I'd like to raise a client side event or something along those lines, display the CAPTCHA to the user, and wait for the user's input so that I can resume my search. I would somehow need to persist my backend object between calls. I've tried pickling it but it doesn't work because I get the Can't pickle 'lock' object error. I don't know how to implement this, though. Any help/ideas? Thanks a ton.
Raising events and object persistence in Django
0
0
0
267
3,032,207
2010-06-13T12:24:00.000
0
0
1
0
python,dictionary
3,032,870
3
false
0
0
"Premature optimization is the root of all evil," as C.A.R. Hoare said. Rather than ask if this implementation is efficient, perhaps you should ask if this implementation is efficient enough. If it meets your performance requirements and the code is easy to understand, perhaps you should leave it be. If your code is not giving you the performance you need, I'd suggest Shin's answer. It seems to be the simplest way of getting additional performance.
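If the dict scan ever does become a bottleneck, one common alternative is to keep a set of free process ids alongside the dict, so handing out or returning an id is O(1) instead of a linear scan. A sketch (names are illustrative, not from either answer):

```python
# free_ids holds processes with no client attached; clients maps a process
# id to the connected client's socket fd (a string stands in for the fd here).
n = 4
free_ids = set(range(n))
clients = {}

def connect(fd):
    """Attach a client to any available process; None if all are busy."""
    if not free_ids:
        return None
    pid = free_ids.pop()
    clients[pid] = fd
    return pid

def disconnect(pid):
    """Release the process back into the available pool."""
    del clients[pid]
    free_ids.add(pid)

pid = connect("fd-1")
print(pid, len(free_ids))
```

Whether this is worth it depends, as the answer says, on whether the simple dict scan is actually too slow for n processes.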
1
0
0
I have a number of processes running which are controlled by remote clients. A tcp server controls access to these processes, only one client per process. The processes are given an id number in the range of 0 -> n-1, where 'n' is the number of processes. I use a dictionary to map this id to the client socket's file descriptor. On startup I populate the dictionary with the ids as keys and a socket fd of 'None' for the values, i.e. no clients and all processes are available. When a client connects, I map the id to the socket's fd. When a client disconnects I set the value for this id to None, i.e. the process is available. So every time a client connects I have to check each entry in the dictionary for a process which has a socket fd entry of None. If there is one, the client is allowed to connect. This solution does not seem very elegant; are there other data structures which would be more suitable for solving this? Thanks
Is a python dictionary the best data structure to solve this problem?
0
0
0
288
3,032,378
2010-06-13T13:29:00.000
0
0
0
1
python
3,032,409
2
false
0
0
Well, here is an idea... place a status somewhere else that can be polled/queried. When the process starts, post the 'running' status. Have the script check there to see if the process is running. I would also use a separate place to post control values, e.g. set a value in the 'control set' and have the process look for those values whenever it gets to decision points in its runtime behavior.
2
0
0
I want a script to start and interact with a long running process. The process is started first time the script is executed, after that the script can be executed repeatedly, but will detect that the process is already running. The script should be able to interact with the process. I would like this to work on Unix and Windows. I am unsure how I do this. Specifically how do I detect if the process is already running and open a pipe to it? Should I use sockets (e.g. registering the server process on a known port and then check if it responds) or should I use "named pipes"? Or is there some easier way?
Detecting and interacting with long running process
0
0
0
317
3,032,378
2010-06-13T13:29:00.000
2
0
0
1
python
3,032,536
2
true
0
0
Sockets are easier to make portable between Windows and any other OS, so that's why I would recommend them over named pipes (that's also why e.g. IDLE uses sockets rather than named pipes -- the latter require platform-dependent code on Windows, e.g. via ctypes [[or third-party win32all or Cython &c]], while sockets just work).
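A minimal sketch of the socket-based detection: try to connect to a well-known local port; if something answers, the long-running process is already up. The port number is an arbitrary assumption, and a real script would go on to start the server (and open a command pipe) when nothing answers:

```python
import socket

def server_running(port=43217):
    """Return True if some process already listens on this localhost port.

    43217 is a made-up 'well-known' port for this sketch; a real app would
    pick one and register it in its docs/config.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(0.5)
    try:
        s.connect(("127.0.0.1", port))
        return True          # something accepted: the process is up
    except OSError:          # connection refused / timed out: nothing there
        return False
    finally:
        s.close()

print(server_running())
```

The same connected socket can then double as the communication channel, which is the other advantage over a separate lock-file scheme.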
2
0
0
I want a script to start and interact with a long running process. The process is started first time the script is executed, after that the script can be executed repeatedly, but will detect that the process is already running. The script should be able to interact with the process. I would like this to work on Unix and Windows. I am unsure how I do this. Specifically how do I detect if the process is already running and open a pipe to it? Should I use sockets (e.g. registering the server process on a known port and then check if it responds) or should I use "named pipes"? Or is there some easier way?
Detecting and interacting with long running process
1.2
0
0
317
3,032,519
2010-06-13T14:18:00.000
2
0
0
0
python,django,django-admin
3,032,552
3
true
1
0
When I look into the source code of Django, I find out the reason. Somewhere in the django.core.management.commands.runserver module, a WSGIHandler object is wrapped inside an AdminMediaHandler. According to the document, AdminMediaHandler is a WSGI middleware that intercepts calls to the admin media directory, as defined by the ADMIN_MEDIA_PREFIX setting, and serves those images. Use this ONLY LOCALLY, for development! This hasn't been tested for security and is not super efficient. And that's why the admin media files can only be found automatically when I was using the test server. Now I just go ahead and set up the admin media url mapping manually :)
1
3
0
When I was using the built-in simple server, everything was OK and the admin interface was beautiful: python manage.py runserver However, when I try to serve my application using a wsgi server with django.core.handlers.wsgi.WSGIHandler, Django seems to forget where the admin media files are, and the admin page is not styled at all: gunicorn_django How did this happen?
Why can't Django find my admin media files once I leave the built-in runserver?
1.2
0
0
1,064
3,032,805
2010-06-13T15:41:00.000
15
0
1
1
python,multiprocessing
3,032,818
6
true
0
0
From the Python docs: When a process exits, it attempts to terminate all of its daemonic child processes. This is the expected behavior.
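A small sketch of that documented behavior: a daemonic child only finishes if the parent is still around, so the parent must join() it (or the child must be non-daemonic) to guarantee completion. The fork start method is forced here purely to keep the sketch POSIX-simple:

```python
import multiprocessing
import time

def worker():
    time.sleep(0.1)  # pretend to do some background work

def launch():
    ctx = multiprocessing.get_context("fork")  # POSIX-only; keeps the sketch simple
    p = ctx.Process(target=worker, daemon=True)
    p.start()
    # Without this join(), the parent could exit first and the daemonic
    # child would be terminated before worker() finishes.
    p.join()
    return p.exitcode

print(launch())
```

For a process that should genuinely outlive the script, the usual routes are a non-daemonic child that the parent simply doesn't wait for, or launching a fresh detached process with subprocess.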
1
17
0
I want a script to start a new process, such that the new process continues running after the initial script exits. I expected that I could use multiprocessing.Process to start a new process, and set daemon=True so that the main script may exit while the created process continues running. But it seems that the second process is silently terminated when the main script exits. Is this expected behavior, or am I doing something wrong?
Starting a separate process
1.2
0
0
44,315
3,032,905
2010-06-13T16:09:00.000
1
0
0
0
python,pygtk,gtktreeview
3,056,761
2
true
0
1
What I ended up doing was extending gtk.ListStore and using my custom list. I also hijacked the append() method so that it not only appends a [str, str, etc] to the ListStore, but also appends the actual model to a custom list property of the class that extends ListStore. Then, when the user double-clicks the row, I fetch the requested model by the row's index in the ListStore, which corresponds to the model's index in the custom list.
1
1
0
I have a list of Project objects, that I display in a GtkTreeView. I am trying to open a dialog with a Project's details when the user double-clicks on the item's row in the TreeView. Right now I get the selected value from the TreeView (which is the name of the Project) via get_selection(), and search for that Project by name in my own list to corelate the selection with my own model. However, this doesn't feel quite right (plus, it assumes that a Project's name is unique), and I was wondering if there is a more elegant way of doing it.
How to correlate gtk.ListStore items with my own models
1.2
0
0
296
3,033,009
2010-06-13T16:31:00.000
2
0
0
1
python,google-app-engine,http-error
3,033,229
3
false
1
0
I agree that the correlation between startup log messages and 500 errors is not necessarily causal. However, it could be, and pocoa should take steps to ensure that startup time is low and that time-consuming tasks are deferred when possible. One log entry and one 500 error do not mean much, but a few that are correlated in time probably point to excessive startup costs.
1
1
0
This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time. This request may thus take longer and use more CPU than a typical request for your application. I've handled all the usual situations, including DeadlineExceededError. But sometimes I see these error messages in the error logs. That request took about 10k ms, so it didn't exceed the limit either. But there is no other specific message about this error. All I know is that it returned HTTP 500. Does anyone know the reason for these error messages? Thank you.
App Engine HTTP 500s
0.132549
0
0
829
3,033,329
2010-06-13T18:09:00.000
2
1
1
0
c++,python,c,performance,programming-languages
3,033,341
11
false
0
0
C and C++ compile to native code- that is, they run directly on the CPU. Python is an interpreted language, which means that the Python code you write must go through many, many stages of abstraction before it can become executable machine code.
3
86
0
Why does Python seem slower, on average, than C/C++? I learned Python as my first programming language, but I've only just started with C and already I feel I can see a clear difference.
Why are Python Programs often slower than the Equivalent Program Written in C or C++?
0.036348
0
0
44,926
3,033,329
2010-06-13T18:09:00.000
8
1
1
0
c++,python,c,performance,programming-languages
3,033,355
11
false
0
0
The difference between python and C is the usual difference between an interpreted (bytecode) and compiled (to native) language. Personally, I don't really see python as slow, it manages just fine. If you try to use it outside of its realm, of course, it will be slower. But for that, you can write C extensions for python, which puts time-critical algorithms in native code, making it way faster.
3
86
0
Why does Python seem slower, on average, than C/C++? I learned Python as my first programming language, but I've only just started with C and already I feel I can see a clear difference.
Why are Python Programs often slower than the Equivalent Program Written in C or C++?
1
0
0
44,926
3,033,329
2010-06-13T18:09:00.000
25
1
1
0
c++,python,c,performance,programming-languages
3,033,545
11
false
0
0
Compilation vs interpretation isn't important here: Python is compiled, and it's a tiny part of the runtime cost for any non-trivial program. The primary costs are: the lack of an integer type which corresponds to native integers (making all integer operations vastly more expensive), the lack of static typing (which makes resolution of methods more difficult, and means that the types of values must be checked at runtime), and the lack of unboxed values (which reduce memory usage, and can avoid a level of indirection). Not that any of these things aren't possible or can't be made more efficient in Python, but the choice has been made to favor programmer convenience and flexibility, and language cleanness over runtime speed. Some of these costs may be overcome by clever JIT compilation, but the benefits Python provides will always come at some cost.
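The per-operation overhead described above is easy to observe: a plain Python loop pays bytecode dispatch and integer boxing on every iteration, while the C-implemented built-in `sum` over the same range does not. A quick sketch (absolute timings vary by machine; only the ordering matters):

```python
import timeit

# 200 repetitions of a 1000-iteration accumulation, first as interpreted
# bytecode, then via the C-implemented built-in sum().
py_loop = timeit.timeit("s = 0\nfor i in range(1000): s += i", number=200)
builtin = timeit.timeit("sum(range(1000))", number=200)

# The pure-Python loop is expected to be several times slower because every
# iteration goes through the interpreter's dispatch loop and boxes each int.
print(py_loop > builtin)
```

This is also why pushing inner loops into C extensions (or into libraries like NumPy that do it for you) recovers most of the speed gap.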
3
86
0
Why does Python seem slower, on average, than C/C++? I learned Python as my first programming language, but I've only just started with C and already I feel I can see a clear difference.
Why are Python Programs often slower than the Equivalent Program Written in C or C++?
1
0
0
44,926
3,034,304
2010-06-13T23:32:00.000
0
0
1
0
python,macos,installation
3,034,631
2
false
0
0
You should be able to delete the packages you've installed from /Library/Python/2.*/site-packages/. I do not think any package installers will install by default to /System/Library, which should save you from needing to remove Python itself. That said, you could also use virtualenv with --no-site-packages, and just ignore whatever packages you've installed system-wide without needing to remove them.
1
3
0
I was wondering if anyone had tips on how to completely remove a python installation from Mac OSX (10.5.8)... including virtual environments and its related binaries. Over the past few years I've completely messed up the installed site-packages, virtual environments, etc., and the only way I can see to fix it is to just uninstall everything and re-install. I'd like to completely re-do everything and use virtualenv, pip, etc. from the beginning. On the other hand, if anyone knows a way to do this without removing python and re-installing, I'd be happy to hear about it. Thanks, Will
Removing python and then re-installing on Mac OSX
0
0
0
6,935
3,034,640
2010-06-14T01:51:00.000
2
1
0
0
python,textmate
3,069,494
2
false
0
0
Also, with textmate, do I actually define a project in textmate or do I just work on the files and textmate doesn't create its own .project type file ? You can do both. You can create a new project in TextMate by going to File -> New Project and add your files manually, or you can drag a folder into TextMate and it will create a project from those files (you can add other files later). Note that the second method will not create a .tmproj file, though, so if you want to "keep" that project, you'll have to save it (File -> Save Project).
1
2
0
I'm using textmate for the first time, basically, and I am lost as to what keys map to these funny symbols. Using the python bundles, what keys do I press for: run run with tests run project unit tests Also, with textmate, do I actually define a project in textmate, or do I just work on the files and textmate doesn't create its own .project type file?
new to mac and textmate, can someone explain these shortcuts?
0.197375
0
0
527
3,034,659
2010-06-14T02:03:00.000
3
0
1
0
python,linux,python-imaging-library
3,034,709
6
true
0
0
Well, for one thing, im.show is only intended for debugging purposes; it isn't guaranteed to work. Nevertheless, you can always look at the source (run "pydoc PIL"; the FILE section points out where a module is located): on Windows, PIL will use "start /wait filename"; on Macs, it uses "open -a /Applications/Preview.app"; and on Linux, either 'display' if found, otherwise 'xdg-open'.
3
3
0
When toying with images in the python shell, I use image.show(), where image is an instance of Image. Long ago nothing happened, but after defining a symlink to mirage named "xv", I was happy. The last few days, show() will bring up both ImageMagick's display and also Mirage. It's not clear where show() gets information on what to run. Documentation wasn't helpful. How to make it behave and bring up only what it thinks is xv?
PIL's Image.show() brings up *two* different viewers
1.2
0
0
6,659
3,034,659
2010-06-14T02:03:00.000
0
0
1
0
python,linux,python-imaging-library
3,037,360
6
false
0
0
It is possible to specify the viewer as the command argument to the show method, e.g. img.show(command='feh')
3
3
0
When toying with images in the python shell, I use image.show(), where image is an instance of Image. Long ago nothing happened, but after defining a symlink to mirage named "xv", I was happy. The last few days, show() will bring up both ImageMagick's display and also Mirage. It's not clear where show() gets information on what to run. Documentation wasn't helpful. How to make it behave and bring up only what it thinks is xv?
PIL's Image.show() brings up *two* different viewers
0
0
0
6,659
3,034,659
2010-06-14T02:03:00.000
5
0
1
0
python,linux,python-imaging-library
12,149,541
6
false
0
0
A little outdated, but... I solved this problem by changing the code of the file /usr/lib/python2.7/dist-packages/PIL/ImageShow.py. The show method of the Viewer class is missing a return (near line 66): return self.show_image(image, **options).
3
3
0
When toying with images in the python shell, I use image.show(), where image is an instance of Image. Long ago nothing happened, but after defining a symlink to mirage named "xv", I was happy. The last few days, show() will bring up both ImageMagick's display and also Mirage. It's not clear where show() gets information on what to run. Documentation wasn't helpful. How to make it behave and bring up only what it thinks is xv?
PIL's Image.show() brings up *two* different viewers
0.16514
0
0
6,659
3,035,028
2010-06-14T04:59:00.000
0
0
0
0
python,installation,numpy,matplotlib
5,926,995
2
false
0
0
Following Justin's comment ... here is the equivalent file for Linux: /usr/lib/pymodules/python2.6/matplotlib/__init__.py sudo edit that to fix the troublesome line to: if not ((int(nn[0]) >= 1 and int(nn[1]) >= 1) or int(nn[0]) >= 2): Thanks Justin Peel!
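The reason the original check rejected "2.0.0.dev8462" becomes visible if you split the version string yourself: the minor component is 0, so a test that demands minor >= 1 fails even though major is 2. The fixed expression from the answer accepts it. A sketch, not matplotlib's actual code:

```python
# Split the troublesome dev version the way the check does.
nn = "2.0.0.dev8462".split(".")   # ['2', '0', '0', 'dev8462']

# The original check effectively required minor >= 1, which fails here
# (int(nn[1]) is 0). The fixed expression adds the 'or major >= 2' escape
# hatch. Note int(nn[3]) would raise ValueError on 'dev8462', so the check
# must never touch that component.
ok = (int(nn[0]) >= 1 and int(nn[1]) >= 1) or int(nn[0]) >= 2
print(ok)
```

The same pitfall applies to any naive piecewise string-version comparison against dev/rc-tagged releases.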
1
1
1
I installed matplotlib using the Mac disk image installer for MacOS 10.5 and Python 2.5. I installed numpy then tried to import matplotlib but got this error: ImportError: numpy 1.1 or later is required; you have 2.0.0.dev8462. It seems that version 2.0.0.dev8462 would be later than version 1.1, but I am guessing that matplotlib got confused by the ".dev8462" in the version. Is there any workaround for this?
Can't import matplotlib
0
0
0
1,283
3,035,152
2010-06-14T05:41:00.000
0
0
1
0
python,zope.interface
3,035,237
2
true
0
0
vcvarsall.bat is a batch file that comes with MSVC. Make sure that it is in your %PATH%.
1
2
0
During setup, it seems I'm missing vcvarsall.bat: running build running build_py running build_ext building '_zope_interface_coptimizations' extension error: Unable to find vcvarsall.bat
how do i install zope interface with python 2.6?
1.2
0
0
1,515
3,035,390
2010-06-14T06:40:00.000
4
1
0
0
python,telnet
3,035,415
2
true
0
0
Extended keys (non-alphanumeric or symbol) are composed of a sequence of single characters, with the sequence depending on the terminal you have told the telnet server you are using. You will need to send all characters in the sequence to make it work. Here, using od -c <<< 'CtrlVF2' with the xterm terminal, I was able to see the sequence \x1bOQ.
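For reference, the sequences an xterm emits for F1-F4 (ESC O followed by P/Q/R/S) can be tabulated and then written to the connection, e.g. with telnetlib's Telnet.write on Python versions that still ship that module. This assumes the remote end was told the terminal type is xterm; other terminals use different sequences:

```python
# Byte sequences an xterm sends for the first four function keys.
# (Assumption: the telnet server believes the client terminal is xterm.)
FUNCTION_KEYS = {
    "F1": b"\x1bOP",
    "F2": b"\x1bOQ",
    "F3": b"\x1bOR",
    "F4": b"\x1bOS",
}

def key_bytes(name):
    """Return the byte sequence to write to the telnet socket for a key."""
    return FUNCTION_KEYS[name]

print(key_bytes("F2"))
```

So sending F2 means writing all three bytes of `\x1bOQ`, not a single character.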
1
4
0
I have to send the F2 key to a telnet host. How do I send it using python? Using getch() I found that the character < is used for the F2 key, but when sending >, it's not working. I think there is a way to send special function keys but I am not able to find it. If somebody knows please help me. Thanks in advance
how to send F2 key to remote host using python
1.2
0
1
2,908
3,035,572
2010-06-14T07:25:00.000
2
1
1
0
python
43,765,592
4
false
0
0
You can also look for already installed instances. OpenOffice / LibreOffice Look at the environment variable UNO_PATH or into the default install directories, for example for Windows and LO5 %ProgramFiles(x86)%\LibreOffice 5\program\python.exe Gimp look into the default install directories, for example for Windows C:\Program Files\GIMP 2\Python and so on...
1
9
0
I once read about a minimal python installation without a lot of the libraries that come with the default python installation, but could not find it on the web... What I want to do is to just pack a script with the python stuff required to execute it and make it portable. Does anyone know about something like that? Thanks
Micropython or minimal python installation
0.099668
0
0
14,823
3,036,049
2010-06-14T09:09:00.000
0
0
0
1
python,django,transactions,multiprocessing,rabbitmq
3,036,073
2
false
1
0
This sounds brittle to me: You have a web app which posts to a queue and then inserts the initial state into the database. What happens if the consumer processes the message before the web app can commit the initial state? What happens if the web app tries to insert the new state while the DB is locked by the consumer? To fix this, the web app should add the initial state to the message and the consumer should be the only one ever writing to the DB. [EDIT] And you might also have an issue with logging. Check that races between the web app and the consumer produce the appropriate errors in the log by putting a message to the queue without modifying the DB. [EDIT2] Some ideas: How about showing just the number of pending tasks? For this, the web app could write into table 1 and the consumer writes into table 2 and the admin if would show the difference. Why can't the web app see the pending tasks which the consumer has in the queue? Maybe you should have two consumers. The first consumer just adds the task to the DB, commits and then sends a message to the second consumer with just the primary key of the new row. The admin iface could read the table while the second consumer writes to it. Last idea: Commit the transaction before you enqueue the message. For this, you simply have to send "commit" to the database. It will feel odd (and I certainly don't recommend it for any case) but here, it might make sense to commit the new row manually (i.e. before you return to your framework which handles the normal transaction logic).
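The "commit before enqueue" ordering from the last idea can be sketched with stand-ins; `save_task`, `enqueue` and the no-op `commit` below are all hypothetical placeholders, not a real Django or RabbitMQ API:

```python
published = []   # stands in for the RabbitMQ queue
database = []    # stands in for the tasks table

def save_task(task, commit):
    database.append(task)   # stand-in for Model.save()
    commit()                # force the transaction to end *before* publishing

def enqueue(task):
    # The consumer may process the message immediately after this returns,
    # so the row must already be committed and visible in the database.
    assert task in database
    published.append(task)

def submit(task):
    save_task(task, commit=lambda: None)  # commit is a no-op in this sketch
    enqueue(task)

submit("resize-image-42")   # hypothetical task name
print(published)
```

With the real stack, the `commit` stand-in would be an explicit transaction commit issued before the publish call, which closes the race where the consumer sees the message before the row exists.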
2
0
0
I am building a logging-bridge between rabbitmq messages and Django application to store background task state in the database for further investigation/review, also to make it possible to re-publish tasks via the Django admin interface. I guess it's nothing fancy, just a standard Producer-Consumer pattern. Web application publishes to message queue and inserts initial task state into the database Consumer, which is a separate python process, handles the message and updates the task state depending on task output The problem is, some tasks are missing in the db and therefore never executed. I suspect it's because Consumer receives the message earlier than db commit is performed. So basically, returning from Model.save() doesn't mean the transaction has ended and the whole communication breaks. Is there any way I could fix this? Maybe some kind of post_transaction signal I could use? Thank you in advance.
Storing task state between multiple django processes
0
0
0
537
3,036,049
2010-06-14T09:09:00.000
0
0
0
1
python,django,transactions,multiprocessing,rabbitmq
3,036,410
2
false
1
0
Web application publishes to message queue and inserts initial task state into the database Do not do this. The web application publishes to the queue. Done. Present results via a template and finish the web transaction. A consumer fetches from the queue and does things. For example, it might append a log entry to the database for presentation to the user. The consumer may also post additional status to the database as it executes things. Indeed, many applications have multiple queues with multiple producer/consumer relationships. Each process might append things to a log. The presentation must then summarize the log entries. Often, the last one is a sufficient summary, but sometimes you need a count or information from earlier entries.
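A minimal single-process sketch of the pattern described above, using an in-memory queue and list in place of RabbitMQ and the status table (both are stand-ins, not real APIs): the web side only enqueues, and the consumer is the sole writer, appending status rows that the presentation layer summarizes.

```python
# Sketch: web side enqueues; consumer appends log entries; presentation
# summarizes the log (here, the last entry wins).
import queue

task_queue = queue.Queue()
status_log = []  # stands in for a TaskStatus table

def web_publish(task_id):
    task_queue.put(task_id)  # the web transaction ends here

def consumer_step():
    task_id = task_queue.get()
    status_log.append((task_id, "started"))
    # ... do the actual work ...
    status_log.append((task_id, "done"))

def latest_status(task_id):
    entries = [s for t, s in status_log if t == task_id]
    return entries[-1] if entries else "queued"

web_publish(1)
consumer_step()
print(latest_status(1))  # 'done'
```

Because the consumer is the only writer, there is no row for the web transaction to race against; a task with no log entries simply reads as "queued".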
2
0
0
I am building a logging bridge between RabbitMQ messages and a Django application to store background task state in the database for further investigation/review, and also to make it possible to re-publish tasks via the Django admin interface. I guess it's nothing fancy, just a standard producer-consumer pattern. The web application publishes to the message queue and inserts the initial task state into the database. The consumer, which is a separate Python process, handles the message and updates the task state depending on the task output. The problem is, some tasks are missing from the db and are therefore never executed. I suspect it's because the consumer receives the message before the db commit is performed. So basically, returning from Model.save() doesn't mean the transaction has ended, and the whole communication breaks. Is there any way I could fix this? Maybe some kind of post_transaction signal I could use? Thank you in advance.
Storing task state between multiple django processes
0
0
0
537
3,036,157
2010-06-14T09:29:00.000
1
0
0
0
python,windows,django,iis,iis-6
3,036,701
3
false
1
0
I think you can execute an iisreset via the command line. I've never tried that from Django, but it should work and be quite simple to implement.
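A minimal sketch of shelling out to the `iisreset` utility that ships with Windows (running it requires administrator rights). The injectable `runner` parameter is my own addition so the wrapper can be exercised without actually touching IIS.

```python
# Sketch: restart IIS by invoking the iisreset command-line utility.
import subprocess

IISRESET_CMD = ["iisreset", "/restart"]

def restart_iis(runner=subprocess.run):
    # check=True makes the default runner raise CalledProcessError
    # if iisreset exits with a non-zero status.
    return runner(IISRESET_CMD, check=True)
```

From a Django view you would just call `restart_iis()`; note that restarting the server that is serving the current request will cut that request off.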
1
0
0
I'm serving a Django app behind IIS 6. I'm wondering if I can restart IIS 6 from within Python/Django and what the best way to do it would be. Help would be great!
Restarting IIS6 - Python
0.066568
0
0
1,575
3,036,680
2010-06-14T11:05:00.000
0
0
0
0
python,django,django-admin
3,052,792
4
false
1
0
Well, here's how I've done it. I made a custom admin template, "change_list.html". A custom template tag creates a list of all existing galleries. Filtering is done like this:

class PhotoAdmin(admin.ModelAdmin):
    ...
    def queryset(self, request):
        if request.COOKIES.has_key("gallery"):
            gallery = Gallery.objects.filter(title_slug=request.COOKIES["gallery"])
            if len(gallery) > 0:
                return gallery[0].photos.all()
        return super(PhotoAdmin, self).queryset(request)

The cookie is set with JavaScript.
1
1
0
There's the photologue application, a simple photo gallery for Django, implementing Photo and Gallery objects. The Gallery model has a ManyToMany field which references Photo objects. I need to be able to get the list of all Photos for a given Gallery. Is it possible to add a Gallery filter to the Photo admin page? If it's possible, how is it best done?
Django admin, filter objects by ManyToMany reference
0
0
0
2,800
3,037,273
2010-06-14T12:44:00.000
2
0
0
0
python,django,django-orm
42,020,136
3
false
1
0
COUNT works on a RawQuerySet if you do the counting in SQL and alias a column as the model's primary key (which raw() requires): ModelName.objects.raw("SELECT 1 AS id, COUNT(*) FROM modelnames_modelname")
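A standalone sketch of the same SQL pattern using sqlite3, so it runs without a Django project; the `myapp_item` table name is illustrative, standing in for the model's db table. The point is the `SELECT 1 AS id, COUNT(*)` shape: the constant aliased as `id` is what satisfies raw()'s primary-key requirement.

```python
# Sketch: select a constant as the id column plus COUNT(*), the shape
# a Django raw() count query needs.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myapp_item (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO myapp_item (name) VALUES (?)",
                 [("a",), ("b",), ("c",)])

row = conn.execute("SELECT 1 AS id, COUNT(*) AS total FROM myapp_item").fetchone()
print(row[1])  # 3
```

If you don't need the count in SQL, `len(list(qs))` on the raw queryset also works, at the cost of fetching every row.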
1
10
0
I'm using a raw query and I'm having trouble finding out how to get the number of results it returns. Is there a way? Edit: .count() doesn't work; it raises: 'RawQuerySet' object has no attribute 'count'
Get number of results from Django's raw() query function
0.132549
0
0
10,644