Q_Id: int64, min 337, max 49.3M
CreationDate: string, lengths 23 to 23
Users Score: int64, min -42, max 1.15k
Other: int64, 0 or 1
Python Basics and Environment: int64, 0 or 1
System Administration and DevOps: int64, 0 or 1
Tags: string, lengths 6 to 105
A_Id: int64, min 518, max 72.5M
AnswerCount: int64, min 1, max 64
is_accepted: bool, 2 classes
Web Development: int64, 0 or 1
GUI and Desktop Applications: int64, 0 or 1
Answer: string, lengths 6 to 11.6k
Available Count: int64, min 1, max 31
Q_Score: int64, min 0, max 6.79k
Data Science and Machine Learning: int64, 0 or 1
Question: string, lengths 15 to 29k
Title: string, lengths 11 to 150
Score: float64, min -1, max 1.2
Database and SQL: int64, 0 or 1
Networking and APIs: int64, 0 or 1
ViewCount: int64, min 8, max 6.81M
4,180,390
2010-11-14T23:40:00.000
4
1
0
1
python,ssh
4,180,771
4
false
0
0
On Linux machines, you can run the script with 'at'. echo "python scriptname.py" | at now
1
23
0
I want to execute a Python script on several (15+) remote machines using SSH. After invoking the script/command I need to disconnect the ssh session and keep the processes running in the background for as long as they are required. I have used Paramiko and PySSH in the past so I have no problem using them again. The only thing I need to know is how to disconnect an ssh session in Python (since normally the local script would wait for each remote machine to complete processing before moving on).
Execute remote python script via SSH
0.197375
0
0
51,995
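The answer above uses 'at'; another common way to keep a remote process alive after the SSH session closes is nohup. A minimal sketch (the helper name is illustrative, and the Paramiko calls in the comment assume Paramiko is installed):

```python
def detach(cmd):
    """Wrap a shell command so it survives the SSH session closing.

    nohup detaches the process from the terminal, output is discarded,
    and '&' backgrounds it so the remote shell returns immediately.
    """
    return "nohup {} > /dev/null 2>&1 &".format(cmd)

# With Paramiko, the call would look roughly like:
#   client.exec_command(detach("python scriptname.py"))
#   client.close()  # safe to disconnect; the remote process keeps running
```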
4,180,836
2010-11-15T01:46:00.000
5
0
1
0
python,c,cython
4,180,878
6
false
0
1
I've found that a lot of the time, especially for larger libraries, you wind up spending a tremendous amount of time just configuring the Cython project to build, knowing which structures to import, bridging the C code into Python in either direction etc. While Cython is a nice stopgap (and significantly more pleasant than pure C/C++ development), the amount of C++ you'd have to learn to effectively use it basically means you're going to have to bite the bullet and learn C++ anyway. How about PyGame?
4
19
0
How practical would it be to use Cython as the primary programming language for a game? I am an experienced Python programmer and I absolutely love it, but I'm admittedly a novice when it comes to game programming specifically. I know that typically Python is considered too slow to do any serious game programming, which is why Cython is interesting to me. With Cython I can use a Python-like language with the speed of C. I understand that I'll probably need to learn a bit of C/C++ anyway, but it seems like Cython would speed up development time quite a bit in comparison. So, is it practical? And would I still be able to use C/C++ libraries like OpenGL, OpenAL, and Bullet Physics?
Using Cython for game development?
0.16514
0
0
9,169
4,180,836
2010-11-15T01:46:00.000
-1
0
1
0
python,c,cython
4,445,486
6
false
0
1
Threads!!! A good modern game must use threads. Cython practically forbids their use, holding the GIL (global interpreter lock) the entire time, making your code run in sequence. If you are not writing a huge game, then Python/Cython is okay. But Cython is no good as a modern language without good thread support.
4
19
0
How practical would it be to use Cython as the primary programming language for a game? I am an experienced Python programmer and I absolutely love it, but I'm admittedly a novice when it comes to game programming specifically. I know that typically Python is considered too slow to do any serious game programming, which is why Cython is interesting to me. With Cython I can use a Python-like language with the speed of C. I understand that I'll probably need to learn a bit of C/C++ anyway, but it seems like Cython would speed up development time quite a bit in comparison. So, is it practical? And would I still be able to use C/C++ libraries like OpenGL, OpenAL, and Bullet Physics?
Using Cython for game development?
-0.033321
0
0
9,169
4,180,836
2010-11-15T01:46:00.000
7
0
1
0
python,c,cython
5,638,409
6
false
0
1
At this date (12th of April 2011) unixmab83 is wrong. Cython doesn't forbid the use of threads; you just need to use the special nogil statements. Besides, the binding of C++ is now functional in Cython. We do use it for something which is close to gamedev. So while I cannot be definitive on this, Cython is a valid candidate.
4
19
0
How practical would it be to use Cython as the primary programming language for a game? I am an experienced Python programmer and I absolutely love it, but I'm admittedly a novice when it comes to game programming specifically. I know that typically Python is considered too slow to do any serious game programming, which is why Cython is interesting to me. With Cython I can use a Python-like language with the speed of C. I understand that I'll probably need to learn a bit of C/C++ anyway, but it seems like Cython would speed up development time quite a bit in comparison. So, is it practical? And would I still be able to use C/C++ libraries like OpenGL, OpenAL, and Bullet Physics?
Using Cython for game development?
1
0
0
9,169
4,180,836
2010-11-15T01:46:00.000
0
0
1
0
python,c,cython
17,047,294
6
false
0
1
I know Cython, and you do not have to know C/C++. You will use static typing, but it is very easy. The hardest part is getting the compiling to work; I think on Windows this is done through Visual Studio. There is something like a standard library, including math for example. The speed gain is not too big, but this depends on your scope. ctypes was much faster (pure C), but the connection to Python was very slow, so I decided to look at Cython, which can still be dynamic. For speed gain in a game Cython is the right choice, but I would call the performance gain limited as well.
4
19
0
How practical would it be to use Cython as the primary programming language for a game? I am an experienced Python programmer and I absolutely love it, but I'm admittedly a novice when it comes to game programming specifically. I know that typically Python is considered too slow to do any serious game programming, which is why Cython is interesting to me. With Cython I can use a Python-like language with the speed of C. I understand that I'll probably need to learn a bit of C/C++ anyway, but it seems like Cython would speed up development time quite a bit in comparison. So, is it practical? And would I still be able to use C/C++ libraries like OpenGL, OpenAL, and Bullet Physics?
Using Cython for game development?
0
0
0
9,169
4,182,603
2010-11-15T08:26:00.000
4
0
1
0
python,python-2.7,unicode,utf-8
63,293,431
12
false
0
0
First, str in Python is represented in Unicode. Second, UTF-8 is an encoding standard used to encode Unicode strings to bytes. There are many encoding standards out there (e.g. UTF-16, ASCII, SHIFT-JIS, etc.). When the client sends data to your server using UTF-8, they are sending a bunch of bytes, not str. You received a str because the library or framework you are using has implicitly converted some bytes to str. Under the hood, there is just a bunch of bytes. You just need to ask the library to give you the request content in bytes and handle the decoding yourself (if the library can't give you the bytes, it is trying to do black magic and you shouldn't use it). Decode UTF-8 encoded bytes to str: bs.decode('utf-8'). Encode str to UTF-8 bytes: s.encode('utf-8').
1
230
0
I have a browser which sends UTF-8 characters to my Python server, but when I retrieve it from the query string, the encoding that Python returns is ASCII. How can I convert the plain string to UTF-8? NOTE: The string passed from the web is already UTF-8 encoded, I just want to make Python treat it as UTF-8, not ASCII.
How to convert a string to utf-8 in Python
0.066568
0
1
817,441
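The decode/encode round trip from the answer above, sketched on Python 3's str/bytes split (the sample text is illustrative):

```python
# Bytes as they would arrive over the network, UTF-8 encoded.
raw = "héllo wörld".encode("utf-8")

# Decode UTF-8 bytes to str, work with the text, encode back to bytes.
text = raw.decode("utf-8")
back = text.encode("utf-8")
```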
4,183,158
2010-11-15T09:51:00.000
0
0
0
0
python,django,email
4,183,238
4
false
1
0
You can, for example, write a script for importing comments from a mailbox (run by cron every 1-3 minutes, for example). You should connect to a special mailbox which collects replies from users (comments). Every mail has its own header and title. You can find out which post the user is trying to comment on (by header or title), then import the Django environment and insert new records.
2
2
0
I have a Django app that presents a list of items that you can add comments to. What I basically want to do is something like what Facebook did: when someone posts a comment on your item, you will receive an e-mail. What I want to do is, when you reply to that e-mail, the reply gets posted as a comment reply on the website. What should I use to achieve this using Python as much as possible? Maybe even Django?
How to post a comment on e-mail reply?
0
0
1
422
4,183,158
2010-11-15T09:51:00.000
-1
0
0
0
python,django,email
18,135,014
4
false
1
0
I think a good way is how Google+ handles it, using a + in the email address: it can be reply+id-or-hash-of-parent@domain.com. Then you must write a worker that checks the POP server and
2
2
0
I have a Django app that presents a list of items that you can add comments to. What I basically want to do is something like what Facebook did: when someone posts a comment on your item, you will receive an e-mail. What I want to do is, when you reply to that e-mail, the reply gets posted as a comment reply on the website. What should I use to achieve this using Python as much as possible? Maybe even Django?
How to post a comment on e-mail reply?
-0.049958
0
1
422
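The reply+id scheme from the answer above can be sketched with a small parser (the address format and helper name are assumptions for illustration, not from the answer):

```python
import re

def parent_id_from_address(addr):
    """Extract the parent comment id from a 'reply+<id>@domain' address.

    Returns the id string, or None if the address does not match the scheme.
    """
    m = re.match(r"reply\+([^@]+)@", addr)
    return m.group(1) if m else None
```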
4,183,554
2010-11-15T10:43:00.000
2
0
0
0
python,mysql,django
4,190,033
2
false
1
0
A separate DB table is definitely the "right" way to do it, because MySQL has to send all the data from your TEXT fields every time you query. As you add more rows and the TEXT fields get bigger, you'll start to notice performance issues and eventually crash the server. Also, you'll be able to use VARCHAR and add a unique index to the paths, making lookups lightning fast.
1
4
0
I am building a website using Django, and this website uses blocks which are enabled for a certain page. Right now I use a textfield containing paths where a block is enabled. When a page is requested, Django retrieves all blocks from the database and does re.search on the TextField. However, I was wondering whether it would be a better idea, in terms of overhead, to use a separate DB table for block/paths, where each row contains a single path and a reference to a block.
Python: RE vs. Query
0.197375
0
0
184
4,185,061
2010-11-15T14:03:00.000
1
0
0
0
python,android
4,185,138
5
false
0
1
No, not currently. ASE (Android Scripting Environment) allows you to do simple script apps, but you can only write proper Android apps in Java.
1
112
0
Can I program for Android using Python? I seem to have stumbled upon many links while searching... however none of them is concrete. Any suggestions? I want to write apps for Android but really don't want to get into Java for all this. PS: My question is whether I can write proper, full fledged apps for Android.
Android Python Programming
0.039979
0
0
84,928
4,186,099
2010-11-15T15:51:00.000
2
0
0
0
python,django,mediatemple
4,186,941
1
true
1
0
mod_python is built for 2.4, but Django is installed for 2.7. Either build mod_python for 2.7, install Django under 2.4, or put a local copy of Django with your project so that the version of Python doesn't matter.
1
0
0
Although running "python" from the shell runs Python v2.7, Django is loading files for python2.4, as shown in the error when I load a django site: Mod_python error: "PythonHandler django.core.handlers.modpython" Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/mod_python/apache.py", line 287, in HandlerDispatch log=debug) File "/usr/lib/python2.4/site-packages/mod_python/apache.py", line 461, in import_module f, p, d = imp.find_module(parts[i], path) ImportError: No module named django I think Django is installed for version 2.7 and that's why the bottom says "No module named django" This is my first django install (it's on a mediatemple DV server) so I wouldn't be surprised if I'm doing something stupid. Thanks!
Django running wrong version
1.2
0
0
790
4,186,384
2010-11-15T16:18:00.000
1
0
0
0
python,html,oracle,coldfusion
4,186,505
6
false
1
0
Most people, in this case, would use a framework. The best documented and most popular framework in Python is Django. It has good database support (including Oracle), and you'll have the easiest time getting help using it since there's such an active Django community. You can try some other frameworks, but if you're tied to Python I'd recommend Django. Of course, Jython (if it's an option), would make your job very easy. You could take the existing Java framework you have and just use Jython to build a frontend (and continue to use your Java applet and Java classes and Java server). The memory problem is an interesting one; I'd be curious to see what you come up with.
1
7
0
We're rewriting a website used by one of our clients. The user traffic on it is very low, less than 100 unique visitors a week. It's basically just a nice interface to their data in our databases. It allows them to query and filter on different sets of data of theirs. We're rewriting the site in Python, re-using the same Oracle database that the data is currently on. The current version is written in an old, old version of Coldfusion. One of the things that Coldfusion does well though is displays tons of database records on a single page. It's capable of displaying hundreds of thousands of rows at once without crashing the browser. It uses a Java applet, and it looks like the contents of the rows are perhaps compressed and passed in through the HTML or something. There is a large block of data in the HTML but it's not displayed - it's just rendered by the Java applet. I've tried several JavaScript solutions but they all hinge on the fact that the data will be present in an HTML table or something along those lines. This causes browsers to freeze and run out of memory. Does anyone know of any solutions to this situation? Our client loves the ability to scroll through all of this data without clicking a "next page" link.
How to display database query results of 100,000 rows or more with HTML?
0.033321
1
0
3,442
4,186,981
2010-11-15T17:13:00.000
2
0
0
0
python,tkinter
4,194,418
1
true
0
1
Short answer is no, but you could try a ComboBox mega-widget (a quick search will throw up some suitable examples), which could be a 'good enough' alternative (in fact, with it being a combined entry field and scrolled list, you could make it 'smart' by including auto-search / auto-complete - 60 items in a drop down is a lot :)
1
1
0
I've got an option menu that's about 60 items long and, needless to say, I can't see it all on the screen at once. Is there a way that I can make the OptionMenu widget in tkinter scrollable?
Make OptionMenu Widget Scrollable?
1.2
0
0
1,975
4,188,202
2010-11-15T19:48:00.000
2
0
0
0
python,performance,data-structures,implementation
4,188,260
3
false
0
0
I would consider building a dictionary with keys that are tuples (lists won't work, since dict keys must be hashable). E.g.: my_dict[("col_2", "row_24")] would get you this element. Starting from there, it would be pretty easy (if not extremely fast for very large databases) to write 'get_col' and 'get_row' methods, as well as 'get_row_slice' and 'get_col_slice' built from the 2 preceding ones, to gain access to your data. Using a whole dictionary like that has 2 advantages. 1) Getting a single element will be faster than with either of your 2 proposed methods; 2) If you want to have a different number of elements (or missing elements) in your columns, this will make it extremely easy and memory efficient. Just a thought :) I'll be curious to see what packages people suggest! Cheers
2
6
0
I am implementing a class that resembles a typical database table: it has named columns and unnamed rows; has a primary key by which I can refer to the rows; supports retrieval and assignment by primary key and column title; can be asked to add a unique or non-unique index for any of the columns, allowing fast retrieval of a row (or set of rows) which have a given value in that column; removal of a row is fast and is implemented as "soft-delete": the row is kept physically, but is marked for deletion and won't show up in any subsequent retrieval operations; addition of a column is fast; rows are rarely added; columns are rarely deleted. I decided to implement the class directly rather than use a wrapper around sqlite. What would be a good data structure to use? Just as an example, one approach I was thinking about is a dictionary. Its keys are the values in the primary key column of the table; its values are the rows implemented in one of these ways: As lists. Column numbers are mapped into column titles (using a list for one direction and a map for the other). Here, a retrieval operation would first convert column title into column number, and then find the corresponding element in the list. As dictionaries. Column titles are the keys of this dictionary. Not sure about the pros/cons of the two. The reasons I want to write my own code are: I need to track row deletions. That is, at any time I want to be able to report which rows were deleted and for what "reason" (the "reason" is passed to my delete method). I need some reporting during indexing (e.g., while a non-unique index is being built, I want to check certain conditions and report if they are violated)
How to implement database-style table in Python
0.132549
1
0
1,361
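The tuple-keyed dictionary idea from the answer above, as a minimal sketch (all names and values are illustrative, including the get_col/get_row helpers it mentions):

```python
# Cells of the "table" keyed by (column, row) tuples.
table = {
    ("col_1", "row_1"): 10,
    ("col_1", "row_2"): 20,
    ("col_2", "row_1"): 30,
}

def get_col(table, col):
    # Collect every cell whose key names the requested column.
    return {row: v for (c, row), v in table.items() if c == col}

def get_row(table, row):
    # Collect every cell whose key names the requested row.
    return {col: v for (col, r), v in table.items() if r == row}
```

Missing cells simply have no key, which is what makes ragged columns cheap with this layout.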
4,188,202
2010-11-15T19:48:00.000
0
0
0
0
python,performance,data-structures,implementation
4,231,416
3
false
0
0
You really should use SQLite. For your first reason (tracking deletion reasons) you can easily implement this by having a second table that you "move" rows to on deletion. The reason can be tracked in additional column in that table or another table you can join. If a deletion reason isn't always required then you can even use triggers on your source table to copy rows about to be deleted, and/or have a user defined function that can get the reason. The indexing reason is somewhat covered by constraints etc but I can't directly address it without more details.
2
6
0
I am implementing a class that resembles a typical database table: it has named columns and unnamed rows; has a primary key by which I can refer to the rows; supports retrieval and assignment by primary key and column title; can be asked to add a unique or non-unique index for any of the columns, allowing fast retrieval of a row (or set of rows) which have a given value in that column; removal of a row is fast and is implemented as "soft-delete": the row is kept physically, but is marked for deletion and won't show up in any subsequent retrieval operations; addition of a column is fast; rows are rarely added; columns are rarely deleted. I decided to implement the class directly rather than use a wrapper around sqlite. What would be a good data structure to use? Just as an example, one approach I was thinking about is a dictionary. Its keys are the values in the primary key column of the table; its values are the rows implemented in one of these ways: As lists. Column numbers are mapped into column titles (using a list for one direction and a map for the other). Here, a retrieval operation would first convert column title into column number, and then find the corresponding element in the list. As dictionaries. Column titles are the keys of this dictionary. Not sure about the pros/cons of the two. The reasons I want to write my own code are: I need to track row deletions. That is, at any time I want to be able to report which rows were deleted and for what "reason" (the "reason" is passed to my delete method). I need some reporting during indexing (e.g., while a non-unique index is being built, I want to check certain conditions and report if they are violated)
How to implement database-style table in Python
0
1
0
1,361
4,188,273
2010-11-15T19:58:00.000
1
1
1
0
java,c++,python,boost-python
4,227,430
3
false
0
0
I would disagree about Boost::Python. It can get cumbersome when wrapping an existing c++-centric library and trying not to change the interface. But that is not what you are looking to do. You are looking to push the heavy lifting of an existing python solution in to a faster language. That means that you can control the interface. If you are in control of the interface, you can keep it python-friendly, and bp-friendly (IE: avoid problematic things like pointers and immutable types as l-values) In that case, Boost::Python can be as simple as telling it which functions you want to call from python.
2
5
0
I have a system currently written in Python that can be separated into backend and frontend layers. Python is too slow, so I want to rewrite the backend in a fast compiled language while keeping the frontend in Python, in a way that lets the backend functionality be called from Python. What are the best choices to do so? I've considered cython but it's very limited and cumbersome to write, and not that much faster. From what I remember of Boost Python for C++, it's very annoying to maintain the bridge between languages. Are there better choices? My main factors are: speed of execution speed of compilation language is declarative
Language choices for writing very fast abstractions interfacing with Python?
0.066568
0
0
247
4,188,273
2010-11-15T19:58:00.000
2
1
1
0
java,c++,python,boost-python
4,188,359
3
false
0
0
If you used Jython you could call into Java back-end routines easily (trivially). Java's about twice as slow as c and 10x faster than python last time I checked.
2
5
0
I have a system currently written in Python that can be separated into backend and frontend layers. Python is too slow, so I want to rewrite the backend in a fast compiled language while keeping the frontend in Python, in a way that lets the backend functionality be called from Python. What are the best choices to do so? I've considered cython but it's very limited and cumbersome to write, and not that much faster. From what I remember of Boost Python for C++, it's very annoying to maintain the bridge between languages. Are there better choices? My main factors are: speed of execution speed of compilation language is declarative
Language choices for writing very fast abstractions interfacing with Python?
0.132549
0
0
247
4,188,723
2010-11-15T20:55:00.000
0
0
0
0
python,urllib2
4,188,773
5
false
0
0
Why not write a very simple CGI script in bash that just sleeps for the required timeout period?
1
6
0
I want to to test my application's handling of timeouts when grabbing data via urllib2, and I want to have some way to force the request to timeout. Short of finding a very very slow internet connection, what method can I use? I seem to remember an interesting application/suite for simulating these sorts of things. Maybe someone knows the link?
How can I force urllib2 to time out?
0
0
1
6,193
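A related way to force a timeout without a CGI script: a local socket that listens but never responds, so the client stalls waiting for a reply. A sketch using Python 3's urllib.request (the original question is about Python 2's urllib2):

```python
import socket
import urllib.request

# Listen but never accept or answer: the kernel completes the TCP
# handshake into the backlog, then the HTTP client stalls on the read.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

try:
    urllib.request.urlopen("http://127.0.0.1:{}/".format(port), timeout=0.5)
    timed_out = False
except Exception:  # socket.timeout, possibly wrapped in URLError
    timed_out = True
finally:
    srv.close()
```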
4,189,123
2010-11-15T21:46:00.000
5
0
1
0
python,linux
4,192,596
3
false
0
0
sysconf(SC_CLK_TCK) does not give the frequency of the timer interrupts in Linux. It gives the frequency of jiffies which is visible to userspace in things like the counters in various directories in /proc The actual frequency is hidden from userspace, deliberately. Indeed, some systems use dynamic ticks or "tickless" systems, so there aren't really any at all. All the userspace interfaces use the value from SC_CLK_TCK, which as far as I can see is always 100 under Linux.
1
15
0
I'd like to know the HZ of the system, i.e. how many milliseconds one jiffy is, from Python code.
Python: How to get the number of milliseconds per jiffy
0.321513
0
0
10,155
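The SC_CLK_TCK value discussed in the answer above is available in Python via os.sysconf; a minimal sketch:

```python
import os

# USER_HZ as exposed to userspace (the answer notes this is typically 100
# on Linux, regardless of the kernel's real tick configuration).
hz = os.sysconf("SC_CLK_TCK")
ms_per_jiffy = 1000.0 / hz
```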
4,189,717
2010-11-15T23:00:00.000
20
0
0
1
python,process,pid
4,189,752
5
true
0
0
Under Linux, you can read proc filesystem. File /proc/<pid>/cmdline contains the commandline.
1
26
0
This should be simple, but I'm just not seeing it. If I have a process ID, how can I use that to grab info about the process such as the process name.
Get process name by PID
1.2
0
0
26,492
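The /proc approach from the answer above, sketched as a small Linux-only helper (the function name is illustrative):

```python
def process_name(pid):
    """Return the first command-line argument of a process via /proc (Linux only)."""
    with open("/proc/{}/cmdline".format(pid), "rb") as f:
        # Arguments in cmdline are NUL-separated; the first is the executable.
        return f.read().split(b"\x00")[0].decode()
```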
4,190,695
2010-11-16T02:32:00.000
3
0
1
1
python,vim
4,190,782
3
false
0
0
Once you've got the line number, you can run gvim filename -c 12 and it will go to line 12 (this is because -c <command> is "Execute <command> after loading the first file", so -c 12 is just saying run :12 after loading the file). So I'm not sure if you really need Python at all in this case; just sending the line number direct to gvim may be all you need.
1
2
0
I don't use Python very often, but I sometimes develop simple tools in it to make my life easier. My most frequently used is a log checker/crude debugger for SAS. It reads the SAS log line by line checking for any errors in my list and dumps anything it finds into standard out (I'm running Python 2.6 in a RedHat Linux environment) - along with the error, it prints the line number of that error (not that that's super useful). What I'd really like to do is to optionally feed the script a line number and have it open the SAS log itself in GVIM and display it scrolled down to the line number I've specified. I haven't had any luck finding a way to do this - I've looked pretty thoroughly on Google to no avail. Any ideas would be greatly appreciated. Thanks! Jeremy
Python: open and display a text file at a specific line
0.197375
0
0
2,373
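The gvim invocation from the answer above, sketched as an argv builder (the helper name is illustrative):

```python
def gvim_open_at(path, line):
    # '-c <command>' runs the command after the file loads;
    # ':12' as a command jumps to line 12.
    return ["gvim", path, "-c", str(line)]

# subprocess.call(gvim_open_at("sas.log", 128)) would launch the editor
# at line 128 of the log.
```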
4,191,171
2010-11-16T04:14:00.000
4
0
1
0
python,algorithm,python-3.x,duplicates
4,191,279
9
false
0
0
It's not apparent what the point is of finding one arbitrary element that is a duplicate of 1 or more other elements of the collection ... do you want to remove it? merge its attributes with those of its twins / triplets / ... / N-tuplets? In any case, that's an O(N) operation, which if repeated until no more duplicates are detected is an O(N ** 2) operation. However you can get a bulk deal at the algorithm warehouse: sort the collection -- O(N*log(N)) -- and then use itertools.groupby to bunch up the duplicates and cruise through the bunches, ignoring the bunches of size 1 and doing whatever you want with the bunches of size > 1 -- all of that is only about O(N).
3
11
0
I have a container cont. If I want to find out if it has duplicates, I'll just check len(cont) == len(set(cont)). Suppose I want to find a duplicate element if it exists (just any arbitrary duplicate element). Is there any neat and efficient way to write that? [Python 3]
Python: find a duplicate in a container efficiently
0.088656
0
0
2,558
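The sort-then-groupby approach from the answer above, as a minimal sketch (the helper name is illustrative):

```python
from itertools import groupby

def duplicates(cont):
    # Sort (O(N log N)) so equal elements are adjacent, bunch them with
    # groupby, and keep only the keys whose bunch has size > 1.
    return [key for key, bunch in groupby(sorted(cont)) if len(list(bunch)) > 1]
```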
4,191,171
2010-11-16T04:14:00.000
0
0
1
0
python,algorithm,python-3.x,duplicates
4,191,196
9
false
0
0
You have to scan all the elements for the duplicates as they can be just the last ones you check, so you can't get more efficient than worst case O(N) time, just like linear search. But a simple linear search to find a duplicate will use up O(N) memory, because you need to track what you've seen so far. If the array is sorted you can find duplicates in O(N) time without using any additional memory because duplicate pairs will be next to each other.
3
11
0
I have a container cont. If I want to find out if it has duplicates, I'll just check len(cont) == len(set(cont)). Suppose I want to find a duplicate element if it exists (just any arbitrary duplicate element). Is there any neat and efficient way to write that? [Python 3]
Python: find a duplicate in a container efficiently
0
0
0
2,558
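The sorted-sequence scan described in the answer above, as a minimal sketch (the helper name is illustrative):

```python
def first_duplicate_sorted(seq):
    # In a sorted sequence, duplicates must be adjacent, so one pass with
    # no auxiliary storage suffices: O(N) time, O(1) extra memory.
    for a, b in zip(seq, seq[1:]):
        if a == b:
            return a
    return None
```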
4,191,171
2010-11-16T04:14:00.000
7
0
1
0
python,algorithm,python-3.x,duplicates
4,191,185
9
false
0
0
You can start adding them to a set, and as soon as you try to add an element that is already in the set you have found a duplicate.
3
11
0
I have a container cont. If I want to find out if it has duplicates, I'll just check len(cont) == len(set(cont)). Suppose I want to find a duplicate element if it exists (just any arbitrary duplicate element). Is there any neat and efficient way to write that? [Python 3]
Python: find a duplicate in a container efficiently
1
0
0
2,558
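The set-based early exit from the answer above, as a minimal sketch (the helper name is illustrative):

```python
def find_duplicate(cont):
    # Track elements seen so far; stop at the first repeat.
    # Worst case O(N) time and O(N) extra memory.
    seen = set()
    for x in cont:
        if x in seen:
            return x
        seen.add(x)
    return None
```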
4,191,370
2010-11-16T05:06:00.000
-3
0
0
0
python,mysql,unicode,sqlalchemy
4,192,633
2
false
0
0
Sorry, I don't know about that connector; I use MySQLdb and it is working quite nicely. I work in UTF-8 as well and I didn't have any problems.
1
1
0
I am using the mysql connector (https://launchpad.net/myconnpy) with SQLAlchemy and, though the table is definitely UTF8, any string columns returned are just normal strings not unicode. The documentation doesn't list any specific parameters for UTF8/unicode support for the mysql connector driver so I borrowed from the mysqldb driver. Here is my connect string: mysql+mysqlconnector://user:pass@myserver.com/mydbname?charset=utf8&use_unicode=0 I'd really prefer to keep using this all-python mysql driver. Any suggestions?
MySql Connector (python) and SQLAlchemy Unicode problem
-0.291313
1
0
1,239
4,192,339
2010-11-16T08:40:00.000
1
0
0
0
python,django,web-applications,django-models
4,192,566
3
false
1
0
That is Django's ORM: it maps classes to tables. What else did you expect? There needs to be some way of specifying what the fields are, though, before you can use them, and that's managed through the models.Model class and the various models.Field subclasses. You can certainly use your classes as mixins in order to use the existing business logic on top of the field definitions.
2
3
0
I've written some python code to accomplish a task. Currently, there are 4-5 classes that I'm storing in separate files. I'd now like to change this whole thing into a database-backed web app. I've been reading tutorials on Django, and so far I get the impression that I'll need to manually specify the fields and their types for every "model" that I use. This is a little surprising to me, since I was expecting some kind of ORM capability that would just take the existing classes I've already defined, and map them onto a database somehow, in a manner abstracted away from me. Is this not the case? Am I missing something? It looks like I need to specify all the fields and types in the file 'models.py'. Okay, now beyond those specifics, does anyone have any general tips on the best way to migrate an object-oriented desktop application to a web application? Thanks!
Python app to django web app
0.066568
0
0
242
4,192,339
2010-11-16T08:40:00.000
0
0
0
0
python,django,web-applications,django-models
4,192,813
3
false
1
0
If you are thinking about a database-backed web app, you have to specify which fields of the data you want stored and what type of value is stored. There is an abstraction that introspects the db to convert it into the Django models.py format. But I don't know of any that introspects a Python class and stores arbitrary data in the db. How would that even work? Are the objects, then, stored as pickles?
2
3
0
I've written some python code to accomplish a task. Currently, there are 4-5 classes that I'm storing in separate files. I'd now like to change this whole thing into a database-backed web app. I've been reading tutorials on Django, and so far I get the impression that I'll need to manually specify the fields and their types for every "model" that I use. This is a little surprising to me, since I was expecting some kind of ORM capability that would just take the existing classes I've already defined, and map them onto a database somehow, in a manner abstracted away from me. Is this not the case? Am I missing something? It looks like I need to specify all the fields and types in the file 'models.py'. Okay, now beyond those specifics, does anyone have any general tips on the best way to migrate an object-oriented desktop application to a web application? Thanks!
Python app to django web app
0
0
0
242
4,192,384
2010-11-16T08:48:00.000
4
0
0
0
python,cairo,pycairo
4,193,093
3
true
0
0
You can add a rectangle covering the whole drawing area to your coastline path and set the fill rule to cairo.FILL_RULE_EVEN_ODD. Calling fill() after this fills the area outside your original path. (If you choose the correct orientation for your rectangle you can skip setting the fill rule.)
3
5
0
is there a way to fill everything outside of a closed path (polygon)? Background: I'd like to render some maps with coastlines - so sometimes I need to fill the sea with blue color, so I thought it would be the easiest and in my situation the most efficient to fill everything outside of this coastline polygon with blue color. Thanks in advance!
(py)cairo - fill
1.2
0
0
988
4,192,384
2010-11-16T08:48:00.000
0
0
0
0
python,cairo,pycairo
4,193,586
3
false
0
0
While you could create a closed path the size of the surface and then fill it with a solid pattern (the fill rule won't matter for a simple rectangle), it would be easier to just use the context's paint() method, which fills the current clip region (initially set to the entire surface). It's important to do this before drawing the map/coastline boundaries and filling them, so they end up on top of the background.
3
5
0
is there a way to fill everything outside of a closed path (polygon)? Background: I'd like to render some maps with coastlines - so sometimes I need to fill the sea with blue color, so I thought it would be the easiest and in my situation the most efficient to fill everything outside of this coastline polygon with blue color. Thanks in advance!
(py)cairo - fill
0
0
0
988
4,192,384
2010-11-16T08:48:00.000
0
0
0
0
python,cairo,pycairo
4,192,401
3
false
0
0
Draw a big blue rectangle over the entire cairo surface and then draw your coastline on top of that?
3
5
0
is there a way to fill everything outside of a closed path (polygon)? Background: I'd like to render some maps with coastlines - so sometimes I need to fill the sea with blue color, so I thought it would be the easiest and in my situation the most efficient to fill everything outside of this coastline polygon with blue color. Thanks in advance!
(py)cairo - fill
0
0
0
988
4,192,675
2010-11-16T09:28:00.000
1
0
0
0
python,django
4,192,721
3
false
1
0
Of course you need to render the template - and you do that via the context. How is it not working?
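The point of the answer - you render the template with a context to produce the email body, even though no page is served - can be sketched with the stdlib's string.Template. In Django itself you would use django.template.loader.render_to_string; the template text and the helper name build_email_body here are made up for illustration:

```python
from string import Template

# Hypothetical email body; in Django this would live in a template file
# and be rendered with django.template.loader.render_to_string.
email_template = Template("Hello $name,\n\nWelcome to the site!")

def build_email_body(name):
    # Substituting the variable is the analogue of passing a context
    # ({"name": ...}) when rendering a Django template to a string.
    return email_template.substitute(name=name)

print(build_email_body("Alice"))
```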
1
0
0
I want to email a template in Django. The template has one variable, say name. I want to fill in this value. How do I do that? Using a context is not working because I don't need to render the page.
How to fill in data in Django templates
0.066568
0
0
182
4,195,202
2010-11-16T14:32:00.000
7
0
1
0
python,serialization,pickle,deserialization
4,195,787
6
false
0
0
Are you load()ing the pickled data directly from the file? What about loading the file into memory first and then deserializing? I would start by trying cStringIO(); alternatively, you could write your own version of StringIO that uses buffer() to slice the memory, which would reduce the number of copy() operations (cStringIO may still be faster, but you'll have to try). There are sometimes huge performance bottlenecks when doing these kinds of operations, especially on the Windows platform; the Windows system is somehow very unoptimized for doing lots of small reads, while UNIXes cope quite well. If load() does lots of small reads, or you are calling load() several times to read the data, this would help.
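A rough sketch of the in-memory approach on Python 3, where io.BytesIO plays the role of cStringIO (the sample data and the helper name load_from_memory are made up):

```python
import io
import pickle

# Hypothetical data standing in for the large pickled files.
data = {"rows": list(range(1000)), "name": "example"}
blob = pickle.dumps(data, protocol=pickle.HIGHEST_PROTOCOL)

# Instead of pickle.load(open(path, "rb")) -- which may issue many small
# reads -- read the whole file into memory once and unpickle from a buffer.
def load_from_memory(raw_bytes):
    buf = io.BytesIO(raw_bytes)  # io.BytesIO replaces cStringIO on Python 3
    return pickle.load(buf)

restored = load_from_memory(blob)
```

With a real file you would call load_from_memory(open(path, "rb").read()), so the disk is hit with one large read instead of many small ones.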
5
12
0
We've got a Python-based web server that unpickles a number of large data files on startup using cPickle. The data files (pickled using HIGHEST_PROTOCOL) are around 0.4 GB on disk and load into memory as about 1.2 GB of Python objects -- this takes about 20 seconds. We're using Python 2.6 on 64-bit Windows machines. The bottleneck is certainly not disk (it takes less than 0.5s to actually read that much data), but memory allocation and object creation (there are millions of objects being created). We want to reduce the 20s to decrease startup time. Is there any way to deserialize more than 1GB of objects into Python much faster than cPickle (like 5-10x)? Because the execution time is bound by memory allocation and object creation, I presume using another unpickling technique such as JSON wouldn't help here. I know some interpreted languages have a way to save their entire memory image as a disk file, so they can load it back into memory all in one go, without allocation/creation for each object. Is there a way to do this, or achieve something similar, in Python?
How to deserialize 1GB of objects into Python faster than cPickle?
1
0
0
6,355
4,195,202
2010-11-16T14:32:00.000
3
0
1
0
python,serialization,pickle,deserialization
4,195,560
6
false
0
0
Did you try sacrificing efficiency of pickling by not using HIGHEST_PROTOCOL? It isn't clear what performance costs are associated with using this protocol, but it might be worth a try.
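A quick way to see what each protocol costs, as a sketch (the payload is made up; timing pickle.loads() on each blob would show the load-time difference the answer is asking about):

```python
import pickle

payload = {"ids": list(range(500)), "label": "x" * 50}

p0 = pickle.dumps(payload, protocol=0)                            # ASCII protocol
p_high = pickle.dumps(payload, protocol=pickle.HIGHEST_PROTOCOL)  # binary protocol

# The binary protocol is usually both smaller on disk and cheaper to
# deserialize; both blobs round-trip to the same object.
print(len(p0), len(p_high))
```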
5
12
0
We've got a Python-based web server that unpickles a number of large data files on startup using cPickle. The data files (pickled using HIGHEST_PROTOCOL) are around 0.4 GB on disk and load into memory as about 1.2 GB of Python objects -- this takes about 20 seconds. We're using Python 2.6 on 64-bit Windows machines. The bottleneck is certainly not disk (it takes less than 0.5s to actually read that much data), but memory allocation and object creation (there are millions of objects being created). We want to reduce the 20s to decrease startup time. Is there any way to deserialize more than 1GB of objects into Python much faster than cPickle (like 5-10x)? Because the execution time is bound by memory allocation and object creation, I presume using another unpickling technique such as JSON wouldn't help here. I know some interpreted languages have a way to save their entire memory image as a disk file, so they can load it back into memory all in one go, without allocation/creation for each object. Is there a way to do this, or achieve something similar, in Python?
How to deserialize 1GB of objects into Python faster than cPickle?
0.099668
0
0
6,355
4,195,202
2010-11-16T14:32:00.000
2
0
1
0
python,serialization,pickle,deserialization
4,195,650
6
false
0
0
Impossible to answer this without knowing more about what sort of data you are loading and how you are using it. If it is some sort of business logic, maybe you should try turning it into a pre-compiled module; if it is structured data, can you delegate it to a database and only pull what is needed? Does the data have a regular structure? Is there any way to divide it up, decide what is required, and only then load it?
5
12
0
We've got a Python-based web server that unpickles a number of large data files on startup using cPickle. The data files (pickled using HIGHEST_PROTOCOL) are around 0.4 GB on disk and load into memory as about 1.2 GB of Python objects -- this takes about 20 seconds. We're using Python 2.6 on 64-bit Windows machines. The bottleneck is certainly not disk (it takes less than 0.5s to actually read that much data), but memory allocation and object creation (there are millions of objects being created). We want to reduce the 20s to decrease startup time. Is there any way to deserialize more than 1GB of objects into Python much faster than cPickle (like 5-10x)? Because the execution time is bound by memory allocation and object creation, I presume using another unpickling technique such as JSON wouldn't help here. I know some interpreted languages have a way to save their entire memory image as a disk file, so they can load it back into memory all in one go, without allocation/creation for each object. Is there a way to do this, or achieve something similar, in Python?
How to deserialize 1GB of objects into Python faster than cPickle?
0.066568
0
0
6,355
4,195,202
2010-11-16T14:32:00.000
2
0
1
0
python,serialization,pickle,deserialization
4,196,441
6
false
0
0
I'll add another answer that might be helpful - if you can, try to define __slots__ on the class that is most commonly created. This may be a little limiting or even impossible, but it seems to have cut the time needed for initialization in my test to about half.
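A minimal sketch of the suggestion (the class names are made up). Defining __slots__ suppresses the per-instance __dict__, which reduces memory use and speeds up creating millions of instances:

```python
class PointPlain:
    def __init__(self, x, y):
        self.x, self.y = x, y

class PointSlots:
    __slots__ = ("x", "y")  # no per-instance __dict__ is allocated
    def __init__(self, x, y):
        self.x, self.y = x, y

p = PointSlots(1, 2)
# Slotted instances have no __dict__ -- the trade-off is that you
# cannot attach new attributes at runtime.
assert not hasattr(p, "__dict__")
assert hasattr(PointPlain(1, 2), "__dict__")
```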
5
12
0
We've got a Python-based web server that unpickles a number of large data files on startup using cPickle. The data files (pickled using HIGHEST_PROTOCOL) are around 0.4 GB on disk and load into memory as about 1.2 GB of Python objects -- this takes about 20 seconds. We're using Python 2.6 on 64-bit Windows machines. The bottleneck is certainly not disk (it takes less than 0.5s to actually read that much data), but memory allocation and object creation (there are millions of objects being created). We want to reduce the 20s to decrease startup time. Is there any way to deserialize more than 1GB of objects into Python much faster than cPickle (like 5-10x)? Because the execution time is bound by memory allocation and object creation, I presume using another unpickling technique such as JSON wouldn't help here. I know some interpreted languages have a way to save their entire memory image as a disk file, so they can load it back into memory all in one go, without allocation/creation for each object. Is there a way to do this, or achieve something similar, in Python?
How to deserialize 1GB of objects into Python faster than cPickle?
0.066568
0
0
6,355
4,195,202
2010-11-16T14:32:00.000
4
0
1
0
python,serialization,pickle,deserialization
4,195,322
6
false
0
0
I haven't used cPickle (or Python), but in cases like this I think the best strategy is to avoid unnecessary loading of the objects until they are really needed - say, load after startup on a different thread. It's usually better to avoid unnecessary loading/initialization at any time, for obvious reasons; google 'lazy loading' or 'lazy initialization'. If you really need all the objects to do some task before server startup, then maybe you can try to implement a manual custom deserialization method - in other words, implement something yourself if you have intimate knowledge of the data you will deal with, which can help you 'squeeze' out better performance than the general tool.
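A minimal sketch of lazy initialization applied to unpickling (the class name LazyDataset is made up): the bytes are kept around cheaply and only turned into objects on first access, so startup pays nothing.

```python
import pickle

class LazyDataset:
    """Defer unpickling until the data is first accessed."""
    def __init__(self, raw_bytes):
        self._raw = raw_bytes
        self._data = None

    @property
    def data(self):
        if self._data is None:                # deserialize on first access only
            self._data = pickle.loads(self._raw)
        return self._data

blob = pickle.dumps(list(range(10)))
ds = LazyDataset(blob)
assert ds._data is None               # nothing deserialized at startup
assert ds.data == list(range(10))     # materialized on demand
```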
5
12
0
We've got a Python-based web server that unpickles a number of large data files on startup using cPickle. The data files (pickled using HIGHEST_PROTOCOL) are around 0.4 GB on disk and load into memory as about 1.2 GB of Python objects -- this takes about 20 seconds. We're using Python 2.6 on 64-bit Windows machines. The bottleneck is certainly not disk (it takes less than 0.5s to actually read that much data), but memory allocation and object creation (there are millions of objects being created). We want to reduce the 20s to decrease startup time. Is there any way to deserialize more than 1GB of objects into Python much faster than cPickle (like 5-10x)? Because the execution time is bound by memory allocation and object creation, I presume using another unpickling technique such as JSON wouldn't help here. I know some interpreted languages have a way to save their entire memory image as a disk file, so they can load it back into memory all in one go, without allocation/creation for each object. Is there a way to do this, or achieve something similar, in Python?
How to deserialize 1GB of objects into Python faster than cPickle?
0.132549
0
0
6,355
4,196,389
2010-11-16T16:25:00.000
1
0
1
1
python,command,command-line-arguments,tuples
4,196,432
8
false
0
0
Iterate through sys.argv until you reach another flag.
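A minimal sketch of that iteration (the helper name values_after_flag is made up; in a real program you would pass sys.argv[1:]):

```python
def values_after_flag(argv, flag="-p"):
    """Collect the arguments following `flag` up to the next flag
    (anything starting with '-') and return them as a tuple."""
    values = []
    try:
        start = argv.index(flag) + 1
    except ValueError:        # flag not present at all
        return ()
    for arg in argv[start:]:
        if arg.startswith("-"):   # stop at the next flag
            break
        values.append(arg)
    return tuple(values)

# Simulating sys.argv[1:] for: python2.6 prog.py -p a1 b1 c1 -q x
print(values_after_flag(["-p", "a1", "b1", "c1", "-q", "x"]))  # -> ('a1', 'b1', 'c1')
```

Because the loop runs until the next flag, the tuple automatically has whatever length the user supplied.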
1
2
0
I have a program which provides a command line input like this: python2.6 prog.py -p a1 b1 c1 Now, we can have any number of input parameters i.e. -p a1 and -p a1 c1 b1 e2 are both possibilities. I want to create a tuple based on the variable input parameters. Any suggestions on how to do this would be very helpful! A fixed length tuple would be easy, but I am not sure how to implement a variable length one. thanks.
Python: Create a tuple from a command line input
0.024995
0
0
5,466
4,198,069
2010-11-16T19:20:00.000
0
1
0
0
javascript,python,flash
4,198,087
2
false
1
0
No, not really. Not like you can examine the DOM of a webpage. You can download and decompile the swf, but you may or may not be able to get all the info you want out.
2
0
0
I would like to be able to access all the components of say a Flash image gallery on someone else's site. I want to be able to find the images, image coordinates, action script code, audio files, video, etc. I do not want to manipulate these elements, I just want to view them and their related information. Is this possible via scripting languages like Ruby, Python or Javascript?
Is it possible to access the internal elements of an embedded Flash object via a scripting language?
0
0
0
110
4,198,069
2010-11-16T19:20:00.000
0
1
0
0
javascript,python,flash
4,198,271
2
true
1
0
You can if (and only if) your application domain is the same.
2
0
0
I would like to be able to access all the components of say a Flash image gallery on someone else's site. I want to be able to find the images, image coordinates, action script code, audio files, video, etc. I do not want to manipulate these elements, I just want to view them and their related information. Is this possible via scripting languages like Ruby, Python or Javascript?
Is it possible to access the internal elements of an embedded Flash object via a scripting language?
1.2
0
0
110
4,198,416
2010-11-16T20:04:00.000
0
0
0
0
python,xml,scripting,vbscript,batch-file
4,230,800
5
true
0
0
I was able to get this to work by using the VBScript solutions provided. The reason I hadn't committed to a Visual Basic script before was that I didn't think it was possible to execute this script remotely with PsExec. It turns out I solved this problem as well, with the help of Server Fault. In case you are interested in how that works: cscript.exe is the command parameter of PsExec, and the VBScript file serves as the argument of cscript. Thanks for all the help, everyone!
1
0
0
I am looking to write a program that searches for the tags in an xml document and changes the string between the tags from localhost to manager. The tag might appear in the xml document multiple times, and the document does have a definite path. Would python or vbscript make the most sense for this problem? And can anyone provide a template so I can get started? That would be great. Thanks.
batch script or python program to edit string in xml tags
1.2
0
1
1,031
4,199,278
2010-11-16T21:41:00.000
4
0
1
0
python,tkinter
22,458,776
6
false
0
1
In the end I just initialized the variable as tk.IntVar() instead of tk.StringVar(). That way you don't have to cast it as an int (it will always be one), and the default value from the Entry element will now be 0 instead of ''. That's how I approached it anyway; it seems the simplest way and avoids the need for a lot of the validation you'd otherwise have to do...
1
5
0
How do I get an integer type from the Tkinter Entry widget? If I try to get the value using variable_name.get(), it says it is a str. If I try to change the type using int(variable_name.get()), it says int can only accept a string or number. Any help would be welcome!
Get an Integer from Entry
0.132549
0
0
34,695
4,199,442
2010-11-16T21:57:00.000
3
1
0
0
python,wsgi
4,200,386
2
false
0
0
If you are new to Python and Python web application development, then ignore all the hosting issues to begin with, and don't start from scratch. Simply go get a full-featured Python web framework such as Django or web2py and learn how to write Python web applications using its built-in development web server. You will only cause yourself much pain by trying to solve the distinct problem of production web hosting first.
1
0
0
I am completely new to Python-- never used it before today. I am interested in devloping Python applications for the web. I would like to check to see if my web server supports WSGI or running python apps in some way. Let's say I have a .py file that prints "Hello world!". How can I test to see if my server supports processing this file? FYI, this is a Mac OS X server 10.5. So I know Python is installed (It's installed on Mac OS X by default), but I don't know if it's set up to process .py files server-side and return the results. BTW, I'm coming from a PHP background, so this is a bit foreign to me. I've looked at the python docs re: wgsi, cgi, etc. but since I haven't done anything concrete yet, it's not quite making sense.
Beginner Python question about making a web app
0.291313
0
0
2,547
4,199,870
2010-11-16T22:51:00.000
0
0
1
0
python,windows-7-x64
13,767,468
3
false
0
0
I'd suggest the 32-bit version. You may run into issues because many libraries are only available in 32 bit and do not work with the x64 installation of Python. The error messages aren't always very clear in that case either. I spent a few hours trying to figure out why OpenCV didn't work because of this.
3
3
0
About to start going through the 'Learn Python The Hard Way' book and I am at the 'Installation' chapter, the book says to get 2.x... but should I get 64 or 32 bit? Does it matter one way or another? If so, how? I am running Windows 7 x64. Thanks!
For learning Python does it make a difference if I use 32 or 64-bit Python?
0
0
0
532
4,199,870
2010-11-16T22:51:00.000
5
0
1
0
python,windows-7-x64
4,199,919
3
true
0
0
Use 32-bit. Currently the 64 bit versions of python don't behave the way you might think they would (unless you've already researched it) and can create some installation issues with other libraries, especially on Windows. For learning, 32bit is a much better option.
3
3
0
About to start going through the 'Learn Python The Hard Way' book and I am at the 'Installation' chapter, the book says to get 2.x... but should I get 64 or 32 bit? Does it matter one way or another? If so, how? I am running Windows 7 x64. Thanks!
For learning Python does it make a difference if I use 32 or 64-bit Python?
1.2
0
0
532
4,199,870
2010-11-16T22:51:00.000
0
0
1
0
python,windows-7-x64
4,199,915
3
false
0
0
It shouldn't make much of a difference. If you want to use specific libraries, they might only have installation packages for 32 bit and not 64 bit -- but more and more packages are offering 64 bit packages. On the other hand, unless you are expecting to write applications that use more than 2GBs of memory, you won't need 64 bit support.
3
3
0
About to start going through the 'Learn Python The Hard Way' book and I am at the 'Installation' chapter, the book says to get 2.x... but should I get 64 or 32 bit? Does it matter one way or another? If so, how? I am running Windows 7 x64. Thanks!
For learning Python does it make a difference if I use 32 or 64-bit Python?
0
0
0
532
4,200,486
2010-11-17T00:34:00.000
6
0
1
0
python,macos,ipython,pip
5,074,385
4
true
0
0
I just got the same problem, and I think I found a solution. After updating your Python to a non-default version, say 2.6.6, you must also reinstall setuptools. To verify that your easy_install is correctly installed, type "which easy_install" to see if easy_install is under the "/Library/Frameworks/Python.framework/Versions/2.6/bin/" directory. If not, it might show up in "/usr/bin/"; then you need to download the right version of setuptools (for example, for 2.6.6 it's setuptools-0.6c11-py2.6.egg) and use "sh ./setuptools-0.6c11-py2.6.egg" to install it. After all that, you need to reopen Terminal so that all environment variables get refreshed, and then when you run "easy_install ipython", you get the right version working with IPython. It works for me, and I hope it can help you too.
2
5
0
OK guys, this is tricky, and I haven't even found a suitable solution on the IPython website. I'm working on OSX Snow Leopard. I've installed IPython using easy_install, plus all the additional basic add-ons: $ sudo easy_install readline pexpect nose ipython Everything worked OK and installed correctly. The problem is that IPython uses the Python 2.6.1 interpreter, but I would like to use Python 2.6.6 or Python 2.7. This is necessary since I'm using the "pygame" module, which only works with my Python 2.6.6 installation. How can I do that? Thanks in advance. Another solution: (besides the already accepted answer, thanks for that btw.) I just used pip to pip uninstall ipython and then sudo pip install ipython. This installed it against my latest Python version. Thanks for the other version though! I've come to use pip for all my Python installation necessities instead of easy_install as of late.
Installing IPython to work with a non-default python version (i.e python2.6.6/python2.7) on OSX Snow Leopard
1.2
0
0
7,658
4,200,486
2010-11-17T00:34:00.000
0
0
1
0
python,macos,ipython,pip
4,200,997
4
false
0
0
Why not see if just changing the shebang line works? There should be an IPython starter shell script in Python's scripts directory. Point it at the full path of the desired Python installation and give it a whirl. Not sure of the full path to the scripts directory on OS X; on Windows it's at c:\Python2x\Scripts.
2
5
0
OK guys, this is tricky, and I haven't even found a suitable solution on the IPython website. I'm working on OSX Snow Leopard. I've installed IPython using easy_install, plus all the additional basic add-ons: $ sudo easy_install readline pexpect nose ipython Everything worked OK and installed correctly. The problem is that IPython uses the Python 2.6.1 interpreter, but I would like to use Python 2.6.6 or Python 2.7. This is necessary since I'm using the "pygame" module, which only works with my Python 2.6.6 installation. How can I do that? Thanks in advance. Another solution: (besides the already accepted answer, thanks for that btw.) I just used pip to pip uninstall ipython and then sudo pip install ipython. This installed it against my latest Python version. Thanks for the other version though! I've come to use pip for all my Python installation necessities instead of easy_install as of late.
Installing IPython to work with a non-default python version (i.e python2.6.6/python2.7) on OSX Snow Leopard
0
0
0
7,658
4,200,644
2010-11-17T01:10:00.000
0
0
1
0
python,macos,pygame
4,200,863
1
true
0
1
My guess is that you installed it for 2.6 and so it is residing in 2.6's library directory. Install it in 2.7's library directory and you should be good to go. I don't know OSX so I can't help with the details but a little bit of googling shouldn't be too hard. The problem is that the two python installations have distinct import paths.
1
0
0
Like the subject says: Does the latest stable pygame release work with python2.7? I've got both versions installed on my OSX Snow Leopard, but import pygame only works on python2.6 - That's the official distro which is 2.6.6, not the pre-installed one which is 2.6.1). And if it does work, how can I make it work on my machine? What am I doing wrong? Thanks in advance.
Does the latest stable pygame release work with python2.7?
1.2
0
0
324
4,201,455
2010-11-17T04:20:00.000
0
0
0
0
python,sqlalchemy
65,843,088
6
false
0
0
commit() records these changes in the database. flush() is always called as part of the commit() call. When you use a Session object to query the database, the query returns results both from the database and from the flushed (but not yet committed) parts of the transaction it is performing.
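A minimal sketch of the distinction, using an in-memory SQLite database (this assumes SQLAlchemy 1.4 or later is installed; the User model is made up for illustration):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")   # in-memory database
Base.metadata.create_all(engine)

session = Session(engine)
session.add(User(name="alice"))

session.flush()      # SQL is emitted: the row is visible inside this
                     # transaction and the primary key is assigned...
count_in_txn = session.query(User).count()

session.rollback()   # ...but nothing is permanent until commit()
count_after_rollback = session.query(User).count()

assert count_in_txn == 1 and count_after_rollback == 0
```

For bulk loads this is why periodic commit() calls (not just flush()) matter: committed rows can be expired from the session, freeing memory.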
1
569
0
What is the difference between flush() and commit() in SQLAlchemy? I've read the docs, but am none the wiser - they seem to assume a pre-understanding that I don't have. I'm particularly interested in their impact on memory usage. I'm loading some data into a database from a series of files (around 5 million rows in total) and my session is occasionally falling over - it's a large database and a machine with not much memory. I'm wondering if I'm using too many commit() and not enough flush() calls - but without really understanding what the difference is, it's hard to tell!
SQLAlchemy: What's the difference between flush() and commit()?
0
1
0
180,674
4,201,590
2010-11-17T04:46:00.000
1
0
1
0
python
4,201,648
4
false
0
0
No, don't check for types explicitly. Python is a duck-typed language. If the wrong type is passed, a TypeError will be raised. That's it. You need not bother about the type; that is the responsibility of the programmer.
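A small sketch of the duck-typed approach (the function is made up): no isinstance() checks, just use the object, and an unsuitable argument raises TypeError on its own.

```python
def total_length(items):
    # No explicit type checks: rely on the objects supporting the
    # operations we need, and let Python raise TypeError otherwise.
    return sum(len(item) for item in items)

print(total_length(["ab", "cde"]))   # works for any iterable of sized items

try:
    total_length([42])               # int has no len() ...
except TypeError as exc:
    print("TypeError:", exc)         # ... so a TypeError is raised, as expected
```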
1
0
0
I have a class that wants to be initialized from a few possible inputs. However a combination of no function overloading and my relative inexperience with the language makes me unsure of how to proceed. Any advice?
Is it good form to have an __init__ method that checks the type of its input?
0.049958
0
0
2,458
4,201,846
2010-11-17T05:47:00.000
1
1
1
0
python,domain-driven-design
4,205,497
6
false
1
0
If Domain Driven Design is an effectively defined design pattern, why does it matter what language you're using? Advice for design philosophies and the like should be largely language agnostic. They're higher level than the language, so to speak.
4
45
0
Domain driven design has become my architecture of choice. I've been able to find an abundance of books & tutorials for applying DDD principles within the ASP.net framework. It mostly seems inspired by what Java developers have been doing for a good while now. For my personal projects, I'm starting to lean more towards Python even though I'm finding it difficult to abandon static typing. I was hoping to find lots of help with applying DDD using a dynamic language. There doesn't seem to be anything out there about Python & DDD. Why is that? Obviously DDD can apply quite well to Python. Do people not take on as large of projects in Python? Or is applying DDD simply easier in Python given the dynamic typing, therefore reducing the amount of required learning? Perhaps my questioning is due to my lack of experience with Python. Any advice you might have for me will be appreciated.
Why does domain driven design seem only popular with static languages like C# & Java?
0.033321
0
0
14,335
4,201,846
2010-11-17T05:47:00.000
2
1
1
0
python,domain-driven-design
4,224,643
6
false
1
0
Most books on design/coding techniques such as TDD and design patterns are written in Java or C#, since those are currently the lowest-common-denominator languages and have the widest user base, or at least the largest base of people who can read and understand the language. This is done largely for marketing reasons, so that the book appeals to the largest demographic. That does not mean the techniques are not applicable to, or used in, other languages. From what I know of DDD, most of the principles are language-independent, and AFAICR the original DDD book had almost no code samples in it (but it has been a couple of years since I read it, so I may be mistaken).
4
45
0
Domain driven design has become my architecture of choice. I've been able to find an abundance of books & tutorials for applying DDD principles within the ASP.net framework. It mostly seems inspired by what Java developers have been doing for a good while now. For my personal projects, I'm starting to lean more towards Python even though I'm finding it difficult to abandon static typing. I was hoping to find lots of help with applying DDD using a dynamic language. There doesn't seem to be anything out there about Python & DDD. Why is that? Obviously DDD can apply quite well to Python. Do people not take on as large of projects in Python? Or is applying DDD simply easier in Python given the dynamic typing, therefore reducing the amount of required learning? Perhaps my questioning is due to my lack of experience with Python. Any advice you might have for me will be appreciated.
Why does domain driven design seem only popular with static languages like C# & Java?
0.066568
0
0
14,335
4,201,846
2010-11-17T05:47:00.000
5
1
1
0
python,domain-driven-design
12,297,993
6
false
1
0
Python seems not to have been too popular in enterprises so far compared to Java (but I believe the wind is blowing in that direction; an example is Django, which was created by a newspaper company). Most programmers working with Python are likely either into scientific computing or into web applications. Both of these fields relate to (computer) science, not domain-specific businesses, whereas DDD is most applicable within domain-specific businesses. So I would argue that it is mostly a matter of legacy: C# and Java were targeted towards enterprise applications from the start.
4
45
0
Domain driven design has become my architecture of choice. I've been able to find an abundance of books & tutorials for applying DDD principles within the ASP.net framework. It mostly seems inspired by what Java developers have been doing for a good while now. For my personal projects, I'm starting to lean more towards Python even though I'm finding it difficult to abandon static typing. I was hoping to find lots of help with applying DDD using a dynamic language. There doesn't seem to be anything out there about Python & DDD. Why is that? Obviously DDD can apply quite well to Python. Do people not take on as large of projects in Python? Or is applying DDD simply easier in Python given the dynamic typing, therefore reducing the amount of required learning? Perhaps my questioning is due to my lack of experience with Python. Any advice you might have for me will be appreciated.
Why does domain driven design seem only popular with static languages like C# & Java?
0.16514
0
0
14,335
4,201,846
2010-11-17T05:47:00.000
20
1
1
0
python,domain-driven-design
4,208,311
6
true
1
0
I think it is definitely popular elsewhere, especially in functional languages. However, certain patterns associated with the Big Blue Book are not as applicable in dynamic languages, and frameworks like Rails tend to lead people away from ideas of bounded contexts. That said, the true thrust of DDD - the ubiquitous language - is certainly prevalent in dynamic languages. Rubyists especially take a great deal of joy in constructing domain-specific languages; think of how Cucumber features end up looking - that's as DDD as it gets! Keep in mind, DDD is not a new idea at all; it was just repackaged in a way that got good uptake from the C# and Java guys. Those same ideas are around elsewhere under different banners.
4
45
0
Domain driven design has become my architecture of choice. I've been able to find an abundance of books & tutorials for applying DDD principles within the ASP.net framework. It mostly seems inspired by what Java developers have been doing for a good while now. For my personal projects, I'm starting to lean more towards Python even though I'm finding it difficult to abandon static typing. I was hoping to find lots of help with applying DDD using a dynamic language. There doesn't seem to be anything out there about Python & DDD. Why is that? Obviously DDD can apply quite well to Python. Do people not take on as large of projects in Python? Or is applying DDD simply easier in Python given the dynamic typing, therefore reducing the amount of required learning? Perhaps my questioning is due to my lack of experience with Python. Any advice you might have for me will be appreciated.
Why does domain driven design seem only popular with static languages like C# & Java?
1.2
0
0
14,335
4,201,948
2010-11-17T06:12:00.000
0
0
0
0
python,django,model-view-controller,web-applications
4,202,031
2
false
1
0
There's certainly the {% include %} tag, which allows you to include templates directly inside another template. The included template also gets everything that the enclosing template gets, so if you are using the RequestContext, that means it has access to everything in the request variable. However, it seems you're saying that you want to somehow actually call the register and login views and embed the result into your page. This could in theory be done by writing a custom tag that calls the URL using an HTTP GET and then outputs the resulting HTML. I wouldn't recommend this. Instead, for the front page, go ahead and create two forms that point to the appropriate URLs in the django-registration application.
1
1
0
I was wondering if this would be possible to implement (as an app/middleware): I install the django-registration app. I then create my site-base app for making some generic page views. I want to put a login form and a registration form on a the front page. So I go in and I modify the /register/login.html and the register/register.html templates to fit my front page design (html stuff). I then go to my main page index.html file and I go to the spot in my html where I want those blocks (login & register) to go, and I add {% load "register/login.html" %} and a {% load "register/register.html" %}. Now, when the urlconf calls my index's view, the template will reach the LOAD trigger and will call the LOGIN view so that all of its form.elements are passed to it, and the REGISTER view is called for its elements too. Then, those completed (rendered) views are passed to my index.html and plugged into the spot where I put the LOAD statements. Can the above be done currently? My goal is to take the various apps available and plug them into my project without touching any of their code (I want to ensure that I can upgrade the individual apps later and not break anything in my project because I added custom stuff...). If the above is possible currently, could someone please provide some documentation/tutorials/howtos for best practices in re-using other peoples apps?
Reusing django re-usable apps
0
0
0
268
4,202,017
2010-11-17T06:25:00.000
1
0
1
0
python,hash,python-3.x,containers,nested
4,209,588
3
true
0
0
You could just serialize the parameters into something like JSON, and use that for the hash.
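A minimal sketch of that approach (assuming the parameters contain only JSON-serializable built-ins; sort_keys makes the dump deterministic, so equal nested structures always produce the same hash regardless of dict key order):

```python
import hashlib
import json

def params_hash(params):
    # Serialize deterministically: sort_keys gives a stable key order
    # at every nesting level, so equal content means equal bytes.
    blob = json.dumps(params, sort_keys=True)
    return hashlib.md5(blob.encode("utf-8")).hexdigest()

a = {"x": [1, 2, {"y": 3}], "z": "text"}
b = {"z": "text", "x": [1, 2, {"y": 3}]}  # same content, different key order
assert params_hash(a) == params_hash(b)
```

Comparing the stored digest against the digest of the new parameters then tells you whether the cached pickle is still valid.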
2
5
0
[Python 3.1] I am trying to create a hash for a container that may have nested containers in it, with unknown depth. At all levels of the hierarchy, there are only built-in types. What's a good way to do that? Why I need it: I am caching the result of some calculations in a pickle object (on disk). I would need to invalidate that cached file if the function is called with different parameters (this happens rarely, so I'm not going to save more than one file to disk). The hash will be used to compare the parameters.
Python: how to create a hash of nested containers
1.2
0
0
673
4,202,017
2010-11-17T06:25:00.000
2
0
1
0
python,hash,python-3.x,containers,nested
4,202,158
3
false
0
0
If all the containers are tuples, and all the contained objects are hashable, then the main container should be hashable.
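A quick illustration of that rule: nested tuples of hashable built-ins hash at any depth, while a list anywhere inside makes the whole container unhashable:

```python
# Tuples of hashable built-ins are themselves hashable, at any depth.
nested = (1, ("a", (2.5, None)), "b")
print(hash(nested))

# A single mutable container anywhere inside breaks it.
try:
    hash((1, [2, 3]))
except TypeError as exc:
    print("unhashable:", exc)
```

So converting all lists to tuples (and dicts to sorted tuples of items) before hashing is another route to the same goal.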
2
5
0
[Python 3.1] I am trying to create a hash for a container that may have nested containers in it, with unknown depth. At all levels of the hierarchy, there are only built-in types. What's a good way to do that? Why I need it: I am caching the result of some calculations in a pickle object (on disk). I would need to invalidate that cached file if the function is called with different parameters (this happens rarely, so I'm not going to save more than one file to disk). The hash will be used to compare the parameters.
Python: how to create a hash of nested containers
0.132549
0
0
673
4,202,358
2010-11-17T07:34:00.000
-1
1
0
0
python,django,django-models
4,203,124
3
true
0
0
Rename the fixture to something other than initial_data.
2
5
0
is there a way to run syncdb without loading fixtures? xo
how do run syncdb without loading fixtures?
1.2
0
0
1,286
4,202,358
2010-11-17T07:34:00.000
0
1
0
0
python,django,django-models
15,206,734
3
false
0
0
best to name your fixtures something_else.json, then run syncdb (and migrate if needed), followed by manage.py loaddata something_else.json
2
5
0
is there a way to run syncdb without loading fixtures? xo
how do run syncdb without loading fixtures?
0
0
0
1,286
4,202,455
2010-11-17T07:54:00.000
4
1
1
0
c++,python,c
4,202,951
4
false
0
0
I would say it depends on what you want to achieve (cheesy answer...) The truth is, learning language is a long process. If you plan on learning a language as a step toward learning another language, you're probably wasting your time. It takes a good year to be proficient with C++, and that is with basic knowledge of algorithms and object concepts. And I only mean proficient, meaning you can get things done, but certainly not expert or anything. So the real question is, do you want to spend a year learning C++ before beginning to learn Python ? If the ultimate goal is to program in Python... it doesn't seem worth it.
3
3
0
I want to learn python, but I feel I should learn C or C++ to get a solid base to build on. I already know some C/C++ as well as other programming languages, which does help. So, should I master C/C++ first?
Is it worth learning C/C++ before learning Python?
0.197375
0
0
12,034
4,202,455
2010-11-17T07:54:00.000
1
1
1
0
c++,python,c
4,202,502
4
false
0
0
In my opinion you should definitely learn Python before attempting to learn C or C++, as you will get a better understanding of the core concepts. C++ is much lower level than Python, so you will need to write more code to do something that you can do in one line in Python.
3
3
0
I want to learn python, but I feel I should learn C or C++ to get a solid base to build on. I already know some C/C++ as well as other programming languages, which does help. So, should I master C/C++ first?
Is it worth learning C/C++ before learning Python?
0.049958
0
0
12,034
4,202,455
2010-11-17T07:54:00.000
2
1
1
0
c++,python,c
4,202,571
4
false
0
0
Real mastery of a language takes time and lots of practice; it's analogous to learning a natural language like French: you have to practice it a lot. But then, different languages teach you different programming methodologies. Python and C++ are both object-oriented languages, so you will be learning the same programming methodology. The order in which you learn languages doesn't really matter, but starting from a lower abstraction and moving to a higher one makes understanding some things easier.
3
3
0
I want to learn python, but I feel I should learn C or C++ to get a solid base to build on. I already know some C/C++ as well as other programming languages, which does help. So, should I master C/C++ first?
Is it worth learning C/C++ before learning Python?
0.099668
0
0
12,034
4,202,822
2010-11-17T08:54:00.000
2
0
1
0
python,visual-c++,wxpython,py2exe,comtypes
4,207,773
3
true
0
0
Python 2.5 and 2.7 (and all other versions of Python) co-exist very well. You may need to change your path to use the correct version of Python. You will need to install the Python 2.5 builds of wxPython and py2exe. You will also need to install comtypes for Python 2.5. That installer will detect your Python installations by checking the registry.
1
0
0
I was writing code which uses wxPython and comtypes. I have Python 2.7 installed on my machine (Windows) along with wxPython, comtypes and py2exe. While trying to build it I got the following error:

error: MSVCP90.dll: No such file or directory

So, I did research and came to know about two solutions:

1. Copy Microsoft.VC90.CRT.manifest and msvcp90.dll to your machine and prepare your setup as follows:

from distutils.core import setup
import py2exe
from glob import glob

data_files = [("Microsoft.VC90.CRT", glob(r'c:\shared_dlls\*.*'))]
setup(data_files=data_files, console=['main.pyw'])

2. Use Python 2.5 along with wxPython, comtypes and py2exe

Now, I have the following questions:

In the first case:
a. Do I need to have a Visual Studio license in order to use these files? Or can they be used without any worries?
b. What if I compile it using the aforementioned method? Does it still require MSVCP90.dll on the user machine to execute? I think - No. Please correct me if I'm wrong. I want to remove any dependency and give the user an exe which the user can directly execute without any dependency.

In the second case:
As I have Python 2.7 installed on my machine along with the aforementioned modules, I would like to know: can I install Python 2.5 on the same machine? Can they co-exist? If yes, do I need to install another copy of wxPython, comtypes and py2exe for it?

Please suggest what the best solution is. How should I proceed? It's kind of blocking me. Thanks in advance!
Can Python 2.5 and 2.7 coexist along with wxPython, py2exe and comtypes (try to resolve MSVCP90.dll problem)?
1.2
0
0
1,072
4,203,417
2010-11-17T10:21:00.000
0
0
0
0
python,django
5,241,559
3
false
1
0
Another option might be to create separate URL confs that resolve to the same view, passing in the source view as a kwarg to the view.
2
4
0
In my Django app I have multiple pages displaying a link that loads a new page displaying a form. When the form is submitted, what is the cleanest way to redirect to the originating page from which this form was accessed? originating page -> form page -> originating page Using a next variable seems inelegant, since I have to set it as a GET variable on the originating page link, and then set it as a hidden POST variable in my form. Any other ideas would be appreciated.
Django: How do I redirect to page where form originated
0
0
0
2,498
4,203,417
2010-11-17T10:21:00.000
6
0
0
0
python,django
4,203,496
3
true
1
0
There are a couple of options, all with their cons and benefits of course:

- passing the originating page with POST/GET
- storing the originating page in the session (won't work with multiple tabs obviously)
- storing the originating page in a cookie (won't work with multiple tabs either)
- if it's a single page, redirecting to the referrer - doesn't seem possible in your case

Personally I think using a next parameter is your best option, but do remember to secure it (only relative urls, no javascript stuff, csrf framework) so you won't have any security problems with it.
2
4
0
In my Django app I have multiple pages displaying a link that loads a new page displaying a form. When the form is submitted, what is the cleanest way to redirect to the originating page from which this form was accessed? originating page -> form page -> originating page Using a next variable seems inelegant, since I have to set it as a GET variable on the originating page link, and then set it as a hidden POST variable in my form. Any other ideas would be appreciated.
Django: How do I redirect to page where form originated
1.2
0
0
2,498
4,203,614
2010-11-17T10:51:00.000
1
0
0
0
python,user-interface,opengl,directx
4,250,496
5
false
0
1
You can use the Qt Scene Framework with OpenGL rendering. There are many examples on the Nokia site.
1
8
0
I am looking for a Python GUI library whose rendering / drawing I can rewrite. It has to support basic widgets (buttons, combo boxes, list boxes, text editors, scrollbars), layout management, and event handling. The thing that I am looking for is to use my custom Direct3D and OpenGL renderer for all of the GUI's drawing / rendering. edit suggested by S.Lott: I need to use this GUI for a 3D editor; since I have to drag and drop a lot of things from the GUI elements to the 3d render area, I wanted to use a GUI system that renders with Direct3D (preferred) or OpenGL. It also has to have a nice look. It is difficult to achieve this with GUIs like WPF, since WPF does not have a handle. Also, it needs to be absolutely free for commercial use. edit: I would also like to use the rendering context I initialized for the 3d part in my application
Python GUI with custom render/drawing
0.039979
0
0
2,152
4,204,075
2010-11-17T11:52:00.000
1
0
1
0
python
4,204,108
3
false
0
0
I don't think you are missing anything. I like to picture variables in python as the name written on 'labels' that are attached to boxes but can change its placement by assignment, whereas in other languages, assignment changes the box's contents (and the assignment operator can be overloaded). Beginners can write quite complex applications without being aware of that, but they are usually messy programs.
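The label-on-a-box picture can be demonstrated in a few lines: assignment moves a label to a different box, while mutation changes the box that both labels are attached to:

```python
a = [1, 2, 3]
b = a          # b is a second label on the same box
b.append(4)    # mutating through either label changes the shared box
assert a == [1, 2, 3, 4]

b = [9]        # rebinding: the label b moves to a new box
assert a == [1, 2, 3, 4]   # a still names the old box, unaffected
```

This is why beginners can get surprisingly far without the distinction, and also why their programs get messy once shared mutable objects are involved.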
1
3
0
I know that "variable assignment" in python is in fact a binding / re-binding of a name (the variable) to an object. This brings the question: is it possible to have proper assignment in python, e.g. make an object equal to another object? I guess there is no need for that in python: Immutable objects cannot be 'assigned to' since they can't be changed. Mutable objects could potentially be assigned to, since they can change, and this could be useful, since you may want to manipulate a copy of a dictionary separately from the original one. However, in these cases the python philosophy is to offer a cloning method on the mutable object, so you can bind a copy rather than the original. So I guess the answer is that there is no assignment in python; the best way to mimic it would be binding to a cloned object. I simply wanted to share the question in case I'm missing something important here. Thanks EDIT: Both Lie Ryan's and Sven Marnach's answers are good, I guess the overall answer is a mix of both: For user defined types, use the idiom: a.__dict__ = dict(b.__dict__) (I guess this has problems as well if the assigned class has redefined attribute access methods, but let's not be fussy :)) For mutable built-ins (lists and dicts) use the cloning / copying methods they provide (eg slices, update). Finally, immutable built-ins can't be changed so can't be assigned. I'll choose Lie Ryan's because it's an elegant idiom that I hadn't thought of. Thanks!
assignment in python
0.066568
0
0
5,566
4,206,000
2010-11-17T15:30:00.000
22
0
0
0
python,django,apache,mod-wsgi,django-wsgi
4,206,134
3
true
1
0
My suggestion is that you run the application in daemon mode. This way you won't be required to restart apache, just touch my_handler.wsgi and the daemon will know to restart the app. The apache httpd will not be only yours (in production) so it is fair not to restart it on every update.
2
30
0
I configured my development server this way: Ubuntu, Apache, mod_wsgi, Python 2.6. I work on the server from another computer connected to it. Most of the time the changes don't affect the application unless I restart Apache. In some cases the changes take effect without restarting the webserver, but after, let's say, 3 or 4 page loads the application might behave like it did previous to the changes. Until now I have just reloaded Apache every time, as I have the development server here with me, but hell, after a while it got so annoying. How can I avoid this? I can't work with the development server as I need an environment that is as close as possible to the production one. Thanks
Django + apache & mod_wsgi: having to restart apache after changes
1.2
0
0
23,853
4,206,000
2010-11-17T15:30:00.000
-1
0
0
0
python,django,apache,mod-wsgi,django-wsgi
4,206,153
3
false
1
0
Apache loads the Django environment when starting and keeps running it even when the source is changed. I suggest you use Django's 'runserver' (which automatically restarts on changes) in heavy development sessions, unless you need some Apache-specific features (such as multi-threading). Note also that changes in templates do not require a restart of the web server.
2
30
0
I configured my development server this way: Ubuntu, Apache, mod_wsgi, Python 2.6. I work on the server from another computer connected to it. Most of the time the changes don't affect the application unless I restart Apache. In some cases the changes take effect without restarting the webserver, but after, let's say, 3 or 4 page loads the application might behave like it did previous to the changes. Until now I have just reloaded Apache every time, as I have the development server here with me, but hell, after a while it got so annoying. How can I avoid this? I can't work with the development server as I need an environment that is as close as possible to the production one. Thanks
Django + apache & mod_wsgi: having to restart apache after changes
-0.066568
0
0
23,853
4,208,146
2010-11-17T19:08:00.000
0
0
0
0
python,sql,sqlite
4,208,359
3
false
0
0
The main issue is that you're trying to compare a Python string (m.hexdigest()) with a tuple. Additionally, another poster's suggestion that you use SQL for the comparison is probably good advice. Another SQL suggestion would be to fix your columns -- TEXT for everything probably isn't what you want; an index on your hashkey column is very likely a good thing.
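A sketch combining both fixes, using a hypothetical table layout matching the question: if you must loop, compare against row[0] rather than the tuple, but it is simpler and faster to let SQL do the lookup with an indexed WHERE clause:

```python
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE mail (msg_id TEXT, hashkey TEXT)")
c.execute("CREATE INDEX idx_hashkey ON mail (hashkey)")  # speeds up lookups

def store(msg_id):
    key = hashlib.md5(msg_id.encode("utf-8")).hexdigest()
    c.execute("INSERT INTO mail (msg_id, hashkey) VALUES (?, ?)", (msg_id, key))

def is_duplicate(msg_id):
    key = hashlib.md5(msg_id.encode("utf-8")).hexdigest()
    # Let the database do the comparison instead of looping over every row:
    c.execute("SELECT 1 FROM mail WHERE hashkey = ?", (key,))
    return c.fetchone() is not None

store("<abc@example.com>")
assert is_duplicate("<abc@example.com>")
assert not is_duplicate("<new@example.com>")
```

The parameterized query also sidesteps the unicode-comparison confusion, because the digest string is compared directly against the stored TEXT value.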
1
0
0
I'm using sqlite with python. I'm implementing the POP3 protocol. I have a table:

msg_id text
date text
from_sender text
subject text
body text
hashkey text

Now I need to check for duplicate messages by checking the message id of the message retrieved against the existing msg_id's in the table. I hashed the msg_id using md5 and put it in the hashkey column. Whenever I retrieve mail, I hash the message id and check it against the table values. Here's what I do:

def check_duplicate(new):
    conn = sql.connect("mail")
    c = conn.cursor()
    m = hashlib.md5()
    m.update(new)
    c.execute("select hashkey from mail")
    for row in c:
        if m.hexdigest() == row:
            return 0
        else:
            continue
    return 1

It just refuses to work correctly. I tried printing the row value; it shows it in unicode, and that's where the problem lies, as it cannot compare properly. Is there a better way to do this, or to improve my method?
Comparing sql values
0
1
0
481
4,208,659
2010-11-17T20:04:00.000
0
0
1
1
python,environment-variables
4,208,698
2
false
0
0
If the other modules belong to the same package, you are responsible for locating them if they are not stored in the conventional layout (i.e. append the path via sys.path). If the other modules are user-configurable, then the user has to specify the installation path through PYTHONPATH.
1
2
0
So, it turned out i was missing a semi-colon from my PYTHONPATH definition. But this only got me so far. for some reason, my script did NOT work as a scheduled task (on WinXP) until I explicitly added a directory from PYTHONPATH to the top of my script. Question is: When do I need to explicitly append something to my path and when can I simply rely on the environment variables?
When to use sys.path.append and when modifying %PYTHONPATH% is enough
0
0
0
2,369
4,208,989
2010-11-17T20:46:00.000
0
0
0
0
python,mysql,html,caching
4,210,460
4
false
0
0
How are you getting the price? If you are scraping the data from the normal HTML page using a tool such as BeautifulSoup, that may be slowing down the round-trip time. In this case, it might help to compute a fast checksum (such as MD5) from the page to see if it has changed, before parsing it. If you are using an API which gives a short XML version of the price, this is probably not an issue.
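A minimal sketch of that checksum idea (the in-memory cache dict and URL are hypothetical; in practice you would persist the digest alongside the cached price in MySQL):

```python
import hashlib

last_seen = {}  # url -> md5 digest of the last fetched page body

def page_changed(url, html):
    """Return True only if the page differs from the last fetch."""
    digest = hashlib.md5(html.encode("utf-8")).hexdigest()
    if last_seen.get(url) == digest:
        return False          # identical page: skip the expensive parse
    last_seen[url] = digest
    return True

assert page_changed("http://example.com/item", "<html>$9.99</html>")
assert not page_changed("http://example.com/item", "<html>$9.99</html>")
assert page_changed("http://example.com/item", "<html>$8.99</html>")
```

Hashing a page is orders of magnitude cheaper than parsing it, so the parse only runs when the seller actually changed something.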
1
1
0
I'm currently working on a site that makes several calls to big-name online sellers like eBay and Amazon to fetch prices for certain items. The issue is, currently it takes a few seconds (as far as I can tell, this time is from making the calls) to load the results, which I'd like to be more instant (~10 seconds is too much in my opinion). I've already cached other information that I need to fetch, but that information is static. Is there a way that I can cache the prices but update them only when needed? The code is in Python and I store info in a MySQL database. I was thinking of somehow using cron or something along those lines to update it every so often, but it would be nice if there was a simpler and less intense approach to this problem. Thanks!
Caching online prices fetched via API unless they change
0
0
1
196
4,209,962
2010-11-17T22:47:00.000
1
0
1
0
python,class,iterator,multiprocessing
4,210,167
2
false
0
0
Yes, if you are not updating data on the class itself that needs to be shared across the instances, multiprocessing is the tool for you in this case.
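A minimal sketch of how this could look (analyze is a hypothetical stand-in for the real per-file analysis; arguments are pickled and sent to the workers, so each child gets its own copy, and maxtasksperchild=1 makes each worker exit after one file, which contains the third-party memory leak):

```python
import multiprocessing

def analyze(path):
    # Stand-in for the real per-file analysis. Runs in a separate process,
    # so any memory leaked by third-party code dies with the worker.
    return path, len(path)

if __name__ == "__main__":
    files = ["a.dat", "bb.dat", "ccc.dat"]
    # maxtasksperchild=1 recycles the worker process after every file.
    with multiprocessing.Pool(processes=2, maxtasksperchild=1) as pool:
        results = pool.map(analyze, files)
    print(results)
```

Passing the needed attributes of the class as plain arguments to a module-level function like this keeps everything picklable, which is the main constraint multiprocessing imposes.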
2
2
0
I have a class that loops over some data files, processes them, and then writes new data back out. The analysis of each file is completely independent of the others. The class contains information needed by the analysis in its attributes, but the analysis does not need to change any attributes of the class. Thus I can make the analysis of one data file a single method of my class. The analysis could in principle be done in parallel since each data file is independent. As an aside, I was considering making my class iterable. Can I use the multiprocessing module to spawn processes that are methods of my class? I need to use multiprocessing because I'm using third party code that has a really bad memory leak (fills up all 24Gb of memory after about 100 data files). If not, how would you go about doing this? Would you just use a normal function called by my class (passing all the information I need as arguments) instead of a method? How are arguments passed to functions in multiprocessing? Does it make a deep copy?
Python - Are class methods multiprocess safe?
0.099668
0
0
1,618
4,209,962
2010-11-17T22:47:00.000
0
0
1
0
python,class,iterator,multiprocessing
4,210,056
2
false
0
0
You're not mentioning your process using any external resources, so it should be fork()-safe. Fork duplicates the memory and file descriptors, program state is identical in the parent and the child. Unless you're using windows which can't fork, go for it.
2
2
0
I have a class that loops over some data files, processes them, and then writes new data back out. The analysis of each file is completely independent of the others. The class contains information needed by the analysis in its attributes, but the analysis does not need to change any attributes of the class. Thus I can make the analysis of one data file a single method of my class. The analysis could in principle be done in parallel since each data file is independent. As an aside, I was considering making my class iterable. Can I use the multiprocessing module to spawn processes that are methods of my class? I need to use multiprocessing because I'm using third party code that has a really bad memory leak (fills up all 24Gb of memory after about 100 data files). If not, how would you go about doing this? Would you just use a normal function called by my class (passing all the information I need as arguments) instead of a method? How are arguments passed to functions in multiprocessing? Does it make a deep copy?
Python - Are class methods multiprocess safe?
0
0
0
1,618
4,210,057
2010-11-17T22:58:00.000
6
0
1
0
python,performance,mongodb
4,210,090
5
false
0
0
It depends: if you need to read sequential data, a file might be faster; if you need to read random data, a database has better chances of being optimized to your needs. (After all, a database reads its records from a file as well, but it has an internal structure and algorithms to enhance performance; it can use memory in a smarter way, and do a lot in the background so the results come faster.) In an intensive case of random reading, I would go with the database option.
5
4
0
Which is more expensive to do in terms of resources and efficiency, a file read/write operation or a database read/write operation? I'm using MongoDB, with Python. It'll be performing about 100k requests on the db/file per minute. Also, there are about 15000 documents in the database / file. Which would be faster? Thanks in advance.
Is a file read faster than reading data from the database?
1
1
0
3,529
4,210,057
2010-11-17T22:58:00.000
1
0
1
0
python,performance,mongodb
49,248,435
5
false
0
0
Reading from a database can be more efficient, because you can access records directly and make use of indexes etc. With normal flat files you basically have to read them sequentially. (Mainframes support direct access files, but these are sort of halfway between flat files and databases). If you are in a multi-user environment, you must make sure that your data remain consistent even if multiple users try updates at the same time. With flat files, you have to lock the file for all but one user until she is ready with her update, and then lock for the next. Databases can do locking on row level. You can make a file based system as efficient as a database, but that effort amounts to writing a database system yourself.
5
4
0
Which is more expensive to do in terms of resources and efficiency, a file read/write operation or a database read/write operation? I'm using MongoDB, with Python. It'll be performing about 100k requests on the db/file per minute. Also, there are about 15000 documents in the database / file. Which would be faster? Thanks in advance.
Is a file read faster than reading data from the database?
0.039979
1
0
3,529
4,210,057
2010-11-17T22:58:00.000
3
0
1
0
python,performance,mongodb
4,210,106
5
false
0
0
There are too many factors to offer a concrete answer, but here's a list for you to consider:

- Disk bandwidth
- Disk latency
- Disk cache
- Network bandwidth
- MongoDB cluster size
- Volume of MongoDB client activity (the disk only has one "client" unless your machine is busy with other workloads)
5
4
0
Which is more expensive to do in terms of resources and efficiency, a file read/write operation or a database read/write operation? I'm using MongoDB, with Python. It'll be performing about 100k requests on the db/file per minute. Also, there are about 15000 documents in the database / file. Which would be faster? Thanks in advance.
Is a file read faster than reading data from the database?
0.119427
1
0
3,529
4,210,057
2010-11-17T22:58:00.000
0
0
1
0
python,performance,mongodb
4,210,113
5
false
0
0
If caching is not used, sequential IO operations are faster with files by definition. Databases ultimately use files as well, but they have more layers to pass before data hit the file. But if you want to query data, using a database is more efficient, because with files you would have to implement the querying yourself. For your task I recommend researching clustering for different databases; they can scale to your rate.
5
4
0
Which is more expensive to do in terms of resources and efficiency, a file read/write operation or a database read/write operation? I'm using MongoDB, with Python. It'll be performing about 100k requests on the db/file per minute. Also, there are about 15000 documents in the database / file. Which would be faster? Thanks in advance.
Is a file read faster than reading data from the database?
0
1
0
3,529
4,210,057
2010-11-17T22:58:00.000
4
0
1
0
python,performance,mongodb
4,210,368
5
false
0
0
Try it and tell us the answer.
5
4
0
Which is more expensive to do in terms of resources and efficiency, a file read/write operation or a database read/write operation? I'm using MongoDB, with Python. It'll be performing about 100k requests on the db/file per minute. Also, there are about 15000 documents in the database / file. Which would be faster? Thanks in advance.
Is a file read faster than reading data from the database?
0.158649
1
0
3,529
4,210,980
2010-11-18T02:02:00.000
0
0
1
0
python,debugging,pdb
70,714,363
2
false
0
0
If state is a property of A with a setter method, then you can set a breakpoint inside the setter and execution will break whenever there is an attempt to change it.
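A sketch of that pattern (hypothetical class; in a real debugging session you would put import pdb; pdb.set_trace() where the print is, so the debugger opens at the moment of the change):

```python
class A:
    def __init__(self):
        self._state = "connected"

    @property
    def state(self):
        return self._state

    @state.setter
    def state(self, value):
        if value != self._state:
            # In a real session, drop into the debugger here with:
            #   import pdb; pdb.set_trace()
            print("state changing: %r -> %r" % (self._state, value))
        self._state = value

a = A()
a.state = "disconnected"  # the setter intercepts every assignment
```

Note this only catches assignments that go through the property; code that writes to the underscored attribute directly bypasses it.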
1
2
0
I am using python 2.4 and trying to debug a twisted application. Is there any way by which I can perhaps put a watch on an object and break execution when its value changes. For Example To start with A.state="connected" What I want is a notification or pause in execution when A.state changes its value. I am new to pdb and twisted so if you feel this question needs more info, I can provide it.
Monitor the state of an object in pdb
0
0
0
1,938
4,212,877
2010-11-18T08:29:00.000
50
0
0
1
python,asynchronous,nonblocking,tornado
4,213,777
2
true
1
0
There is a server and a webframework. When should we use framework and when can we replace it with other one? This distinction is a bit blurry. If you are only serving static pages, you would use one of the fast servers like lighttpd. Otherwise, most servers provide a framework of varying complexity to develop web applications. Tornado is a good web framework. Twisted is even more capable and is considered a good networking framework. It has support for a lot of protocols. Tornado and Twisted are frameworks that support non-blocking, asynchronous web / networking application development. When should Tornado be used? When is it useless? When using it, what should be taken into account? By its very nature, async / non-blocking I/O works great when the workload is I/O intensive and not computation intensive. Most web / networking applications suit this model well. If your application demands a computation-intensive task to be done, it has to be delegated to some other service that can handle it better, while Tornado / Twisted does the job of the web server, responding to web requests. How can we make an inefficient site using Tornado?

- Do any computation-intensive task
- Introduce blocking operations

But I guess it's not a silver bullet and if we just blindly run Django-based or any other site with Tornado it won't give any performance boost. Performance is usually a characteristic of the complete web application architecture. You can bring down the performance with most web frameworks if the application is not designed properly. Think about caching, load balancing etc. Tornado and Twisted provide reasonable performance and they are good for building performant web applications. You can check out the testimonials for both Twisted and Tornado to see what they are capable of.
1
87
0
Ok, Tornado is non-blocking and quite fast and it can handle a lot of standing requests easily. But I guess it's not a silver bullet and if we just blindly run Django-based or any other site with Tornado it won't give any performance boost. I couldn't find comprehensive explanation of this, so I'm asking it here: When should Tornado be used? When is it useless? When using it, what should be taken into account? How can we make inefficient site using Tornado? There is a server and a webframework. When should we use framework and when can we replace it with other one?
When and how to use Tornado? When is it useless?
1.2
0
0
29,364
4,213,091
2010-11-18T09:01:00.000
3
1
1
1
python,bash,environment-variables,crontab
4,213,327
3
false
0
0
Use a command line option that only cron will use. Or a symlink to give the script a different name when called by cron. You can then use sys.argv[0] to distinguish between the two ways to call the script.
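Both suggestions can be sketched in a few lines (the --cron flag and the cron- name prefix are hypothetical conventions you would choose yourself; the crontab entry would then call "python my-script.py --cron" or the renamed symlink):

```python
import os
import sys

def debug_mode():
    # Explicit flag set only in the crontab entry beats guessing.
    if "--cron" in sys.argv:
        return False
    # Symlink trick: cron invokes the script under a different name,
    # which shows up in sys.argv[0].
    if os.path.basename(sys.argv[0]).startswith("cron-"):
        return False
    return True

print("debug" if debug_mode() else "live")
```

A further heuristic is sys.stdin.isatty(): cron jobs have no controlling terminal, though that check also triggers when the script is run in a pipe, which is why the explicit flag is the more reliable option.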
1
15
0
Imagine a script is running in these 2 sets of "conditions": live action, set up in sudo crontab debug, when I run it from console ./my-script.py What I'd like to achieve is an automatic detection of "debug mode", without me specifying an argument (e.g. --debug) for the script. Is there a convention about how to do this? Is there a variable that can tell me who the script owner is? Whether script has a console at stdout? Run a ps | grep to determine that? Thank you for your time.
Detect if python script is run from console or by crontab
0.197375
0
0
4,781
4,213,138
2010-11-18T09:09:00.000
1
0
0
0
python,django,django-apps
4,214,852
3
false
1
0
In my opinion, there are no benefits for middleware and decorators. My rule of thumb: if it has a model and/or views, I'll make it an app. Even for custom template tags I chose to make it an egg and import it into the apps that will be using it. Good question.
3
3
0
When developing some functionality for use with django. In this case a middleware and some other utils like a decorator. Is there any upside of making it into a Django App. The library has no models, so there is no point in a models.py (which you need to make django see it as an app), or putting into INSTALLED_APPS. But I see people doing it anyway, what are the benefits?
Any benefits of turning libraries for Django into an App?
0.066568
0
0
116
4,213,138
2010-11-18T09:09:00.000
0
0
0
0
python,django,django-apps
4,215,661
3
false
1
0
IMO it's handy to instantly see the list of used apps/libraries- if you miss anything, you can just pip install or easy_install it in the blink of an eye.
3
3
0
When developing some functionality for use with django. In this case a middleware and some other utils like a decorator. Is there any upside of making it into a Django App. The library has no models, so there is no point in a models.py (which you need to make django see it as an app), or putting into INSTALLED_APPS. But I see people doing it anyway, what are the benefits?
Any benefits of turning libraries for Django into an App?
0
0
0
116
4,213,138
2010-11-18T09:09:00.000
2
0
0
0
python,django,django-apps
4,213,863
3
true
1
0
You'll have to make it an app if you want to provide templates, template tags or filters with your library. Otherwise, Django won't pick them up.
3
3
0
When developing some functionality for use with django. In this case a middleware and some other utils like a decorator. Is there any upside of making it into a Django App. The library has no models, so there is no point in a models.py (which you need to make django see it as an app), or putting into INSTALLED_APPS. But I see people doing it anyway, what are the benefits?
Any benefits of turning libraries for Django into an App?
1.2
0
0
116
4,213,696
2010-11-18T10:15:00.000
0
0
0
0
python,mysql,xml,r
4,214,098
4
false
0
0
We do something like this at work sometimes, but not in python. In that case, each usage requires a custom program to be written. We only have a SAX parser available. Using an XML decoder to get a dictionary/hash in a single step would help a lot. At the very least you'd have to tell it which tags map to which tables and fields; no pre-existing lib can know that...
3
3
0
Is there a generic/automatic way in R or in python to parse xml files with its nodes and attributes, automatically generate mysql tables for storing that information and then populate those tables.
Parsing an xml file and storing it into a database
0
0
1
2,008
4,213,696
2010-11-18T10:15:00.000
1
0
0
0
python,mysql,xml,r
4,214,476
4
false
0
0
There's the XML package for reading XML into R, and the RMySQL package for writing data from R into MySQL. Between the two there's a lot of work. XML surpasses the scope of an RDBMS like MySQL, so something that could handle any XML thrown at it would be either ridiculously complex or trivially useless.
3
3
0
Is there a generic/automatic way in R or in Python to parse XML files with their nodes and attributes, automatically generate MySQL tables for storing that information, and then populate those tables?
Parsing an xml file and storing it into a database
0.049958
0
1
2,008
4,213,696
2010-11-18T10:15:00.000
4
0
0
0
python,mysql,xml,r
4,213,749
4
false
0
0
They're three separate operations: parsing, table creation, and data population. You can do all three with Python, but there's nothing "automatic" about it, and I don't think it's easy. XML is hierarchical while SQL is relational and set-based, so it isn't always possible to derive a good relational schema for every single XML stream you can encounter.
3
3
0
Is there a generic/automatic way in R or in Python to parse XML files with their nodes and attributes, automatically generate MySQL tables for storing that information, and then populate those tables?
Parsing an xml file and storing it into a database
0.197375
0
1
2,008
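As the answers to this question note, nothing fully generic exists, but a one-off mapping for a known, flat XML layout is short. Below is a minimal sketch in Python using xml.etree.ElementTree for parsing and sqlite3 standing in for MySQL; the XML layout, table name, and column names are all assumptions made for illustration, and a real version would need to validate identifiers before interpolating them into SQL:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical flat XML: one <record> per row, child tags become columns.
XML = """
<records>
  <record><name>alice</name><age>30</age></record>
  <record><name>bob</name><age>25</age></record>
</records>
"""

def load_flat_xml(xml_text, table="records"):
    root = ET.fromstring(xml_text)
    # One dict per <record>, keyed by child tag name.
    rows = [{child.tag: child.text for child in rec} for rec in root]
    columns = sorted({key for row in rows for key in row})
    conn = sqlite3.connect(":memory:")  # stand-in for a MySQL connection
    # Note: interpolating table/column names is only safe here because
    # they come from our own sample; real code must sanitize them.
    conn.execute("CREATE TABLE %s (%s)" % (table, ", ".join(columns)))
    for row in rows:
        conn.execute(
            "INSERT INTO %s (%s) VALUES (%s)"
            % (table, ", ".join(columns), ", ".join("?" for _ in columns)),
            [row.get(c) for c in columns],
        )
    conn.commit()
    return conn

conn = load_flat_xml(XML)
print(conn.execute("SELECT name, age FROM records ORDER BY name").fetchall())
# -> [('alice', '30'), ('bob', '25')]
```

Swapping sqlite3 for a MySQL driver mostly means changing the connect call; the schema-inference step is the part that cannot be made generic for arbitrary, deeply nested XML.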
4,214,868
2010-11-18T12:44:00.000
13
0
0
0
python,machine-learning,svm,libsvm
4,215,056
8
true
0
0
LIBSVM reads the data from a tuple containing two lists: the first list contains the class labels and the second list contains the input data. You also need to specify which kernel you want to use by creating an svm_parameter. For example:

>> from libsvm import *
>> # create a simple dataset with two possible classes
>> prob = svm_problem([1, -1], [[1, 0, 1], [-1, 0, -1]])
>> param = svm_parameter(kernel_type = LINEAR, C = 10)
>> # train the model
>> m = svm_model(prob, param)
>> # test the model
>> m.predict([1, 1, 1])
2
25
1
I am in dire need of a classification task example using LibSVM in Python. I don't know what the input should look like, or which function is responsible for training and which one for testing. Thanks
An example using python bindings for SVM library, LIBSVM
1.2
0
0
50,030
4,214,868
2010-11-18T12:44:00.000
3
0
0
0
python,machine-learning,svm,libsvm
8,302,624
8
false
0
0
Adding to @shinNoNoir: param.kernel_type represents the type of kernel function you want to use:

0: linear
1: polynomial
2: RBF
3: sigmoid

Also keep in mind that in svm_problem(y, x), y is the list of class labels and x is the list of class instances, and x and y can only be lists, tuples or dictionaries (no numpy arrays).
2
25
1
I am in dire need of a classification task example using LibSVM in Python. I don't know what the input should look like, or which function is responsible for training and which one for testing. Thanks
An example using python bindings for SVM library, LIBSVM
0.07486
0
0
50,030
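Both answers to this question revolve around the (y, x) input format that libsvm's svm_problem expects. The sketch below prepares data in exactly that shape without importing libsvm itself, so it stands on its own; the function names are made up for illustration:

```python
# Build the (y, x) pair that libsvm's svm_problem(y, x) expects.
# This only illustrates the shapes; libsvm itself is not imported.

def to_libsvm_format(dataset):
    """dataset: list of (label, feature_vector) pairs."""
    y = [label for label, _ in dataset]              # class labels
    x = [list(features) for _, features in dataset]  # dense instances
    return y, x

def to_sparse(vector):
    """Dense list -> {1-based index: value} dict, dropping zeros
    (the binding also accepts dictionaries for sparse data)."""
    return {i + 1: v for i, v in enumerate(vector) if v != 0}

data = [(1, [1, 0, 1]), (-1, [-1, 0, -1])]
y, x = to_libsvm_format(data)
print(y)                 # [1, -1]
print(to_sparse(x[0]))   # {1: 1, 3: 1}
```

With libsvm installed, y and x (dense or sparse) would then be passed straight to svm_problem before training.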
4,215,164
2010-11-18T13:19:00.000
0
0
1
0
python,macos
4,215,219
5
false
1
0
Python development on a Mac will be similar to Python development on other *NIX-based operating systems, which in some ways can be easier than on Windows. As long as none of the modules you are using are Windows-only, you should have no problem!
2
3
0
I've been developing Python web apps using Django and App Engine. I'm planning on buying a MacBook to develop iPhone apps. I wonder if I will be able to develop my Python apps without too many changes on a Mac, or if keeping them on a PC would be better? Thanks
Python development under Mac
0
0
0
836
4,215,164
2010-11-18T13:19:00.000
1
0
1
0
python,macos
4,215,195
5
false
1
0
Developing python for app-engine on a mac works like a charm.
2
3
0
I've been developing Python web apps using Django and App Engine. I'm planning on buying a MacBook to develop iPhone apps. I wonder if I will be able to develop my Python apps without too many changes on a Mac, or if keeping them on a PC would be better? Thanks
Python development under Mac
0.039979
0
0
836
4,215,472
2010-11-18T13:54:00.000
1
0
1
0
python,max
4,215,531
5
false
0
0
A fairly efficient solution is a variation of quicksort (essentially quickselect) where recursion is limited to the partition to the right of the pivot until the pivot's position exceeds the number of elements required (with a few extra conditions to deal with border cases, of course). The standard library already has heapq.nlargest for this, as pointed out by others here.
1
38
0
Is there some function which would return the N highest elements from some list? I.e. if max(l) returns the single highest element, something like max(l, count=10) would return a list of the 10 highest numbers (or fewer if l is smaller). Or what would be an efficient, easy way to get these? (Except the obvious canonical implementation; also, nothing which involves sorting the whole list first, because that would be inefficient compared to the canonical solution.)
Python: take max N elements from some list
0.039979
0
0
38,790
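The heapq.nlargest approach the answer mentions can be demonstrated in a few lines; it keeps a heap of only N items, so it runs in roughly O(len(data) * log N) instead of sorting the whole list:

```python
import heapq
import random

data = [random.randrange(1000) for _ in range(100)]

# heapq.nlargest returns the N largest items in descending order,
# equivalent to sorted(data, reverse=True)[:10] but cheaper for small N.
top10 = heapq.nlargest(10, data)

assert top10 == sorted(data, reverse=True)[:10]
print(top10)
```

heapq.nsmallest works the same way for the other end, and both accept a key function like sorted does.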
4,215,816
2010-11-18T14:31:00.000
1
0
0
0
jquery,python,twitter,twisted
4,217,142
2
false
1
0
Use an AJAX call with setInterval; in the success callback of jQuery's $.ajax, append any new content to the appropriate div.
1
2
0
I want to add page auto-update to my web site. It's written in Python and jQuery, so I want to try Twisted (or another Comet solution). The problem is that I don't know exactly what I need and which docs I have to read.
Page auto-update like in Twitter
0.099668
0
0
753
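The jQuery polling pattern in the answer pairs with a simple server-side contract: the client sends the id of the newest item it already has, and the server returns only what's newer. A minimal sketch of that contract in Python, with all names made up for illustration:

```python
# Server-side half of the polling approach: return only items the
# client hasn't seen yet.  'posts' stands in for a real data store.

posts = [
    {"id": 1, "text": "first post"},
    {"id": 2, "text": "second post"},
    {"id": 3, "text": "third post"},
]

def updates_since(last_id):
    """Return the posts with an id greater than last_id."""
    return [p for p in posts if p["id"] > last_id]

print(updates_since(1))
# -> [{'id': 2, 'text': 'second post'}, {'id': 3, 'text': 'third post'}]
```

The jQuery success callback would append exactly this delta to the page and remember the highest id it received for the next poll.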
4,216,009
2010-11-18T14:49:00.000
7
1
0
1
python,c,licensing,cpu,motherboard
4,216,127
5
false
0
0
Under Linux, you could run "lshw -quiet -xml" and parse its output. You'll find plenty of system information there: CPU id, motherboard id, and much more.
2
5
0
I'm trying to get the CPU serial or motherboard serial using C or Python for licensing purposes. Is it possible? I'm using Linux.
Getting CPU or motherboard serial number?
1
0
0
7,942
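Parsing the lshw XML output the answer suggests is straightforward with the standard library. The sketch below uses a trimmed, hypothetical sample of `lshw -quiet -xml` output; the real layout can differ between lshw versions, so treat the tag names as assumptions:

```python
import xml.etree.ElementTree as ET

# Hypothetical, trimmed sample of `lshw -quiet -xml` output.
LSHW_XML = """
<list>
  <node id="core" class="bus">
    <description>Motherboard</description>
    <serial>MB-1234-5678</serial>
    <node id="cpu" class="processor">
      <description>CPU</description>
      <serial>To Be Filled By O.E.M.</serial>
    </node>
  </node>
</list>
"""

def serials(xml_text):
    """Map each node id to its <serial> text, skipping nodes without one."""
    root = ET.fromstring(xml_text)
    return {
        node.get("id"): node.findtext("serial")
        for node in root.iter("node")
        if node.findtext("serial")
    }

print(serials(LSHW_XML))
# -> {'core': 'MB-1234-5678', 'cpu': 'To Be Filled By O.E.M.'}
```

In practice you would obtain the XML with something like subprocess.run(["lshw", "-quiet", "-xml"], capture_output=True, text=True).stdout, which usually requires root; note the "To Be Filled By O.E.M." placeholder above, a value many real boards actually report, which is one more reason serials make weak license keys.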
4,216,009
2010-11-18T14:49:00.000
0
1
0
1
python,c,licensing,cpu,motherboard
4,223,022
5
false
0
0
CPUs no longer carry a serial number, and it's been like that for a while now. As for the CPUID: it's the same for every chip of a given CPU model, so it doesn't help with licensing either.
2
5
0
I'm trying to get the CPU serial or motherboard serial using C or Python for licensing purposes. Is it possible? I'm using Linux.
Getting CPU or motherboard serial number?
0
0
0
7,942
4,216,163
2010-11-18T15:03:00.000
1
0
0
0
python,linux,authentication,python-3.x,pam
8,396,713
2
false
0
0
@che: the python pam module you linked to is not Python 3 compatible. There are three PAM modules that I'm aware of (pam, pypam, spypam), and none are Python 3 compatible. I've modified Chris AtLee's original pam package to work with Python 3; I'm cleaning it up a bit before feeding it back to him.
1
2
0
I wrote a program that needs to authenticate users using their Linux usernames and passwords. I think it should be done with PAM. I have tried searching Google for a PAM module for Python 3, but I did not find any. Is there a ready-to-use PAM library, or should I try to make my own? Does PAM usage carry any special security risks that should be taken into account? I know that I can authenticate users with Python 3's spwd module, but I don't want to use that, because then I would have to run my program with root access.
Authenticate user in linux with python 3
0.099668
0
1
2,466
4,216,430
2010-11-18T15:25:00.000
1
0
0
0
python,open-source,video,content-management
4,216,611
6
false
1
0
You might want to take a look at Zencoder for video encoding too.
1
5
0
Is anyone aware of an open-source CMS written in Python with which I could make a site like YouTube?
Python CMS to create a video site like youtube?
0.033321
0
0
12,137
4,216,988
2010-11-18T16:16:00.000
0
0
1
0
c++,python,visual-studio-2010,python-c-api,python-embedding
4,277,222
2
true
0
1
Well, I finally found out what went wrong. I did compile my python27_d.dll with the same VC10 as my program itself, but my program is normally compiled as a 64-bit executable, and I simply forgot to compile the DLL for x64 too. I didn't think this would lead to such annoying behaviour, as I believed I would get a linker error in that case.
2
1
0
I am trying to embed some Python code in a C++ application I am developing with MS Visual Studio C++ 2010. But when I run the program, it exits with code 0x01 when I call Py_Initialize(). I don't know how to find out what went wrong. The help file says Py_Initialize can't return an error value; it only fails fatally. But why did it fail? I am using a self-compiled python27_d.dll, which I created with the MSVS project files from the python.org source downloads.
Tried to embed python in a visual studio 2010 c++ file, exits with code 1
1.2
0
0
1,264
4,216,988
2010-11-18T16:16:00.000
0
0
1
0
c++,python,visual-studio-2010,python-c-api,python-embedding
4,217,625
2
false
0
1
Is there a simple 'hello world' type example of the Py_Initialize code in the Python SDK you can start with? That will at least tell you if you have the compiler environment set up correctly, or if the error is in your usage.
2
1
0
I am trying to embed some Python code in a C++ application I am developing with MS Visual Studio C++ 2010. But when I run the program, it exits with code 0x01 when I call Py_Initialize(). I don't know how to find out what went wrong. The help file says Py_Initialize can't return an error value; it only fails fatally. But why did it fail? I am using a self-compiled python27_d.dll, which I created with the MSVS project files from the python.org source downloads.
Tried to embed python in a visual studio 2010 c++ file, exits with code 1
0
0
0
1,264