Dataset columns (type, observed min .. max):

Q_Id: int64, 337 .. 49.3M
CreationDate: string, lengths 23 .. 23
Users Score: int64, -42 .. 1.15k
Other: int64, 0 .. 1
Python Basics and Environment: int64, 0 .. 1
System Administration and DevOps: int64, 0 .. 1
Tags: string, lengths 6 .. 105
A_Id: int64, 518 .. 72.5M
AnswerCount: int64, 1 .. 64
is_accepted: bool, 2 classes
Web Development: int64, 0 .. 1
GUI and Desktop Applications: int64, 0 .. 1
Answer: string, lengths 6 .. 11.6k
Available Count: int64, 1 .. 31
Q_Score: int64, 0 .. 6.79k
Data Science and Machine Learning: int64, 0 .. 1
Question: string, lengths 15 .. 29k
Title: string, lengths 11 .. 150
Score: float64, -1 .. 1.2
Database and SQL: int64, 0 .. 1
Networking and APIs: int64, 0 .. 1
ViewCount: int64, 8 .. 6.81M
1,679,844
2009-11-05T11:07:00.000
0
0
1
0
python,security,qt,pyqt
1,682,414
2
false
0
0
Depending on how your QPrinter deals with a file that already exists, you could use QTemporaryFile to create a file, then close the file and keep the reference to the QTemporaryFile object around until you are done with it. (This will also clean up the file for you when you destroy the object.)
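The same pattern in plain Python is the standard tempfile module, which is also the usual answer to the tmpnam warning quoted in the question: create the file securely, keep the object around, and hand only the path to the printing API. A minimal sketch (QPrinter itself omitted; setOutputFileName is where the path would go):

```python
import os
import tempfile

# Create the file securely and keep a handle so the name can't be hijacked;
# delete=False means we control cleanup explicitly.
tmp = tempfile.NamedTemporaryFile(suffix=".pdf", delete=False)
tmp.close()                      # close so another writer may open the path
print(os.path.exists(tmp.name))  # True

# ... printer.setOutputFileName(tmp.name) would go here ...

os.remove(tmp.name)              # explicit cleanup since delete=False
```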
1
1
0
I considered using tmpnam to set the output file name of a QPrinter. But the Python documentation recommends against using it. os.tmpnam() Return a unique path name that is reasonable for creating a temporary file. ... Applications are responsible for properly creating and managing files created using paths returned by tmpnam(); no automatic cleanup is provided. Warning Use of tmpnam() is vulnerable to symlink attacks; consider using tmpfile() (section File Object Creation) instead. Windows: Microsoft’s implementation of tmpnam() always creates a name in the root directory of the current drive, and that’s generally a poor location for a temp file (depending on privileges, you may not even be able to open a file using this name). Is this really insecure if my application doesn't need any special privileges? What are secure alternatives considering that I can only set a path as the output file name of the QPrinter?
How insecure is / replacement for tmpnam?
0
0
0
1,726
1,680,194
2009-11-05T12:17:00.000
1
0
1
0
python,regex,vim
55,582,911
11
false
0
0
There is a tricky way to do this if you have Vim compiled with +rightleft. You set 'allowrevins', which lets you hit Ctrl+_ in insert mode to start Reverse Insert mode. It was originally made for inserting bidirectional scripts. Type your desired word in Insert mode, or move your cursor to the end of an already typed word. Hit Ctrl+_ and then pick a completion (i_Ctrl-x) method which is the most likely not to return any results for your word. Using Ctrl+e to cancel in-place completion does not seem to work in this case. I.e. for an unsyntactic text file you can hit in insert mode Ctrl+x Ctrl+d which is guaranteed to fail to find any macro/function names in the current file (See :h i_CTRL-X_CTRL-D and :h complete for more information). And voila! Completion lookup in reverse mode makes the looked up word reverse. Notice that the cursor will move to the beginning of that word (it's a reversed direction of writing, remember?) You should then hit Ctrl+_ again to get back to regular insert mode and keyboard layout and go on with editing. Tip: You can set 'complete' exclusively (for the buffer, at least) to a completion option that is guaranteed to return no result. Just go over the options in :h 'complete'. This will make the easy i_Ctrl-N / i_Ctrl-P bindings available for a handy word reversal session. You can of course further automate this with a macro, a function or a binding. Note: Setting/resetting 'paste' and 'compatible' can set/reset 'allowrevins'. See :h allowrevins.
3
19
0
How can I reverse a word in Vim? Preferably with a regex or normal-mode commands, but other methods are welcome too: word => drow. Thanks for your help! PS: I'm on Windows XP. Python support is built into my Vim, but not Perl.
Reverse a word in Vim
0.01818
0
0
7,495
1,680,194
2009-11-05T12:17:00.000
0
0
1
0
python,regex,vim
1,680,392
11
false
0
0
If you have some time on your hands, you can bubble your way there by iteratively transposing characters (xp)...
3
19
0
How can I reverse a word in Vim? Preferably with a regex or normal-mode commands, but other methods are welcome too: word => drow. Thanks for your help! PS: I'm on Windows XP. Python support is built into my Vim, but not Perl.
Reverse a word in Vim
0
0
0
7,495
1,680,194
2009-11-05T12:17:00.000
0
0
1
0
python,regex,vim
71,982,856
11
false
0
0
You can use revins mode to do it: at the beginning, type :set revins. From then on, every letter you type will be inserted in reverse order, until you type :set norevins to turn it off. That is, while revins is set, typing word will output drow. To reverse an existing word after revins mode is set, with the cursor at the beginning of the word, type: dwi<C-r>"<ESC>. Explanation: dw deletes a word, i enters insert mode, <C-r>" pastes the last deleted or yanked text in insert mode, and <ESC> exits insert mode. Remember to :set norevins at the end!
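Since the question notes that the asker's Vim has Python support, the reversal can also be done in pure Python (a sketch; the regex swaps each word in place while leaving spacing and punctuation untouched):

```python
import re

# Reverse every word in a string: \w+ matches each word, and the
# replacement function returns the matched text reversed via slicing.
def reverse_words(text):
    return re.sub(r"\w+", lambda m: m.group()[::-1], text)

print(reverse_words("word"))         # drow
print(reverse_words("hello world"))  # olleh dlrow
```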
3
19
0
How can I reverse a word in Vim? Preferably with a regex or normal-mode commands, but other methods are welcome too: word => drow. Thanks for your help! PS: I'm on Windows XP. Python support is built into my Vim, but not Perl.
Reverse a word in Vim
0
0
0
7,495
1,680,311
2009-11-05T12:39:00.000
0
0
0
1
python,linux,ubuntu,gnome
1,680,360
4
false
0
1
This normally happens automatically when calling the gtk.main() function
1
7
0
In Gnome, whenever an application is started, the mouse cursor changes from normal to an activity indicator (a spinning wheel type thing on Ubuntu). Is there any way to inform Gnome (through some system call) when the application has finished launching, so that the mouse cursor returns to normal without waiting for the usual timeout of 30 seconds to occur? I have a program in Python using GTK+ that is showing the icon even after launching, so what system call do I make?
GTK+ Startup Notification Icon
0
0
0
840
1,681,143
2009-11-05T15:08:00.000
0
0
1
0
python,datetime,time,timezone,pytz
18,385,592
8
false
0
0
Maybe try: import time print time.tzname #or time.tzname[time.daylight]
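On Python 3.6 and later there is also a cross-platform way to get a concrete tzinfo for the machine's current zone without pytz: ask an aware datetime to convert itself to local time and read the attached tzinfo. A sketch:

```python
from datetime import datetime, timezone

# astimezone() with no argument converts to the system's local zone, so
# the attached tzinfo reflects whatever timezone the OS is configured for.
local_tz = datetime.now(timezone.utc).astimezone().tzinfo
now_local = datetime.now(tz=local_tz)
print(local_tz, now_local.utcoffset())
```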
1
34
0
Is there a cross-platform function in Python (or pytz) that returns a tzinfo object corresponding to the timezone currently set on the computer? Environment variables cannot be counted on, as they are not cross-platform.
how to get tz_info object corresponding to current timezone?
0
0
0
32,220
1,681,208
2009-11-05T15:17:00.000
6
0
1
1
python,path,cross-platform,environment-variables
1,681,256
4
false
0
0
The caveat to be aware of when modifying environment variables in Python is that there is no equivalent of the "export" shell command: changes made via os.environ affect the current process and any child processes it spawns, but there is no way to inject them back into the parent shell that launched Python.
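A short sketch of both halves of the answer: os.pathsep keeps the edit platform-independent, and the change is visible to child processes started afterwards, never to the parent shell.

```python
import os
import subprocess
import sys

# Prepend a directory to PATH in a platform-independent way:
# os.pathsep is ":" on POSIX and ";" on Windows.
extra = os.path.abspath("scripts")
os.environ["PATH"] = extra + os.pathsep + os.environ.get("PATH", "")

# The modified PATH is inherited by child processes started afterwards,
# but it never propagates back to the shell that launched this script.
child = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ['PATH'].split(os.pathsep)[0])"],
    capture_output=True, text=True,
)
print(child.stdout.strip() == extra)  # True
```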
1
102
0
Is there a way to modify the PATH environment variable in a platform-independent way using Python? Something similar to os.path.join()?
Python: Platform independent way to modify PATH environment variable
1
0
0
88,027
1,682,831
2009-11-05T19:03:00.000
1
1
1
0
python,embedding
1,682,872
1
true
0
0
The only idea I could think of would be to simply give the user a method (menu option, etc) of executing scripts in the program. Correct. So certain classes, functions, objects, etc. would be exported to python, some script would do something, then said script could be run from the program. Correct. Would such a design be "safe?" Yes. Unless your users are malicious, psychotic sociopaths. They want to make your program do useful things. They bought/downloaded the software in the first place. They think it has value. They trusted your software. Why not trust them? Meaning if a script computes something, will the result be available to the program after execution of the script has finished? Programs like Apache do this all the time. You screw up the configuration ("script"), it crashes. Lesson learned? Don't screw up the configuration.
1
2
0
There are lots of tutorials/instructions on how to embed python in an application, but nothing (that I've seen) on overall design for how the embedded interpreter should be used and interact with the application. The only idea I could think of would be to simply give the user a method (menu option, etc) of executing scripts in the program. So certain classes, functions, objects, etc. would be exported to python, some script would do something, then said script could be run from the program. Would such a design be "safe?" Meaning is it feasible for a malicious/poorly-written script to "damage" the program and/or computer? I assume its possible depending on the functions available to the script (e.g: it could try to overwrite some important files, etc.) How might one prevent such from happening? (e.g: script certification, program design, etc.) This is implementation specific, but is it possible/feasible to have the effects of the script stay after its done running? Meaning if a script computes something, will the result be available to the program after execution of the script has finished? I think it is possible to do if the program were setup to interact with a specific script, but the program will be released before most scripts are written; and such a setup seems like a misuse of embedding a scripting language. Is there actually cases where you would want the result of a scripts execution to be available, or is this a contrived situation that doesn't really occur? Are there any other designs for embedding python? What about using python in a way similar to a plugin architecture? Thanks, Matthew A. Todd
Embedding Python Design
1.2
0
0
282
1,683,831
2009-11-05T21:39:00.000
1
0
0
1
python,windows,temporary-files
1,683,853
4
false
0
0
There shouldn't be such space limitation in Temp. If you wrote the app, I would recommend creating your files in ProgramData...
3
5
0
I have an application written in Python that's writing large amounts of data to the %TEMP% folder. Oddly, every once in a while, it dies, returning IOError: [Errno 28] No space left on device. The drive has plenty of free space, %TEMP% is not its own partition, I'm an administrator, and the system has no quotas. Does Windows artificially put some types of limits on the data in %TEMP%? If not, any ideas on what could be causing this issue? EDIT: Following discussions below, I clarified the question to better explain what's going on.
Limitations of TEMP directory in Windows?
0.049958
0
0
13,119
1,683,831
2009-11-05T21:39:00.000
2
0
0
1
python,windows,temporary-files
1,683,908
4
false
0
0
Using a FAT32 filesystem, I can imagine this happening when: writing a lot of data to one file and reaching the 4GB file size cap, or creating a lot of small files and reaching the 2^16-2 files-per-directory cap. Apart from this, I don't know of any limitations the system can impose on the temp folder, other than the physical partition actually being full. Another limitation is, as Mike Atlas suggested, the GetTempFileName() function, which creates files of type tmpXXXX.tmp. Although you might not be using it directly, verify that the %TEMP% folder does not contain too many of them (2^16). And maybe the obvious: have you tried emptying the %TEMP% folder before running the utility?
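A quick way to test the "too many temp files" theory from the answer (a sketch; the FAT32 per-directory cap mentioned is 2^16 - 2 entries):

```python
import os
import tempfile

# Count directory entries in the temp folder and compare against the
# FAT32 per-directory cap of 2**16 - 2 entries.
temp_dir = tempfile.gettempdir()
entry_count = len(os.listdir(temp_dir))
fat32_cap = 2**16 - 2

print(f"{temp_dir}: {entry_count} entries (FAT32 cap: {fat32_cap})")
```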
3
5
0
I have an application written in Python that's writing large amounts of data to the %TEMP% folder. Oddly, every once in a while, it dies, returning IOError: [Errno 28] No space left on device. The drive has plenty of free space, %TEMP% is not its own partition, I'm an administrator, and the system has no quotas. Does Windows artificially put some types of limits on the data in %TEMP%? If not, any ideas on what could be causing this issue? EDIT: Following discussions below, I clarified the question to better explain what's going on.
Limitations of TEMP directory in Windows?
0.099668
0
0
13,119
1,683,831
2009-11-05T21:39:00.000
0
0
0
1
python,windows,temporary-files
1,683,911
4
false
0
0
There should be no trouble whatsoever with regard to your %TEMP% directory itself. What is the disk quota set to for %TEMP%'s hosting volume? Depending in part on what the apps themselves are doing, one of them may be throwing an error because the disk quota has been reached, which is a pain if the quota is set unreasonably low. If the quota is very low, try raising it, which you can do as Administrator.
3
5
0
I have an application written in Python that's writing large amounts of data to the %TEMP% folder. Oddly, every once in a while, it dies, returning IOError: [Errno 28] No space left on device. The drive has plenty of free space, %TEMP% is not its own partition, I'm an administrator, and the system has no quotas. Does Windows artificially put some types of limits on the data in %TEMP%? If not, any ideas on what could be causing this issue? EDIT: Following discussions below, I clarified the question to better explain what's going on.
Limitations of TEMP directory in Windows?
0
0
0
13,119
1,684,145
2009-11-05T22:37:00.000
-1
1
0
0
c#,.net,python,ruby
1,684,168
4
false
0
1
I have seen ways to call into Ruby / Python from C#. But it's easier the other way around.
1
8
0
I have a lot of APIs/Classes that I have developed in Ruby and Python that I would like to use in my .NET apps. Is it possible to instantiate a Ruby or Python Object in C# and call its methods? It seems that libraries like IronPython do the opposite of this. Meaning, they allow Python to utilize .NET objects, but not the reciprocal of this which is what I am looking for... Am I missing something here? Any ideas?
Call Ruby or Python API in C# .NET
-0.049958
0
0
5,027
1,685,558
2009-11-06T05:15:00.000
1
0
1
0
python
1,685,597
1
true
0
0
You could use r'(Start \d+.*?group=.*?name=.*?number=.*?end=\d+)*'.
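The suggested pattern can be exercised like this (a sketch; this is a slightly adjusted version of the answer's regex, dropping the trailing *, which would also match the empty string, and adding re.DOTALL so the lazy .*? parts can span line breaks inside each record):

```python
import re

# One match per "Start <n> ... end=<digits>" record; the lazy .*? parts
# skip over the random filler between the named fields.
record = re.compile(r"Start\s+\d+.*?group=.*?name=.*?number=.*?end=\d+",
                    re.DOTALL)

data = (
    "Start 0 .... group=g0 name=n0 number=0 end=10\n"
    "Start 1 .... group=g1 name=n1 number=1 end=11\n"
)
matches = record.findall(data)
print(len(matches))  # 2
```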
1
0
0
I want to validate the data below using a regex and Python. Below is a dump of the data, which can be stored in a string variable: Start 0 .......... group=..... name=...... number=.... end=(digits) Start 1 .......... group=..... name=...... number=.... end=(digits) Start 2 .......... group=..... name=...... number=.... end=(digits) Start 3 .......... group=..... name=...... number=.... end=(digits) where ...... is random data that need not be validated ... .. Start 100 .......... group=..... name=...... number=.... end=(digits) Thanks in advance
how to write regex for below format using python
1.2
0
0
92
1,686,192
2009-11-06T08:27:00.000
24
1
0
0
python,performance
1,686,232
8
false
0
0
This totally depends on the use case. For long-running applications (like servers), Java has proven to be extremely fast - even faster than C. This is possible as the JVM might compile hot bytecode to machine code. While doing this, it may take full advantage of each and every feature of the CPU. This typically isn't possible for C, at least as soon as you leave your laboratory environment: imagine distributing a dozen optimized builds to your clients - that simply won't work. But back to your question: it really depends. E.g. if startup time is an issue (which isn't an issue for a server application, for instance) Java might not be the best choice. It may also depend on where your hot code areas are: if they are within native libraries, with some Python code to simply glue them together, you will be able to get C-like performance with Python as well. Typically, scripting languages will tend to be slower though - at least most of the time.
4
12
0
I'm a Java programmer and if there's one thing that I dislike about it, it would be speed. Java seems really slow, but a lot of the Python scripts/programs I have written so far seem really fast. So I was just wondering if Python is faster than Java or C#, and how that compares to C/C++ (which I figure it'll be slower than)?
How fast is Python?
1
0
0
29,026
1,686,192
2009-11-06T08:27:00.000
0
1
0
0
python,performance
7,529,473
8
false
0
0
For Python, speed also depends on the interpreter implementation... I've seen that PyPy is generally faster than CPython.
4
12
0
I'm a Java programmer and if there's one thing that I dislike about it, it would be speed. Java seems really slow, but a lot of the Python scripts/programs I have written so far seem really fast. So I was just wondering if Python is faster than Java or C#, and how that compares to C/C++ (which I figure it'll be slower than)?
How fast is Python?
0
0
0
29,026
1,686,192
2009-11-06T08:27:00.000
2
1
0
0
python,performance
1,686,811
8
false
0
0
It is very hard to make a truly objective and general comparison of the runtime speed of two languages. In comparing any two languages X and Y, one often finds X is faster than Y in some respects while being slower in others. For me, this makes any benchmarks/comparisons available online largely useless. The best way is to test it yourself and see how fast each language is for the job that you are doing. Having said that, there are certain things one should remember when testing languages like Java and Python. Code in these languages can often be sped up significantly by using constructions more suited to the language (e.g. list comprehensions in Python, or using char[] and StringBuilder for certain String operations in Java). Moreover, for Python, using psyco can greatly boost the speed of the program. And then there is the whole issue of using appropriate data structures and keeping an eye on the runtime complexity of your code.
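The "use constructions suited to the language" point is easy to check with the standard timeit module (a sketch, not a rigorous benchmark; absolute numbers vary by machine):

```python
import timeit

# Compare an explicit append loop against the equivalent list comprehension.
loop_stmt = "r = []\nfor i in range(1000):\n    r.append(i * 2)"
comp_stmt = "r = [i * 2 for i in range(1000)]"

loop_t = timeit.timeit(loop_stmt, number=2000)
comp_t = timeit.timeit(comp_stmt, number=2000)
print(f"loop: {loop_t:.3f}s  comprehension: {comp_t:.3f}s")
```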
4
12
0
I'm a Java programmer and if there's one thing that I dislike about it, it would be speed. Java seems really slow, but a lot of the Python scripts/programs I have written so far seem really fast. So I was just wondering if Python is faster than Java or C#, and how that compares to C/C++ (which I figure it'll be slower than)?
How fast is Python?
0.049958
0
0
29,026
1,686,192
2009-11-06T08:27:00.000
0
1
0
0
python,performance
1,686,388
8
false
0
0
It's a question you can't answer properly, because it all depends on when it has to be fast. Java is good for huge servers, but bad when you have to recompile and test your code many times (compilation is sooooooo slow). Python doesn't even have to be compiled to test! In a production environment, it's totally silly to say Java is faster than C... it's like saying C is faster than assembly. Anyway, it's not possible to answer precisely: it all depends on what you want / need.
4
12
0
I'm a Java programmer and if there's one thing that I dislike about it, it would be speed. Java seems really slow, but a lot of the Python scripts/programs I have written so far seem really fast. So I was just wondering if Python is faster than Java or C#, and how that compares to C/C++ (which I figure it'll be slower than)?
How fast is Python?
0
0
0
29,026
1,686,235
2009-11-06T08:37:00.000
2
0
0
0
python,model-view-controller,user-interface
1,738,316
3
true
1
1
The anomaly (from the MVC viewpoint) that makes this design difficult to make MVC-conformant is that you want to display information that, by your conceptualization, "does not live in a model". There is no such thing as "information that does not live in a model" in MVC: its conceptual root is "the models hold all the information, the views just do presentation tasks, the controllers mediate user interaction". It's quite possible that the information you're displaying doesn't "correspond to any business data", but (in an MVC worldview) this does not mean that info is "independent of the model", because there is no such thing -- it just means you need another model class (beyond whatever you're using to hold "business data"), to hold this "non-business" data!-) So when the user "instantiates a widget" (creates a directory-display view, presumably by some user action on some master/coordinating view, possibly on another existing widget if "cloning" is one of the ways to instantiate a widget), the controller's responsible for creating both a widget object and an instance of the "directory-display model class", and establish connection between them (normally by setting on the widget a reference to the relevant model instance), as well as telling the model to do its initial loading of information. When the user action on the widget implies an action on the model, the controller retrieves from the widget involved in the event the reference to the model instance, and sends that instance the appropriate request(s) (it's the model's business to let the view[s] interested in it know about changes to information -- typically by some observer pattern; it's definitely not the controller's business to feed the view with information -- that's really a very different approach from MVC!). 
Is the architectural investment required by MVC worth it, in your case, compared to a rougher approach where the information flows are less pristine and the model that should be there just doesn't exist? I'm a pragmatist and I definitely don't worship at the altar of MVC, but I think in this case the (relatively small) investment in sound, clear architecture may indeed repay itself in abundance. It's a question of envisioning the likely directions of change -- for example, what functionality that you don't need right now (but may well enter the picture soon afterwards) will be trivial to add if you go the proper MVC route, and would be a nightmare of ad-hoc kludges otherwise (or require a somewhat painful refactoring of the whole architecture)? All sorts of likely things, from wanting to display the same directory information in different widgets to having a smarter "directory-information watching" model that can automatically refresh itself when needed (and supply the new info directly to interested views via the usual observer pattern, with no involvement by the controller), are natural and trivially easy with MVC (hey, that's the whole point of MVC, after all, so this is hardly surprising!-), kludgy and fragile with an ad-hoc corner-cutting architecture -- small investment, large potential returns, go for it!
You may notice from the tone of the previous paragraph that I don't worship at the "extreme programming" altar either -- as a pragmatist, I will do a little "design up front" (especially in terms of putting in place a clean, flexible, extensible architecture, from the start, even if it's not indispensable right now) -- exactly because, in my experience, a little forethought and very modest investment, especially on the architectural front, pays back for itself many times over during a project's life (in such varied currencies as scalability, flexibility, extensibility, maintainability, security, and so forth, though not all of them will apply to every project -- e.g., in your case, security and scalability are not really a concern... but the other aspects will likely be!-). Just for generality, let me point out that this pragmatic attitude of mine does not justify excessive energy and time spent on picking an architecture (by definition of the word "excessive";-) -- being familiar with a few fundamental architectural patterns (and MVC is surely one of those) often reduces the initial investment in terms of time and effort -- once you recognize that such a classic architecture will serve you well, as in this case, it's really easy to see how to embody it (e.g., reject the idea of an "MVC without a M"!-), and it doesn't really take much more code compared to the kludgiest, ad-hoccest shortcuts!-)
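The answer's central move, "give the widget its own model", can be sketched in a few lines. DirectoryListModel is an illustrative name (not a wx class): it holds the per-widget base-directory set and notifies observer views on change, so the controller never feeds views directly.

```python
# A minimal sketch of the "non-business model" the answer describes.
class DirectoryListModel:
    def __init__(self, base_dirs):
        self._base_dirs = set(base_dirs)
        self._observers = []

    def subscribe(self, callback):
        # A view registers its redraw callback here (observer pattern).
        self._observers.append(callback)

    def add_base_dir(self, path):
        self._base_dirs.add(path)
        self._notify()

    def base_dirs(self):
        return sorted(self._base_dirs)

    def _notify(self):
        for callback in self._observers:
            callback(self.base_dirs())


# The controller wires a widget to its model instance; here a plain list
# stands in for the view's redraw callback.
redraws = []
model = DirectoryListModel(["/home/user"])
model.subscribe(redraws.append)
model.add_base_dir("/tmp")
print(redraws)  # [['/home/user', '/tmp']]
```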
1
4
0
I have a widget that displays a filesystem hierarchy for convenient browsing (basically a tree control and some associated toolbar buttons, such as "refresh"). Each of these widgets has a set of base directories for it to display (recursively). Assume that the user may instantiate as many of these widgets as they find convenient. Note that these widgets don't correspond to any business data -- they're independent of the model. Where should the (per-widget) set of base directories live in good MVC design? When the refresh button is pushed, an event is trapped by the controller, and the event contains the corresponding filesystem-browser widget. The controller determines the base directories for that particular widget (somehow), walks that directory path, and passes the widget some data to render. Two places I can think to store the base directories: The easy solution: make the base directories an instance variable on the widget and have the controller manipulate it to retain state for that widget. There's a conceptual issue with this, though: since the widget never looks at that instance variable, you're just projecting one of the responsibilities of the controller onto the widget. The more (technically, maybe conceptually) complex solution: Keep a {widget: base_directory_set} mapping in the controller with weak key references. The second way allows for easy expansion of controller responsibilities later on, as putting things in the controller tends to do -- for example, if I decided I later wanted to determine the set of all the base directories for all those widgets. There may be some piece of MVC knowledge I'm missing that solves this kind of problem well.
wxPython: How should I organize per-widget data in the controller?
1.2
0
0
519
1,686,768
2009-11-06T10:36:00.000
1
0
1
0
python,pylons
1,686,843
2
false
0
0
Because threading.local is new in Python 2.4. The StackedObjectProxy uses threading.local if it can.
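A sketch of the threading.local mechanism the answer refers to: each thread sees its own independent copy of the attributes it sets, which is exactly the per-request isolation Pylons needs.

```python
import threading

# Attributes set on a threading.local instance are invisible to other
# threads; each worker gets its own private "value".
ctx = threading.local()
seen = {}

def worker(name):
    ctx.value = name       # stored per-thread
    seen[name] = ctx.value

threads = [threading.Thread(target=worker, args=(n,)) for n in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(seen)
```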
1
2
0
It seems like threading.local is more straightforward and more robust.
Why does Pylons use StackedObjectProxies instead of threading.local?
0.099668
0
0
731
1,687,357
2009-11-06T12:45:00.000
-4
0
1
1
python,macos,python-3.x
67,923,827
23
false
0
0
You can do it from the Terminal too. It's quite easy: you just need to type python3 --version.
7
140
0
I wanted to update my Python 2.6.1 to 3.x on Mac, but I was wondering if it's possible to do it using the terminal, or do I have to download the installer from the Python website? The reason I am asking this question is that the installer is not updating my terminal Python version.
Updating Python on Mac
-1
0
0
519,726
1,687,357
2009-11-06T12:45:00.000
5
0
1
1
python,macos,python-3.x
1,687,431
23
false
0
0
I believe Python 3 can coexist with Python 2. Try invoking it using "python3" or "python3.1". If it fails, you might need to uninstall 2.6 before installing 3.1.
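The coexistence the answer describes can be confirmed from Python itself (a sketch; which interpreters are found obviously depends on the machine):

```python
import shutil
import sys

# Report the interpreter running this script, then look up which
# side-by-side installs are reachable on PATH (results vary by machine).
print("running:", sys.version.split()[0])
for name in ("python", "python2", "python3"):
    print(f"{name:8} -> {shutil.which(name)}")
```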
7
140
0
I wanted to update my Python 2.6.1 to 3.x on Mac, but I was wondering if it's possible to do it using the terminal, or do I have to download the installer from the Python website? The reason I am asking this question is that the installer is not updating my terminal Python version.
Updating Python on Mac
0.043451
0
0
519,726
1,687,357
2009-11-06T12:45:00.000
3
0
1
1
python,macos,python-3.x
1,688,349
23
false
0
0
I personally wouldn't mess around with OSX's python like they said. My personal preference for stuff like this is just using MacPorts and installing the versions I want via the command line. MacPorts puts everything into a separate directory (under /opt I believe), so it doesn't override or directly interfere with the regular system. It has all the usual features of any package management utility, if you are familiar with Linux distros. I would also suggest installing python_select via MacPorts and using that to select which python you want "active" (it will change the symlinks to point to the version you want). So at any time you can switch back to the Apple-maintained version of python that came with OSX, or you can switch to any of the ones installed via MacPorts.
7
140
0
I wanted to update my Python 2.6.1 to 3.x on Mac, but I was wondering if it's possible to do it using the terminal, or do I have to download the installer from the Python website? The reason I am asking this question is that the installer is not updating my terminal Python version.
Updating Python on Mac
0.026081
0
0
519,726
1,687,357
2009-11-06T12:45:00.000
0
0
1
1
python,macos,python-3.x
47,333,658
23
false
0
0
First, install Homebrew (the missing package manager for macOS) if you haven't: type this in your terminal: /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" Now you can update your Python to Python 3 with this command: brew install python3 && cp /usr/local/bin/python3 /usr/local/bin/python Python 2 and Python 3 can coexist, so to open Python 3, type python3 instead of python. That's the easiest and the best way.
7
140
0
I wanted to update my Python 2.6.1 to 3.x on Mac, but I was wondering if it's possible to do it using the terminal, or do I have to download the installer from the Python website? The reason I am asking this question is that the installer is not updating my terminal Python version.
Updating Python on Mac
0
0
0
519,726
1,687,357
2009-11-06T12:45:00.000
2
0
1
1
python,macos,python-3.x
66,423,786
23
false
0
0
Sometimes when you install Python from the install wizard on Mac, it will not be linked in your bash profile. Since you are using Homebrew, just run brew install python This installs the latest version of Python; then, to link it, run brew link python@3.9
7
140
0
I wanted to update my Python 2.6.1 to 3.x on Mac, but I was wondering if it's possible to do it using the terminal, or do I have to download the installer from the Python website? The reason I am asking this question is that the installer is not updating my terminal Python version.
Updating Python on Mac
0.01739
0
0
519,726
1,687,357
2009-11-06T12:45:00.000
1
0
1
1
python,macos,python-3.x
71,035,575
23
false
0
0
Install Homebrew: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" Install Python 3: brew install python3 && cp /usr/local/bin/python3 /usr/local/bin/python Update python to the latest version: ln -s -f /usr/local/bin/python[your-latest-version-just-installed] /usr/local/bin/python
7
140
0
I wanted to update my Python 2.6.1 to 3.x on Mac, but I was wondering if it's possible to do it using the terminal, or do I have to download the installer from the Python website? The reason I am asking this question is that the installer is not updating my terminal Python version.
Updating Python on Mac
0.008695
0
0
519,726
1,687,357
2009-11-06T12:45:00.000
1
0
1
1
python,macos,python-3.x
61,615,371
23
false
0
0
If it were me, I would just leave it as it is. Use python3 and pip3 to run your files since python and python3 can coexist. brew install python3 && cp /usr/local/bin/python3 /usr/local/bin/python You can use the above line but it might have unintended consequences.
7
140
0
I wanted to update my Python 2.6.1 to 3.x on Mac, but I was wondering if it's possible to do it using the terminal, or do I have to download the installer from the Python website? The reason I am asking this question is that the installer is not updating my terminal Python version.
Updating Python on Mac
0.008695
0
0
519,726
1,687,510
2009-11-06T13:10:00.000
9
0
0
0
python,nlp,nltk,chunking
1,687,712
2
true
0
0
You can get out of the box named entity chunking with the nltk.ne_chunk() method. It takes a list of POS tagged tuples: nltk.ne_chunk([('Barack', 'NNP'), ('Obama', 'NNP'), ('lives', 'NNS'), ('in', 'IN'), ('Washington', 'NNP')]) results in: Tree('S', [Tree('PERSON', [('Barack', 'NNP')]), Tree('ORGANIZATION', [('Obama', 'NNP')]), ('lives', 'NNS'), ('in', 'IN'), Tree('GPE', [('Washington', 'NNP')])]) It identifies Barack as a person, but Obama as an organization. So, not perfect.
1
9
1
I am using their default POS tagging and default tokenization, and it seems sufficient. I'd like their default chunker too. I am reading the NLTK toolkit book, but it does not seem like they have a default chunker?
What is the default chunker for NLTK toolkit in Python?
1.2
0
0
4,560
1,688,845
2009-11-06T16:46:00.000
4
0
0
0
python,import
1,688,883
1
true
0
0
You have to build the extension from source yourself. It was valiant of you to try and "reverse the bytes", but only certain sections of the ELF file have word-oriented (as opposed to byte-oriented) data. Furthermore, it's unlikely that the dll in question was compiled for your system's CPU architecture.
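The EI_DATA byte the loader complained about can be inspected directly (a sketch; a real .so would be read with open(path, "rb").read(16)):

```python
# Byte 5 of the ELF header (EI_DATA) declares the byte order:
# 1 means little-endian, 2 means big-endian.
def elf_endianness(header: bytes) -> str:
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    return {1: "little", 2: "big"}.get(header[5], "unknown")

# Fabricated 16-byte header for illustration only.
big_endian_header = b"\x7fELF\x02\x02" + b"\x00" * 10
print(elf_endianness(big_endian_header))  # big
```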
1
1
0
I'm attempting to use a C++ extension for Python called PySndObj, and am getting an error I have never seen and cannot find anything about on the web :( ImportError: /home/nhnifong/SndObj-2.6.6/python/_sndobj.so: ELF file data encoding not little-endian I know that probably means the byte order is backwards, so I tried writing a little script that read the file 2 bytes at a time and switched their order before writing them back out. It didn't work. Anyone know what to do?
Why am I getting a little-endian error when importing .so file in python
1.2
0
0
2,256
1,689,015
2009-11-06T17:16:00.000
36
0
0
1
python,windows,shell
1,689,269
9
false
0
0
If you name your files with the ".pyw" extension, then windows will execute them with the pythonw.exe interpreter. This will not open the dos console for running your script.
3
71
0
Is there any way to run a Python script in Windows XP without a command shell momentarily appearing? I often need to automate WordPerfect (for work) with Python, and even if my script has no output, if I execute it from without WP an empty shell still pops up for a second before disappearing. Is there any way to prevent this? Some kind of output redirection perhaps?
Run Python script without Windows console appearing
1
0
0
105,900
1,689,015
2009-11-06T17:16:00.000
-2
0
0
1
python,windows,shell
62,431,147
9
false
0
0
I had the same problem. I tried many options, and all of them failed. But I tried this method, and it magically worked! I had a Python file (mod.py) in a folder, which I used to run from the Command Prompt. Whenever I closed the cmd window, the GUI was automatically closed as well (SAD). So now I run it as follows: C:\....>pythonw mod.py Don't forget the "w" in pythonw; it's important.
3
71
0
Is there any way to run a Python script in Windows XP without a command shell momentarily appearing? I often need to automate WordPerfect (for work) with Python, and even if my script has no output, if I execute it from without WP an empty shell still pops up for a second before disappearing. Is there any way to prevent this? Some kind of output redirection perhaps?
Run Python script without Windows console appearing
-0.044415
0
0
105,900
1,689,015
2009-11-06T17:16:00.000
0
0
0
1
python,windows,shell
69,071,490
9
false
0
0
Turn off Windows Defender, then install the PyInstaller package using pip install pyinstaller. After installing, open cmd and type pyinstaller --onefile --noconsole filename.py
3
71
0
Is there any way to run a Python script in Windows XP without a command shell momentarily appearing? I often need to automate WordPerfect (for work) with Python, and even if my script has no output, if I execute it from without WP an empty shell still pops up for a second before disappearing. Is there any way to prevent this? Some kind of output redirection perhaps?
Run Python script without Windows console appearing
0
0
0
105,900
1,689,031
2009-11-06T17:18:00.000
1
0
0
0
python,mysql,django,overhead
1,689,143
4
false
1
0
There is always overhead in database calls. In your case the overhead is not that bad, because the application and database are on the same machine, so there is no network latency, but there is still a significant cost. When you make a request to the database it has to prepare to service that request by doing a number of things, including: Allocating resources (memory buffers, temp tables, etc.) to the database server connection/thread that will handle the request; De-serializing the SQL and parameters (this is necessary even on one machine, as this is an inter-process request, unless you are using an embedded database); Checking whether the query exists in the query cache and, if not, optimising it and putting it in the cache. Note also that if your queries are not parametrised (that is, the values are not separated from the SQL) this may result in cache misses for statements that should be the same, meaning that each request results in the query being analysed and optimised each time; Processing the query; Preparing and returning the results to the client. This is just an overview of the kinds of things most database management systems do to process an SQL request. You incur this overhead 500 times even if the query itself runs relatively quickly. Bottom line: database interactions, even with a local database, are not as cheap as you might expect.
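To make the cumulative cost of many small calls concrete, here is a rough sketch using the stdlib sqlite3 module in place of a real MySQL round trip (so the absolute numbers are only illustrative, and the table name is invented), comparing 500 separate queries against one batched query over the same data:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE photos (id INTEGER PRIMARY KEY, user_id INTEGER)")
conn.executemany("INSERT INTO photos (user_id) VALUES (?)",
                 [(i % 50,) for i in range(5000)])

# 500 separate round trips: pay the per-call overhead 500 times
start = time.perf_counter()
for i in range(500):
    conn.execute("SELECT id FROM photos WHERE user_id = ?", (i % 50,)).fetchall()
many = time.perf_counter() - start

# One batched query fetching the same data
start = time.perf_counter()
rows = conn.execute("SELECT id, user_id FROM photos").fetchall()
one = time.perf_counter() - start

print(f"500 small queries: {many:.4f}s, one big query: {one:.4f}s")
```

Against a networked MySQL server each of the 500 calls would additionally pay serialization and socket latency, so the gap would only widen.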
4
3
0
So I've been building Django applications for a while now, and drinking the Kool-Aid and all: only using the ORM and never writing custom SQL. The main page of the site (the primary interface where users will spend 80% - 90% of their time) was getting slow once you have a large amount of user-specific content (i.e. photos, friends, other data, etc.) So I popped in the SQL logger (it was pre-installed with Pinax; I just enabled it in the settings) and imagine my surprise when it reported over 500 database queries!! With hand-coded SQL I hardly ever ran more than 50 on the most complex pages. In hindsight it's not altogether surprising, but it seems that this can't be good. ...even if only a dozen or so of the queries take 1ms+ So I'm wondering, how much overhead is there on a round trip to MySQL? Django and MySQL are running on the same server, so there shouldn't be any networking-related overhead.
Overhead of a Round-trip to MySql?
0.049958
1
0
1,620
1,689,031
2009-11-06T17:18:00.000
3
0
0
0
python,mysql,django,overhead
1,689,146
4
false
1
0
The overhead of each query is only part of the picture. The actual round-trip time between your Django and MySQL servers is probably very small, since most of your queries are coming back in less than one millisecond. The bigger problem is that the number of queries issued to your database can quickly overwhelm it. 500 queries for a page is way too much; even 50 seems like a lot to me. If ten users view complicated pages, you're now up to 5000 queries. The round-trip time to the database server is more of a factor when the caller is accessing the database over a Wide Area Network, where round trips can easily be between 20ms and 100ms. I would definitely look into using some kind of caching.
4
3
0
So I've been building Django applications for a while now, and drinking the Kool-Aid and all: only using the ORM and never writing custom SQL. The main page of the site (the primary interface where users will spend 80% - 90% of their time) was getting slow once you have a large amount of user-specific content (i.e. photos, friends, other data, etc.) So I popped in the SQL logger (it was pre-installed with Pinax; I just enabled it in the settings) and imagine my surprise when it reported over 500 database queries!! With hand-coded SQL I hardly ever ran more than 50 on the most complex pages. In hindsight it's not altogether surprising, but it seems that this can't be good. ...even if only a dozen or so of the queries take 1ms+ So I'm wondering, how much overhead is there on a round trip to MySQL? Django and MySQL are running on the same server, so there shouldn't be any networking-related overhead.
Overhead of a Round-trip to MySql?
0.148885
1
0
1,620
1,689,031
2009-11-06T17:18:00.000
4
0
0
0
python,mysql,django,overhead
1,689,452
4
false
1
0
Just because you are using an ORM doesn't mean that you shouldn't do performance tuning. I had - like you - a home page in one of my applications that had low performance. I saw that I was doing hundreds of queries to display that page. I went looking at my code and realized that with some careful use of select_related() my queries would bring more of the data I needed - I went from hundreds of queries to tens. You can also run a SQL profiler and see if there aren't indices that would help your most common queries - you know, standard database stuff. Caching is also your friend, I would think. If a lot of a page is not changing, do you need to query the database every single time? If all else fails, remember: the ORM is great, and yes - you should try to use it because it is the Django philosophy; but you are not married to it. If you really have a use case where studying and tuning the ORM navigation didn't help, and you are sure that you could do it much better with a standard query: use raw SQL for that case.
4
3
0
So I've been building Django applications for a while now, and drinking the Kool-Aid and all: only using the ORM and never writing custom SQL. The main page of the site (the primary interface where users will spend 80% - 90% of their time) was getting slow once you have a large amount of user-specific content (i.e. photos, friends, other data, etc.) So I popped in the SQL logger (it was pre-installed with Pinax; I just enabled it in the settings) and imagine my surprise when it reported over 500 database queries!! With hand-coded SQL I hardly ever ran more than 50 on the most complex pages. In hindsight it's not altogether surprising, but it seems that this can't be good. ...even if only a dozen or so of the queries take 1ms+ So I'm wondering, how much overhead is there on a round trip to MySQL? Django and MySQL are running on the same server, so there shouldn't be any networking-related overhead.
Overhead of a Round-trip to MySql?
0.197375
1
0
1,620
1,689,031
2009-11-06T17:18:00.000
2
0
0
0
python,mysql,django,overhead
1,689,330
4
true
1
0
There are some ways to reduce the query volume. Use .filter() and .all() to get a bunch of things; pick and choose in the view function (or template via {%if%}). Python can process a batch of rows faster than MySQL. "But I could send too much to the template". True, but you'll execute fewer SQL requests. Measure to see which is better. This is what you used to do when you wrote SQL. It's not wrong -- it doesn't break the ORM -- but it optimizes the underlying DB work and puts the processing into the view function and the template. Avoid query navigation in the template. When you do {{foo.bar.baz.quux}}, SQL is used to get the bar associated with foo, then the baz associated with the bar, then the quux associated with baz. You may be able to reduce this query business with some careful .filter() and Python processing to assemble a useful tuple in the view function. Again, this was something you used to do when you hand-crafted SQL. In this case, you gather larger batches of ORM-managed objects in the view function and do your filtering in Python instead of via a lot of individual ORM requests. This doesn't break the ORM. It changes the usage profile from lots of little queries to a few bigger queries.
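To make the first point concrete, here is a framework-free sketch (the tables and names are invented for illustration, using stdlib sqlite3 rather than Django's ORM) of the "one query per row" pattern versus a single batched JOIN grouped in Python, which is essentially the trade the answer describes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book   (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO book   VALUES (1, 1, 'A'), (2, 1, 'B'), (3, 2, 'C');
""")

# "Query navigation" style: one extra query per author,
# like {{ author.book_set.all }} inside a template loop.
authors = conn.execute("SELECT id, name FROM author").fetchall()
n_plus_1 = {
    name: [title for (title,) in conn.execute(
        "SELECT title FROM book WHERE author_id = ?", (aid,))]
    for aid, name in authors
}

# Batched style: one JOIN, grouped in Python by the view function.
batched = {}
for name, title in conn.execute(
        "SELECT a.name, b.title FROM author a JOIN b"
        "ook b ON b.author_id = a.id".replace("b" "ook", "book")):
    batched.setdefault(name, []).append(title)

assert n_plus_1 == batched  # same result, far fewer queries
```

The batched form trades a tiny bit of Python-side grouping work for eliminating a query per row, which is usually a large win once row counts grow.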
4
3
0
So I've been building Django applications for a while now, and drinking the Kool-Aid and all: only using the ORM and never writing custom SQL. The main page of the site (the primary interface where users will spend 80% - 90% of their time) was getting slow once you have a large amount of user-specific content (i.e. photos, friends, other data, etc.) So I popped in the SQL logger (it was pre-installed with Pinax; I just enabled it in the settings) and imagine my surprise when it reported over 500 database queries!! With hand-coded SQL I hardly ever ran more than 50 on the most complex pages. In hindsight it's not altogether surprising, but it seems that this can't be good. ...even if only a dozen or so of the queries take 1ms+ So I'm wondering, how much overhead is there on a round trip to MySQL? Django and MySQL are running on the same server, so there shouldn't be any networking-related overhead.
Overhead of a Round-trip to MySql?
1.2
1
0
1,620
1,689,570
2009-11-06T18:56:00.000
0
0
0
1
python,security,google-app-engine,automation
1,690,155
5
false
1
0
Can you break up the scraping process into independent chunks that can each finish in the timeframe of an App Engine request (which can run longer than one second, btw)? Then you can just spawn a bunch of tasks using the Task API that, when combined, accomplish the full scrape. Then use the Cron API to spawn off those tasks every N minutes.
4
0
0
I'm writing an aggregation application which scrapes data from a couple of web sources and displays that data with a novel interface. The sites from which I'm scraping update every couple of minutes, and I want to make sure the data on my aggregator is up-to-date. What's the best way to periodically submit fresh data to my App Engine application from an automated script? Constraints: The application is written in Python. The scraping process for each site takes longer than one second, thus I cannot process the data in an App Engine handler. The host on which the updater script would run is shared, so I'd rather not store my password on disk. I'd like to check the code for the application into our codebase. While my associates aren't malicious, they're pranksters, and I'd like to prevent them from inserting fake data into my app. I'm aware that App Engine supports some remote_api thingey, but I'd have to put that entry point behind authentication (see constraint 3) or hide the URL (see constraint 4). Suggestions?
How do I upload data to Google App Engine periodically?
0
0
0
400
1,689,570
2009-11-06T18:56:00.000
3
0
0
1
python,security,google-app-engine,automation
1,690,150
5
true
1
0
Write a Task Queue task or an App Engine cron job to handle this. I'm not sure where you heard that there's a limit of 1 second on any sort of App Engine operations - requests are limited to 30 seconds, and URL fetches have a maximum deadline of 10 seconds.
4
0
0
I'm writing an aggregation application which scrapes data from a couple of web sources and displays that data with a novel interface. The sites from which I'm scraping update every couple of minutes, and I want to make sure the data on my aggregator is up-to-date. What's the best way to periodically submit fresh data to my App Engine application from an automated script? Constraints: The application is written in Python. The scraping process for each site takes longer than one second, thus I cannot process the data in an App Engine handler. The host on which the updater script would run is shared, so I'd rather not store my password on disk. I'd like to check the code for the application into our codebase. While my associates aren't malicious, they're pranksters, and I'd like to prevent them from inserting fake data into my app. I'm aware that App Engine supports some remote_api thingey, but I'd have to put that entry point behind authentication (see constraint 3) or hide the URL (see constraint 4). Suggestions?
How do I upload data to Google App Engine periodically?
1.2
0
0
400
1,689,570
2009-11-06T18:56:00.000
0
0
0
1
python,security,google-app-engine,automation
1,689,805
5
false
1
0
The only way to get data into AppEngine is to call up a Web app of yours (as a Web app) and feed it data through the usual HTTP-ish means, i.e. as parameters to a GET request (for short data) or to a POST (if long or binary). In other words, you'll have to craft your own little dataloader, which you will access as a Web app and which will in turn stash the data into the database behind AppEngine. You'll probably want at least password protection on that app so nobody loads bogus data into your app.
4
0
0
I'm writing an aggregation application which scrapes data from a couple of web sources and displays that data with a novel interface. The sites from which I'm scraping update every couple of minutes, and I want to make sure the data on my aggregator is up-to-date. What's the best way to periodically submit fresh data to my App Engine application from an automated script? Constraints: The application is written in Python. The scraping process for each site takes longer than one second, thus I cannot process the data in an App Engine handler. The host on which the updater script would run is shared, so I'd rather not store my password on disk. I'd like to check the code for the application into our codebase. While my associates aren't malicious, they're pranksters, and I'd like to prevent them from inserting fake data into my app. I'm aware that App Engine supports some remote_api thingey, but I'd have to put that entry point behind authentication (see constraint 3) or hide the URL (see constraint 4). Suggestions?
How do I upload data to Google App Engine periodically?
0
0
0
400
1,689,570
2009-11-06T18:56:00.000
0
0
0
1
python,security,google-app-engine,automation
1,693,701
5
false
1
0
I asked around and some friends came up with two solutions: Upload a file with a shared secret token along with the application, but when committing to the codebase, change the token. Create a small datastore model with one row, a secret token. In both cases the token can be used to authenticate POST requests used to upload new data.
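Either variant boils down to the server checking a shared secret on each upload POST. A minimal, App Engine-agnostic sketch of that check using stdlib hmac (the secret and payload here are made up for illustration):

```python
import hashlib
import hmac

SECRET = b"change-me-before-committing"  # the shared secret token

def sign(payload: bytes) -> str:
    # Sign the request body with HMAC-SHA256 using the shared secret.
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    # compare_digest is constant-time, which avoids leaking the
    # correct signature through timing differences.
    return hmac.compare_digest(sign(payload), signature)

payload = b'{"site": "example", "items": 3}'
sig = sign(payload)
assert verify(payload, sig)
assert not verify(b'{"site": "forged"}', sig)
```

Signing the payload rather than sending the bare token means a prankster who sniffs one request still cannot submit different data with it.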
4
0
0
I'm writing an aggregation application which scrapes data from a couple of web sources and displays that data with a novel interface. The sites from which I'm scraping update every couple of minutes, and I want to make sure the data on my aggregator is up-to-date. What's the best way to periodically submit fresh data to my App Engine application from an automated script? Constraints: The application is written in Python. The scraping process for each site takes longer than one second, thus I cannot process the data in an App Engine handler. The host on which the updater script would run is shared, so I'd rather not store my password on disk. I'd like to check the code for the application into our codebase. While my associates aren't malicious, they're pranksters, and I'd like to prevent them from inserting fake data into my app. I'm aware that App Engine supports some remote_api thingey, but I'd have to put that entry point behind authentication (see constraint 3) or hide the URL (see constraint 4). Suggestions?
How do I upload data to Google App Engine periodically?
0
0
0
400
1,691,179
2009-11-06T23:16:00.000
53
0
0
1
python,tcp,twisted,protocols
1,691,189
4
true
0
0
As long as the two messages were sent on the same TCP connection, order will be maintained. If multiple connections are opened between the same pair of processes, you may be in trouble. Regarding Twisted, or any other asynchronous event system: I expect you'll get the dataReceived messages in the order that bytes are received. However, if you start pushing work off onto deferred calls, you can, erm... "twist" your control flow beyond recognition.
3
43
0
If I send two TCP messages, do I need to handle the case where the latter arrives before the former? Or is it guaranteed to arrive in the order I send it? I assume that this is not a Twisted-specific example, because it should conform to the TCP standard, but if anyone familiar with Twisted could provide a Twisted-specific answer for my own peace of mind, that'd be appreciated :-)
Is TCP Guaranteed to arrive in order?
1.2
0
0
30,177
1,691,179
2009-11-06T23:16:00.000
25
0
0
1
python,tcp,twisted,protocols
1,691,197
4
false
0
0
TCP is connection-oriented and offers its clients in-order delivery. Of course this applies at the connection level: individual connections are independent. You should note that normally we refer to "TCP streams" and "UDP messages". Whatever client library you use (e.g. Twisted), the underlying TCP connection is independent of it. TCP will deliver the "protocol messages" in order to your client. By "protocol message" I refer, of course, to the protocol you use on top of the TCP layer. Further note that I/O operations are asynchronous in nature and very dependent on system load, compounded by network delays and losses, so you cannot rely on message ordering between TCP connections.
3
43
0
If I send two TCP messages, do I need to handle the case where the latter arrives before the former? Or is it guaranteed to arrive in the order I send it? I assume that this is not a Twisted-specific example, because it should conform to the TCP standard, but if anyone familiar with Twisted could provide a Twisted-specific answer for my own peace of mind, that'd be appreciated :-)
Is TCP Guaranteed to arrive in order?
1
0
0
30,177
1,691,179
2009-11-06T23:16:00.000
8
0
0
1
python,tcp,twisted,protocols
1,691,194
4
false
0
0
TCP is a stream, UDP is a message. You're mixing up terms. For TCP it is true that the stream will arrive in the same order as it was sent. There are no distinct messages in TCP; bytes appear as they arrive, and interpreting them as messages is up to you.
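A short sketch of what "interpreting them as messages is up to you" means in practice: a common convention is to length-prefix each message so the receiver can reassemble it from the ordered byte stream (socketpair stands in for a real TCP connection here):

```python
import socket
import struct

def send_msg(sock, payload: bytes) -> None:
    # Prefix each message with a 4-byte big-endian length.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n: int) -> bytes:
    # recv() may return fewer bytes than asked for; loop until done.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

left, right = socket.socketpair()  # stands in for a real TCP connection
send_msg(left, b"first")
send_msg(left, b"second")
# Both messages come back intact and in the order they were sent.
print(recv_msg(right), recv_msg(right))  # b'first' b'second'
left.close()
right.close()
```

The in-order guarantee is what makes this framing possible; over UDP the two length-prefixed messages could arrive swapped, duplicated, or not at all.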
3
43
0
If I send two TCP messages, do I need to handle the case where the latter arrives before the former? Or is it guaranteed to arrive in the order I send it? I assume that this is not a Twisted-specific example, because it should conform to the TCP standard, but if anyone familiar with Twisted could provide a Twisted-specific answer for my own peace of mind, that'd be appreciated :-)
Is TCP Guaranteed to arrive in order?
1
0
0
30,177
1,691,400
2009-11-07T00:21:00.000
1
0
0
0
python,django,symfony1
1,691,549
2
false
1
0
Assuming you are going to be using the components in different places on different pages I would suggest trying {% include "foo.html" %}. One of the (several) downsides of the Django templating language is that there is no concept of macros, so you need to be very consistent in the names of values in the context you pass to your main template so that the included template finds things it's looking for. Alternatively, in the view you can invoke the template engine for each component and save the result in a value passed in the context. Then in the main template simply use the value in the context. I'm not fond of either of these approaches. The more complex your template needs become the more you may want to look at Jinja2. (And, no, I don't buy the Django Party Line about 'template designers' -- never saw one in my life.)
1
1
0
I've been developing in the Symfony framework for quite a time, but now I have to work with Django and I'm having problems with doing something like a "component" or "partial" in Symfony. That said, here is my goal: I have a webpage with lots of small widgets, all these need their logic - located in a "views.py" I guess. But, how do I tell Django to call all this logic and render it all as one webpage?
How to implement Symfony Partials or Components in Django?
0.099668
0
0
715
1,692,082
2009-11-07T05:18:00.000
6
0
0
1
python,progress-bar,subprocess,popen,apt-get
1,692,347
2
false
0
0
Instead of parsing the output of the apt-get, you can use python-apt to install packages. AFAIK it also has modules for reporting the progress.
1
6
0
I'm working on a simple GUI Python script to do some simple tasks on a system. Some of that work involves apt-get install to install some packages. While this is going on, I want to display a progress bar that should update with the progress of the download, using the little percentage shown in apt-get's interface in the terminal. BUT! I can't find a way to get the progress info. Piping or redirecting the output of apt-get just gives static lines that show the "completed download" message for each package, and same for reading via subprocess.Popen() in my script. How can I read from apt-get's output to get the percentages of the file downloaded?
Parsing output of apt-get install for progress bar
1
0
0
3,478
1,692,107
2009-11-07T05:29:00.000
6
0
1
0
python,performance,switch-statement
1,692,119
6
false
0
0
Your concern should be about the readability and maintainability of the code, rather than its efficiency. This applies in most scenarios, and particularly in the one you describe now. The efficiency difference is likely to be negligible (you can easily check it with a small amount of benchmarking code), but 30-40 elif's are a warning sign - perhaps something can be abstracted away and make the code more readable. Describe your case, and perhaps someone can come up with a better design.
4
3
0
I have read a few articles on alternatives to the switch statement in Python, mainly using dicts instead of lots of ifs and elifs. However, none really answers the question: is there one with better performance or efficiency? I have read a few arguments that ifs and elifs have to check each statement and become inefficient with many ifs and elifs. Using dicts gets around that, but you end up having to create new modules to call, which cancels the performance gain anyway. The only difference in the end is readability. Can anyone comment on this; is there really any difference in the long run? Does anyone regularly use the alternative? The only reason I ask is because I am going to end up having 30-40 elifs/ifs and possibly more in the future. Any input is appreciated. Thanks.
Performance difference in alternative switches in Python
1
0
0
1,377
1,692,107
2009-11-07T05:29:00.000
1
0
1
0
python,performance,switch-statement
1,692,248
6
false
0
0
Times when you'd use a switch in many languages you would use a dict in Python. A switch statement, if added to Python (it's been considered), would not be able to give any real performance gain anyways. dicts are used ubiquitously in Python. CPython dicts are an insanely-efficient, robust hashtable implementation. Lookup is O(1), as opposed to traversing an elif chain, which is O(n). (30-40 probably doesn't qualify as big enough for this to matter tons anyways). I am not sure what you mean about creating new modules to call, but using dicts is very scalable and easy. As for actual performance gain, that is impossible to really tackle effectively abstractly. Write your code in the most straightforward and maintainable way (you're using Python forgoshsakes!) and then see if it's too slow. If it is, profile it and find out what places it needs to be sped up to make a real difference.
4
3
0
I have read a few articles on alternatives to the switch statement in Python, mainly using dicts instead of lots of ifs and elifs. However, none really answers the question: is there one with better performance or efficiency? I have read a few arguments that ifs and elifs have to check each statement and become inefficient with many ifs and elifs. Using dicts gets around that, but you end up having to create new modules to call, which cancels the performance gain anyway. The only difference in the end is readability. Can anyone comment on this; is there really any difference in the long run? Does anyone regularly use the alternative? The only reason I ask is because I am going to end up having 30-40 elifs/ifs and possibly more in the future. Any input is appreciated. Thanks.
Performance difference in alternative switches in Python
0.033321
0
0
1,377
1,692,107
2009-11-07T05:29:00.000
0
0
1
0
python,performance,switch-statement
1,692,132
6
false
0
0
I think a dict will gain an advantage over the alternative sequence of if statements as the number of cases goes up, since the key lookup only requires one hash operation. Otherwise, if you only have a few cases, a few if statements are better. A dict is probably a more elegant solution for what you are doing. Either way, the performance difference won't really be noticeable in your case.
4
3
0
I have read a few articles on alternatives to the switch statement in Python, mainly using dicts instead of lots of ifs and elifs. However, none really answers the question: is there one with better performance or efficiency? I have read a few arguments that ifs and elifs have to check each statement and become inefficient with many ifs and elifs. Using dicts gets around that, but you end up having to create new modules to call, which cancels the performance gain anyway. The only difference in the end is readability. Can anyone comment on this; is there really any difference in the long run? Does anyone regularly use the alternative? The only reason I ask is because I am going to end up having 30-40 elifs/ifs and possibly more in the future. Any input is appreciated. Thanks.
Performance difference in alternative switches in Python
0
0
0
1,377
1,692,107
2009-11-07T05:29:00.000
8
0
1
0
python,performance,switch-statement
1,692,135
6
false
0
0
dict's performance is typically going to be unbeatable, because a lookup into a dict is going to be O(1) except in rare and practically never-observed cases (where the keys involve user-coded types with lousy hashing;-). You don't have to "create new modules" as you say, just arbitrary callables, and that creation, which is performed just once to prep the dict, is not particularly costly anyway -- during operation, it's just one lookup and one call, greased-lightning time. As others have suggested, try timeit to experiment with a few micro-benchmarks of the alternatives. My prediction: with a few dozen possibilities in play, as you mention you have, you'll be slapping your forehead about ever considering anything but a dict of callables!-) If you find it too hard to run your own benchmarks and can supply some specs, I guess we can benchmark the alternatives for you, but it would be really more instructive if you tried to do it yourself before you ask SO for help!-)
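A minimal sketch of the dict-of-callables dispatch (handler names invented), with timeit hooks for the kind of micro-benchmarking the answer suggests:

```python
from timeit import timeit

def handle_add(x, y): return x + y
def handle_sub(x, y): return x - y
def handle_mul(x, y): return x * y

# One O(1) dict lookup replaces a chain of elif comparisons.
DISPATCH = {"add": handle_add, "sub": handle_sub, "mul": handle_mul}

def via_dict(op, x, y):
    return DISPATCH[op](x, y)

def via_elif(op, x, y):
    if op == "add":
        return x + y
    elif op == "sub":
        return x - y
    elif op == "mul":
        return x * y
    raise ValueError(op)

assert via_dict("mul", 6, 7) == via_elif("mul", 6, 7) == 42
# timeit micro-benchmarks the two forms:
print(timeit('via_dict("mul", 6, 7)', globals=globals(), number=100_000))
print(timeit('via_elif("mul", 6, 7)', globals=globals(), number=100_000))
```

With only three cases the timings will be close; the dict's advantage shows up as the elif chain grows, since its worst-case cost stays constant while the chain's grows linearly.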
4
3
0
I have read a few articles on alternatives to the switch statement in Python, mainly using dicts instead of lots of ifs and elifs. However, none really answers the question: is there one with better performance or efficiency? I have read a few arguments that ifs and elifs have to check each statement and become inefficient with many ifs and elifs. Using dicts gets around that, but you end up having to create new modules to call, which cancels the performance gain anyway. The only difference in the end is readability. Can anyone comment on this; is there really any difference in the long run? Does anyone regularly use the alternative? The only reason I ask is because I am going to end up having 30-40 elifs/ifs and possibly more in the future. Any input is appreciated. Thanks.
Performance difference in alternative switches in Python
1
0
0
1,377
1,693,088
2009-11-07T13:51:00.000
7
1
1
0
python,optimization,assert,bytecode
1,693,940
7
false
0
0
I have never encountered a good reason to use -O. I have always assumed its main purpose is in case at some point in the future some meaningful optimization is added.
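For what it's worth, the assert-stripping the question describes is real and easy to observe; a quick sketch that spawns the interpreter with and without -O:

```python
import subprocess
import sys

snippet = "assert False, 'stripped under -O'; print('ran to the end')"

# Without -O the assert fires and the process exits with an error;
# with -O the assert is compiled away and the print runs.
normal = subprocess.run([sys.executable, "-c", snippet],
                        capture_output=True, text=True)
optimized = subprocess.run([sys.executable, "-O", "-c", snippet],
                           capture_output=True, text=True)

print("normal exit code:", normal.returncode)        # 1 (AssertionError)
print("-O exit code:", optimized.returncode)         # 0
print("-O stdout:", optimized.stdout.strip())        # ran to the end
```

Whether that trade is ever worth making is exactly the answer's point: the savings are real but almost always negligible.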
2
50
0
Python has a flag -O that you can execute the interpreter with. The option will generate "optimized" bytecode (written to .pyo files), and given twice, it will discard docstrings. From Python's man page: -O Turn on basic optimizations. This changes the filename extension for compiled (bytecode) files from .pyc to .pyo. Given twice, causes docstrings to be discarded. This option's two major features as I see it are: Strip all assert statements. This trades defense against corrupt program state for speed. But don't you need a ton of assert statements for this to make a difference? Do you have any code where this is worthwhile (and sane?) Strip all docstrings. In what application is the memory usage so critical, that this is a win? Why not push everything into modules written in C? What is the use of this option? Does it have a real-world value?
What is the use of Python's basic optimizations mode? (python -O)
1
0
0
12,397
1,693,088
2009-11-07T13:51:00.000
4
1
1
0
python,optimization,assert,bytecode
1,693,128
7
false
0
0
You've pretty much figured it out: It does practically nothing at all. You're almost never going to see speed or memory gains, unless you're severely hurting for RAM.
2
50
0
Python has a flag -O that you can execute the interpreter with. The option will generate "optimized" bytecode (written to .pyo files), and given twice, it will discard docstrings. From Python's man page: -O Turn on basic optimizations. This changes the filename extension for compiled (bytecode) files from .pyc to .pyo. Given twice, causes docstrings to be discarded. This option's two major features as I see it are: Strip all assert statements. This trades defense against corrupt program state for speed. But don't you need a ton of assert statements for this to make a difference? Do you have any code where this is worthwhile (and sane?) Strip all docstrings. In what application is the memory usage so critical, that this is a win? Why not push everything into modules written in C? What is the use of this option? Does it have a real-world value?
What is the use of Python's basic optimizations mode? (python -O)
0.113791
0
0
12,397
1,693,205
2009-11-07T14:27:00.000
1
1
1
0
.net,python,performance,ironpython
1,694,288
3
false
0
0
You could enable .NET tracing, which outputs timing information at the bottom of the page. Make an app in C#/.NET and an app using Python and look at the differences in timing. That will give you a definitive answer. In all honesty, I think you're better off just using C#; it's "faster" to develop since the VS environment is there for you, and it's going to run faster since it doesn't have to use the Dynamic Language Runtime.
2
10
0
I would like to give sources for what I'm saying but I just don't have them; it's something I heard. Once a programming professor told me that some software benchmarking of .NET vs Python on some particular items gave a ratio of 5:8 in favor of .NET. That was his argument in favor of Python not being so much slower than .NET. Here's the thing: I would like to try IronPython since I could combine the web framework I know best (ASP.NET) with the language I like the most (Python), and I was wondering about the speed of ASP.NET programs in Python vs the speed of ASP.NET programs in VB.NET or C#. Is there any software benchmarking on this? Also, shouldn't the speed of IronPython be similar to other .NET languages, since IronPython, unlike Python, has to compile to .NET intermediate code? Can someone enlighten me on these issues? Greetings
How does ironpython speed compare to other .net languages?
0.066568
0
0
6,879
1,693,205
2009-11-07T14:27:00.000
0
1
1
0
.net,python,performance,ironpython
2,617,858
3
false
0
0
IronPython will be considerably slower than C#. You could think of the comparison as very roughly between CPython and C, but with the gap somewhat smaller.
2
10
0
I would like to give sources for what I'm saying but I just don't have them; it's something I heard. Once a programming professor told me that some software benchmarking of .NET vs Python on some particular items gave a ratio of 5:8 in favor of .NET. That was his argument in favor of Python not being so much slower than .NET. Here's the thing: I would like to try IronPython since I could combine the web framework I know best (ASP.NET) with the language I like the most (Python), and I was wondering about the speed of ASP.NET programs in Python vs the speed of ASP.NET programs in VB.NET or C#. Is there any software benchmarking on this? Also, shouldn't the speed of IronPython be similar to other .NET languages, since IronPython, unlike Python, has to compile to .NET intermediate code? Can someone enlighten me on these issues? Greetings
How does ironpython speed compare to other .net languages?
0
0
0
6,879
1,693,564
2009-11-07T16:24:00.000
1
0
0
0
python,apache2,wsgi,x-sendfile
1,693,668
2
false
1
0
I have discovered the answer. Use the BETA version provided. It seems to fix this issue.
1
3
0
Any filesize over about 4GB is not going to work with the mod_xsendfile for Apache2 (as it sets the content length to a long). I am willing to rewrite it to support this; however, I can find no documentation on how to set content length from the apache api to something larger than a long and thus serve large files through Apache. I know Apache can do this as it is compiled with Large File Support and is serving the files through the directory index without any issue. I need to use Apache as I am using WSGI. I do not want to use FastCGI or switch off Apache2 for various reasons I do not feel like getting into. Thanks.
X-Sendfile and VERY big files on Apache2
0.099668
0
0
1,086
1,693,815
2009-11-07T17:39:00.000
1
0
0
1
python,google-app-engine,indexing,archive,bigtable
1,693,856
2
false
1
0
Unless someone's written utilities for this kind of thing, the way to go is to read from one and write to the other kind!
2
0
0
Is there a way to move an entity to another kind in App Engine? Say you have a kind defined, and you want to keep a record of deleted entities of that kind, but you want to separate the storage of live objects and archived objects. Kinds are basically just serialized dicts in the bigtable anyway, and maybe you don't need to index the archive in the same way as the live data. So how would you move or copy an entity of one kind to another kind?
Move or copy an entity to another kind
0.099668
0
0
162
1,693,815
2009-11-07T17:39:00.000
1
0
0
1
python,google-app-engine,indexing,archive,bigtable
1,693,979
2
true
1
0
No - once created, the kind is a part of the entity's immutable key. You need to create a new entity and copy everything across. One way to do this would be to use the low-level google.appengine.api.datastore interface, which treats entities as dicts.
2
0
0
Is there a way to move an entity to another kind in App Engine? Say you have a kind defined, and you want to keep a record of deleted entities of that kind, but you want to separate the storage of live objects and archived objects. Kinds are basically just serialized dicts in the bigtable anyway, and maybe you don't need to index the archive in the same way as the live data. So how would you move or copy an entity of one kind to another kind?
Move or copy an entity to another kind
1.2
0
0
162
1,694,205
2009-11-07T19:53:00.000
3
1
0
0
python,joomla,xml-rpc
1,696,183
1
true
0
0
The book "Mastering Joomla 1.5 Extension and Framework Development" has a nice explanation of that. Joomla has a few XML-RPC plugins that let you do a few things, like the Blogger API interface (plugins/xmlrpc/blogger.php). You should create your own XML-RPC plugin to do the custom things you want.
1
3
0
How do I get started with XML-RPC in Joomla? I've been looking around for documentation and finding nothing... I'd like to connect to a Joomla server (after enabling the core Joomla XML-RPC plugin) and be able to do things like log in, add an article, and tweak all the parameters of the article if possible. My XML-RPC client implementation will be in Python.
Joomla and XMLRPC
1.2
0
1
2,670
1,695,014
2009-11-08T01:20:00.000
1
0
1
0
python,security,virtual-machine,interpreter
1,695,275
8
false
0
0
I am an undergrad, and in my first year we were taught Python. We had these things called "CodeLabs" that had to be submitted periodically. It works by asking a question, having the student input their answer into a text box, running that code on some test cases, and checking the return values. One day, the CodeLab website (turingscraft.com) became inaccessible because someone decided to run an endless while loop calling os.fork() inside it. This was obviously a problem for the administrators of turingscraft.com. However, they later found a way to restrict students' access to such commands. If I were you, I would look up information about their site. Maybe they posted some information about this and how they fixed it.
5
6
0
Is there a secure Python interpreter? Imagine a Python VM you can run on your machine that restricts the operations: no files can be opened, no system calls, etc. It just transforms stdin to stdout, maybe with text processing + math, etc. Does such a secure Python VM exist?
Secure Python interpreter?
0.024995
0
0
1,260
1,695,014
2009-11-08T01:20:00.000
-1
0
1
0
python,security,virtual-machine,interpreter
2,215,015
8
false
0
0
Isn't security more a job for the operating system? I mean, create a user with restricted access to files and such, then let the VM be run only with those rights. Or maybe I'm speaking nonsense. I'm no sysadmin or security expert, but I tend to do things with the tools that are made for them.
5
6
0
Is there a secure Python interpreter? Imagine a Python VM you can run on your machine that restricts the operations: no files can be opened, no system calls, etc. It just transforms stdin to stdout, maybe with text processing + math, etc. Does such a secure Python VM exist?
Secure Python interpreter?
-0.024995
0
0
1,260
1,695,014
2009-11-08T01:20:00.000
2
0
1
0
python,security,virtual-machine,interpreter
1,695,169
8
true
0
0
You could run Jython on the JVM with a SecurityManager that allows you to specify permitted / disallowed operations.
5
6
0
Is there a secure Python interpreter? Imagine a Python VM you can run on your machine that restricts the operations: no files can be opened, no system calls, etc. It just transforms stdin to stdout, maybe with text processing + math, etc. Does such a secure Python VM exist?
Secure Python interpreter?
1.2
0
0
1,260
1,695,014
2009-11-08T01:20:00.000
-1
0
1
0
python,security,virtual-machine,interpreter
1,695,267
8
false
0
0
I've been toying with this lately. My requirements include Python 3.x which immediately takes solutions like Jython and IronPython off the table. I'd be hesitant to take that route anyway, as I've never trusted user-mode language VMs. That being the case, for my purposes the best solution so far is to take it out of the hands of the interpreter completely and run in a strongly locked-down container (OpenVZ or similar). However, this is taking a hammer to the problem (albeit not the sledgehammer of full virtualization), and may not be viable if you have to run a truly huge number of isolated interpreters. One upside, though, is that because it doesn't rely on the security of any particular interpreter, you can use any arbitrary language you want in the environment -- you don't have to tie yourself to Python or the set of languages/implementations available for JVM or .NET/Mono.
5
6
0
Is there a secure Python interpreter? Imagine a Python VM you can run on your machine that restricts the operations: no files can be opened, no system calls, etc. It just transforms stdin to stdout, maybe with text processing + math, etc. Does such a secure Python VM exist?
Secure Python interpreter?
-0.024995
0
0
1,260
1,695,014
2009-11-08T01:20:00.000
0
0
1
0
python,security,virtual-machine,interpreter
1,697,077
8
false
0
0
You could always go to the source code and make your own flavor of Python. If enough people need it, it will be up and running in no time.
5
6
0
Is there a secure Python interpreter? Imagine a Python VM you can run on your machine that restricts the operations: no files can be opened, no system calls, etc. It just transforms stdin to stdout, maybe with text processing + math, etc. Does such a secure Python VM exist?
Secure Python interpreter?
0
0
0
1,260
1,695,971
2009-11-08T10:29:00.000
0
0
0
0
python,text,nlp,words,wordnet
58,050,062
5
false
0
0
Sorry, may I ask which tool could judge the "difficulty level" of sentences? I wish to find sentences of a "similar difficulty level" for users to read.
4
6
1
For example... Chicken is an animal. Burrito is a food. WordNet allows you to do "is-a"... the hierarchy feature. However, how do I know when to stop travelling up the tree? I want a LEVEL that is consistent. For example, if presented with a bunch of words, I want WordNet to categorize all of them, but at a certain level, so it doesn't go too far up. Categorizing "burrito" as a "thing" is too broad, yet "Mexican wrapped food" is too specific. I want to go up or down the hierarchy... until the right LEVEL.
Does WordNet have "levels"? (NLP)
0
0
0
2,585
1,695,971
2009-11-08T10:29:00.000
2
0
0
0
python,text,nlp,words,wordnet
1,696,133
5
false
0
0
In order to get levels, you need to predefine the content of each level. An ontology often defines these as the immediate IS_A children of a specific concept, but if that is absent, you need to develop a method of that yourself. The next step is to put a priority on each concept, in case you want to present only one category for each word. The priority can be done in multiple ways, for instance as the count of IS_A relations between the category and the word, or manually selected priorities for each category. For each word, you can then pick the category with the highest priority. For instance, you may want meat to be "food" rather than chemical substance. You may also want to pick some words, that change priority if they are in the path. For instance, if you want some chemicals which are also food, to be announced as chemicals, but others should still be food.
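A minimal sketch of the priority scheme described above; the categories, priorities, and candidate table are all made up for illustration (in practice they would be derived from WordNet's IS_A chains):

```python
# Hypothetical priorities: higher wins when a word maps to several categories.
PRIORITY = {"food": 3, "animal": 2, "chemical_substance": 1}

# Hypothetical word -> candidate-category table; in a real system you would
# build this by walking each word's IS_A chain up to your predefined level.
CANDIDATES = {
    "burrito": ["food"],
    "chicken": ["food", "animal"],
    "caffeine": ["chemical_substance", "food"],
}

def best_category(word):
    """Pick the candidate category with the highest priority."""
    return max(CANDIDATES[word], key=PRIORITY.get)

assert best_category("chicken") == "food"   # meat is "food", not "animal"
```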
4
6
1
For example... Chicken is an animal. Burrito is a food. WordNet allows you to do "is-a"... the hierarchy feature. However, how do I know when to stop travelling up the tree? I want a LEVEL that is consistent. For example, if presented with a bunch of words, I want WordNet to categorize all of them, but at a certain level, so it doesn't go too far up. Categorizing "burrito" as a "thing" is too broad, yet "Mexican wrapped food" is too specific. I want to go up or down the hierarchy... until the right LEVEL.
Does WordNet have "levels"? (NLP)
0.07983
0
0
2,585
1,695,971
2009-11-08T10:29:00.000
0
0
0
0
python,text,nlp,words,wordnet
1,717,952
5
false
0
0
WordNet's hypernym tree ends with a single root synset for the word "entity". If you are using WordNet's C library, then you can get a whole recursive structure for a synset's ancestors using traceptrs_ds, and you can get the whole synset tree by recursively following nextss and ptrlst pointers until you hit null pointers.
4
6
1
For example... Chicken is an animal. Burrito is a food. WordNet allows you to do "is-a"... the hierarchy feature. However, how do I know when to stop travelling up the tree? I want a LEVEL that is consistent. For example, if presented with a bunch of words, I want WordNet to categorize all of them, but at a certain level, so it doesn't go too far up. Categorizing "burrito" as a "thing" is too broad, yet "Mexican wrapped food" is too specific. I want to go up or down the hierarchy... until the right LEVEL.
Does WordNet have "levels"? (NLP)
0
0
0
2,585
1,695,971
2009-11-08T10:29:00.000
6
0
0
0
python,text,nlp,words,wordnet
1,698,380
5
false
0
0
[Please credit Pete Kirkham; he first came up with the reference to SUMO, which may well answer the question asked by Alex, the OP.] (I'm just providing a complement of information here; I started in a comment field but soon ran out of space and layout capabilities...) Alex: Most of SUMO is science or engineering? It does not contain every-day words like foods, people, cars, jobs, etc.? Pete K: SUMO is an upper ontology. The mid-level ontologies (where you would find concepts between 'thing' and 'beef burrito') listed on the page don't include food, but reflect the sorts of organisations which fund the project. There is a mid-level ontology for people. There's also one for industries (and hence jobs), including food suppliers, but no mention of burritos if you grep it. My two cents: 100% of WordNet (3.0, i.e. the latest, as well as older versions) is mapped to SUMO, and that may be just what Alex needs. The mid-level ontologies associated with SUMO (or rather with MILO) are effectively in specific domains and do not, at this time, include Foodstuff, but since WordNet does (include all -well, many of- these everyday things) you do not need to leverage any formal ontology "under" SUMO, but can instead use SUMO's WordNet mapping (possibly in addition to WordNet itself, which, again, is not an ontology, but with its informal and loose "hierarchy" may also help). Some difficulty may arise, however, in two areas (and then some ;-) ): the SUMO ontology's "level" may not be the level you'd have in mind for your particular application. For example, while "Burrito" brings "Food" at the top level in SUMO, "Chicken" brings, well, "Chicken", which only through a long chain finds "Animal" (specifically: Chicken->Poultry->Bird->Warm_Blooded_Vertebrae->Vertebrae->Animal). WordNet's coverage and metadata are impressive, but with regard to the mid-level concepts they can be a bit inconsistent. 
For example, "our" Burrito's hypernym is appropriately "Dish", which places it among circa 140 food dishes, including generics such as "Soup" or "Casserole" as well as "Chicken Marengo" (but omitting, say, "Chicken Cacciatore"). My point, in bringing up these issues, is not to criticize WordNet or SUMO and its related ontologies, but rather simply to illustrate some of the challenges associated with building an ontology, particularly at the mid-level. Regardless of some possible flaws and shortcomings of a solution based on SUMO and WordNet, a pragmatic use of these frameworks may well "fit the bill" (85% of the time)
4
6
1
For example... Chicken is an animal. Burrito is a food. WordNet allows you to do "is-a"... the hierarchy feature. However, how do I know when to stop travelling up the tree? I want a LEVEL that is consistent. For example, if presented with a bunch of words, I want WordNet to categorize all of them, but at a certain level, so it doesn't go too far up. Categorizing "burrito" as a "thing" is too broad, yet "Mexican wrapped food" is too specific. I want to go up or down the hierarchy... until the right LEVEL.
Does WordNet have "levels"? (NLP)
1
0
0
2,585
1,697,045
2009-11-08T16:24:00.000
1
0
0
0
python,sql,django,database-design,database
1,697,224
2
true
1
0
You might also simply store the last time a user was reading a particular forum. Any posts that have been updated since that date are new. You'll only be storing one additional piece of information per user as opposed to a piece of information per post per user.
2
2
0
I'm working on a not-so-big project in django that will among other things incorporate a forum system. I have most of the system at a more or less functioning state, but I'm still missing a feature to mark unread threads for the users when there are new posts. The thing is I can't really think of a way to properly store that information. My first idea was to create another model that will store a list of threads with changes in them for each user. Something with one ForeignKey(User) and one ForeignKey(Thread) and just keep adding new entries each time a thread is posted or a post is added to a thread. But then, I'm not sure how well that would scale with say several hundred threads after a while and maybe 50-200 users. So add 200 rows for each new post for the users who aren't logged on? Sounds like a lot. How do other forum systems do it anyway? And how can I implement a system to work these things out in Django. Thanks!
Django/SQL: keeping track of who read what in a forum
1.2
0
0
354
1,697,045
2009-11-08T16:24:00.000
2
0
0
0
python,sql,django,database-design,database
1,697,061
2
false
1
0
You're much better off storing the "read" bit, not the "unread" bit. And you can store them not as relational data, but in a giant bit-blob. Then you don't have to modify the read data at all when new posts are added, only when a user reads posts.
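One possible sketch of the "giant bit-blob" idea: a bytearray used as a bitset, one bit per post (the class and method names are illustrative; the bytearray would be persisted as a single blob per user):

```python
class ReadTracker:
    """Per-user read flags for forum posts, stored as a packed bitset.

    New posts cost nothing: an unset bit simply means "unread", so only
    a user's own reading activity ever touches their blob.
    """

    def __init__(self, num_posts):
        self.bits = bytearray((num_posts + 7) // 8)

    def mark_read(self, post_id):
        self.bits[post_id // 8] |= 1 << (post_id % 8)

    def has_read(self, post_id):
        return bool(self.bits[post_id // 8] & (1 << (post_id % 8)))

tracker = ReadTracker(1000)   # 1000 posts -> a 125-byte blob
tracker.mark_read(42)
assert tracker.has_read(42)
assert not tracker.has_read(43)
```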
2
2
0
I'm working on a not-so-big project in django that will among other things incorporate a forum system. I have most of the system at a more or less functioning state, but I'm still missing a feature to mark unread threads for the users when there are new posts. The thing is I can't really think of a way to properly store that information. My first idea was to create another model that will store a list of threads with changes in them for each user. Something with one ForeignKey(User) and one ForeignKey(Thread) and just keep adding new entries each time a thread is posted or a post is added to a thread. But then, I'm not sure how well that would scale with say several hundred threads after a while and maybe 50-200 users. So add 200 rows for each new post for the users who aren't logged on? Sounds like a lot. How do other forum systems do it anyway? And how can I implement a system to work these things out in Django. Thanks!
Django/SQL: keeping track of who read what in a forum
0.197375
0
0
354
1,697,153
2009-11-08T17:01:00.000
2
0
0
0
c++,python,database,persistence
1,697,185
6
false
0
0
BerkeleyDB is good; also look at the *DBM incarnations (e.g. GDBM). The big question, though, is: what do you need to search by? Do you need to search by that URL, by a range of URLs, or by the dates you list? It is also quite possible to keep groups of records as simple files in the local filesystem, grouped by dates or search terms, &c. Answering the "search" question is the biggest start. As for the key/value approach, what you need to ensure is that the KEY itself is well defined for your lookups. If, for example, you sometimes need to look up by date and other times by title, you will need to maintain a "record" row, and then possibly 2 or more "index" rows referencing the original record. You can model nearly anything in a key/value store.
3
5
0
I'm developing an application that will store a sizeable number of records. These records will be something like (URL, date, title, source, {optional data...}) As this is a client-side app, I don't want to use a database server, I just want the info stored into files. I want the files to be readable from various languages (at least python and C++), so something language specific like python's pickle is out of the game. I am seeing two possibilities: sqlite and BerkeleyDB. As my use case is clearly not relational, I am tempted to go with BerkeleyDB, however I don't really know how I should use it to store my records, as it only stores key/value pairs. Is my reasoning correct? If so, how should I use BDB to store my records? Can you link me to relevant info? Or am I missing a better solution?
Which database should I use to store records, and how should I use it?
0.066568
1
0
666
1,697,153
2009-11-08T17:01:00.000
0
0
0
0
c++,python,database,persistence
1,698,109
6
false
0
0
OK, so you say you're just storing the data? You really only need a DB for retrieval, lookup, summarising, etc. So, for storing, just use simple text files and append lines. Compress the data if you need to, and use delimiters between fields - just about any language will be able to read such files. If you do want to retrieve, then focus on your retrieval needs: by date, by key, which keys, etc. If you want a simple client side, then you need a simple client DB. SQLite is far easier than BDB, but also look at things like Sybase Advantage (very fast and free for local clients, but not open source), VistaDB, or Firebird... though all will require local config/setup/maintenance. If you go with local XML, a 'sizable' number of records will give you some unnecessarily bloated file sizes!
3
5
0
I'm developing an application that will store a sizeable number of records. These records will be something like (URL, date, title, source, {optional data...}) As this is a client-side app, I don't want to use a database server, I just want the info stored into files. I want the files to be readable from various languages (at least python and C++), so something language specific like python's pickle is out of the game. I am seeing two possibilities: sqlite and BerkeleyDB. As my use case is clearly not relational, I am tempted to go with BerkeleyDB, however I don't really know how I should use it to store my records, as it only stores key/value pairs. Is my reasoning correct? If so, how should I use BDB to store my records? Can you link me to relevant info? Or am I missing a better solution?
Which database should I use to store records, and how should I use it?
0
1
0
666
1,697,153
2009-11-08T17:01:00.000
2
0
0
0
c++,python,database,persistence
1,697,239
6
false
0
0
Personally I would use sqlite anyway. It has always just worked for me (and for others I work with). When your app grows and you suddenly do want to do something a little more sophisticated, you won't have to rewrite. On the other hand, I've seen various comments on the Python dev list about Berkely DB that suggest it's less than wonderful; you only get dict-style access (what if you want to select certain date ranges or titles instead of URLs); and it's not even in Python 3's standard set of libraries.
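A minimal sketch of the question's records in sqlite3 (the column names and the JSON-ish "extra" column are assumptions, not anything prescribed by sqlite):

```python
import sqlite3

# In the real app this would be a file on disk, e.g. "records.db".
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE records (
           url    TEXT PRIMARY KEY,
           date   TEXT,
           title  TEXT,
           source TEXT,
           extra  TEXT   -- optional data, e.g. serialized JSON
       )"""
)
conn.execute(
    "INSERT INTO records VALUES (?, ?, ?, ?, ?)",
    ("http://example.com", "2009-11-08", "A title", "feed", None),
)

# Unlike a plain key/value store, range queries by date come for free.
rows = conn.execute(
    "SELECT title FROM records WHERE date >= ?", ("2009-01-01",)
).fetchall()
assert rows == [("A title",)]
```

The file format is also language-neutral: the same database can be opened from C++ with the sqlite3 C API, which matches the question's "readable from various languages" requirement.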
3
5
0
I'm developing an application that will store a sizeable number of records. These records will be something like (URL, date, title, source, {optional data...}) As this is a client-side app, I don't want to use a database server, I just want the info stored into files. I want the files to be readable from various languages (at least python and C++), so something language specific like python's pickle is out of the game. I am seeing two possibilities: sqlite and BerkeleyDB. As my use case is clearly not relational, I am tempted to go with BerkeleyDB, however I don't really know how I should use it to store my records, as it only stores key/value pairs. Is my reasoning correct? If so, how should I use BDB to store my records? Can you link me to relevant info? Or am I missing a better solution?
Which database should I use to store records, and how should I use it?
0.066568
1
0
666
1,697,334
2009-11-08T17:54:00.000
1
0
0
0
python,algorithm,sudoku
35,500,598
11
false
0
0
Not gonna write full code, but I did a sudoku solver a long time ago. I found that it didn't always solve it (the thing people do when they have a newspaper is incomplete!), but now think I know how to do it. Setup: for each square, have a set of flags for each number showing the allowed numbers. Crossing out: just like when people on the train are solving it on paper, you can iteratively cross out known numbers. Any square left with just one number will trigger another crossing out. This will either result in solving the whole puzzle, or it will run out of triggers. This is where I stalled last time. Permutations: there's only 9! = 362880 ways to arrange 9 numbers, easily precomputed on a modern system. All of the rows, columns, and 3x3 squares must be one of these permutations. Once you have a bunch of numbers in there, you can do what you did with the crossing out. For each row/column/3x3, you can cross out 1/9 of the 9! permutations if you have one number, 1/(8*9) if you have 2, and so forth. Cross permutations: Now you have a bunch of rows and columns with sets of potential permutations. But there's another constraint: once you set a row, the columns and 3x3s are vastly reduced in what they might be. You can do a tree search from here to find a solution.
2
22
1
I want to write code in Python to solve a sudoku puzzle. Do you guys have any idea about a good algorithm for this purpose? I read somewhere on the net about an algorithm which solves it by filling the whole box with all possible numbers, then inserts known values into the corresponding boxes. From the row and column of known values, the known value is removed. If you guys know any better algorithm than this, please help me to write one. Also, I am confused about how I should read the known values from the user. It is really hard to enter the values one by one through the console. Any easy way to do this other than using a GUI?
Algorithm for solving Sudoku
0.01818
0
0
147,751
1,697,334
2009-11-08T17:54:00.000
5
0
0
0
python,algorithm,sudoku
1,697,407
11
false
0
0
I wrote a simple program that solved the easy ones. It took its input from a file which was just a matrix with spaces and numbers. The datastructure to solve it was just a 9 by 9 matrix of a bit mask. The bit mask would specify which numbers were still possible on a certain position. Filling in the numbers from the file would reduce the numbers in all rows/columns next to each known location. When that is done you keep iterating over the matrix and reducing possible numbers. If each location has only one option left you're done. But there are some sudokus that need more work. For these ones you can just use brute force: try all remaining possible combinations until you find one that works.
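The candidate-elimination step described above might be sketched like this (bitmasks only; the brute-force fallback is omitted, and the three-cell "row" is a toy illustration rather than a full 9x9 grid):

```python
ALL = 0b111111111  # candidate bitmask: bit i set means digit i+1 is possible

def eliminate(candidates, peers):
    """One pass of constraint propagation: remove each solved cell's digit
    from the candidate masks of its peers. Returns True if anything changed."""
    changed = False
    for cell, mask in candidates.items():
        if bin(mask).count("1") == 1:       # cell has exactly one candidate
            for p in peers[cell]:
                if candidates[p] & mask:
                    candidates[p] &= ~mask
                    changed = True
    return changed

# Tiny illustration: three mutually-peered cells; "A" is solved to digit 1.
cands = {"A": 1 << 0, "B": ALL, "C": ALL}
peers = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
while eliminate(cands, peers):     # iterate until a fixed point
    pass

assert cands["A"] == 1          # still solved
assert not cands["B"] & 1       # digit 1 eliminated from B
assert not cands["C"] & 1       # ...and from C
```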
2
22
1
I want to write code in Python to solve a sudoku puzzle. Do you guys have any idea about a good algorithm for this purpose? I read somewhere on the net about an algorithm which solves it by filling the whole box with all possible numbers, then inserts known values into the corresponding boxes. From the row and column of known values, the known value is removed. If you guys know any better algorithm than this, please help me to write one. Also, I am confused about how I should read the known values from the user. It is really hard to enter the values one by one through the console. Any easy way to do this other than using a GUI?
Algorithm for solving Sudoku
0.090659
0
0
147,751
1,698,017
2009-11-08T21:44:00.000
4
0
0
0
python,scipy,neural-network
1,698,110
3
false
0
0
If you're familiar with Matlab, check out the excellent Python libraries numpy, scipy, and matplotlib. Together, they provide the most commonly used subset of Matlab functions.
1
3
1
I am trying to learn programming in python and am also working against a deadline for setting up a neural network which looks like it's going to feature multidirectional associative memory and recurrent connections among other things. While the mathematics for all these things can be accessed from various texts and sources (and is accessible, so to speak), as a newbie to python (and programming as a profession) I am kinda floating in space looking for the firmament as I try to 'implement' things!! Information on any good online tutorials on constructing neural networks ab initio will be greatly appreciated :) In the meantime I am moonlighting as a MatLab user to nurse the wounds caused by Python :)
Neural Networks in Python without using any readymade libraries...i.e., from first principles..help!
0.26052
0
0
4,234
1,698,362
2009-11-08T23:40:00.000
0
1
0
0
python,time,multithreading,pamie
1,698,371
2
false
1
0
I think what you are looking for is somewhere to set the timeout on your request. I would suggest looking into the documentation on PAMIE.
2
0
0
Currently I'm making a crawler script. One problem is that sometimes, if I open a web page with PAMIE, the page can't open and hangs forever. Is there any method to close PAMIE's IE or win32com's IE, for example if the web page didn't respond or didn't finish loading within 10 seconds or so? Thanks in advance
win32com and PAMIE web page open timeout
0
0
1
294
1,698,362
2009-11-08T23:40:00.000
2
1
0
0
python,time,multithreading,pamie
1,698,422
2
true
1
0
Just use, to initialize your PAMIE instance, PAMIE(timeOut=100) or whatever. The units of measure for timeOut are "tenths of a second" (!); the default is 3000 (300 seconds, i.e., 5 minutes); with 300 as I suggested, you'd time out after 10 seconds as you request. (You can pass the timeOut= parameter even when you're initializing with a URL, but in that case the timeout will only be active after the initial navigation).
2
0
0
Currently I'm making a crawler script. One problem is that sometimes, if I open a web page with PAMIE, the page can't open and hangs forever. Is there any method to close PAMIE's IE or win32com's IE, for example if the web page didn't respond or didn't finish loading within 10 seconds or so? Thanks in advance
win32com and PAMIE web page open timeout
1.2
0
1
294
1,700,228
2009-11-09T10:33:00.000
2
1
0
0
c#,python,ipc,rpc,bidirectional
1,700,287
2
false
0
0
Use JSON-RPC because the experience that you gain will have more practical use. JSON is widely used in web applications written in all of the dozen or so most popular languages.
2
6
0
I want to pass data between a Python and a C# application in Windows (I want the channel to be bi-directional). In fact, I want to pass a struct containing data about a network packet that I've captured with C# (SharpPcap) to the Python app and then send back a modified packet to the C# program. What do you propose? (I'd rather it be a fast method.) My searches so far revealed that I can use these technologies, but I don't know which: JSON-RPC; WCF (run the project under IronPython using Ironclad); WCF (use Python for .NET)
IPC between Python and C#
0.197375
0
0
2,195
1,700,228
2009-11-09T10:33:00.000
2
1
0
0
c#,python,ipc,rpc,bidirectional
1,700,631
2
true
0
0
Why not use simple socket communication? Or, if you wish, you can start a simple HTTP server and/or do JSON-RPC over it.
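A minimal sketch of the socket idea on the Python side, using line-delimited JSON (the C# side would mirror it with TcpClient; the packet fields here are invented for illustration):

```python
import json
import socket
import threading

def serve_once(server_sock):
    """Accept one connection, read one JSON line, reply with a modified copy."""
    conn, _ = server_sock.accept()
    with conn:
        packet = json.loads(conn.makefile("r").readline())
        packet["modified"] = True          # stand-in for real packet mangling
        conn.sendall((json.dumps(packet) + "\n").encode())

server = socket.socket()
server.bind(("127.0.0.1", 0))              # ephemeral port for the demo
server.listen(1)
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# The "C# side", played here by a second Python socket for the demo.
client = socket.create_connection(server.getsockname())
client.sendall((json.dumps({"src": "10.0.0.1", "len": 60}) + "\n").encode())
reply = json.loads(client.makefile("r").readline())
assert reply == {"src": "10.0.0.1", "len": 60, "modified": True}
```

Newline-delimited JSON keeps framing trivial on both sides, which is most of what a hand-rolled IPC channel needs to get right.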
2
6
0
I want to pass data between a Python and a C# application in Windows (I want the channel to be bi-directional). In fact, I want to pass a struct containing data about a network packet that I've captured with C# (SharpPcap) to the Python app and then send back a modified packet to the C# program. What do you propose? (I'd rather it be a fast method.) My searches so far revealed that I can use these technologies, but I don't know which: JSON-RPC; WCF (run the project under IronPython using Ironclad); WCF (use Python for .NET)
IPC between Python and C#
1.2
0
0
2,195
1,700,441
2009-11-09T11:24:00.000
3
0
0
1
python,google-app-engine
1,701,239
3
false
1
0
To answer the actual question from the title of your post, assuming you're still wondering: to get environment variables, simply import os and the environment is available in os.environ.
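For example, on App Engine's CGI-based Python runtime the per-request variables such as REMOTE_ADDR and QUERY_STRING show up in os.environ (the demo value below is set by hand, since outside App Engine these variables are simply absent):

```python
import os
from urllib.parse import parse_qs

# On App Engine the runtime fills these in per request; set a demo value
# here so the sketch is runnable outside App Engine.
os.environ["QUERY_STRING"] = "geturl=www.google.at"

client_ip = os.environ.get("REMOTE_ADDR", "unknown")
params = parse_qs(os.environ.get("QUERY_STRING", ""))
assert params["geturl"] == ["www.google.at"]
```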
1
0
0
Does someone have an idea how to get the environment variables on Google App Engine? I'm trying to write a simple script that uses the client IP (for authentication) and a parameter (e.g. geturl) from the URL (e.g. http://thingy.appspot.dom/index?geturl=www.google.at). I read that I should be able to get the client IP via request.remote_addr, but I seem to lack request even though I imported webapp from google.appengine.ext. Many thanks in advance, Birt
Environment on google Appengine
0.197375
0
0
461
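As the answer above notes, the environment is exposed through os.environ; in a CGI-style environment like App Engine's, the client address conventionally appears as REMOTE_ADDR. A quick sketch (the variable name is the standard CGI one, not anything App Engine-specific):

```python
import os

# Read a CGI-style environment variable; a CGI environment typically
# exposes the client address under REMOTE_ADDR.
client_ip = os.environ.get("REMOTE_ADDR", "unknown")
print(client_ip)
```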
1,701,199
2009-11-09T14:12:00.000
43
0
1
0
java,python,exception
1,701,327
3
true
1
0
In Python, that would be ValueError, or a subclass of it. For example, trying to .read() a closed file raises "ValueError: I/O operation on closed file".
1
67
0
IllegalStateException is often used in Java when a method is invoked on an object in inappropriate state. What would you use instead in Python?
Is there an analogue to Java IllegalStateException in Python?
1.2
0
0
12,359
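To illustrate the closed-file case the answer mentions:

```python
import tempfile

f = tempfile.TemporaryFile()
f.close()
try:
    f.read()  # operation on an object in an inappropriate state
except ValueError as e:
    print("caught:", e)  # message is along the lines of "I/O operation on closed file"
```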
1,702,024
2009-11-09T16:11:00.000
1
0
0
0
python,performance,profiling,paster
1,710,186
3
true
0
0
I almost always use paster serve --reload ... during development. That command executes itself as a subprocess (it executes its own script using the subprocess module, not fork()). The subprocess polls for source code changes, quits when it detects a change, and gets restarted by the parent paster serve --reload. That is to say, if you're going to profile paster serve itself, omit the --reload argument. Profiling individual requests with middleware should work fine either way. My particular problem was that pkg_resources takes an amount of time proportional to the number of installed packages when it is first invoked. I solved it by rebuilding my virtualenv without unnecessary packages.
1
2
0
Python's paster serve app.ini is taking longer than I would like to be ready for the first request. I know how to profile requests with middleware, but how do I profile the initialization time? I would like it to not fork a thread pool and quit as soon as it is ready to serve so the time after it's ready doesn't show up in the profile.
How do I profile `paster serve`'s startup time?
1.2
0
0
545
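If you do profile a non-reloading paster serve, the standard cProfile module can wrap the startup path; a minimal sketch, where load_app is a stand-in for the real initialization work (the answer's culprit was pkg_resources scanning installed packages):

```python
import cProfile
import io
import pstats

def load_app():
    # Stand-in for real startup work (e.g. pkg_resources scans, config parsing).
    return sum(i * i for i in range(10000))

profiler = cProfile.Profile()
profiler.enable()
load_app()
profiler.disable()

# Print the five most expensive calls by cumulative time.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```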
1,702,586
2009-11-09T17:41:00.000
5
0
1
1
python,notepad++
24,143,304
21
false
0
0
I wish people here would post steps instead of just overall concepts. I eventually got the cmd /K version to work. The step-by-step instructions are: In NPP, click on the menu item Run. In the submenu, click on Run. In the Run... dialog box, in the field The Program to Run, delete any existing text and type in: cmd /K "$(FULL_CURRENT_PATH)". The /K switch is optional; it keeps open the window created when the script runs, if you want that. Hit the Save... button. The Shortcut dialog box opens; fill it out if you want a keyboard shortcut (there's a note saying "This will disable the accelerator," whatever that is, so maybe you don't want a keyboard shortcut, though it probably doesn't hurt to assign one when you don't need an accelerator). Somewhere I think you have to tell NPP where the python.exe file is (e.g., for me: C:\Python33\python.exe). I don't know where or how you do this, but in trying various things here I was able to do that; I don't recall which attempt did the trick.
1
140
0
I prefer using Notepad++ for developing, How do I execute the files in Python through Notepad++?
How to Execute a Python Script in Notepad++?
0.047583
0
0
455,952
1,703,012
2009-11-09T18:56:00.000
1
0
1
0
python,random
1,703,327
5
false
0
0
Perhaps it is not a problem in your case, but one problem with using the system time as the seed is that someone who knows roughly when your system was started may be able to guess your seed (by trial) after seeing a few numbers from the sequence. E.g., don't use the system time as the seed for your online poker game.
4
6
0
Simple enough question: I'm using python random module to generate random integers. I want to know what is the suggested value to use with the random.seed() function? Currently I am letting this default to the current time, but this is not ideal. It seems like a string literal constant (similar to a password) would also not be ideal/strong Suggestions? Thanks, -aj UPDATE: The reason I am generating random integers is for generation of test data. The numbers do not need to be reproducable.
What is suggested seed value to use with random.seed()?
0.039979
0
0
13,998
1,703,012
2009-11-09T18:56:00.000
5
0
1
0
python,random
1,703,027
5
false
0
0
For most cases using current time is good enough. Occasionally you need to use a fixed number to generate pseudo random numbers for comparison purposes.
4
6
0
Simple enough question: I'm using python random module to generate random integers. I want to know what is the suggested value to use with the random.seed() function? Currently I am letting this default to the current time, but this is not ideal. It seems like a string literal constant (similar to a password) would also not be ideal/strong Suggestions? Thanks, -aj UPDATE: The reason I am generating random integers is for generation of test data. The numbers do not need to be reproducable.
What is suggested seed value to use with random.seed()?
0.197375
0
0
13,998
1,703,012
2009-11-09T18:56:00.000
3
0
1
0
python,random
1,703,241
5
false
0
0
Setting the seed is for repeatability, not security. If anything, you make the system less secure by having a fixed seed than one that is constantly changing.
4
6
0
Simple enough question: I'm using python random module to generate random integers. I want to know what is the suggested value to use with the random.seed() function? Currently I am letting this default to the current time, but this is not ideal. It seems like a string literal constant (similar to a password) would also not be ideal/strong Suggestions? Thanks, -aj UPDATE: The reason I am generating random integers is for generation of test data. The numbers do not need to be reproducable.
What is suggested seed value to use with random.seed()?
0.119427
0
0
13,998
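The repeatability point made in these answers is easy to demonstrate: reseeding with the same value always replays the same sequence:

```python
import random

random.seed(42)
first = [random.randint(0, 100) for _ in range(5)]

random.seed(42)  # reseed with the same value
second = [random.randint(0, 100) for _ in range(5)]

print(first == second)  # True: same seed, same sequence
```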
1,703,012
2009-11-09T18:56:00.000
0
0
1
0
python,random
1,703,231
5
false
0
0
If you are using random for generating test data, I would like to suggest that reproducibility can be important. Just consider a use case: for data set X you get some weird behaviour (e.g. a crash). It turns out that data set X shows some feature that was not so apparent in the other data sets Y and Z, and uncovers a bug which had escaped your test suites. Knowing the seed is useful here, so that you can precisely reproduce the bug and fix it.
4
6
0
Simple enough question: I'm using python random module to generate random integers. I want to know what is the suggested value to use with the random.seed() function? Currently I am letting this default to the current time, but this is not ideal. It seems like a string literal constant (similar to a password) would also not be ideal/strong Suggestions? Thanks, -aj UPDATE: The reason I am generating random integers is for generation of test data. The numbers do not need to be reproducable.
What is suggested seed value to use with random.seed()?
0
0
0
13,998
1,703,440
2009-11-09T20:11:00.000
2
0
0
0
python,wsgi
1,718,144
3
false
1
0
I prefer working directly with WSGI, along with Mako and psycopg. It's good to know about Beaker, though I usually don't hold state in the server because I believe it reduces scalability. I either put it in the user's cookie, in the database tied to a token in the user's cookie, or in a redirect URL.
3
1
0
I'm quite new in Python world. I come from java and ABAP world, where their application server are able to handle stateful request. Is it also possible in python using WSGI? Or stateful and stateless are handled in other layer?
How can I make WSGI(Python) stateful?
0.132549
0
0
1,364
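A minimal sketch of holding per-client state behind bare WSGI, keyed by a cookie token as this answer suggests. The cookie name and the in-memory store are illustrative assumptions; a real app would persist state to a database tied to the token:

```python
from wsgiref.util import setup_testing_defaults

SESSIONS = {}  # token -> per-client state (in-memory, for illustration only)

def app(environ, start_response):
    # Extract a session token from the Cookie header, if any.
    cookie = environ.get("HTTP_COOKIE", "")
    token = cookie.partition("session=")[2].split(";")[0] or "anonymous"
    state = SESSIONS.setdefault(token, {"hits": 0})
    state["hits"] += 1
    body = ("hits=%d" % state["hits"]).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

# Exercise the app directly, without a server: two requests, one client.
environ = {}
setup_testing_defaults(environ)
environ["HTTP_COOKIE"] = "session=abc123"
app(environ, lambda status, headers: None)
app(environ, lambda status, headers: None)
print(SESSIONS["abc123"]["hits"])  # 2: state survived across requests
```

The WSGI app itself stays a plain callable; all the "statefulness" lives in the store keyed by the cookie, which is exactly the pattern session middleware like Beaker packages up for you.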
1,703,440
2009-11-09T20:11:00.000
1
0
0
0
python,wsgi
1,703,477
3
false
1
0
Your question is a little vague and open-ended. First of all, WSGI itself isn't a framework, it's just the glue to connect a framework to the web server. Secondly, I'm not clear on what you mean when you say "state" -- do you mean storing information about a client on the server? If so, web frameworks (Pylons, Django, etc) allow you to store that kind of information in web session variables.
3
1
0
I'm quite new in Python world. I come from java and ABAP world, where their application server are able to handle stateful request. Is it also possible in python using WSGI? Or stateful and stateless are handled in other layer?
How can I make WSGI(Python) stateful?
0.066568
0
0
1,364
1,703,440
2009-11-09T20:11:00.000
5
0
0
0
python,wsgi
1,703,470
3
false
1
0
Usually, you don't work with "bare" WSGI. You work with web frameworks such as Pylons or TurboGears2, and these contain session middleware based on WSGI, called "Beaker". If you work with a framework, you don't have to worry about that; you just use it. But if you insist, you can of course use Beaker standalone.
3
1
0
I'm quite new in Python world. I come from java and ABAP world, where their application server are able to handle stateful request. Is it also possible in python using WSGI? Or stateful and stateless are handled in other layer?
How can I make WSGI(Python) stateful?
0.321513
0
0
1,364
1,704,458
2009-11-09T22:43:00.000
9
0
1
0
python,gzip
54,360,738
11
false
0
0
Despite what the other answers say, the last four bytes are not a reliable way to get the uncompressed length of a gzip file. First, there may be multiple members in the gzip file, so that would only be the length of the last member. Second, the length may be more than 4 GB, in which case the last four bytes represent the length modulo 2^32, not the length. However, for what you want there is no need to get the uncompressed length. You can instead base your progress bar on the amount of compressed input consumed, as compared to the length of the gzip file, which is readily obtained. For typical homogeneous data, that progress bar would show exactly the same thing as a progress bar based instead on the uncompressed data.
4
17
0
Using gzip, tell() returns the offset in the uncompressed file. In order to show a progress bar, I want to know the original (uncompressed) size of the file. Is there an easy way to find out?
Get uncompressed size of a .gz file in python
1
0
0
15,630
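A sketch of the compressed-input approach this answer recommends: track the underlying file's position relative to the .gz file size, which needs no uncompressed length at all. The file path and chunk size are arbitrary, and the reported percentage can jump ahead because GzipFile reads ahead from the raw file:

```python
import gzip
import os

# Create a sample .gz file to read back.
path = "sample_progress.gz"
with gzip.open(path, "wb") as f:
    f.write(b"some fairly repetitive data " * 5000)

total = os.path.getsize(path)  # compressed size: known up front
with open(path, "rb") as raw, gzip.GzipFile(fileobj=raw) as gz:
    while gz.read(64 * 1024):
        # raw.tell() is how much compressed input has been consumed so far.
        progress = raw.tell() / total
        print("progress: %.0f%%" % (100 * progress))
```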
1,704,458
2009-11-09T22:43:00.000
5
0
1
0
python,gzip
1,704,485
11
false
0
0
Unix way: use "gunzip -l file.gz" via subprocess.call / os.popen, capture and parse its output.
4
17
0
Using gzip, tell() returns the offset in the uncompressed file. In order to show a progress bar, I want to know the original (uncompressed) size of the file. Is there an easy way to find out?
Get uncompressed size of a .gz file in python
0.090659
0
0
15,630
1,704,458
2009-11-09T22:43:00.000
4
0
1
0
python,gzip
1,704,537
11
false
0
0
The last 4 bytes of the .gz hold the original (uncompressed) size of the file (strictly, that size modulo 2^32, and only for the last member).
4
17
0
Using gzip, tell() returns the offset in the uncompressed file. In order to show a progress bar, I want to know the original (uncompressed) size of the file. Is there an easy way to find out?
Get uncompressed size of a .gz file in python
0.072599
0
0
15,630
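Those four bytes are the gzip ISIZE field, a little-endian unsigned 32-bit integer holding the uncompressed length modulo 2^32. A sketch of reading it, keeping in mind the caveats raised in the other answer about multi-member and >4 GB files (the file path here is illustrative):

```python
import gzip
import os
import struct

data = b"hello gzip " * 1000
path = "isize_demo.gz"
with gzip.open(path, "wb") as f:
    f.write(data)

with open(path, "rb") as f:
    f.seek(-4, os.SEEK_END)               # ISIZE: last 4 bytes of the last member
    (isize,) = struct.unpack("<I", f.read(4))  # little-endian uint32

print(isize, len(data))  # equal here: single member, well under 4 GB
```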
1,704,458
2009-11-09T22:43:00.000
0
0
1
0
python,gzip
1,750,299
11
false
0
0
GzipFile.size stores the uncompressed size, but it's only incremented when you read the file, so you should prefer len(fd.read()) instead of the non-public GzipFile.size.
4
17
0
Using gzip, tell() returns the offset in the uncompressed file. In order to show a progress bar, I want to know the original (uncompressed) size of the file. Is there an easy way to find out?
Get uncompressed size of a .gz file in python
0
0
0
15,630
1,704,589
2009-11-09T23:09:00.000
0
0
1
0
python,zope.interface
1,707,215
2
false
0
0
Use Python 2.x. It is better supported by most libraries: it has many 3.x features plus all the 3rd-party libraries. Later, when your dependencies are available, you can migrate to Python 3 using 2to3.
1
1
0
Zope Interfaces are a great way to get some Java-style "design by contract" into a Python program. They provide some great features, such as implementable interfaces and a really neat pattern for writing adapters for objects. Unfortunately, since it's part of a very mature platform which runs just fine on Python 2.x, the developers of zope.interface have not yet prioritised porting to Python 3. I'd probably do the same in their situation. :-) What I want to know is: is there another way to achieve a similar effect on the 3.x platform? I want to use the same kinds of patterns that zope.interface makes easy, but I don't want to roll my own interfaces system. Or should I just forget about interfaces for now and design around this problem?
I want to use ZopeInterfaces, however my project is based on Python 3.x - any suggestions?
0
0
0
100
1,704,607
2009-11-09T23:13:00.000
33
0
1
0
c#,python
1,704,637
3
true
0
0
C#'s yield return is equivalent to Python's yield, and yield break is just return in Python. Other than those minor differences, they have basically the same purpose.
1
32
0
What is the difference between yield keyword in Python and yield keyword in C#?
Difference between yield in Python and yield in C#
1.2
0
0
4,780
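The Python side of the comparison, for reference: yield produces a value, and a bare return inside a generator ends it, playing the role of C#'s yield break:

```python
def first_n_squares(n):
    i = 0
    while True:
        if i >= n:
            return          # like C#'s "yield break": ends the generator
        yield i * i         # like C#'s "yield return i * i"
        i += 1

print(list(first_n_squares(4)))  # [0, 1, 4, 9]
```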
1,705,077
2009-11-10T01:14:00.000
6
0
0
1
python,linux,process
1,705,099
8
false
0
0
Checking the list of running processes is accomplished (even by core utilities like "ps") by looking at the contents of the /proc directory. As such, the library you're interested in for querying running processes is the same as for working with any other files and directories (i.e. sys or os, depending on the flavor you're after; pay special attention to os.path, though, as it does most of what you want). To terminate or otherwise interact with processes, you send them signals, which is accomplished with os.kill. Finally, you start new processes using os.popen and friends.
1
14
0
Through my web interface I would like to start/stop certain processes and determine whether a started process is still running. My existing website is Python based and running on a Linux server, so do you know of a suitable library that supports this functionality? Thanks
Python library for Linux process management
1
0
0
10,686
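A sketch of the /proc approach on Linux: numeric directory names under /proc are PIDs, and os.kill(pid, 0) probes whether a process exists without actually delivering a signal:

```python
import os

def running_pids():
    """List PIDs by scanning /proc for numeric directory names (Linux only)."""
    return [int(name) for name in os.listdir("/proc") if name.isdigit()]

def is_alive(pid):
    """Signal 0 performs existence/permission checks without sending anything."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # process exists, but belongs to another user
    return True

pids = running_pids()
print(os.getpid() in pids, is_alive(os.getpid()))  # True True
```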
1,705,217
2009-11-10T02:02:00.000
1
0
0
0
python,google-app-engine,forms
1,705,231
4
false
1
0
Is there a more specific reason you don't want to use django.forms? I've quite successfully used bits and pieces of django all by themselves without trouble in several projects. As an aside, there are several patches that make django sortof work in app-engine, though I assume you've considered and discarded them.
1
3
0
django.forms is very nice, and does almost exactly what I want to do on my current project, but unfortunately, Google App Engine makes most of the rest of Django unusable, and so packing it along with the app seems kind of silly. I've also discovered FormAlchemy, which is an SQLAlchemy analog to Django forms, and I intend to explore that fully, but it's relationship with SQLAlchemy suggests that it may also give me some trouble. Is there any HTML Forms processing library for python that I haven't considered?
Python Form Processing alternatives
0.049958
0
0
1,382
1,705,824
2009-11-10T05:33:00.000
0
0
0
0
python,graph,geometry,cycle
1,705,913
11
false
0
0
Do you need to find 'all' of the 'triangles', or just 'some'/'any'? Or perhaps you just need to test whether a particular node is part of a triangle? The test is simple - given a node A, are there any two connected nodes B & C that are also directly connected. If you need to find all of the triangles - specifically, all groups of 3 nodes in which each node is joined to the other two - then you need to check every possible group in a very long running 'for each' loop. The only optimisation is ensuring that you don't check the same 'group' twice, e.g. if you have already tested that B & C aren't in a group with A, then don't check whether A & C are in a group with B.
2
10
1
I am working with complex networks. I want to find groups of nodes which form a cycle of 3 nodes (or triangles) in a given graph. As my graph contains about a million edges, using a simple iterative solution (multiple "for" loops) is not very efficient. I am using Python for my programming; if there are some inbuilt modules for handling these problems, please let me know. If someone knows any algorithm which can be used for finding triangles in graphs, kindly reply back.
Finding cycle of 3 nodes ( or triangles) in a graph
0
0
0
18,355
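The neighbour-intersection test described in this answer can be written with adjacency sets; ordering the vertices a < b < c reports each triangle exactly once, which is the de-duplication the answer mentions. The graph below is a toy example:

```python
def triangles(edges):
    """Yield each triangle (a, b, c) with a < b < c exactly once."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    for a in adj:
        for b in adj[a]:
            if b <= a:
                continue
            # Common neighbours of a and b close a triangle with edge (a, b).
            for c in adj[a] & adj[b]:
                if c > b:
                    yield (a, b, c)

edges = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (3, 5)]
print(sorted(triangles(edges)))  # [(1, 2, 3), (3, 4, 5)]
```

Per-edge work is a set intersection over the two endpoints' neighbour sets, which is far cheaper on sparse graphs than testing every triple of nodes.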
1,705,824
2009-11-10T05:33:00.000
1
0
0
0
python,graph,geometry,cycle
1,705,866
11
false
0
0
Even though it isn't efficient, you may want to implement a solution, so use the loops. Write a test so you can get an idea as to how long it takes. Then, as you try new approaches you can do two things: 1) Make certain that the answer remains the same. 2) See what the improvement is. Having a faster algorithm that misses something is probably going to be worse than having a slower one. Once you have the slow test, you can see if you can do this in parallel and see what the performance increase is. Then, you can see if you can mark all nodes that have less than 3 vertices. Ideally, you may want to shrink it down to just 100 or so first, so you can draw it, and see what is happening graphically. Sometimes your brain will see a pattern that isn't as obvious when looking at algorithms.
2
10
1
I am working with complex networks. I want to find groups of nodes which form a cycle of 3 nodes (or triangles) in a given graph. As my graph contains about a million edges, using a simple iterative solution (multiple "for" loops) is not very efficient. I am using Python for my programming; if there are some inbuilt modules for handling these problems, please let me know. If someone knows any algorithm which can be used for finding triangles in graphs, kindly reply back.
Finding cycle of 3 nodes ( or triangles) in a graph
0.01818
0
0
18,355
1,705,955
2009-11-10T06:16:00.000
3
0
0
0
python,django,packaging,pinax,external-dependencies
1,706,000
4
false
0
0
Could you handle this using the "==dev" version specifier? If the distribution's page on PyPI includes a link to a .tgz of the current dev version (such as both github and bitbucket provide automatically) and you append "#egg=project_name-dev" to the link, both easy_install and pip will use that .tgz if ==dev is requested. This doesn't allow you to pin to anything more specific than "most recent tip/head", but in a lot of cases that might be good enough?
3
6
0
One issue that comes up during Pinax development is dealing with development versions of external apps. I am trying to come up with a solution that doesn't involve bringing in the version control systems. The reason being, I'd rather not have to install all the possible version control systems on my system (or force that upon contributors) and deal with the problems that might arise during environment creation. Take this situation (knowing how Pinax works will be beneficial to understanding): We are beginning development on a new version of Pinax. The previous version has a pip requirements file with explicit versions set. A bug comes in for an external app that we'd like to get resolved. To get that bug fix into Pinax, the current process is to simply make a minor release of the app, assuming we have control of the app. For apps we don't control, we just deal with the release cycle of the app author, or force them to make releases ;-) I am not too fond of constantly making minor releases for bug fixes, as in some cases I'd like to be working on new features for apps as well. Of course, branching the older version is what we do, and then we do backports as we need. I'd love to hear some thoughts on this.
How might I handle development versions of Python packages without relying on SCM?
0.148885
0
0
347
1,705,955
2009-11-10T06:16:00.000
0
0
0
0
python,django,packaging,pinax,external-dependencies
1,706,060
4
false
0
0
EDIT: I am not sure I am reading your question correctly so the following may not answer your question directly. Something I've considered, but haven't tested, is using pip's freeze bundle feature. Perhaps using that and distributing the bundle with Pinax would work? My only concern would be how different OS's are handled. For example, I've never used pip on Windows, so I wouldn't know how a bundle would interact there. The full idea I hope to try is creating a paver script that controls management of the bundles, making it easy for users to upgrade to newer versions. This would require a bit of scaffolding though. One other option may be you keeping a mirror of the apps you don't control, in a consistent vcs, and then distributing your mirrored versions. This would take away the need for "everyone" to have many different programs installed. Other than that, it seems the only real solution is what you guys are doing, there isn't a hassle-free way that I've been able to find.
3
6
0
One issue that comes up during Pinax development is dealing with development versions of external apps. I am trying to come up with a solution that doesn't involve bringing in the version control systems. The reason being, I'd rather not have to install all the possible version control systems on my system (or force that upon contributors) and deal with the problems that might arise during environment creation. Take this situation (knowing how Pinax works will be beneficial to understanding): We are beginning development on a new version of Pinax. The previous version has a pip requirements file with explicit versions set. A bug comes in for an external app that we'd like to get resolved. To get that bug fix into Pinax, the current process is to simply make a minor release of the app, assuming we have control of the app. For apps we don't control, we just deal with the release cycle of the app author, or force them to make releases ;-) I am not too fond of constantly making minor releases for bug fixes, as in some cases I'd like to be working on new features for apps as well. Of course, branching the older version is what we do, and then we do backports as we need. I'd love to hear some thoughts on this.
How might I handle development versions of Python packages without relying on SCM?
0
0
0
347