Q_Id int64 337 49.3M | CreationDate stringlengths 23 23 | Users Score int64 -42 1.15k | Other int64 0 1 | Python Basics and Environment int64 0 1 | System Administration and DevOps int64 0 1 | Tags stringlengths 6 105 | A_Id int64 518 72.5M | AnswerCount int64 1 64 | is_accepted bool 2
classes | Web Development int64 0 1 | GUI and Desktop Applications int64 0 1 | Answer stringlengths 6 11.6k | Available Count int64 1 31 | Q_Score int64 0 6.79k | Data Science and Machine Learning int64 0 1 | Question stringlengths 15 29k | Title stringlengths 11 150 | Score float64 -1 1.2 | Database and SQL int64 0 1 | Networking and APIs int64 0 1 | ViewCount int64 8 6.81M |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3,397,850 | 2010-08-03T15:05:00.000 | 0 | 0 | 0 | 0 | python,testing,black-box | 3,397,887 | 4 | false | 1 | 0 | What exactly is "Testing links"?
If it means checking that they lead to non-4xx URIs, I'm afraid you must visit them.
As for the existence of given links (like "Contact"), you may look for them using XPath. | 2 | 2 | 0 | I'm trying to verify that all my page links are valid, and also that all the pages have a specified link, like contact. I use Python unit testing and Selenium IDE to record actions that need to be tested.
So my question is: can I verify the links in a loop, or do I need to try every link on my own?
I tried to do this with __iter__ but it didn't get anywhere close; that may be because I'm poor at OOP, but I still think there must be another way of testing links than clicking them and recording them one by one. | how can I verify all links on a page as a black-box tester | 0 | 0 | 1 | 412 |
3,397,850 | 2010-08-03T15:05:00.000 | 0 | 0 | 0 | 0 | python,testing,black-box | 3,399,490 | 4 | false | 1 | 0 | You could, as yet another alternative, use BeautifulSoup to parse the links on your page and try to retrieve them via urllib2. | 2 | 2 | 0 | I'm trying to verify that all my page links are valid, and also that all the pages have a specified link, like contact. I use Python unit testing and Selenium IDE to record actions that need to be tested.
So my question is: can I verify the links in a loop, or do I need to try every link on my own?
I tried to do this with __iter__ but it didn't get anywhere close; that may be because I'm poor at OOP, but I still think there must be another way of testing links than clicking them and recording them one by one. | how can I verify all links on a page as a black-box tester | 0 | 0 | 1 | 412 |
3,399,185 | 2010-08-03T17:48:00.000 | 0 | 0 | 0 | 0 | python,twisted.web | 3,400,337 | 1 | false | 0 | 0 | Well, it doesn't look like you've missed anything. client.getPage doesn't directly support setting the bind address. I'm just guessing here, but I would suspect it's one of those cases where it just never occurred to the original developer that someone would want to specify the bind address.
Even though there isn't built-in support for doing this, it should be pretty easy to do. The way you specify binding addresses for outgoing connections in twisted is by passing the bind address to the reactor.connectXXX() functions. Fortunately, the code for getPage() is really simple. I'd suggest three things:
Copy the code for getPage() and its associated helper function into your project
Modify them to pass through the bind address
Create a patch to fix this oversight and send it to the Twisted folks :) | 1 | 0 | 0 | For the past 10 hours I've been trying to accomplish this:
Translation of my blocking httpclient using standard lib...
Into a twisted nonblocking/async version of it.
Ten hours later, after scouring their APIs, it appears no one has EVER needed to be able to do that. Nice framework, but it seems a bit overwhelming just to bind a socket to a different interface.
Can any python gurus shed some light on this and/or send me in the right direction? or any docs that I could have missed? THANKS! | Overloading twisted.client.getPage to set the client socket's bindaddress ! | 0 | 0 | 1 | 314 |
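For reference, the bind-before-connect idea underneath all this looks like the following at the plain-socket level (a stdlib sketch, not Twisted code; in Twisted the same knob is the bindAddress argument of the reactor.connectXXX() functions the answer mentions):

```python
import socket

# Stand-in "server" so the client has something to connect to.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

# The key step: bind the client socket to a chosen local interface
# *before* connecting.  127.0.0.1 is just an illustrative choice.
client = socket.socket()
client.bind(("127.0.0.1", 0))
client.connect(("127.0.0.1", port))
local_ip = client.getsockname()[0]

client.close()
srv.close()
# local_ip is '127.0.0.1', the interface we bound to
```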
3,399,367 | 2010-08-03T18:06:00.000 | 1 | 1 | 1 | 0 | python,ruby-on-rails | 3,399,696 | 6 | false | 1 | 0 | Even if Ruby/Python interpreters were perfect, and could utilize all avail CPU with single process, you would still reach maximal capability of single server sooner or later and have to scale across several machines, going back to running several instances of your app. | 5 | 1 | 0 | Is it just me or is having to run multiple instances of a web server to scale a hack?
Am I wrong in this?
Clarification
I am referring to how I read people run multiple instances of a web service on a single server. I am not talking about a cluster of servers. | having to run multiple instances of a web service for ruby/python seems like a hack to me | 0.033321 | 0 | 0 | 584 |
3,399,367 | 2010-08-03T18:06:00.000 | 1 | 1 | 1 | 0 | python,ruby-on-rails | 3,399,437 | 6 | false | 1 | 0 | With no details, it is very difficult to see what you are getting at. That being said, it is quite possible that you are simply not using the right approach for your problem.
Sometimes multiple separate instances are better. Sometimes, your Python services are actually better deployed behind a single Apache instance (using mod_wsgi), which may elect to use more than a single process. I don't know enough about Ruby to offer an opinion there.
In short, if you want to make your service scalable then the way to do so depends heavily on additional details. Is it scaling up or scaling out? What is the operating system and available or possibly installable server software? Is the service itself easily parallelized and how much is it database dependent? How is the database deployed? | 5 | 1 | 0 | Is it just me or is having to run multiple instances of a web server to scale a hack?
Am I wrong in this?
Clarification
I am referring to how I read people run multiple instances of a web service on a single server. I am not talking about a cluster of servers. | having to run multiple instances of a web service for ruby/python seems like a hack to me | 0.033321 | 0 | 0 | 584 |
3,399,367 | 2010-08-03T18:06:00.000 | 0 | 1 | 1 | 0 | python,ruby-on-rails | 3,402,337 | 6 | false | 1 | 0 | Your assumption that Tomcat's and IIS's single-process-per-server model is superior is flawed. The choice between a multi-threaded server and a multi-process server depends on a lot of variables.
One main thing is the underlying operating system. Unix systems have always had great support for multi-processing because of the copy-on-write nature of the fork system call. This makes multi-processes a really attractive option because web-serving is usually very shared-nothing and you don't have to worry about locking. Windows on the other hand had much heavier processes and lighter threads so programs like IIS would gravitate to a multi-threading model.
As for the question of whether it's a hack to run multiple servers, it really depends on your perspective. If you look at Apache, it comes with a variety of pluggable engines to choose from. The MPM-prefork one is the default because it allows the programmer to easily use non-thread-safe C/Perl/database libraries without having to throw locks and semaphores all over the place. To some that might be a hack to work around poorly implemented libraries. To me it's a brilliant way of leaving it to the OS to handle the problems and letting me get back to work.
Also, a multi-process model comes with a few features that would be very difficult to implement in a multi-threaded server. Because they are just processes, zero-downtime rolling updates are trivial. You can do it with a bash script.
It also has its shortcomings. In a single-process model, setting up a singleton that holds some global state is trivial, while in a multi-process model you have to serialize that state to a database or Redis server. (Of course, if your single-process server outgrows a single server you'll have to do that anyway.)
Is it a hack? Yes and no. Both original implementations (MRI and CPython) have Global Interpreter Locks that will prevent a multi-core server from operating at its full potential. On the other hand, multi-process has its advantages (especially on the Unix side of the fence).
There's also nothing inherent in the languages themselves that makes them require a GIL, so you can run your application with Jython, JRuby, IronPython or IronRuby if you really want to share state inside a single process. | 5 | 1 | 0 | Is it just me or is having to run multiple instances of a web server to scale a hack?
Am I wrong in this?
Clarification
I am referring to how I read people run multiple instances of a web service on a single server. I am not talking about a cluster of servers. | having to run multiple instances of a web service for ruby/python seems like a hack to me | 0 | 0 | 0 | 584 |
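The copy-on-write fork point made above can be seen from Python directly on a Unix system (a POSIX-only sketch; real prefork servers fork long-lived workers rather than one process per request):

```python
import os

# fork() gives each "worker" its own process cheaply, because the OS
# shares memory pages copy-on-write until one side writes to them.
pid = os.fork()
if pid == 0:
    # Child: pretend to handle one request, then exit with a status code.
    os._exit(7)
else:
    # Parent: wait for the worker and read back its exit status.
    _, status = os.waitpid(pid, 0)
    worker_result = os.WEXITSTATUS(status)
# worker_result == 7
```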
3,399,367 | 2010-08-03T18:06:00.000 | 4 | 1 | 1 | 0 | python,ruby-on-rails | 3,399,409 | 6 | true | 1 | 0 | Not really, people were running multiple frontends across a cluster of servers before multicore cpus became widespread
So there has been all the infrastructure for supporting sessions properly across multiple frontends for quite some time before it became really advantageous to run a bunch of threads on one machine.
Infact using asynchronous style frontends gives better performance on the same hardware than a multithreaded approach, so I would say that not running multiple instances in favour of a multithreaded monster is a hack | 5 | 1 | 0 | Is it just me or is having to run multiple instances of a web server to scale a hack?
Am I wrong in this?
Clarification
I am referring to how I read people run multiple instances of a web service on a single server. I am not talking about a cluster of servers. | having to run multiple instances of a web service for ruby/python seems like a hack to me | 1.2 | 0 | 0 | 584 |
3,399,367 | 2010-08-03T18:06:00.000 | 4 | 1 | 1 | 0 | python,ruby-on-rails | 3,399,416 | 6 | false | 1 | 0 | Since we are now moving towards more cores, rather than faster processors - in order to scale more and more, you will need to be running more instances.
So yes, I reckon you are wrong.
This does not by any means condone brain-dead programming with the excuse that you can just scale it horizontally; that just seems foolish.
Am I wrong in this?
Clarification
I am referring to how I read people run multiple instances of a web service on a single server. I am not talking about a cluster of servers. | having to run multiple instances of a web service for ruby/python seems like a hack to me | 0.132549 | 0 | 0 | 584 |
3,400,381 | 2010-08-03T20:08:00.000 | 5 | 0 | 0 | 1 | python,linux,shell,command-line | 3,400,402 | 6 | false | 0 | 0 | You want a shebang. #!/path/to/python. Put that on the first line of your python script. The #! is actually a magic number that tells the operating system to interpret the file as a script for the program named. You can supply /usr/bin/python or, to be more portable, /usr/bin/env python which calls the /usr/bin/env program and tells it you want the system's installed Python interpreter.
You'll also have to put your script in your path, unless you're okay with typing ./SQLsap args. | 2 | 12 | 0 | Question: In command line, how do I call a python script without having to type python in front of the script's name? Is this even possible?
Info:
I wrote a handy script for accessing sqlite databases from command line, but I kind of don't like having to type "python SQLsap args" and would rather just type "SQLsap args". I don't know if this is even possible, but it would be good to know if it is. For more than just this program. | Calling a python script from command line without typing "python" first | 0.16514 | 0 | 0 | 14,699 |
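The whole recipe fits in a few shell commands (a sketch; the script path and python3 interpreter name here are illustrative, not the OP's actual SQLsap script):

```shell
# Create a small script with a shebang, make it executable, and run it
# directly -- no "python" prefix needed.
cat > /tmp/sqlsap_demo <<'EOF'
#!/usr/bin/env python3
print("hello from SQLsap")
EOF
chmod +x /tmp/sqlsap_demo
/tmp/sqlsap_demo
```

With the script also placed somewhere on PATH, `sqlsap_demo args` would work from any directory.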
3,400,381 | 2010-08-03T20:08:00.000 | 4 | 0 | 0 | 1 | python,linux,shell,command-line | 3,400,404 | 6 | false | 0 | 0 | Assuming this is on a unix system, you can add a "shebang" on the top of the file like this:
#!/usr/bin/env python
And then set the executable flag like this:
chmod +x SQLsap | 2 | 12 | 0 | Question: In command line, how do I call a python script without having to type python in front of the script's name? Is this even possible?
Info:
I wrote a handy script for accessing sqlite databases from command line, but I kind of don't like having to type "python SQLsap args" and would rather just type "SQLsap args". I don't know if this is even possible, but it would be good to know if it is. For more than just this program. | Calling a python script from command line without typing "python" first | 0.132549 | 0 | 0 | 14,699 |
3,400,622 | 2010-08-03T20:44:00.000 | 2 | 0 | 0 | 0 | python,windows,keyboard-shortcuts,tkinter | 3,401,235 | 1 | true | 0 | 1 | Put return 'break' at the end of your event handling function. This tells Tkinter not to propagate the event to default handlers. | 1 | 0 | 0 | I'd like to implement my own key command. However when I do, it does both what I tell it and the default command. How do I disable the default command, so that my command is the only one that runs?
This is on Windows 7, BTW. | How do I disable default Tkinter key commands? | 1.2 | 0 | 0 | 226 |
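A minimal sketch of such a handler (the <Control-a> binding is an illustrative choice; wiring it up needs a live Tk window, so that part is shown as a comment):

```python
def on_ctrl_a(event):
    """Custom handler for a key binding on a Tkinter widget."""
    # ... do the custom work here ...
    return "break"   # stops Tkinter from also running the default binding

# Wiring it up on a real widget:
#   widget.bind("<Control-a>", on_ctrl_a)
```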
3,400,847 | 2010-08-03T21:23:00.000 | 0 | 1 | 1 | 0 | python | 3,401,173 | 6 | false | 0 | 0 | Seems like what you want is to organize various dependencies between components. You will be better off expressing these dependencies in an object-oriented manner. Rather than doing it by importing modules and global states, encode these states in objects and pass those around.
Read up on objects and classes and how to write them in Python; I'd probably start there. | 4 | 3 | 0 | I'm a mechanical engineering student, and I'm building a physical simulation using PyODE.
Instead of running everything from one file, I wanted to organize stuff in modules, so I had:
main.py
callback.py
helper.py
I ran into problems when I realized that helper.py needed to reference variables from main, but main was the one importing helper!
So my solution was to create a fourth file, which houses variables and imports only external modules (such as time and random).
So I now have:
main.py
callback.py
helper.py
parameters.py
and all scripts do import parameters and use parameters.foo or parameters.bar.
Is this an acceptable practice or is this a sure-fire way to make Python programmers puke? :)
Please let me know if this makes sense, or if there is a more sensible way of doing it!
Thanks,
-Leav | A python module for global parameters - is this good practice? | 0 | 0 | 0 | 260 |
3,400,847 | 2010-08-03T21:23:00.000 | 1 | 1 | 1 | 0 | python | 3,401,021 | 6 | true | 0 | 0 | I try to design my code so that it looks much like a pyramid. That, I have found, leads to cleaner code. | 4 | 3 | 0 | I'm a mechanical engineering student, and I'm building a physical simulation using PyODE.
Instead of running everything from one file, I wanted to organize stuff in modules, so I had:
main.py
callback.py
helper.py
I ran into problems when I realized that helper.py needed to reference variables from main, but main was the one importing helper!
So my solution was to create a fourth file, which houses variables and imports only external modules (such as time and random).
So I now have:
main.py
callback.py
helper.py
parameters.py
and all scripts do import parameters and use parameters.foo or parameters.bar.
Is this an acceptable practice or is this a sure-fire way to make Python programmers puke? :)
Please let me know if this makes sense, or if there is a more sensible way of doing it!
Thanks,
-Leav | A python module for global parameters - is this good practice? | 1.2 | 0 | 0 | 260 |
3,400,847 | 2010-08-03T21:23:00.000 | 3 | 1 | 1 | 0 | python | 3,400,902 | 6 | false | 0 | 0 | Separate 'global' files for constants, configurations, and includes needed everywhere are fine. But when they contain actual mutable variables then they're not such a good idea. Consider having the files communicate with function return values and arguments instead. This promotes encapsulation and will keep your code from becoming a tangled mess.
Clear communication between files makes them easier to understand and makes what's going on more obvious. When you're using variables and nobody knows where they came from, things can get pretty annoying. :) | 4 | 3 | 0 | I'm a mechanical engineering student, and I'm building a physical simulation using PyODE.
Instead of running everything from one file, I wanted to organize stuff in modules, so I had:
main.py
callback.py
helper.py
I ran into problems when I realized that helper.py needed to reference variables from main, but main was the one importing helper!
So my solution was to create a fourth file, which houses variables and imports only external modules (such as time and random).
So I now have:
main.py
callback.py
helper.py
parameters.py
and all scripts do import parameters and use parameters.foo or parameters.bar.
Is this an acceptable practice or is this a sure-fire way to make Python programmers puke? :)
Please let me know if this makes sense, or if there is a more sensible way of doing it!
Thanks,
-Leav | A python module for global parameters - is this good practice? | 0.099668 | 0 | 0 | 260 |
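To make the pass-arguments alternative from the accepted advice concrete, here is a minimal sketch (the function names and physical constants are invented; this shows the shape, not the OP's actual simulation):

```python
# Instead of helper.py reading main's globals, everything the helper
# needs arrives as arguments, and results come back as return values.

def compute_step(dt, gravity):
    """helper-style function with no hidden module-level dependencies."""
    return gravity * dt

def run_simulation(steps, dt=0.01, gravity=9.81):
    """main-style driver that passes parameters down explicitly."""
    velocity = 0.0
    for _ in range(steps):
        velocity += compute_step(dt, gravity)
    return velocity

v = run_simulation(100)
# v is approximately 9.81
```

Because compute_step has no hidden state, it can be imported and tested from anywhere without the circular-import problem described in the question.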
3,400,847 | 2010-08-03T21:23:00.000 | 2 | 1 | 1 | 0 | python | 3,400,871 | 6 | false | 0 | 0 | Uhm, i think it does not make sence if this happens: "realized that helper.py needed to reference variables from main", your helper functions should be independent from your "main code", otherwise i think its ugly and more like a design failure. | 4 | 3 | 0 | I'm a mechanical engineering student, and I'm building a physical simulation using PyODE.
Instead of running everything from one file, I wanted to organize stuff in modules, so I had:
main.py
callback.py
helper.py
I ran into problems when I realized that helper.py needed to reference variables from main, but main was the one importing helper!
So my solution was to create a fourth file, which houses variables and imports only external modules (such as time and random).
So I now have:
main.py
callback.py
helper.py
parameters.py
and all scripts do import parameters and use parameters.foo or parameters.bar.
Is this an acceptable practice or is this a sure-fire way to make Python programmers puke? :)
Please let me know if this makes sense, or if there is a more sensible way of doing it!
Thanks,
-Leav | A python module for global parameters - is this good practice? | 0.066568 | 0 | 0 | 260 |
3,402,168 | 2010-08-04T02:28:00.000 | 152 | 0 | 1 | 0 | python,windows,pythonpath | 3,402,193 | 22 | true | 0 | 0 | You need to add your new directory to the environment variable PYTHONPATH, separated by a colon from previous contents thereof. In any form of Unix, you can do that in a startup script appropriate to whatever shell you're using (.profile or whatever, depending on your favorite shell) with a command which, again, depends on the shell in question; in Windows, you can do it through the system GUI for the purpose.
superuser.com may be a better place to ask further, i.e. for more details if you need specifics about how to enrich an environment variable in your chosen platform and shell, since it's not really a programming question per se. | 5 | 399 | 0 | Whenever I use sys.path.append, the new directory will be added. However, once I close python, the list will revert to the previous (default?) values. How do I permanently add a directory to PYTHONPATH? | Permanently add a directory to PYTHONPATH? | 1.2 | 0 | 0 | 899,731 |
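On Unix, the whole thing amounts to one export line (a sketch; the directory name is made up):

```shell
# Append a directory to PYTHONPATH for the current shell session
# (entries are colon-separated on Unix).
export PYTHONPATH="${PYTHONPATH}:/tmp/my_modules"

# Putting that same export line in your shell's startup file
# (~/.profile, ~/.bashrc, ...) is what makes it permanent.
echo "$PYTHONPATH"
```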
3,402,168 | 2010-08-04T02:28:00.000 | 7 | 0 | 1 | 0 | python,windows,pythonpath | 7,919,405 | 22 | false | 0 | 0 | Just to add on awesomo's answer, you can also add that line into your ~/.bash_profile or ~/.profile | 5 | 399 | 0 | Whenever I use sys.path.append, the new directory will be added. However, once I close python, the list will revert to the previous (default?) values. How do I permanently add a directory to PYTHONPATH? | Permanently add a directory to PYTHONPATH? | 1 | 0 | 0 | 899,731 |
3,402,168 | 2010-08-04T02:28:00.000 | 30 | 0 | 1 | 0 | python,windows,pythonpath | 12,429,896 | 22 | false | 0 | 0 | In case anyone is still confused - if you are on a Mac, do the following:
Open up Terminal
Type open .bash_profile
In the text file that pops up, add this line at the end:
export PYTHONPATH=$PYTHONPATH:foo/bar
Save the file, restart the Terminal, and you're done | 5 | 399 | 0 | Whenever I use sys.path.append, the new directory will be added. However, once I close python, the list will revert to the previous (default?) values. How do I permanently add a directory to PYTHONPATH? | Permanently add a directory to PYTHONPATH? | 1 | 0 | 0 | 899,731 |
3,402,168 | 2010-08-04T02:28:00.000 | 55 | 0 | 1 | 0 | python,windows,pythonpath | 30,728,643 | 22 | false | 0 | 0 | This works on Windows
On Windows, with Python 2.7, go to the Python setup folder.
Open Lib/site-packages.
Add an example.pth empty file to this folder.
Add the required path to the file, one per each line.
Then you'll be able to see all modules within those paths from your scripts. | 5 | 399 | 0 | Whenever I use sys.path.append, the new directory will be added. However, once I close python, the list will revert to the previous (default?) values. How do I permanently add a directory to PYTHONPATH? | Permanently add a directory to PYTHONPATH? | 1 | 0 | 0 | 899,731 |
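The same .pth mechanism can be exercised directly from Python with site.addsitedir, which processes .pth files in a given directory the way site-packages is processed at startup (the paths below are temporary stand-ins):

```python
import os
import site
import sys
import tempfile

# Build a throwaway "site-packages"-like directory containing example.pth.
d = tempfile.mkdtemp()
extra = os.path.join(d, "mylib")
os.mkdir(extra)
with open(os.path.join(d, "example.pth"), "w") as f:
    f.write(extra + "\n")          # a .pth file lists one directory per line

site.addsitedir(d)                 # reads example.pth and extends sys.path
# extra is now on sys.path, so modules in it become importable
```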
3,402,168 | 2010-08-04T02:28:00.000 | 2 | 0 | 1 | 0 | python,windows,pythonpath | 45,004,593 | 22 | false | 0 | 0 | I added permanently in Windows Vista, Python 3.5
System > Control Panel > Advanced system settings > Advanced (tap) Environment Variables > System variables > (if you don't see PYTHONPATH in Variable column) (click) New > Variable name: PYTHONPATH > Variable value:
Please write the directory in the Variable value. This fills in the details of Blue Peppers' answer.
3,402,271 | 2010-08-04T02:56:00.000 | 0 | 0 | 1 | 0 | python,curl,urllib2 | 3,402,359 | 1 | false | 0 | 0 | At sizes of 500MB+ one has to worry about data integrity, and HTTP is not designed with data integrity in mind.
I'd rather use python bindings for rsync (if they exist) or even bittorrent, which was initially implemented in python. Both rsync and bittorrent address the data integrity issue. | 1 | 1 | 0 | Which library/module is the best to use for downloading large 500mb+ files in terms of speed, memory, cpu? I was also contemplating using pycurl. | best way to download large files with python | 0 | 0 | 1 | 2,537 |
3,402,574 | 2010-08-04T04:31:00.000 | 0 | 0 | 1 | 0 | python,puzzle,iterator | 3,402,637 | 3 | false | 0 | 0 | I assume you are trying to find out what is the longest word that can be made from your 10 arbitrary letters.
You can keep your 10 arbitrary letters in a dict along with the frequency they occur.
e.g., your 4 (using 4 instead of 10 for simplicity) arbitrary letters are: e, w, l, l. This would be in a dict as:
{'e':1, 'w':1, 'l':2}
Then for each word in the text file, see if all of the letters for that word can be found in your dict of arbitrary letters. If so, then that is one of your candidate words.
So:
we
wall
well
all of the letters in well would be found in your dict of arbitrary letters so save it and its length for comparison against other words. | 1 | 0 | 0 | I have 10 arbitrary letters and need to check the max length match from words file
I started to learn RE just some time ago and can't seem to find a suitable pattern.
The first idea that came to me was using a set, [10 chars], but it also repeats included chars and I don't know how to avoid that.
I started to learn Python recently, before RE, and maybe RE is not needed and this can be solved without it.
Using a "for this in that:" iterator seems inappropriate, but maybe itertools (with which I'm not familiar) can do it easily.
I guess the solution is known even to novice programmers/scripters, but not to me.
Thanks | Find max length word from arbitrary letters | 0 | 0 | 0 | 1,499 |
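A frequency-dict version of the answer's idea, sketched with collections.Counter doing the per-letter counting (the word list here is the answer's own three-word example):

```python
from collections import Counter

def can_build(word, letters):
    """True if `word` can be spelled from `letters` without reusing any letter."""
    have = Counter(letters)
    return all(have[c] >= n for c, n in Counter(word).items())

def longest_match(words, letters):
    """Longest word in `words` buildable from `letters`, or None."""
    candidates = [w for w in words if can_build(w, letters)]
    return max(candidates, key=len) if candidates else None

# With letters e, w, l, l: "wall" needs an 'a' we don't have,
# so "well" is the longest buildable word.
print(longest_match(["we", "wall", "well"], "ewll"))   # -> well
```

For a real words file, the same longest_match call would just take the file's lines as `words`.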
3,403,168 | 2010-08-04T06:40:00.000 | 0 | 0 | 0 | 0 | python,escaping,html-entities | 3,405,525 | 4 | false | 1 | 0 | You shouldn't use an XML parser to parse data that isn't XML. Find an HTML parser instead, you'll be happier in the long run. The standard library has a few (HTMLParser and htmllib), and BeautifulSoup is a well-loved third-party package. | 1 | 1 | 0 | I'm scraping a html page, then using xml.dom.minidom.parseString() to create a dom object.
However, the HTML page has a '&'. I can use cgi.escape to convert this into &amp;, but it also converts all my HTML <> tags into &lt;&gt;, which makes parseString() unhappy.
How do I go about this? I would rather not just hack it and straight-replace the "&"s.
Thanks | need to selectively escape html entities (&) | 0 | 0 | 1 | 576 |
3,404,055 | 2010-08-04T09:05:00.000 | 0 | 0 | 0 | 0 | python,django | 3,404,200 | 2 | false | 1 | 0 | setattr(obj, fieldname, fieldvalue)
(see also getattr to retrieve at runtime) | 1 | 0 | 0 | Can anyone help me?
I have a list of fields called 'allowed_fields' and an object called 'individual'.
allowed_fields is a subset of individual's fields. Now I want to run a loop like this:
for field in allowed_fields:
obj.field = individual.field
obj has the same fields as individual. Do you have a solution to my problem? I will be thankful to you. | djangoproject access fields of object dynamically | 0 | 0 | 0 | 53 |
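A sketch of that getattr/setattr loop with plain classes standing in for the Django models (the field names and values are invented):

```python
class Individual:
    def __init__(self):
        self.name = "Ada"
        self.age = 36
        self.email = "ada@example.com"

class Obj:
    pass

allowed_fields = ["name", "age"]       # the permitted subset of fields
individual = Individual()
obj = Obj()
for field in allowed_fields:
    # getattr/setattr resolve the attribute name at runtime,
    # which is what obj.field / individual.field cannot do.
    setattr(obj, field, getattr(individual, field))

# obj.name == 'Ada' and obj.age == 36; 'email' was not copied
```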
3,404,556 | 2010-08-04T10:18:00.000 | 0 | 0 | 0 | 0 | python,performance,sqlite | 3,536,835 | 4 | false | 0 | 0 | You appear to be comparing apples with oranges.
A python list is only useful if your data fit into the address-space of the process. Once the data get big, this won't work any more.
Moreover, a python list is not indexed - for that you should use a dictionary.
Finally, a python list is non-persistent - it is forgotten when the process quits.
How can you possibly compare these? | 2 | 2 | 0 | Lets say I have a database table which consists of three columns: id, field1 and field2. This table may have anywhere between 100 and 100,000 rows in it. I have a python script that should insert 10-1,000 new rows into this table. However, if the new field1 already exists in the table, it should do an UPDATE, not an INSERT.
Which of the following approaches is more efficient?
Do a SELECT field1 FROM table (field1 is unique) and store that in a list. Then, for each new row, use list.count() to determine whether to INSERT or UPDATE
For each row, run two queries. Firstly, SELECT count(*) FROM table WHERE field1="foo" then either the INSERT or UPDATE.
In other words, is it more efficient to perform n+1 queries and search a list, or 2n queries and get sqlite to search? | Python performance: search large list vs sqlite | 0 | 1 | 0 | 2,505 |
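Neither option is strictly necessary in SQLite: with field1 declared UNIQUE, INSERT OR REPLACE folds the existence check into the statement itself (a sketch the question doesn't mention; note that OR REPLACE deletes and re-inserts the conflicting row, so its id changes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE t (id INTEGER PRIMARY KEY, field1 TEXT UNIQUE, field2 TEXT)"
)
conn.execute("INSERT INTO t (field1, field2) VALUES ('foo', 'old')")

rows = [("foo", "new"), ("bar", "fresh")]     # one update, one insert
conn.executemany(
    "INSERT OR REPLACE INTO t (field1, field2) VALUES (?, ?)", rows
)

result = dict(conn.execute("SELECT field1, field2 FROM t"))
# result == {'foo': 'new', 'bar': 'fresh'}
```

This lets the database's own unique index do the search, which is effectively option 2 without the separate SELECT per row.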
3,404,556 | 2010-08-04T10:18:00.000 | 0 | 0 | 0 | 0 | python,performance,sqlite | 3,404,589 | 4 | false | 0 | 0 | I imagine using a python dictionary would allow for much faster searching than using a python list. (Just set the values to 0, you won't need them, and hopefully a '0' stores compactly.)
As for the larger question, I'm curious too. :) | 2 | 2 | 0 | Lets say I have a database table which consists of three columns: id, field1 and field2. This table may have anywhere between 100 and 100,000 rows in it. I have a python script that should insert 10-1,000 new rows into this table. However, if the new field1 already exists in the table, it should do an UPDATE, not an INSERT.
Which of the following approaches is more efficient?
Do a SELECT field1 FROM table (field1 is unique) and store that in a list. Then, for each new row, use list.count() to determine whether to INSERT or UPDATE
For each row, run two queries. Firstly, SELECT count(*) FROM table WHERE field1="foo" then either the INSERT or UPDATE.
In other words, is it more efficient to perform n+1 queries and search a list, or 2n queries and get sqlite to search? | Python performance: search large list vs sqlite | 0 | 1 | 0 | 2,505 |
3,404,759 | 2010-08-04T10:48:00.000 | 1 | 0 | 0 | 0 | python,django | 3,404,772 | 3 | false | 1 | 0 | You can look in the admin to see how many usernames are there, assuming everyone who likes it creates one. Or you can look at your server logs and count the unique IPs. | 2 | 0 | 0 | I have developed a small django web application. It still runs in the django development web server.
It has been decided that if more than 'n' number of users like the application, it will be approved.
I want to find out all the users who view my application.
How can find the user who views my application?
Since I was the user who ran the application, all python ways of getting the username returns my name only.
Please help. | Finding the user who uses my django web application | 0.066568 | 0 | 0 | 206 |
3,404,759 | 2010-08-04T10:48:00.000 | 1 | 0 | 0 | 0 | python,django | 3,405,077 | 3 | false | 1 | 0 | Step 1. Add a model -- connected users. Include an FK to username and a datetime stamp.
Step 2. Write a function to log each user's activity.
Step 3. Write your own version of login that will call the Django built-in login and also call your function to log each user's activity.
Step 4. Write a small application -- outside Django -- that uses the ORM to query the connected users table and write summaries and counts and what-not.
You have a database. Use it. | 2 | 0 | 0 | I have developed a small django web application. It still runs in the django development web server.
It has been decided that if more than 'n' number of users like the application, it will be approved.
I want to find out all the users who view my application.
How can find the user who views my application?
Since I was the user who ran the application, all python ways of getting the username returns my name only.
Please help. | Finding the user who uses my django web application | 0.066568 | 0 | 0 | 206 |
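Stripped of the Django specifics, steps 2-3 amount to a wrapper like this (a framework-free sketch; in the real thing log_activity would write to the model from step 1, and real_login would be Django's built-in login):

```python
import datetime

activity_log = []                      # stand-in for the connected-users table

def log_activity(username):
    """Step 2: record that `username` used the app just now."""
    activity_log.append((username, datetime.datetime.now()))

def my_login(username, password, real_login):
    """Step 3: wrap the built-in login and log each successful one."""
    ok = real_login(username, password)
    if ok:
        log_activity(username)
    return ok

# Illustrative use with a dummy authenticator that accepts anyone:
my_login("alice", "secret", lambda u, p: True)
distinct_users = {u for u, _ in activity_log}
# distinct_users == {'alice'}
```

Step 4's summary script would then just be a query over activity_log's real-database equivalent.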
3,406,800 | 2010-08-04T14:58:00.000 | 1 | 0 | 0 | 0 | python,ajax,django | 3,407,020 | 1 | false | 1 | 0 | None.
You'll have to code your own wrapper utility, using one of httplib / urllib / urllib2 libs to connect to the other server.
Most likely you will have to extract all the relevant info from the HttpRequest object and use that to manually construct your own request in said util function.
Regarding receiving the response from that other server, it will depend a little on whether you need that response asynchronously or quasi-synchronously.
Receive an HttpRequest on api/some/url/or/other
Passes this through to another server at some/url/or/other (rewrite the URL, basically)
Adding a cookie based on session data in Django
Using the same method, data, params, et al, that were in the original request
Returns verbatim the response to the API call
Must store the cookies that came back from the call in the session
Must include the Django session cookie in the returned HttpResponse
What tools already exist in Django to do this? | How to make a Django passthrough view? | 0.197375 | 0 | 0 | 523 |
3,408,610 | 2010-08-04T18:31:00.000 | 1 | 1 | 1 | 0 | .net,visual-studio-2010,ironpython,pysvn | 3,408,897 | 3 | false | 0 | 0 | I believe you can import pysvn in IronPython, but you have to add python site-packages directory to IRONPYTHONPATH. | 1 | 2 | 0 | Python has a subversion bindings called 'pysvn' that can be used to manipulate subversion repository. Does something similar exists for IronPython?
My test platform in Windows 7 64-bit with Visual Studio 2010. | is it possible to access subversion from ironpython? | 0.066568 | 0 | 0 | 248 |
3,408,891 | 2010-08-04T19:01:00.000 | 1 | 0 | 0 | 0 | python,browser | 3,408,987 | 1 | true | 0 | 0 | In firefox, if you go to about:config and set browser.link.open_newwindow to "1", that will cause a clicked link that would open in a new window or tab to stay in the current tab. I'm not sure if this applies to calls from 3rd-party apps, but it might be worth a try.
Of course, this will now apply to everything you do in firefox (though ctrl + click will still open links in a new tab) | 1 | 0 | 0 | I am trying to create a python script that opens a single page at a time, however python + mozilla make it so everytime I do this, it opens up a new tab. I want it to keep just a single window open so that it can loop forever without crashing due to too many windows or tabs. It will be going to about 6-7 websites and the current code imports time and webbrowser.
webbrowser.open('url')
time.sleep(100)
webbrowser.open('next url')
//but here it will open a new tab, when I just want it to change the page.
Any information would be greatly appreciated,
Thank you. | How do I edit the url in python and open a new page without having a new window or tab opened? | 1.2 | 0 | 1 | 2,001 |
3,409,072 | 2010-08-04T19:25:00.000 | 2 | 0 | 1 | 0 | subprocess,ipython,execution | 3,409,938 | 1 | true | 0 | 0 | Apparently, such wrapper can be called via ip.IP.getoutput("command"). | 1 | 3 | 0 | I'd like to run a new command from IPython configuration and capture its output. Basically, I'd like to access the equivalent of !command via normal functions. I know I can just use subprocess, but since IPython already provides this functionality, I guess there must be a properly made wrapper included somewhere in the API. | Running external commands in IPython | 1.2 | 0 | 0 | 466 |
3,409,226 | 2010-08-04T19:41:00.000 | 2 | 1 | 0 | 1 | python,eclipse,intellisense,pydev | 3,409,335 | 3 | false | 0 | 0 | I'm using eclipse 3.6 and pydev with python 2.6 and it's the best one I've tested up to now. I didn't try 3.5 so not sure if it's the same as yours but I think it autocompletes well compared to others I tried but I didn't try any of the paid ones. | 3 | 0 | 0 | Does anyone know how to get an intellisense like functionality (better than default) in eclipse for python development? I am using Eclipse 3.5 with aptana and pydev and the interpreter is python 2.5.2 | How do you get Intellisense for Python in Eclipse/Aptana/Pydev? | 0.132549 | 0 | 0 | 3,548 |
3,409,226 | 2010-08-04T19:41:00.000 | 3 | 1 | 0 | 1 | python,eclipse,intellisense,pydev | 3,409,439 | 3 | false | 0 | 0 | You are probably never going to get something as good as intellisense for python. Due to the dynamic nature of python, it is often impossible to be able to know the type of some variables.
And if you don't know their types, you can't do auto-complete on things like class members.
Personally, I think the auto-complete in PyDev is pretty good, given the nature of python. It isn't as good as for Java and probably won't be, but it sure beats not having anything.
Having said that, I haven't tried if PyDev is able to use the parameter types you can specify in python 3.x. Otherwise, that might be an improvement that could make life a little easier.
Update: Got curious and did a quick test, Looks like optional type information in python 3 is not used by PyDev. | 3 | 0 | 0 | Does anyone know how to get an intellisense like functionality (better than default) in eclipse for python development? I am using Eclipse 3.5 with aptana and pydev and the interpreter is python 2.5.2 | How do you get Intellisense for Python in Eclipse/Aptana/Pydev? | 0.197375 | 0 | 0 | 3,548 |
3,409,226 | 2010-08-04T19:41:00.000 | 0 | 1 | 0 | 1 | python,eclipse,intellisense,pydev | 9,141,159 | 3 | false | 0 | 0 | In Aptana I added the reference to the .egg file to the system PYTHONPATH in Preferences menu. I am not sure if this works for every library out there.
Preferences --> PyDev --> Interpreter Python --> Libraries tab on the right. | 3 | 0 | 0 | Does anyone know how to get an intellisense like functionality (better than default) in eclipse for python development? I am using Eclipse 3.5 with aptana and pydev and the interpreter is python 2.5.2 | How do you get Intellisense for Python in Eclipse/Aptana/Pydev? | 0 | 0 | 0 | 3,548 |
3,409,549 | 2010-08-04T20:27:00.000 | 5 | 0 | 0 | 1 | python,google-app-engine | 3,409,743 | 1 | false | 1 | 0 | I would say the blobstore is suitable for this. While datastore entities are limited to 1MB and standard HTTP responses are limited to 10MB, with the blobstore you can upload, store, and serve files up to 2GB. The 30 second limit refers to how long your handler can execute; time spent downloading (or uploading) doesn't count towards this limit.
The blobstore also supports byte ranges, so if your flash component supports it, you can seek to random positions in the video without downloading everything first. | 1 | 6 | 0 | I'm trying to setup a video streaming app via the Google Appengine Blobstore. Just wanted to know if this was possible, as there isn't too much regarding this in the Appengine Documentation. Basically I want to serve these videos through a flash player.
Thanks | Appengine Blobstore - Video Streaming | 0.761594 | 0 | 0 | 1,334 |
3,410,228 | 2010-08-04T22:05:00.000 | 2 | 0 | 0 | 0 | python,svn,version-control,deployment | 3,410,280 | 1 | true | 0 | 0 | When it can be broken up into modules, go for a repo / branch with all the 'base' code, and in the actual project, include them as svn:externals (same repository or another one, it doesn't matter). That way you can independently update / work on modules, pin certain projects to certain revisions of that module, or keep them at HEAD.
A new project would either require a branching of a base project with most externals already set, or manual adding of the needed externals. A simple shell script setting the exact externals you need is easily made. | 1 | 0 | 0 | I have a few related applications that I want to deploy to different computers. They each share a large body of common code, and have some things unique to them. For example, I have a server and a client which use a lot of common classes to communicate to each other. I have yet more servers and clients which use some of the same classes, but are unrelated from each other.
The easy solution is to just leave them all in the same directory structure so they can all use whichever modules they need, and whenever I deploy a server or a client I put in the entire codebase. However the codebase is quite large, and some of the components use datafiles which are a few megabytes in size.
Ideally I'd be able to have them all share the same code, but be able to deploy just exactly which files each component needs... and they'd all be connected to the same version control. So it'd be something like:
On one computer: svn checkout client1. On another: svn checkout server1. On another: svn checkout client2. Then if I modify some client2 code that is shared between client2 and client1, both will be updated when I do svn update. Also ideally, I wouldn't have to pick out the files I need manually, since that can be annoying, but I can deal with that.
Have other people had this problem? Does it have a better-defined name? What solutions can I use to solve it? | python, svn, deploy applications with shared code | 1.2 | 0 | 0 | 270 |
3,410,296 | 2010-08-04T22:17:00.000 | 3 | 1 | 1 | 0 | python,shell | 3,410,409 | 3 | false | 0 | 0 | I don't think it's a bad idea. Lots of people use IPython which is a shell written in Python :)
In fact you may want to base your effort around IPython. scipy does this, for example | 1 | 2 | 0 | Why is this such a bad idea? (According to many people) | Writing a Shell in Python? | 0.197375 | 0 | 0 | 797 |
3,410,309 | 2010-08-04T22:18:00.000 | 6 | 1 | 1 | 0 | python,string | 49,795,277 | 2 | false | 0 | 0 | In terms of python 2.7 and 3:
io.BytesIO is an in-memory file-like object that doesn't do any alteration to newlines, and is similar to open(filename, "wb"). It deal with bytes() strings, which in py2.7 is an alias for str.
io.StringIO is an in-memory file-like object that does do alterations to newlines, and is similar to open(filename, "w"). It deal with unicode() strings, which in py3.x is an alias for str.
py2.7's old StringIO.StringIO is an in-memory file-like object that does not do alterations to newlines, and is similar to open(filename, "w"). It deals with both unicode() and bytes() in the same way that most obsolete python 2 string methods do: by allowing you to mix them without error, but only as long as you're lucky.
Thus py2.7's old StringIO.StringIO class is actually more similar to io.BytesIO than io.StringIO, as it is operating in terms of bytes()/str() and doesn't do newline conversions.
What should be preferred?
Don't use StringIO.StringIO, instead use io.BytesIO or io.StringIO, depending on the use-case. This is forward compatible with python 3 and commits to bytes or unicode, rather than "both, maybe". | 1 | 30 | 0 | Besides the obvious (one is a type, the other a class)? What should be preferred? Any notable difference in use cases, perhaps? | What is the difference between StringIO and io.StringIO in Python2.7? | 1 | 0 | 0 | 19,774 |
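The newline and type differences described above can be demonstrated directly with the io module (this sketch uses Python 3 semantics, where str is unicode):

```python
import io

# newline=None turns on universal-newline decoding on read:
# '\r\n' in the buffer comes back as '\n'.
s = io.StringIO(u"line1\r\nline2", newline=None)
assert s.read() == u"line1\nline2"

# The default newline='\n' leaves the data untouched.
assert io.StringIO(u"line1\r\nline2").read() == u"line1\r\nline2"

# io.BytesIO works on bytes and never translates newlines.
assert io.BytesIO(b"line1\r\nline2").read() == b"line1\r\nline2"

# io.StringIO commits to text: handing it bytes is a TypeError,
# unlike the permissive old StringIO.StringIO.
raised = False
try:
    io.StringIO(b"bytes")
except TypeError:
    raised = True
assert raised
```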
3,411,006 | 2010-08-05T00:54:00.000 | 1 | 0 | 1 | 0 | php,python,string | 3,599,461 | 3 | true | 0 | 0 | Normally, .replace method beats all other methods. (See my benchmarks above.) | 1 | 11 | 0 | Is there any recommended way to do multiple string substitutions other than doing replace chaining on a string (i.e. text.replace(a, b).replace(c, d).replace(e, f)...)?
How would you, for example, implement a fast function that behaves like PHP's htmlspecialchars in Python?
I compared (1) multiple replace method, (2) the regular expression method, and (3) Matt Anderson's method.
With n=10 runs, the results came up as follows:
On 100 characters:
TIME: 0 ms [ replace_method(str) ]
TIME: 5 ms [ regular_expression_method(str, dict) ]
TIME: 1 ms [ matts_multi_replace_method(list, str) ]
On 1000 characters:
TIME: 0 ms [ replace_method(str) ]
TIME: 3 ms [ regular_expression_method(str, dict) ]
TIME: 2 ms [ matts_multi_replace_method(list, str) ]
On 10000 characters:
TIME: 3 ms [ replace_method(str) ]
TIME: 7 ms [ regular_expression_method(str, dict) ]
TIME: 5 ms [ matts_multi_replace_method(list, str) ]
On 100000 characters:
TIME: 36 ms [ replace_method(str) ]
TIME: 46 ms [ regular_expression_method(str, dict) ]
TIME: 39 ms [ matts_multi_replace_method(list, str) ]
On 1000000 characters:
TIME: 318 ms [ replace_method(str) ]
TIME: 360 ms [ regular_expression_method(str, dict) ]
TIME: 320 ms [ matts_multi_replace_method(list, str) ]
On 3687809 characters:
TIME: 1.277524 sec [ replace_method(str) ]
TIME: 1.290590 sec [ regular_expression_method(str, dict) ]
TIME: 1.116601 sec [ matts_multi_replace_method(list, str) ]
So kudos to Matt for beating the multi replace method on a fairly large input string.
Anyone got ideas for beating it on a smaller string? | Fastest implementation to do multiple string substitutions in Python | 1.2 | 0 | 0 | 4,058 |
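For reference, the two leading contenders look roughly like this; the substitution table is an illustrative htmlspecialchars-style mapping, not the exact code that was benchmarked:

```python
import re

def replace_method(text):
    # Chained .replace() calls; '&' must be handled first so that
    # already-inserted entities are not escaped a second time.
    return (text.replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;")
                .replace('"', "&quot;"))

SUBS = {"&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;"}
PATTERN = re.compile("|".join(re.escape(k) for k in SUBS))

def regex_method(text):
    # Single pass over the string: every match is looked up in the dict,
    # so ordering of the substitutions never matters.
    return PATTERN.sub(lambda m: SUBS[m.group(0)], text)
```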
3,411,131 | 2010-08-05T01:30:00.000 | 1 | 0 | 0 | 0 | python,django,path,django-manage.py,devserver | 3,411,300 | 2 | true | 1 | 0 | manage.py imports settings.py from the current directory and passes settings as a parameter to execute_manager. You probably defined the project root in settings.py. | 1 | 0 | 0 | I recently moved a django app from c:\Users\user\django-projects\foo\foobar to c:\Python25\Lib\site-packages\foo\foobar (which is on the python path). I started a new app in the django-projects directory, and added foo.foobar to the INSTALLED_APPS setting. When I try to run the dev server (manage.py runserver) for my new app, I get the error ImportError: No module named foobar.
Looking through the traceback, it's looking in the c:\Users\user\django-projects\foo\..\foo\foobar for the foobar app. I checked my PATH and PYTHONPATH environment variables, and neither point to c:\Users\user\django-projects\foo and It doesn't show up in sys.path when I run the python interpreter.
I'm guessing I somehow added c:\Users\user\django-projects\foo to django's path sometime along the development of foo but I don't remember how I did it.
So, with all that lead up, my question is "how do I make manage.py look in c:\Python25\Lib\site-packages instead of c:\Users\user\django-projects\foo?"
Thanks,
Lexo | Where does django dev server (manage.py runserver) get its path from? | 1.2 | 0 | 0 | 2,351 |
3,411,749 | 2010-08-05T04:29:00.000 | 50 | 0 | 1 | 0 | python,operators | 3,411,760 | 5 | true | 0 | 0 | It's the right bit-shift operator; it 'moves' all bits one position to the right.
10 in binary is
1010
shifted to the right it turns to
0101
which is 5 | 2 | 33 | 0 | What does the >> operator do? For example, what does the following operation 10 >> 1 = 5 do? | >> operator in Python | 1.2 | 0 | 0 | 48,345 |
3,411,749 | 2010-08-05T04:29:00.000 | 3 | 0 | 1 | 0 | python,operators | 3,411,765 | 5 | false | 0 | 0 | Its the right shift operator.
10 in binary is 1010 now >> 1 says to right shift by 1, effectively loosing the least significant bit to give 101, which is 5 represented in binary.
In effect it divides the number by 2. | 2 | 33 | 0 | What does the >> operator do? For example, what does the following operation 10 >> 1 = 5 do? | >> operator in Python | 0.119427 | 0 | 0 | 48,345 |
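Both answers are easy to verify interactively; for non-negative integers, n >> k is the same as floor division by 2**k:

```python
# 10 is 0b1010; shifting right by one drops the lowest bit,
# giving 0b101, i.e. 5.
assert 10 >> 1 == 5
assert bin(10 >> 1) == "0b101"

# Right-shifting by k is floor division by 2**k.
for n in range(100):
    for k in range(8):
        assert n >> k == n // (2 ** k)
```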
3,413,879 | 2010-08-05T10:39:00.000 | 4 | 0 | 0 | 0 | python,r,vector,rpy2 | 20,171,517 | 7 | false | 0 | 0 | As pointed out by Brani, vector() is a solution, e.g.
newVector <- vector(mode = "numeric", length = 50)
will return a vector named "newVector" with 50 "0"'s as initial values. It is also fairly common to just add the new scalar to an existing vector to arrive at an expanded vector, e.g.
aVector <- c(aVector, newScalar) | 1 | 104 | 1 | I want to use R in Python, as provided by the module Rpy2. I notice that R has very convenient [] operations by which you can extract the specific columns or lines. How could I achieve such a function by Python scripts?
My idea is to create an R vector and add those wanted elements into this vector so that the final vector is the same as that in R. I created a seq(), but it seems that it has an initial digit 1, so the final result would always start with the digit 1, which is not what I want. So, is there a better way to do this? | How to create an empty R vector to add new items | 0.113791 | 0 | 0 | 291,651 |
3,415,298 | 2010-08-05T13:49:00.000 | 13 | 0 | 1 | 0 | python,bit-manipulation | 3,415,333 | 6 | true | 0 | 0 | This is done by first masking the bits you want to erase (forcing them to zero while preserving the other bits) before applying the bitwise OR.
Use a bitwise AND with the pattern (in this case) 11100111.
If you already have a "positive" version of the pattern (here this would be 00011000), which is easier to generate, you can obtain the "negative" version 11100111 using what is called 1's complement, available as ~ in Python and most languages with a C-like syntax. | 1 | 8 | 0 | Given a series of bits, what's the best way to overwrite a particular range of them.
For example, given:
0100 1010
Say I want to overwrite the middle 2 bits with 10 to make the result:
0101 0010
What would be the best way of doing this?
At first, I thought I would just shift the overwriting bits I want to the correct position (10000), and then use a bitwise OR. But I realized that while it preserves the other bits, there's no way of specifying which bits I want to actually overwrite.
I was looking into Python's bitarray module, but I just want to double-check that I'm not looking over an extremely simple bitwise operation to do this for me.
Thanks. | Best way to overwrite some bits in a particular range | 1.2 | 0 | 0 | 5,765 |
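A small helper following the accepted answer: clear the target bits with an AND mask, then OR in the shifted new bits (the function name and argument order are my own):

```python
def overwrite_bits(value, new_bits, shift, width):
    """Replace `width` bits of `value`, starting `shift` bits from
    the least significant end, with `new_bits`."""
    mask = ((1 << width) - 1) << shift   # e.g. 0b00011000 for shift=3, width=2
    cleared = value & ~mask              # force the target bits to zero
    return cleared | ((new_bits << shift) & mask)

# 0100 1010 with its middle two bits overwritten by 10 -> 0101 0010
assert overwrite_bits(0b01001010, 0b10, 3, 2) == 0b01010010
```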
3,416,342 | 2010-08-05T15:29:00.000 | 2 | 1 | 0 | 0 | c++,python | 3,416,435 | 8 | false | 0 | 0 | If you are new to programming, I would say start with the C++ class. If you get the hang of it and enjoy programming, you can always learn Python later. There are a wealth of good books and Internet resources on pretty much any programming language out there that you should be able to teach yourself any language in your spare time. I would recommend learning that first language in a formal classroom, however, to help make it easier to learn the general concepts behind programming.
Edit: To clarify the point I was trying to make, my recommendation is to take whichever course is geared more towards beginning programmers. The important things to learn first are the basic fundamentals of programming. These apply towards almost any language. Thanks to the wealth of resources available online or in your bookstore/library, you can teach yourself practically any programming language that you want to learn. First, however, you must grasp the basics, and intro C/C++ classes typically (in my experiences, at least) do a good job of teaching programming fundamentals as well as the language itself.
Since you are a beginning programmer, I would not recommend trying to learn two languages at once (especially if you are trying to learn fundamentals at the same time). That's a lot of very similar (yet very different) information to keep track of in your head, almost like trying to learn two brand new spoken languages at the same time. You may be able to handle it perfectly fine but at least for most programmers that I know, it is much easier to get a good grasp on one language first and then start learning the second. | 6 | 0 | 0 | I am totally new to programming as though I have my PhD as a molecular biologist for the last 10 years. Can someone please tell me: Would it be too hard to handle if I enrolled simultaneously in C++ and python? I am a full time employee too. Both courses start and finish on the same dates and is for 3 months. For a variety of complicated reasons, this fall is the only time I can learn both languages. Please advise.
GillingsT
Update:
A little more detail about myself: as I said I did a PhD in Molecular Genetic. I now wish to be able to obtain programming skills so that I can apply it to do bioinformatics- like sequence manipulation and pathway analysis. I was told that Python is great for that but our course does not cover basics for beginners. I approached a Comp Sci Prof. who suggested that I learn C++ first before learning Python. So I got into this dilemma (added to other logistics). | C++ and python simultaneously. Is it doable | 0.049958 | 0 | 0 | 4,081 |
3,416,342 | 2010-08-05T15:29:00.000 | 0 | 1 | 0 | 0 | c++,python | 3,648,941 | 8 | false | 0 | 0 | You've got to find out what people in your field are programming with so you can leverage existing libraries/APIs/projects. It won't do you any good re-inventing the wheel in C++ or Python if there's some wicked-cool FORTRAN library out there that is standard in your field. (And, if that is the case, God help you, I'm sorry.) Anyway, the CS prof you talked to might not have any idea what computational molecular geneticists use. | 6 | 0 | 0 | I am totally new to programming as though I have my PhD as a molecular biologist for the last 10 years. Can someone please tell me: Would it be too hard to handle if I enrolled simultaneously in C++ and python? I am a full time employee too. Both courses start and finish on the same dates and is for 3 months. For a variety of complicated reasons, this fall is the only time I can learn both languages. Please advise.
GillingsT
Update:
A little more detail about myself: as I said I did a PhD in Molecular Genetic. I now wish to be able to obtain programming skills so that I can apply it to do bioinformatics- like sequence manipulation and pathway analysis. I was told that Python is great for that but our course does not cover basics for beginners. I approached a Comp Sci Prof. who suggested that I learn C++ first before learning Python. So I got into this dilemma (added to other logistics). | C++ and python simultaneously. Is it doable | 0 | 0 | 0 | 4,081 |
3,416,342 | 2010-08-05T15:29:00.000 | 0 | 1 | 0 | 0 | c++,python | 3,648,912 | 8 | false | 0 | 0 | I come from a computational maths background, and have written sizeable (commercial and accademic) programs in both C++ and python. They are very different languages and I would probably learn one first (or only one).
Which one would depend on what you want to be able to do with the language.
If you want to build something useful with your language that is not (overly) compute or data heavy, go with python, you'll get something useful quicker.
If you need to do something useful that is either compute heavy or data heavy, then you'll probably need to go with C++. But it will take you longer to get to something to do what you need --- It will take a while to learn C++, then additional time to code data-heavy or compute-heavy code effectively.
Now some will say that python can handle data/compute heavy jobs well enough.. but in molecular biology "heavy" can mean very heavy.
Having said this, my suggestion is go with python if you can. | 6 | 0 | 0 | I am totally new to programming as though I have my PhD as a molecular biologist for the last 10 years. Can someone please tell me: Would it be too hard to handle if I enrolled simultaneously in C++ and python? I am a full time employee too. Both courses start and finish on the same dates and is for 3 months. For a variety of complicated reasons, this fall is the only time I can learn both languages. Please advise.
GillingsT
Update:
A little more detail about myself: as I said I did a PhD in Molecular Genetic. I now wish to be able to obtain programming skills so that I can apply it to do bioinformatics- like sequence manipulation and pathway analysis. I was told that Python is great for that but our course does not cover basics for beginners. I approached a Comp Sci Prof. who suggested that I learn C++ first before learning Python. So I got into this dilemma (added to other logistics). | C++ and python simultaneously. Is it doable | 0 | 0 | 0 | 4,081 |
3,416,342 | 2010-08-05T15:29:00.000 | 7 | 1 | 0 | 0 | c++,python | 3,416,441 | 8 | false | 0 | 0 | You'll get holes in the head.
Python's data structures and memory management are radically different from C++.
Whichever language you "get" first, you'll love. The other you'll hate. Indeed, you'll be confused at the weird things one language lacks that the other has. One language will be reasonable, logical, unsurprising. The other will be a mess of ad-hoc decisions and quirks.
If you learn one all the way through -- by itself -- you'll probably be happier.
I find that most folks can more easily add a language to a base of expertise.
[Not all, however. Some folks are so mired in the first language they ever learned that they challenge every feature of a new language as being nonsensical. I had a guy in a Java class who only wanted to complain about the numerous ways that Java wasn't Fortran. All the type-specific stuff in Java gave him fits. A lot of discussions had to be curtailed with "That's the way it is. If you don't like it, take it up with Gosling. My job isn't to justify Java; my job is to get you to be able to work with java. Can we move on, now?"] | 6 | 0 | 0 | I am totally new to programming as though I have my PhD as a molecular biologist for the last 10 years. Can someone please tell me: Would it be too hard to handle if I enrolled simultaneously in C++ and python? I am a full time employee too. Both courses start and finish on the same dates and is for 3 months. For a variety of complicated reasons, this fall is the only time I can learn both languages. Please advise.
GillingsT
Update:
A little more detail about myself: as I said I did a PhD in Molecular Genetic. I now wish to be able to obtain programming skills so that I can apply it to do bioinformatics- like sequence manipulation and pathway analysis. I was told that Python is great for that but our course does not cover basics for beginners. I approached a Comp Sci Prof. who suggested that I learn C++ first before learning Python. So I got into this dilemma (added to other logistics). | C++ and python simultaneously. Is it doable | 1 | 0 | 0 | 4,081 |
3,416,342 | 2010-08-05T15:29:00.000 | 1 | 1 | 0 | 0 | c++,python | 3,416,466 | 8 | false | 0 | 0 | I think that given the circumstances (fulltime employee, etc) studying one language will be hard enough. Pick one, then study another. You'll learn basics from either language.
As for "which language to pick"... I specialize in C++, and know a bit of python. C++ is much more difficult, more flexible, and more suitable for making "traditional" executables.
I'd recommend to start with C++. You'll learn more concepts (some of them doesn't exist in python), and learning python after C++ won't be a problem. | 6 | 0 | 0 | I am totally new to programming as though I have my PhD as a molecular biologist for the last 10 years. Can someone please tell me: Would it be too hard to handle if I enrolled simultaneously in C++ and python? I am a full time employee too. Both courses start and finish on the same dates and is for 3 months. For a variety of complicated reasons, this fall is the only time I can learn both languages. Please advise.
GillingsT
Update:
A little more detail about myself: as I said I did a PhD in Molecular Genetic. I now wish to be able to obtain programming skills so that I can apply it to do bioinformatics- like sequence manipulation and pathway analysis. I was told that Python is great for that but our course does not cover basics for beginners. I approached a Comp Sci Prof. who suggested that I learn C++ first before learning Python. So I got into this dilemma (added to other logistics). | C++ and python simultaneously. Is it doable | 0.024995 | 0 | 0 | 4,081 |
3,416,342 | 2010-08-05T15:29:00.000 | 0 | 1 | 0 | 0 | c++,python | 3,648,838 | 8 | false | 0 | 0 | I think you pretty much answered this question yourself:
I was told that Python is great for that but our course does not cover basics for beginners.
In other words, the Python course is not an introductory course -- it assumes you already know how the basics of programming. That's probably why the professor suggested you take the C++ course first. | 6 | 0 | 0 | I am totally new to programming as though I have my PhD as a molecular biologist for the last 10 years. Can someone please tell me: Would it be too hard to handle if I enrolled simultaneously in C++ and python? I am a full time employee too. Both courses start and finish on the same dates and is for 3 months. For a variety of complicated reasons, this fall is the only time I can learn both languages. Please advise.
GillingsT
Update:
A little more detail about myself: as I said I did a PhD in Molecular Genetic. I now wish to be able to obtain programming skills so that I can apply it to do bioinformatics- like sequence manipulation and pathway analysis. I was told that Python is great for that but our course does not cover basics for beginners. I approached a Comp Sci Prof. who suggested that I learn C++ first before learning Python. So I got into this dilemma (added to other logistics). | C++ and python simultaneously. Is it doable | 0 | 0 | 0 | 4,081 |
3,417,756 | 2010-08-05T18:09:00.000 | 0 | 0 | 0 | 0 | python,tabs,python-webbrowser | 3,418,619 | 1 | true | 0 | 0 | On WinXP, at least, it appears that this is not possible (from my tests with IE).
From what I can see, webbrowser is a fairly simple convenience module that (probably) makes a subprocess-style call to the browser executable.
If you want that sort of granularity you'll have to see if your browser accepts command line arguments to that effect, or exposes that control in some other way. | 1 | 3 | 0 | I would like to open a new tab in my web browser using python's webbrowser. However, now my browser is brought to the top and I am directly moved to the opened tab. I haven't found any information about this in documentation, but maybe there is some hidden api. Can I open this tab in the possible most unobtrusive way, which means:
not bringing browser to the top if it's minimzed,
not moving me the opened tab (especially if I am at the moment working in other tab - my process is working in the background and it would be very annoying to have suddenly my work interrupted by a new tab)? | python: open unfocused tab with webbrowser | 1.2 | 0 | 1 | 392 |
3,418,834 | 2010-08-05T20:18:00.000 | 5 | 0 | 1 | 0 | python,exception-handling | 3,418,859 | 2 | true | 0 | 0 | print_exc() doesn't return anything, which in Python is actually returning None. Looks like IDLE is showing you the None it returned. | 2 | 1 | 0 | I am using the following line of code in IDLE to print out my traceback in an exception:
traceback.print_exc()
For some reason I get the red text error message, but then it is followed by a blue text of "None".
Not sure what that None is about, any ideas? | traceback.print_exc() python question | 1.2 | 0 | 0 | 4,068 |
3,418,834 | 2010-08-05T20:18:00.000 | 7 | 0 | 1 | 0 | python,exception-handling | 3,419,961 | 2 | false | 0 | 0 | print_exc() prints the formatted exception to stderr. If you need the string value, call format_exc() instead. | 2 | 1 | 0 | I am using the following line of code in IDLE to print out my traceback in an exception:
traceback.print_exc()
For some reason I get the red text error message, but then it is followed by a blue text of "None".
Not sure what that None is about, any ideas? | traceback.print_exc() python question | 1 | 0 | 0 | 4,068 |
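Both answers are easy to check under Python 3: print_exc() writes to stderr and returns None (the blue "None" IDLE echoes), while format_exc() hands the same text back as a string:

```python
import sys
import traceback
from io import StringIO

try:
    1 / 0
except ZeroDivisionError:
    # format_exc() returns the traceback as a string...
    text = traceback.format_exc()
    assert "ZeroDivisionError" in text

    # ...while print_exc() writes it to stderr and returns None,
    # which is the blue "None" IDLE echoes in the question.
    old_stderr = sys.stderr
    sys.stderr = StringIO()
    try:
        result = traceback.print_exc()
        captured = sys.stderr.getvalue()
    finally:
        sys.stderr = old_stderr

assert result is None
assert "ZeroDivisionError" in captured
```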
3,419,282 | 2010-08-05T21:24:00.000 | 3 | 0 | 1 | 0 | python,overloading | 3,419,420 | 2 | true | 0 | 1 | TypeError is just another Exception. You can take *args **kwargs, check those, and raise a TypeError yourself, specify the text displayed - e.g. listing the expected call.
That being said, PyQt is a bunch of .pyd files, i.e. native Python extensions written in C or C++ (using Boost::Python). At least the latter supports "real" overloads afaik.
Either way, you shouldn't do this unless you have a really good reason. Python is duck-typed, embrace it. | 2 | 1 | 0 | If I call QApplication's init without arguments I get
TypeError: arguments did not match any overloaded call:
QApplication(list-of-str): not enough arguments
QApplication(list-of-str, bool): not enough arguments
QApplication(list-of-str, QApplication.Type): not enough arguments
QApplication(Display, int visual=0, int colormap=0): not enough arguments
QApplication(Display, list-of-str, int visual=0, int cmap=0): not enough arguments
very interesting! How can I write a class like that?? I mean, every trick for this kind of function overloading I saw did not involve explicit signatures. | Python method overload based on argument count? | 1.2 | 0 | 0 | 1,044 |
3,419,282 | 2010-08-05T21:24:00.000 | 0 | 0 | 1 | 0 | python,overloading | 3,419,307 | 2 | false | 0 | 1 | It's quite possible that its init is simply using __init__(self, *args, **kwargs) and then doing its own signature testing against the args list and kwargs dict. | 2 | 1 | 0 | If I call QApplication's init without arguments i get
TypeError: arguments did not match any overloaded call:
QApplication(list-of-str): not enough arguments
QApplication(list-of-str, bool): not enough arguments
QApplication(list-of-str, QApplication.Type): not enough arguments
QApplication(Display, int visual=0, int colormap=0): not enough arguments
QApplication(Display, list-of-str, int visual=0, int cmap=0): not enough arguments
very interesting! How can I write a class like that?? I mean, every trick for this kind of function overloading I saw did not involve explicit signatures. | Python method overload based on argument count? | 0 | 0 | 0 | 1,044 |
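A pure-Python sketch of the *args dispatch both answers describe; the class and its two accepted "signatures" are invented for illustration (PyQt's real checks live in compiled code):

```python
class App(object):
    def __init__(self, *args):
        # Dispatch on argument count and types, mimicking overloads.
        if len(args) == 1 and isinstance(args[0], list):
            self.argv, self.gui = args[0], True
        elif (len(args) == 2 and isinstance(args[0], list)
              and isinstance(args[1], bool)):
            self.argv, self.gui = args
        else:
            # No signature matched: report them all, PyQt-style.
            raise TypeError(
                "arguments did not match any overloaded call:\n"
                "  App(list-of-str)\n"
                "  App(list-of-str, bool)")
```

Calling `App(["prog"])` succeeds, while `App()` raises the multi-signature TypeError, reproducing the error format in the question.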
3,419,624 | 2010-08-05T22:13:00.000 | 4 | 0 | 1 | 0 | python,sql | 3,419,835 | 5 | true | 0 | 0 | I don't know exactly what you are doing. But a database will just change how the data is stored, and in fact it might take longer, since most reasonable databases put constraints on columns and do additional processing for the checks. In many cases having the whole file local, going through and doing calculations is going to be more efficient than querying and writing it back to the database (subject to disk speeds, network and database contention, etc...). But in some cases the database may speed things up, especially since, if you index, it is easy to get subsets of the data.
Anyway you mentioned logs, so before you go database crazy I have the following ideas for you to check out. Anyway I'm not sure if you have to keep going through every log since the beginning of time to download charts and you expect it to grow to 2 GB or if eventually you are expecting 2 GB of traffic per day/week.
ARCHIVING -- you can archive old logs, say every few months. Copy the production logs to an archive location and clear the live logs out. This will keep the file size reasonable. If you are wasting time accessing the file to find the small piece you need then this will solve your issue.
You might want to consider converting to Java or C. Especially on loops and calculations you might see a factor of 30 or more speedup. This will probably reduce the time immediately. But over time, as data creeps up, some day this will slow down as well. If you have no bound on the amount of data, eventually even hand-optimized assembly by the world's greatest programmer will be too slow. But it might give you 10x the time...
You also may want to think about figuring out the bottleneck (is it disk access, is it cpu time) and based on that figuring out a scheme to do this task in parallel. If it is processing, look into multi-threading (and eventually multiple computers), if it is disk access consider splitting the file among multiple machines...It really depends on your situation. But I suspect archiving might eliminate the need here.
As was suggested, if you are doing the same calculations over and over again, then just store them. Whether you use a database or a file this will give you a huge speedup.
If you are downloading stuff and that is a bottleneck, look into conditional GETs using the If-Modified-Since request header. Then only download changed items. If you are just processing new charts then ignore this suggestion.
Oh and if you are sequentially reading a giant log file, looking for a specific place in the log line by line, just make another file storing the last file location you worked with and then do a seek each run.
Before an entire database, you may want to think of SQLite.
Finally a "couple of years" seems like a long time in programmer time. Even if it is just 2, a lot can change. Maybe your department/division will be laid off. Maybe you will have moved on and your boss. Maybe the system will be replaced by something else. Maybe there will no longer be a need for what you are doing. If it was 6 months I'd say fix it. but for a couple of years, in most cases, I'd say just use the solution you have now and once it gets too slow then look to do something else. You could make a comment in the code with your thoughts on the issue and even an e-mail to your boss so he knows it as well. But as long as it works and will continue doing so for a reasonable amount of time, I would consider it "done" for now. No matter what solution you pick, if data grows unbounded you will need to reconsider it. Adding more machines, more disk space, new algorithms/systems/developments. Solving it for a "couple of years" is probably pretty good. | 5 | 4 | 0 | i am reading a csv file into a list of a list in python. it is around 100mb right now. in a couple of years that file will go to 2-5gigs. i am doing lots of log calculations on the data. the 100mb file is taking the script around 1 minute to do. after the script does a lot of fiddling with the data, it creates URL's that point to google charts and then downloads the charts locally.
can i continue to use python on a 2gig file or should i move the data into a database? | python or database? | 1.2 | 1 | 0 | 1,357 |
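The "store the last file location and seek" idea from the answer above might look roughly like this (the file names are illustrative):

```python
# Persist the byte offset reached on the previous run, then seek past
# it on the next run so only newly appended log lines are processed.
def read_new_lines(log_path, offset_path):
    try:
        with open(offset_path) as f:
            offset = int(f.read())
    except (OSError, ValueError):
        offset = 0  # first run, or offset file missing/corrupt
    with open(log_path) as log:
        log.seek(offset)
        new_lines = log.readlines()
        offset = log.tell()
    with open(offset_path, "w") as f:
        f.write(str(offset))
    return new_lines
```

Each run then only pays for the new tail of the log, not the whole history.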
3,419,624 | 2010-08-05T22:13:00.000 | 2 | 0 | 1 | 0 | python,sql | 3,419,871 | 5 | false | 0 | 0 | I always reach for a database for larger datasets.
A database gives me some stuff for "free"; that is, I don't have to code it.
searching
sorting
indexing
language-independent connections
Something like SQLite might be the answer for you.
Also, you should investigate the "nosql" databases; it sounds like your problem might fit well into one of them. | 5 | 4 | 0 | i am reading a csv file into a list of a list in python. it is around 100mb right now. in a couple of years that file will go to 2-5gigs. i am doing lots of log calculations on the data. the 100mb file is taking the script around 1 minute to do. after the script does a lot of fiddling with the data, it creates URL's that point to google charts and then downloads the charts locally.
can i continue to use python on a 2gig file or should i move the data into a database? | python or database? | 0.07983 | 1 | 0 | 1,357 |
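To make the SQLite suggestion concrete: sqlite3 ships with Python, and an indexed table lets you pull back just the subset you need instead of rescanning a giant list of lists each time. The schema and column names here are illustrative.

```python
import sqlite3

# Load the parsed rows once, index the column you filter on, and let
# the database hand back just the subset you need.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (ts TEXT, host TEXT, bytes INTEGER)")
conn.execute("CREATE INDEX idx_host ON log (host)")
rows = [("2010-08-01", "a.example", 120),
        ("2010-08-01", "b.example", 80),
        ("2010-08-02", "a.example", 300)]
conn.executemany("INSERT INTO log VALUES (?, ?, ?)", rows)
total, = conn.execute("SELECT SUM(bytes) FROM log WHERE host = ?",
                      ("a.example",)).fetchone()
print(total)  # 420
```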
3,419,624 | 2010-08-05T22:13:00.000 | 4 | 0 | 1 | 0 | python,sql | 3,419,726 | 5 | false | 0 | 0 | If you need to go through all lines each time you perform the "fiddling", it wouldn't really make much difference, assuming the actual "fiddling" is what's eating your cycles.
Perhaps you could store the results of your calculations somehow, then a database would probably be nice. Also, databases have methods for ensuring data integrity and stuff like that, so a database is often a great place for storing large sets of data (duh! ;)). | 5 | 4 | 0 | i am reading a csv file into a list of a list in python. it is around 100mb right now. in a couple of years that file will go to 2-5gigs. i am doing lots of log calculations on the data. the 100mb file is taking the script around 1 minute to do. after the script does a lot of fiddling with the data, it creates URL's that point to google charts and then downloads the charts locally.
can i continue to use python on a 2gig file or should i move the data into a database? | python or database? | 0.158649 | 1 | 0 | 1,357 |
3,419,624 | 2010-08-05T22:13:00.000 | 4 | 0 | 1 | 0 | python,sql | 3,419,718 | 5 | false | 0 | 0 | I'd only put it into a relational database if:
The data is actually relational and expressing it that way helps shrink the size of the data set by normalizing it.
You can take advantage of triggers and stored procedures to offload some of the calculations that your Python code is performing now.
You can take advantage of queries to only perform calculations on data that's changed, cutting down on the amount of work done by Python.
If none of those things is true, I don't see much difference between a database and a file. Both ultimately have to be stored on the file system.
If Python has to process all of it, and getting it into memory means loading an entire data set, then there's no difference between a database and a flat file.
2GB of data in memory could mean page swapping and thrashing by your application. I would be careful and get some data before I blamed the problem on the file. Just because you access the data from a database won't solve a paging problem.
If your data's flat, I see less advantage in a database, unless "flat" == "highly denormalized".
I'd recommend some profiling to see what's consuming CPU and memory before I made a change. You're guessing about the root cause right now. Better to get some data so you know where the time is being spent. | 5 | 4 | 0 | i am reading a csv file into a list of a list in python. it is around 100mb right now. in a couple of years that file will go to 2-5gigs. i am doing lots of log calculations on the data. the 100mb file is taking the script around 1 minute to do. after the script does a lot of fiddling with the data, it creates URL's that point to google charts and then downloads the charts locally.
can i continue to use python on a 2gig file or should i move the data into a database? | python or database? | 0.158649 | 1 | 0 | 1,357 |
3,419,624 | 2010-08-05T22:13:00.000 | 1 | 0 | 1 | 0 | python,sql | 3,419,687 | 5 | false | 0 | 0 | At 2 gigs, you may start running up against speed issues. I work with model simulations that call hundreds of CSV files, and it takes about an hour to go through 3 iterations, or about 20 minutes per loop.
This is a matter of personal preference, but I would go with something like PostgreSQL because it integrates the speed of Python with the capacity of a SQL-driven relational database. I encountered the same issue a couple of years ago when my Access db was corrupting itself and crashing on a daily basis. It was either MySQL or PostgreSQL, and I chose PostgreSQL because of its Python friendliness. Not to say MySQL would not work with Python, because it does, which is why I say it's personal preference.
Hope that helps with your decision-making! | 5 | 4 | 0 | i am reading a csv file into a list of a list in python. it is around 100mb right now. in a couple of years that file will go to 2-5gigs. i am doing lots of log calculations on the data. the 100mb file is taking the script around 1 minute to do. after the script does a lot of fiddling with the data, it creates URL's that point to google charts and then downloads the charts locally.
can i continue to use python on a 2gig file or should i move the data into a database? | python or database? | 0.039979 | 1 | 0 | 1,357 |
3,420,594 | 2010-08-06T02:18:00.000 | 6 | 0 | 0 | 0 | .net,python,sql-server,django,asp.net-mvc-2 | 22,076,548 | 4 | false | 1 | 0 | I would also suggest comparing runtimes, not just language features, before making such moves. Python runs on the CPython interpreter, whereas C# runs on the CLR, in their default implementations.
Multitasking is very important in any large-scale project; .NET can handle this easily via threads, and it can also take advantage of worker processes in IIS (ASP.NET). CPython doesn't offer true parallel threading because of the GIL, a lock that every thread has to acquire before executing any code; for true multitasking you have to use multiple processes.
When we host an ASP.NET application on IIS in a single worker process, ASP.NET can still take advantage of threading to serve multiple web requests simultaneously on different cores, whereas CPython depends on multiple worker processes to achieve parallel computing on different cores.
All of this leads to a big question: how are we going to host a Python/Django app on Windows? Forking a process on Windows is much more costly than on Linux, so ideally the best environment to host a Python/Django app would be Linux rather than Windows.
If you choose Python, the right environment to develop and host it would be Linux; and if you are like me, coming from Windows, choosing Python would introduce a Linux learning curve as well, although that is not very hard these days... | 1 | 58 | 0 | My organization currently delivers a web application primarily based on a SQL Server 2005/2008 back end, a framework of Java models/controllers, and ColdFusion-based views. We have decided to transition to a newer framework and after internal explorations and mini projects have narrowed the choice down to Python or C#/.NET.
Let me start off by saying that I realize either technology will work great, and am looking for the key differentiators (and associated pros and cons). These languages have a lot in common, and a lot not--I'm looking for your take on their key differences.
Example tradeoff/differentiator I am looking for:
While it seems you can accomplish more with less code and be more creative with Python, since .NET is more structured it may be easier to take over understanding and modifying code somebody else wrote.
Some extra information that may be useful:
Our engineering team is about 20 large and we work in small teams of 5-7 of which we rotate people in and out frequently. We work on code that somebody else wrote as much as we write new code.
With python we would go the Django route, and with .NET we would go with MVC2. Our servers are windows servers running IIS.
Some things we like about ColdFusion include that it's very easy to work with queries and that we can "hot-deploy" fixes to our webservers without having to reboot them or interrupt anybody on them.
I've read some of the other X vs Y threads involving these two languages and found them very helpful, but would like to put Python head-to-head against .Net directly. Thanks in advance for letting me tap your experiences for this difficult question! | Python vs C#/.NET -- what are the key differences to consider for using one to develop a large web application? | 1 | 0 | 0 | 86,003 |
3,421,200 | 2010-08-06T05:27:00.000 | 12 | 0 | 0 | 0 | python,web-services,twisted,network-protocols,amqp | 3,426,017 | 1 | true | 1 | 0 | As always, "it depends". First, let's clear up the terminology.
Twisted's Perspective Broker basically is a system you can use when you have control over both ends of a distributed action (both client and server ends). It provides a way to copy objects from one end to the other and to call methods on remote objects. Copying involves serialising the object to a format suitable for network transfer, and then transferring it using Twisted's own transfer protocol. This is useful when both ends can use Twisted, and you don't need to interface with non-Twisted systems.
Generally speaking, Web Services are client-server applications that rely on HTTP for communication. The client uses HTTP to make a request to the server, which returns a result. Parameters can be encoded in eg. GET or POST requests or use a data section in a POST request to send, for example, an XML-formatted document that describes the action to be taken.
REST is the architectural idea that all resources and operations on resources that a system exposes should be directly addressable. To put it somewhat simply, it means that the URI used to access or manipulate the resource includes the resource name and the operation to carry out on it. REST can be and commonly is implemented as a Web Service using HTTP.
SOAP is a protocol for message exchange. It consists of two parts: a choice of several transport methods, and a single XML-based message format. The transport method can be, for example, HTTP, which makes SOAP a candidate for implementing Web Services. The message format specifies all details about the requested action and the result of the action.
JMS is an API standard for Java-based messaging systems. It defines some semantics for messages (such as one-to-one or one-to-many) and includes methods for addressing, creating messages, populating them with parameters and data, sending them, and receiving and decoding them. The API makes sure that you can, in theory, change the underlying messaging system implementation without having to rewrite all of your code. However, the message system implementation doesn't need to be protocol-compatible with another JMS-enabled messaging system. So having one JMS system doesn't automatically mean that you can exchange messages with another JMS system. You probably need to build some kind of bridge service for that to work, which is going to be a major challenge especially when it comes to addressing.
AMQP attempts to improve the situation by defining a wire protocol that messaging systems must obey. This means that messaging systems from different vendors can exchange messages.
Finally, SOA is an architecture concept where applications are broken down into reusable services. These services are then combined ("orchestrated") to implement the application. Each time a new application is made, there is a chance of reusing the existing services. SOA is also something that requires non-technical support activities so that the reuse really happens and services are designed to be general enough. Also, SOA is one way of starting to package functionality in legacy systems into a meaningful whole that can then be extended and developed further using more modern techniques. SOA can be implemented using a variety of technologies, such as Web Services, messaging systems, or an Enterprise Service Bus.
You pondered the tradeoff between one connection per request and keeping the connection open for multiple requests. This depends on available resources, the messaging pattern, and the size of your data. If the incoming message stream is constantly the same, then it could be fine to let connections stay open since their amount won't change very much. On the other hand, if there are bursts of messages from several systems, then it could be useful to release resources and not let connections linger for too long. Also, if lots of data is transferred per connection, then the overhead of opening and closing the connection is small compared to the total transaction length. On the other hand, if you transfer lots of very small messages, then keeping the connection open could prove beneficial. Benchmarking with your particular parameters is the only way to be sure.
AMQP could indeed replace the Twisted-specific protocol. This would allow interacting with a non-Twisted system.
I hope this proves useful to you. If you're still wondering about something (and I think you are, since this is such a large area) then I would suggest splitting things into smaller questions and posting them individually. The answers can then be more precise. | 1 | 4 | 0 | I'm currently using twisted's perspective broker on python and I have considered in the past switching to something like RabbitMQ but I'm not sure it could just replace pb - I feel like I might be comparing apples to oranges here.
I've been reading a lot about REST lately and the inevitable debate with SOAP, which led me to read about "enterprisey" web service stuff like SOA.
I have a project coming up in which I'll need to implement some ERP-like functionality over web and desktop, so I'm considering which approach/technology to use to communicate between servers and clients. But I'm also trying to learn as much as I can about all of this, so I don't want just to solve this particular problem.
What do you use for communication between your servers and clients?
I understand a python-specific protocol like perspective broker can limit my interoperability, but am I right to assume some AMQP protocol could replace it?
If I'm not mistaken, both twisted.pb and amqp use an always-on connection and a very low overhead protocol. But in one hand, keeping a large number of clients connected all the time could be a problem, and on the other hand, even with http keep-alive and whatever tricks they use the serialization part would still be a problem with web services.
If I'm wrong in any of my assumptions I would appreciate if someone could point me in the right direction to learn more. | Simple protocols (like twisted.pb) vs messaging (AMQP/JMS) vs web services (REST/SOAP) | 1.2 | 0 | 0 | 2,230 |
3,422,457 | 2010-08-06T09:20:00.000 | 2 | 0 | 0 | 0 | python,base64,byte | 3,422,530 | 3 | false | 0 | 0 | Each base64 encoded string should be decoded separately - you can't concatenate encoded strings (and get a correct decoding).
The result of the decode is a string, or byte-buffer - in Python, they're equivalent.
Regarding the network/host order - sequences of bytes, have no such 'order' (or endianity) - it only matters when interpreting these bytes as words / ints of larger width (i.e. more than 8 bits). | 1 | 0 | 0 | I am now using python base64 module to decode a base64 coded XML file, what I did was to find each of the data (there are thousands of them as for exmaple in "ABC....", the "ABC..." was the base64 encoded data) and add it to a string, lets say s, then I use base64.b64decode(s) to get the result, I am not sure of the result of the decoding, was it a string, or bytes? In addition, how should convert such decoded data from the so-called "network byte order" to a "host byte order"? Thanks! | Python base64 data decode and byte order convert | 0.132549 | 0 | 1 | 7,855 |
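To illustrate both points at once: b64decode hands back raw bytes, and endianness only enters the picture when you interpret those bytes as wider integers, which struct handles via the "!" (network byte order) prefix.

```python
import base64
import struct

# Pack an integer in network (big-endian) order, base64-encode it, then
# reverse the process; "!" makes the interpretation host-independent.
encoded = base64.b64encode(struct.pack("!I", 0x12345678))
raw = base64.b64decode(encoded)
value, = struct.unpack("!I", raw)
print(hex(value))  # 0x12345678
```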
3,423,510 | 2010-08-06T11:58:00.000 | 0 | 1 | 0 | 0 | python,comparison,md5 | 3,423,577 | 3 | false | 0 | 0 | You can log in using ssh and make an MD5 hash for the file remotely and an MD5 hash for the current local file. If the MD5s match, the files are identical; else they are different. | 2 | 2 | 0 | I have the following problem: I have a local .zip file and a .zip file located on a server. I need to check if the .zip file on the server is different from the local one; if it is, I need to pull the new one from the server. My question is how do I compare them without downloading the file from the server and comparing them locally?
I could create an MD5 hash for the zip file on the server when creating the .zip file and then compare it with the MD5 of my local .zip file, but is there a simpler way? | Comparing local file with remote file | 0 | 0 | 0 | 4,055 |
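A sketch of the local half of the MD5 comparison; running the same digest on the server side (e.g. over ssh with md5sum, or by publishing the digest next to the zip) gives you the other value to compare against:

```python
import hashlib

# Hash the file in chunks so even very large zips stay cheap on memory.
def md5_of_file(path, chunk_size=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```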
3,423,510 | 2010-08-06T11:58:00.000 | 0 | 1 | 0 | 0 | python,comparison,md5 | 3,423,559 | 3 | false | 0 | 0 | I would like to know how you intend to compare them locally, if it came to that. You can apply the same logic to compare them remotely. | 2 | 2 | 0 | I have the following problem: I have a local .zip file and a .zip file located on a server. I need to check if the .zip file on the server is different from the local one; if it is, I need to pull the new one from the server. My question is how do I compare them without downloading the file from the server and comparing them locally?
I could create an MD5 hash for the zip file on the server when creating the .zip file and then compare it with the MD5 of my local .zip file, but is there a simpler way? | Comparing local file with remote file | 0 | 0 | 0 | 4,055 |
3,423,845 | 2010-08-06T12:45:00.000 | 1 | 0 | 0 | 0 | python,performance,twisted | 3,433,333 | 3 | false | 0 | 0 | If I understand Twisted reactors correctly, they don't parallelize everything. Whatever operations have been queued are scheduled and done one by one.
One way out for you is to have a custom addCallback which checks how many callbacks have been registered already and drops work if necessary. | 2 | 4 | 0 | Is it somehow possible to "detect" that the reactor is overloaded and start dropping connections, or refuse new connections? How can we avoid the reactor being completely overloaded and not being able to catch up?
3,423,845 | 2010-08-06T12:45:00.000 | 1 | 0 | 0 | 0 | python,performance,twisted | 3,445,076 | 3 | false | 0 | 0 | I would approach this per protocol. Throttle when the actual service requires it, not when you think it will. Rather than worrying about how many callbacks are waiting for a reactor tick, I'd worry about how long the HTTP requests (for example) are taking to complete. The number of operations waiting for the reactor could be an implementation detail - for example, if one access pattern ended up with callbacks on long DeferredLists, and another had a more linear chain of callbacks, the time to respond might not be different even though the number of callbacks would be.
This could be done by keeping metrics of the time to complete a logical operation (such as servicing an HTTP request). An advantage of this is that it gives you important information before a problem happens. | 2 | 4 | 0 | Is it somehow possible to "detect" that the reactor is overloaded and start dropping connections, or refuse new connections? How can we avoid the reactor being completely overloaded and not being able to catch up? | Twisted: degrade gracefully performance in case reactor is overloaded? | 0.066568 | 0 | 0 | 487 |
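The metric-keeping idea can be sketched without any Twisted specifics: time each logical operation, keep a moving window of samples, and start refusing new work once the average crosses a threshold. The window size and threshold below are illustrative.

```python
from collections import deque

# Track recent request durations; refuse new connections when the
# moving average exceeds the configured budget.
class LoadShedder:
    def __init__(self, window=50, max_avg_seconds=0.5):
        self.samples = deque(maxlen=window)
        self.max_avg = max_avg_seconds

    def record(self, duration):
        self.samples.append(duration)

    def should_refuse(self):
        if not self.samples:
            return False
        return sum(self.samples) / len(self.samples) > self.max_avg
```

In a real protocol you would call record() when each request finishes and consult should_refuse() before accepting a new connection.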
3,424,899 | 2010-08-06T14:48:00.000 | 0 | 0 | 1 | 0 | python,date | 47,685,571 | 23 | false | 0 | 0 | You can use the function given below to get the date X months before/after a given date.
from calendar import monthrange
from datetime import date
def next_month(given_date, month):
    yyyy = ((given_date.year * 12 + given_date.month) + month) // 12
    mm = ((given_date.year * 12 + given_date.month) + month) % 12
    if mm == 0:
        yyyy -= 1
        mm = 12
    # Clamp the day so e.g. Mar 31 minus one month yields Feb 28/29
    # instead of raising ValueError for a nonexistent Feb 31.
    dd = min(given_date.day, monthrange(yyyy, mm)[1])
    return given_date.replace(year=yyyy, month=mm, day=dd)
if __name__ == "__main__":
    today = date.today()
    print(today)
    for mm in [-12, -1, 0, 1, 2, 12, 20]:
        next_date = next_month(today, mm)
        print(next_date) | 4 | 136 | 0 | If only timedelta had a month argument in its constructor. So what's the simplest way to do this?
EDIT: I wasn't thinking too hard about this as was pointed out below. Really what I wanted was any day in the last month because eventually I'm going to grab the year and month only. So given a datetime object, what's the simplest way to return any datetime object that falls in the previous month? | Return datetime object of previous month | 0 | 0 | 0 | 176,482 |
3,424,899 | 2010-08-06T14:48:00.000 | 1 | 0 | 1 | 0 | python,date | 61,557,952 | 23 | false | 0 | 0 | A one-liner:
previous_month_date = (current_date - datetime.timedelta(days=current_date.day + 1)).replace(day=current_date.day)
Beware: this raises ValueError when current_date.day does not exist in the previous month (e.g. for March 31, since there is no February 31). | 4 | 136 | 0 | If only timedelta had a month argument in its constructor. So what's the simplest way to do this?
EDIT: I wasn't thinking too hard about this as was pointed out below. Really what I wanted was any day in the last month because eventually I'm going to grab the year and month only. So given a datetime object, what's the simplest way to return any datetime object that falls in the previous month? | Return datetime object of previous month | 0.008695 | 0 | 0 | 176,482 |
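Since the question only needs "any day in the previous month", a variant that sidesteps the one-liner's day-overflow problem is to step back from the first of the current month, which always lands on the last day of the previous month:

```python
import datetime

def any_day_last_month(d):
    # The day before the 1st is guaranteed to be in the previous month.
    return d.replace(day=1) - datetime.timedelta(days=1)

print(any_day_last_month(datetime.date(2010, 3, 31)))  # 2010-02-28
```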
3,424,899 | 2010-08-06T14:48:00.000 | 20 | 0 | 1 | 0 | python,date | 3,425,016 | 23 | false | 0 | 0 | If only timedelta had a month argument in its constructor. So what's the simplest way to do this?
What do you want the result to be when you subtract a month from, say, a date that is March 30? That is the problem with adding or subtracting months: months have different lengths! In some applications an exception is appropriate in such cases; in others "the last day of the previous month" is OK to use (but that's truly crazy arithmetic, since subtracting a month and then adding a month is not overall a no-operation!); in others yet you'll want to keep, in addition to the date, some indication of the fact, e.g., "I'm saying Feb 28 but I really would want Feb 30 if it existed", so that adding or subtracting another month to that can set things right again (and the latter obviously requires a custom class holding a date plus something else).
There can be no real solution that is tolerable for all applications, and you have not told us what your specific app's needs are for the semantics of this wretched operation, so there's not much more help that we can provide here. | 4 | 136 | 0 | If only timedelta had a month argument in its constructor. So what's the simplest way to do this?
EDIT: I wasn't thinking too hard about this as was pointed out below. Really what I wanted was any day in the last month because eventually I'm going to grab the year and month only. So given a datetime object, what's the simplest way to return any datetime object that falls in the previous month? | Return datetime object of previous month | 1 | 0 | 0 | 176,482 |
3,424,899 | 2010-08-06T14:48:00.000 | 43 | 0 | 1 | 0 | python,date | 56,550,913 | 23 | false | 0 | 0 | A vectorized, pandas solution is very simple:
df['date'] - pd.DateOffset(months=1) | 4 | 136 | 0 | If only timedelta had a month argument in its constructor. So what's the simplest way to do this?
EDIT: I wasn't thinking too hard about this as was pointed out below. Really what I wanted was any day in the last month because eventually I'm going to grab the year and month only. So given a datetime object, what's the simplest way to return any datetime object that falls in the previous month? | Return datetime object of previous month | 1 | 0 | 0 | 176,482 |
3,425,643 | 2010-08-06T16:12:00.000 | 1 | 0 | 1 | 1 | python,applescript | 3,426,504 | 1 | true | 0 | 0 | I have a py2app application which runs an AppleScript using py-appscript. The AppleScript code is this one line:
app('Finder').update(<file alias of a certain file>)
What this normally does is update a file's preview in Finder. It works most of the time, except on Leopard. On Leopard, every time that script is executed, instead of updating the file, it starts a new instance of Finder. What am I doing wrong? The app was built on the same (Leopard) machine. | py-appscript is starting a new Finder instance | 1.2 | 0 | 0 | 266 |
3,427,287 | 2010-08-06T19:56:00.000 | 2 | 0 | 0 | 0 | python,django,passenger,wsgi | 3,428,960 | 1 | true | 1 | 0 | My advice would be to test locally using Django's built-in server.
It auto-reloads, so any change to your code is picked up without a restart.
I'm not familiar with Dreamhost, but if mod_wsgi is in embedded mode this is not possible.
In daemon mode, you could write some code to detect file changes and restart the processes. | 1 | 1 | 0 | I am using Django with Passenger on Dreamhost.
Every time I make a change to models, settings or views I need to pkill python from a terminal session. Does anyone know of a way to automate this? Is this something that Passenger can do? | Is there a way to automate restarting the python process after every change I make to Django models? | 1.2 | 0 | 0 | 205 |
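One way to script the "detect file changes and restart" idea from the answer above: snapshot the mtimes of your .py files and, when anything changed, touch Passenger's tmp/restart.txt (Passenger conventionally reloads an app once that file's mtime changes; check that your Dreamhost setup honours this, and note the paths here are illustrative).

```python
import os

# Walk the project, remembering each .py file's mtime; a differing
# snapshot means something changed and a restart should be triggered,
# e.g. by touching tmp/restart.txt under the application root.
def snapshot(root):
    mtimes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".py"):
                path = os.path.join(dirpath, name)
                mtimes[path] = os.path.getmtime(path)
    return mtimes

def changed(old, new):
    return old != new
```

A cron job (or a small loop) comparing snapshots and touching the restart file would spare you the manual pkill.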
3,427,505 | 2010-08-06T20:24:00.000 | 1 | 0 | 0 | 0 | python,django,postgresql,ipython | 3,428,147 | 1 | false | 1 | 0 | You may always run a cron job, that will call pg_cancel_backend() within the database, for the backends that are idle for longer than e.g. 1 day (of course that depends on the nagios settings). | 1 | 2 | 0 | It's not the fault of the django (iPython) shell, actually. The problem is developers who open the django shell ./manage.py shell run through some queries (it often only generates selects), and then either leave the shell running or somehow kill their (ssh) session (actually, I'm not sure if the latter case leaves the transaction open - I haven't tested it)
In any case, Nagios regularly alerts on these idle transactions. We could, of course, call developer.stop_doing_that_dammit(), but it's unreliable.
I'm looking for thoughts on resolving this in a way that allows developers to use the django shell, but closes transactions should they forget to close their session out. | django shell triggering Postgres idle transaction problems | 0.197375 | 0 | 0 | 437 |
3,427,795 | 2010-08-06T21:05:00.000 | 3 | 0 | 1 | 0 | .net,python,language-agnostic,naming-conventions | 3,427,822 | 12 | false | 0 | 0 | I'd recommend purchasing a copy of "Clean Code" by Robert C. Martin. It is full of great suggestions ranging from naming conventions to how to write easy-to-understand functions and much more. Definitely worth a read. I know it has influenced my coding style since reading it. | 2 | 4 | 0 | As you can probably tell from my previous posts, I have horrific naming conventions. Do you know of any tutorials dealing with how to name things? | are there tutorials on how to name variables? | 0.049958 | 0 | 0 | 509 |
3,427,795 | 2010-08-06T21:05:00.000 | 2 | 0 | 1 | 0 | .net,python,language-agnostic,naming-conventions | 3,427,834 | 12 | false | 0 | 0 | Have you read Code Complete? He does a full treatise on this in the book. Definitely the best naming strategy I've seen in print... And it's easy to find like 1000 programmers at the drop of a hat who name this one of the top 5 resources for programmers and program design.
Just my $.05 | 7 | 4 | 0 | as you can probably tell from my previous posts i have horrific naming conventions. do you know of any tutorials dealing with how to name stuff? | are there tutorials on how to name variables? | 0.033321 | 0 | 0 | 509 |
3,427,795 | 2010-08-06T21:05:00.000 | 3 | 0 | 1 | 0 | .net,python,language-agnostic,naming-conventions | 3,427,856 | 12 | false | 0 | 0 | There are many different views on the specifics of naming conventions, but the overall gist could be summed up as:
Each variable name should be relevant to whatever data is stored in the variable.
Your naming scheme should be consistent.
So a major no-no would be single-letter variables (some people use i and j for indexing loops, which is OK because every programmer knows what they are; nevertheless, I prefer 'idx' instead of 'i'). Also out are names like 'method1' - it means nothing; a name should indicate what the variable holds.
Another (less common) convention is 'Hungarian' notation, where the data type is prefixed to the variable name, such as 'int i_idx'. This is quite useless in modern, object-oriented programming languages, not to mention a blatant violation of the DRY principle.
The second point, consistency, is just as important. camelCase, UpperCamelCase, whatever - just don't switch between them for no reason.
You'll find that naming conventions vary from language to language, and often a company will have its own rules on naming.
It's a worthwhile investment to properly name your variables, because when you come to maintain your code much later on and have forgotten what everything means, it will pay dividends. | 7 | 4 | 0 | As you can probably tell from my previous posts, I have horrific naming conventions. Do you know of any tutorials dealing with how to name things? | are there tutorials on how to name variables? | 0.049958 | 0 | 0 | 509 |
3,427,795 | 2010-08-06T21:05:00.000 | 1 | 0 | 1 | 0 | .net,python,language-agnostic,naming-conventions | 3,428,775 | 12 | false | 0 | 0 | It's not clear if your question relates to Python naming conventions.
If so, for starters I would try to follow just these simple rules:
ClassName - upper case for class names
variable_name - lower case and underscore for variables (I try to keep them at two words maximum) | 7 | 4 | 0 | as you can probably tell from my previous posts i have horrific naming conventions. do you know of any tutorials dealing with how to name stuff? | are there tutorials on how to name variables? | 0.016665 | 0 | 0 | 509 |
3,427,795 | 2010-08-06T21:05:00.000 | 5 | 0 | 1 | 0 | .net,python,language-agnostic,naming-conventions | 3,428,433 | 12 | false | 0 | 0 | A bad convention followed fully is better than a combination of different good "conventions" (which aren't conventions at all any more, if they aren't kept to).
However, a convention that is making something less clear than if it had been ignored, should be ignored.
Those are the only two I would state as any sort of rule. Beyond that convention preferences are a matter of opinions quickly turning into rants. The rest of this post is exactly that, and shouldn't be read as anything else.
For collections, use natural language plurals. In English, this means data, schemata, children, indices, criteria, formulae (and indeed foci, geese, feet, men, women, teeth) not made-up words like datums, schemas, childs, indexes, criterions, formulas (and likewise focuses, gooses, foots, mans, womans, tooths, believe it or not I've actually seen some of those in use). Camel-casing and abbreviating does enough damage to English as it is, without doing more. Okay, I've never seen datums, but I have seen the meta-plural datas. Sweet Aradia, why?
That said, use American English for names, even if you use a different dialect of English. Most coders with such dialects have learnt to think of "color" as a word for colours in a computer context by the age of 12, and the principle applies more widely. If we can deal with "color" (one of Webster's worse bastardisations) we can deal with -ize and -ization (-ise and -isation is a pseudo-French 18th C affectation anyway, the Americans are the traditionalists on this one).
Similarly, if you aren't sure how to spell a word that you are using as the whole or part of a name, look it up (google it and see what google says). Somebody may spend a long time distracted by your misspelling that is so liberally distributed throughout running code as to make fixing it daunting.
Hungarian is bad (in many modern languages, though in some it has its place) but the principle is good. If you can tell a const, static, instance, local and parameter variable from each other in a split-second, that's good. If you can tell something of the intent immediately, that's good too.
Related to that, _ before public variables makes them non-CLR-compliant. That's actually a good thing for private variables (if you make them public for a quick experiment, and forget to fix the visibility, the compiler will warn you).
Remember Postel's Law, "be conservative in what you do, be liberal in what you accept from others". One example of this is to act as if you are using a case-sensitive language, even if you're using a case-insensitive one. A related one is to be more of a stickler in public names than private ones. Yet another is to pay more attention to following conventions well than to complaining about those who don't. | 7 | 4 | 0 | as you can probably tell from my previous posts i have horrific naming conventions. do you know of any tutorials dealing with how to name stuff? | are there tutorials on how to name variables? | 0.083141 | 0 | 0 | 509 |
3,427,795 | 2010-08-06T21:05:00.000 | 4 | 0 | 1 | 0 | .net,python,language-agnostic,naming-conventions | 3,428,164 | 12 | true | 0 | 0 | All the answers here are quite valid. Most important: be consistent.
That said, here are my rules (C#):
camelCase identifiers -- I personally find this much easier to read than underscores
Public properties start with a capital letter
Something I should never touch starts with an underscore -- example, the backing field to a property should only be touched from the property. If I have underscores elsewhere, I know I'm wrong
Apps Hungarian where appropriate -- ints describing row IDs perhaps could be named rowSelected, rowNextUnread, et cetera. This is different than Systems Hungarian, which would mark them as ints such as iSelected, iNextUnread. Systems Hungarian doesn't add much if anything, where Apps Hungarian gives information the type doesn't: it tells me adding rowItemsPerPage and colSelected is a meaningless operation, even though it compiles just fine. | 7 | 4 | 0 | as you can probably tell from my previous posts i have horrific naming conventions. do you know of any tutorials dealing with how to name stuff? | are there tutorials on how to name variables? | 1.2 | 0 | 0 | 509 |
3,427,795 | 2010-08-06T21:05:00.000 | 1 | 0 | 1 | 0 | .net,python,language-agnostic,naming-conventions | 3,428,577 | 12 | false | 0 | 0 | Can I make a shameless plug for the "Names" chapter in my book, "A Sane Approach to Database Design" ? I'm specifically talking about names for things in databases, but most of the same considerations apply to variables in programs. | 7 | 4 | 0 | as you can probably tell from my previous posts i have horrific naming conventions. do you know of any tutorials dealing with how to name stuff? | are there tutorials on how to name variables? | 0.016665 | 0 | 0 | 509 |
3,427,946 | 2010-08-06T21:31:00.000 | 2 | 0 | 0 | 1 | java,python,google-app-engine,web-applications,stripes | 3,428,411 | 9 | false | 1 | 0 | As many things in life, this depends on what your goals are. If you intend to learn a web framework that is used in corporate environments, then choose a Java solution. If not, don't. Python is certainly more elegant and generally more fun in pretty much every way.
As to which framework to use, django has the most mindshare, as evidenced by the number of questions asked about it here. My understanding is that it's also pretty good. It's best suited for CMS-like web sites, though - at least that's what it's coming from and what it's optimized for. You might also have a look at one of the simpler, nimbler ones, such as the relatively new flask. All of these are enjoyable, though they may not all have all features on AppEngine. | 5 | 2 | 0 | I plan to start a mid sized web project, what language + framework would you recommend?
I know Java and Python. I am looking for something simple.
Is App Engine a good option? I like the overall simplicity and free hosting, but I am worried about the datastore (how difficult is it to make it similarly fast as a standard SQL solution? + I need fulltext search + I need to filter objects by several parameters).
What about Java with Stripes? Should I use another framework in addition to Stripes (e.g. for database).
UPDATE:
Thanks for the advice, I finally decided to use Django with Eclipse/PyDev as an IDE.
Python/Django is simple and elegant, it's widely used and there is a great documentation. A small disadvantage is that perhaps I'll have to buy a VPS, but it shouldn't be very hard to port the project to App Engine, which is free to some extent. | What language (Java or Python) + framework for mid sized web project? | 0.044415 | 0 | 0 | 492 |
3,427,946 | 2010-08-06T21:31:00.000 | 0 | 0 | 0 | 1 | java,python,google-app-engine,web-applications,stripes | 3,428,479 | 9 | false | 1 | 0 | It depends on your personality. There's no right answer to this question any more than there's a right answer to "what kind of car should I drive?"
If you're artistic and believe code should be beautiful, use Rails.
If you're a real hacker type, I think you'll find a full-stack framework such as Rails or Django to be unsatisfying. These frameworks are "opinionated" software, which means you have to really embrace the author's vision to be most productive.
The wonderful thing about web development in the Python world is there's several great minimal frameworks. I've used several, including web.py, GAE's webapp, and cherrypy. These frameworks are like "here's a request, give me a string to serve up." It's raw. Don't think you'll be stuck in Python concatenating strings though, God no. There's also several excellent templating libraries for Python. I can personally recommend Cheetah but Mako also looks good. | 5 | 2 | 0 | I plan to start a mid sized web project, what language + framework would you recommend?
I know Java and Python. I am looking for something simple.
Is App Engine a good option? I like the overall simplicity and free hosting, but I am worried about the datastore (how difficult is it to make it similarly fast as a standard SQL solution? + I need fulltext search + I need to filter objects by several parameters).
What about Java with Stripes? Should I use another framework in addition to Stripes (e.g. for database).
UPDATE:
Thanks for the advice, I finally decided to use Django with Eclipse/PyDev as an IDE.
Python/Django is simple and elegant, it's widely used and there is a great documentation. A small disadvantage is that perhaps I'll have to buy a VPS, but it shouldn't be very hard to port the project to App Engine, which is free to some extent. | What language (Java or Python) + framework for mid sized web project? | 0 | 0 | 0 | 492 |
3,427,946 | 2010-08-06T21:31:00.000 | 0 | 0 | 0 | 1 | java,python,google-app-engine,web-applications,stripes | 3,428,497 | 9 | false | 1 | 0 | Google App Engine + GWT and you have a pretty powerful combination for developing web applications. The datastore is quite fast, and it has so far done the job quite nicely for me.
In my project I had to do a lot of redesigning of my database model, because it was made for a traditional relational database, and some things were not (directly) possible with the datastore.
GWT has a fairly moderate learning curve, but it gets the job done very well. The gui code is really easy to get started with, but it's the asynchronous way of thinking that's the hardest part.
As for search I don't think it's supported in the framework. Filtering is possible on parameters.
There are some limitations to GAE, and you should consider them before putting all your eggs in that basket. The fact that GAE uses J2EE distribution standards makes the application very easy to move to a dedicated server, should the limitations of GAE become a problem. In fact I only think you would have to refactor the part of your code that makes the queries and stores the data (which shouldn't be much more than 100 lines). | 5 | 2 | 0 | I plan to start a mid sized web project, what language + framework would you recommend?
I know Java and Python. I am looking for something simple.
Is App Engine a good option? I like the overall simplicity and free hosting, but I am worried about the datastore (how difficult is it to make it similarly fast as a standard SQL solution? + I need fulltext search + I need to filter objects by several parameters).
What about Java with Stripes? Should I use another framework in addition to Stripes (e.g. for database).
UPDATE:
Thanks for the advice, I finally decided to use Django with Eclipse/PyDev as an IDE.
Python/Django is simple and elegant, it's widely used and there is a great documentation. A small disadvantage is that perhaps I'll have to buy a VPS, but it shouldn't be very hard to port the project to App Engine, which is free to some extent. | What language (Java or Python) + framework for mid sized web project? | 0 | 0 | 0 | 492 |
3,427,946 | 2010-08-06T21:31:00.000 | 0 | 0 | 0 | 1 | java,python,google-app-engine,web-applications,stripes | 3,428,141 | 9 | false | 1 | 0 | I don't think the datastore is a problem. Many people will reject it out of hand because they want a standard relational database; if you are willing to consider a datastore in general then I doubt you will have any problems with the GAE datastore. Personally, I quite like it.
The thing that might trip you up is the operational limitations. For example, did you know that an HTTP request must complete within 10 seconds?
What if you get 50% of the way through a project and then find that a web service you are using sometimes take 15 seconds to respond? Now you are toast. You can't pay extra to get the limit raised or anything like that.
So, my point is that you must approach GAE with great care. Learn about the limitations and make sure that they will not be a problem before you start using it. | 5 | 2 | 0 | I plan to start a mid sized web project, what language + framework would you recommend?
I know Java and Python. I am looking for something simple.
Is App Engine a good option? I like the overall simplicity and free hosting, but I am worried about the datastore (how difficult is it to make it similarly fast as a standard SQL solution? + I need fulltext search + I need to filter objects by several parameters).
What about Java with Stripes? Should I use another framework in addition to Stripes (e.g. for database).
UPDATE:
Thanks for the advice, I finally decided to use Django with Eclipse/PyDev as an IDE.
Python/Django is simple and elegant, it's widely used and there is a great documentation. A small disadvantage is that perhaps I'll have to buy a VPS, but it shouldn't be very hard to port the project to App Engine, which is free to some extent. | What language (Java or Python) + framework for mid sized web project? | 0 | 0 | 0 | 492 |
3,427,946 | 2010-08-06T21:31:00.000 | 0 | 0 | 0 | 1 | java,python,google-app-engine,web-applications,stripes | 3,430,891 | 9 | false | 1 | 0 | I've built several apps on GAE (with Python) over the last year. It's hard to beat the ease with which you can get an app up and running quickly. Don't discount the value in that alone.
While you may not understand the datastore yet, it is extremely well documented and there are great resources - including this one - to help you get past any problem you might have. | 5 | 2 | 0 | I plan to start a mid sized web project, what language + framework would you recommend?
I know Java and Python. I am looking for something simple.
Is App Engine a good option? I like the overall simplicity and free hosting, but I am worried about the datastore (how difficult is it to make it similarly fast as a standard SQL solution? + I need fulltext search + I need to filter objects by several parameters).
What about Java with Stripes? Should I use another framework in addition to Stripes (e.g. for database).
UPDATE:
Thanks for the advice, I finally decided to use Django with Eclipse/PyDev as an IDE.
Python/Django is simple and elegant, it's widely used and there is a great documentation. A small disadvantage is that perhaps I'll have to buy a VPS, but it shouldn't be very hard to port the project to App Engine, which is free to some extent. | What language (Java or Python) + framework for mid sized web project? | 0 | 0 | 0 | 492 |
3,428,245 | 2010-08-06T22:29:00.000 | 18 | 1 | 1 | 0 | python | 3,428,288 | 2 | true | 0 | 0 | I would recommend studying the Standard Python Library (all the parts of it that are coded in Python, that is) -- it's not uniformly excellent in elegance, but it sets a pretty high standard. Plus, the study has the extra benefit of making you very familiar with the library itself (an absolutely crucial part of mastering Python), in addition to showing you a lot good to excellent Python style code;-).
Edit: I have to point out (or my wife and co-author Anna has threatened to not cook the yummy steak I see waiting;-) that the Python Cookbook, 2nd printed edition, also has a lot of code examples, in the best style Anna and I could make them, and with substantial discussion of style variations and alternatives. However, it's stuck back in time to the days of Python 2.4 (sorry, no time to do a third edition for now...), and that's a real block for some people (though I think that having learned good Python 2.4 style, moving to good 2.7 or 3.1 style is really an "incremental" matter, that's definitely a subjective opinion). "Declaring my interest": Anna and I still get some royalties if you buy the book, and, more importantly, the Python Software Foundation (near and dear to both our hearts -- our Prius's vanity license plate reads "P♥THON"...!-) gets more -- so obviously we're biased in the book's favor;-). If you don't want to spend money, you can read some parts of the book online and for free on Google Books (O'Reilly gets to pick and choose which parts are thus freely readable, so please don't complain to me [[or Anna]] about those choices...!-).
I wish I could recommend the online edition of the Cookbook, which does have recipes that are very recent as well as the classic old ones among which we picked and chose most of the printed edition's ones -- but, unfortunately, there are lots of style issues with too many of the online recipes to recommend them collectively as "good style examples" (and that goes for the good recipes too: most of the recipes we picked for the book, we also heavily edited to improve style (and readability, and performance, but those often go hand-in-hand with Python). | 1 | 13 | 0 | I am trying to teach myself Python, and I have realized that the only way I really learn stuff is by reading the actual programs. Tutorials/manuals just cause me to feel deeply confused.
It's just my learning style, and I'm like that with everything I've studied (including natural languages -- I've managed to teach myself three of them by just getting into the actual 'flow of it').
Classical music once had the concept of a 'gamut' -- playing the entire range of an instrument in an artful manner. I'm guessing that there may be a few well-written scripts out there that really show off every feature of the language. It doesn't matter what they do, I just want to start studying Python by reading programs themselves.
I remember coming across a similar method years ago when I studied some LISP. It was a book, published by Springer Verlag, consisting solely of AI programs, to be read for their didactic merit. | Elegant Python? | 1.2 | 0 | 0 | 2,872 |
3,429,159 | 2010-08-07T04:47:00.000 | 3 | 0 | 1 | 0 | python,multithreading,ironpython,jython,gil | 3,429,166 | 2 | false | 0 | 0 | My guess, because the C libraries that CPython is built upon aren't thread-safe. Whereas Jython and IronPython are built against the Java and .Net respectively. | 1 | 6 | 0 | Why is it that you can run Jython and IronPython without the need for a GIL but Python (CPython) requires a GIL? | Python requires a GIL. But Jython & IronPython don't. Why? | 0.291313 | 0 | 0 | 2,607 |
3,429,887 | 2010-08-07T09:23:00.000 | 3 | 1 | 0 | 0 | python,android,eclipse,workspace,eclipse-pdt | 3,430,003 | 2 | true | 1 | 0 | The plug-ins are stored in the Eclipse installation, not in the workspace folder. So one solution would be to different Eclipse installations for every task, in this case only the required plug-ins would load (and the others not available), on the other hand, you have to maintain at least three parallel Eclipse installations.
Another solution is to disable plug-in activation on startup: in Preferences/General/Startup and Shutdown you can stop individual plug-ins from loading. The problem with this approach is that it only prevents plug-ins from loading at startup, but their menu and toolbar contributions will still be loaded. | 1 | 4 | 0 | I use Eclipse for programming in PHP (PDT), Python and sometimes Android. Each of these programming languages requires running many things after Eclipse starts.
Of course I do not use all of them at one moment, I have different workspace for each of those. Is there any way, or recommendation, how to make Eclipse to run only neccessary tools when opening defined workspace?
e.g.:
I choose /workspace/www/, so then only PDT tools will run
I choose /workspace/android/, so then only Android tools and buttons in toolbars will appears
Do I have to manually remove all unneccessary things from each of the workspace? Or it is either possible to remove all? | How to organize Eclipse - Workspace VS Programming languages | 1.2 | 0 | 0 | 741 |
3,430,016 | 2010-08-07T10:08:00.000 | 1 | 1 | 1 | 0 | python,algorithm,data-structures,md5,base64 | 3,452,248 | 4 | true | 0 | 0 | David gave an answer that works on all base64 strings.
Just use base64.decodestring in the base64 module. That is,
import base64
binary = base64.decodestring(base64_string)
is a more memory efficient representation of the original base64 string. If you
are truncating trailing '==' in your base64 md5, use it like
base64.decodestring(md5+'==') | 3 | 4 | 0 | Suppose you have a MD5 hash encoded in base64. Then each
character needs only 6 bits to store each character in the
resultant 22-byte string (excluding the ending '=='). Thus, each
base64 md5 hash can shrink down to 6*22 = 132 bits, which
requires 25% less memory space compared to the original 8*22=176
bits string.
Is there any Python module or function that lets you store base64
data in the way described above? | Most memory-efficient way of holding base64 data in Python? | 1.2 | 0 | 0 | 1,820 |
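To make the memory claim concrete, here is a small sketch. The digest value shown is just the MD5 of b"hello world"; base64.decodestring was the Python 2 name, so base64.b64decode is used below to keep the example runnable on Python 3 as well:

```python
import base64

# base64-encoded MD5 of b"hello world", with the trailing '==' stripped
# as described in the question (22 characters instead of 24).
b64_md5 = "XrY7u+Ae7tCTyyK7j1rNww"

# Re-attach the padding and decode down to the raw 16 digest bytes.
raw = base64.b64decode(b64_md5 + "==")

assert len(b64_md5) == 22   # base64 text form
assert len(raw) == 16       # raw binary form: about 25% smaller
```
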
3,430,016 | 2010-08-07T10:08:00.000 | 4 | 1 | 1 | 0 | python,algorithm,data-structures,md5,base64 | 3,430,256 | 4 | false | 0 | 0 | "store base64 data"
Don't.
Do. Not. Store. Base64. Data.
Base64 is built by encoding something to make it bigger.
Store the original something. Never store the base64 encoding of something. | 3 | 4 | 0 | Suppose you have a MD5 hash encoded in base64. Then each
character needs only 6 bits to store each character in the
resultant 22-byte string (excluding the ending '=='). Thus, each
base64 md5 hash can shrink down to 6*22 = 132 bits, which
requires 25% less memory space compared to the original 8*22=176
bits string.
Is there any Python module or function that lets you store base64
data in the way described above? | Most memory-efficient way of holding base64 data in Python? | 0.197375 | 0 | 0 | 1,820 |
3,430,016 | 2010-08-07T10:08:00.000 | 8 | 1 | 1 | 0 | python,algorithm,data-structures,md5,base64 | 3,430,100 | 4 | false | 0 | 0 | The most efficient way to store base64 encoded data is to decode it and store it as binary. base64 is a transport encoding - there's no sense in storing data in it, especially in memory, unless you have a compelling reason otherwise.
Also, nitpick: The output of a hash function is not a hex string - that's just a common representation. The output of a hash function is some number of bytes of binary data. If you're using the md5, sha, or hashlib modules, for example, you don't need to encode it as anything in the first place - just call .digest() instead of .hexdigest() on the hash object. | 3 | 4 | 0 | Suppose you have a MD5 hash encoded in base64. Then each
character needs only 6 bits to store each character in the
resultant 22-byte string (excluding the ending '=='). Thus, each
base64 md5 hash can shrink down to 6*22 = 132 bits, which
requires 25% less memory space compared to the original 8*22=176
bits string.
Is there any Python module or function that lets you store base64
data in the way described above? | Most memory-efficient way of holding base64 data in Python? | 1 | 0 | 0 | 1,820 |
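A quick illustration of the .digest() vs .hexdigest() point with hashlib, hashing b"hello world" as an arbitrary example:

```python
import hashlib

h = hashlib.md5(b"hello world")

raw = h.digest()       # 16 bytes of binary data -- this is what you store
hexed = h.hexdigest()  # 32-character hex string -- a human-readable view

assert len(raw) == 16
assert len(hexed) == 32
```
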
3,431,154 | 2010-08-07T16:25:00.000 | 0 | 0 | 0 | 0 | python,wxpython | 3,431,426 | 2 | false | 0 | 1 | You could put each picture in a panel, and use SetBackgroundColour() to set the background color of the panel. | 1 | 1 | 0 | I have a Panel with a bunch of pictures placed on it in a GridSizer layout. How can I draw a highlighted color around the edge of an image or its border to show that it has been selected upon a mouse click event? | wxPython: Highlight item in GridSizer upon mouse click | 0 | 0 | 0 | 233 |
3,431,844 | 2010-08-07T19:55:00.000 | 1 | 0 | 1 | 1 | python,tar | 3,431,918 | 1 | true | 0 | 0 | You can't scan the contents of a tar without scanning the entire file; it has no central index. You need something like a ZIP. | 1 | 0 | 0 | I use python tarfile module.
I have a system backup in tar.gz file.
I need to get the list of first-level dirs and files without listing ALL the files in the archive, because that takes TOO LONG.
For example: I need to get ['bin/', 'etc/', ... 'var/'] and that's all.
How can I do it? May be not even with a tar-file? Then how? | Get big TAR(gz)-file contents by dir levels | 1.2 | 0 | 0 | 247 |
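The answer's point can be combined with streaming: tarfile's "r|gz" mode reads member headers one at a time instead of building the full member list in memory, though every header is still read in sequence. A self-contained sketch (the in-memory archive and its member names are made up for illustration, and it assumes names are stored without a leading "./"):

```python
import io
import tarfile

# Build a tiny tar.gz in memory so the sketch is self-contained; in
# practice you would pass the backup's path to tarfile.open instead.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name in ("bin/ls", "etc/passwd", "var/log/syslog"):
        tar.addfile(tarfile.TarInfo(name))

buf.seek(0)
top_level = set()
# mode "r|gz" streams headers one member at a time instead of building
# the whole member list up front.
with tarfile.open(fileobj=buf, mode="r|gz") as tar:
    for member in tar:
        top_level.add(member.name.split("/")[0] + "/")

assert top_level == {"bin/", "etc/", "var/"}
```
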
3,433,131 | 2010-08-08T04:58:00.000 | 10 | 0 | 0 | 0 | python,django,django-models,model | 3,433,146 | 2 | true | 1 | 0 | You can use ModelName.add_to_class (or .contribute_to_class), but if you have already run syncdb, then there is no way to automatically have it add the columns you need.
For maintainable code, you will probably want to extend by sub-classing the desired model in your own app, and use something like south to handle the database migrations, or just use a OneToOneField, and have a related model (like UserProfile is to auth.User). | 1 | 10 | 0 | I want to add a column to a database table but I don't want to modify the 3rd party module in case I need/decide to upgrade the module in the future. Is there a way I can add this field within my code so that with new builds I don't have to add the field manually? | Django - how to extend 3rd party models without modifying | 1.2 | 0 | 0 | 3,026 |
3,433,559 | 2010-08-08T08:09:00.000 | 2 | 0 | 1 | 0 | python,time | 67,426,999 | 4 | false | 0 | 0 | If the non-block feature is not needed, just use time.sleep(5) which will work anywhere and save your life. | 2 | 57 | 0 | I want to know how to call a function after a certain time. I have tried time.sleep() but this halts the whole script. I want the script to carry on, but after ???secs call a function and run the other script at the same time | Python Time Delays | 0.099668 | 0 | 0 | 55,176 |
3,433,559 | 2010-08-08T08:09:00.000 | 5 | 0 | 1 | 0 | python,time | 3,434,738 | 4 | false | 0 | 0 | If you want a function to be called after a while and not to stop your script you are inherently dealing with threaded code. If you want to set a function to be called and not to worry about it, you have to either explicitly use multi-threading - like em Mark Byers's answr, or use a coding framework that has a main loop which takes care of function dispatching for you - like twisted, qt, gtk, pyglet, and so many others. Any of these would require you to rewrite your code so that it works from that framework's main loop.
It is either that, or writing some main loop for event checking yourself in your code -
All in all, if the only thing you want is single function calls, threading.Timer is the way to do it. If you want to use these timed calls to actually loop the program as is usually done with javascript's setTimeout, you are better off selecting one of the coding frameworks I listed above and refactoring your code to take advantage of it. | 2 | 57 | 0 | I want to know how to call a function after a certain time. I have tried time.sleep() but this halts the whole script. I want the script to carry on, but after ???secs call a function and run the other script at the same time | Python Time Delays | 0.244919 | 0 | 0 | 55,176 |
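A minimal sketch of the non-blocking approach with threading.Timer (the 0.1-second delay and the function names are arbitrary):

```python
import threading

results = []
done = threading.Event()

def delayed():
    results.append("called later")
    done.set()

# Schedule delayed() to run after 0.1 seconds without blocking the
# main thread, which keeps going in the meantime.
t = threading.Timer(0.1, delayed)
t.start()

results.append("main thread continues")
done.wait(timeout=5)  # only so this sketch can check the ordering

assert results == ["main thread continues", "called later"]
```
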
3,438,531 | 2010-08-09T08:50:00.000 | 5 | 0 | 1 | 1 | python,user-interface,ipython,python-idle | 11,456,303 | 14 | false | 0 | 0 | Try Spyder. I have spent all day trying to find an IDE which has the functionality of IPython, and Spyder just kicks it out of the park.
Autocomplete is top notch right from install, no config files and all that crap, and it has an IPython terminal in the corner for you to instantly run your code.
big thumbs up | 1 | 51 | 0 | Is there a GUI for IPython that allows me to open/run/edit Python files? My way of working in IDLE is to have two windows open: the shell and a .py file. I edit the .py file, run it, and interact with the results in the shell.
Is it possible to use IPython like this? Or is there an alternative way of working? | IPython workflow (edit, run) | 0.071307 | 0 | 0 | 33,816 |
3,439,020 | 2010-08-09T10:14:00.000 | 1 | 0 | 0 | 1 | python,queue,task | 3,439,041 | 3 | false | 0 | 0 | This is a bit of a vague question. One thing you should remember is that it is very difficult to leak memory in Python, because of the automatic garbage collection. croning a Python script to handle the queue isn't very nice, although it would work fine.
I would use method 1; if you need more power you could make a small Python process that monitors the DB queue and starts new processes to handle the tasks. | 1 | 0 | 0 | Task is:
I have a task queue stored in a DB. It grows. I need to process the tasks with a Python script when I have resources for it. I see two ways:
A Python script working all the time. But I don't like it (reason: possible memory leak).
A Python script called by cron that does a small part of the task. But I need to ensure only one active script is working at a time (to prevent the number of active scripts from growing). What is the best solution to implement it in Python?
Any ideas to solve this problem at all? | Tasks queue process in python | 0.066568 | 0 | 0 | 883 |
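For option 2, one common way to keep cron from piling up overlapping workers is a non-blocking exclusive lock on a well-known file. This is a Unix-only sketch, and the lock-file name is just an example:

```python
import fcntl
import os
import sys
import tempfile

# Take an exclusive, non-blocking lock and exit immediately if another
# instance already holds it. The kernel releases the lock automatically
# when the process dies, so a crashed worker cannot wedge the queue.
LOCKFILE = os.path.join(tempfile.gettempdir(), "task_worker.lock")

lock_fd = open(LOCKFILE, "w")
try:
    fcntl.flock(lock_fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    sys.exit(0)  # another cron-launched worker is active; let it finish

# ... pull a batch of tasks from the DB queue and process them here ...
```
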