Q_Id int64 337 49.3M | CreationDate stringlengths 23 23 | Users Score int64 -42 1.15k | Other int64 0 1 | Python Basics and Environment int64 0 1 | System Administration and DevOps int64 0 1 | Tags stringlengths 6 105 | A_Id int64 518 72.5M | AnswerCount int64 1 64 | is_accepted bool 2
classes | Web Development int64 0 1 | GUI and Desktop Applications int64 0 1 | Answer stringlengths 6 11.6k | Available Count int64 1 31 | Q_Score int64 0 6.79k | Data Science and Machine Learning int64 0 1 | Question stringlengths 15 29k | Title stringlengths 11 150 | Score float64 -1 1.2 | Database and SQL int64 0 1 | Networking and APIs int64 0 1 | ViewCount int64 8 6.81M |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,979,697 | 2010-06-05T08:31:00.000 | 0 | 0 | 0 | 0 | python,geometry,tkinter | 2,979,707 | 4 | false | 0 | 1 | One approach is to go through your points and partition them into sets with a "center". Since you have 5 colours, you'll have 5 sets. You compare the distance of the new point from each of the centers and then put it in the same group as the closest one.
Each set corresponds to a different colour so you can just plot it after this partitioning is done. | 2 | 4 | 0 | I have a dense set of points in the plane. I want them colored so that points that are close to each other have the same color, and a different color if they're far away. For simplicity assume that there are, say, 5 different colors to choose from. Turns out I've not the slightest idea how to do that.
I'm using Tkinter with Python, by the way. | Coloring close points | 0 | 0 | 0 | 270 |
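The nearest-center idea in the answer above can be sketched in a few lines. The centers and colour names below are invented for illustration; in practice the centers could come from the data itself (for instance from a k-means pass):

```python
import math

# Hypothetical centers, one per colour; in a real app these would be
# derived from the point set rather than hard-coded.
CENTERS = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 5)]
COLORS = ["red", "green", "blue", "yellow", "purple"]

def color_for(point):
    """Assign `point` the colour of its closest center."""
    distances = [math.dist(point, c) for c in CENTERS]
    return COLORS[distances.index(min(distances))]
```

Each point can then be drawn on a Tkinter canvas in its assigned colour once the partitioning is done.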
2,980,257 | 2010-06-05T12:08:00.000 | 1 | 0 | 0 | 0 | python,database,sqlite,dictionary,csv | 2,980,269 | 1 | true | 0 | 0 | As long as they will all fit in memory, a dict will be the most efficient solution. It's also a lot easier to code. 100k records should be no problem on a modern computer.
You are right that switching to an SQLite database is a good choice when the number of records gets very large. | 1 | 3 | 1 | I am writing an app to do a file conversion and part of that is replacing old account numbers with new account numbers.
Right now I have a CSV file mapping the old and new account numbers with around 30K records. I read this in and store it as a dict, and when writing the new file I grab the new account number from the dict by key.
My question is what is the best way to do this if the CSV file increases to 100K+ records?
Would it be more efficient to convert the account mappings from CSV to an SQLite database rather than storing them as a dict in memory? | Efficient way to access a mapping of identifiers in Python | 1.2 | 1 | 0 | 109 |
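The accepted dict approach amounts to roughly the following sketch. The `old,new` column layout is assumed for illustration; the real CSV may differ:

```python
import csv
import io

def load_mapping(csvfile):
    """Build an old->new account lookup from a two-column CSV."""
    return {old: new for old, new in csv.reader(csvfile)}

# In-memory sample standing in for the real 30K-100K row file.
sample = io.StringIO("10001,20001\n10002,20002\n")
mapping = load_mapping(sample)
print(mapping["10001"])  # prints 20001
```

Lookups are O(1), and 100K short strings is only a few megabytes of memory, which is why the dict wins here.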
2,982,708 | 2010-06-06T00:52:00.000 | 1 | 0 | 0 | 0 | python,django,admin | 2,982,717 | 1 | false | 1 | 0 | Not reliably. What will happen if multiple people access it at the same time is that data will be overwritten. Let the PK serve its purpose behind the scenes. | 1 | 1 | 0 | When adding new data, can we automatically add a dynamic default where the value is the previously recorded value (0002) + 1 = 0003? | Django admin add page, how to, autofill with latest data(0002)+1=0003 | 0.197375 | 0 | 0 | 165 |
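The answer's concern about concurrent access can be illustrated without Django at all: two admin users who load the add page at the same time both read the same latest value and both compute the same "next" number. The names below are purely illustrative:

```python
latest = "0002"  # the most recently saved record number

def next_number(current):
    """Naive 'previous value + 1' default, zero-padded to 4 digits."""
    return f"{int(current) + 1:04d}"

# Both users read `latest` before either of them saves...
user_a = next_number(latest)
user_b = next_number(latest)
# ...so they collide on the same value, and one save overwrites the other.
```

This is why the answer recommends letting the auto-incrementing primary key do this job at the database level, where uniqueness is enforced.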
2,982,829 | 2010-06-06T01:54:00.000 | 5 | 0 | 1 | 1 | python,assembly | 2,982,842 | 13 | false | 0 | 0 | I don't think writing it in assembly will help you. Writing a routine in assembly could help you if you are processor-bound and think you can do something smarter than your compiler. But in a network copy, you will be IO bound, so shaving a cycle here or there almost certainly will not make a difference.
I think the general rule here is that it's always best to profile your process to see where you are spending the time before thinking about optimizations. | 9 | 18 | 0 | As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest but I do not believe I am getting close to the limits of the capabilities of my XP machine.
I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies.
I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would, but I only started really learning to program 18 months ago and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages.
Any observations or experiences would be appreciated. Note: I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on which approach will give me more speed. Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk but sometimes I need to go from a disk to Drobo or the reverse. I am thinking that given that I can sector copy a 3/4 full 2 terabyte drive using the LogicCube in under seven hours I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss)
The reason I need to speed it up is I have had two or three cycles where something has happened during copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.
Thanks for the early responses but I don't think it is I/O limitations. I am not going over a network; the drive is plugged into my motherboard with a SATA connection and my Drobo is plugged into a FireWire port, so my thinking is that both connections should allow faster transfer.
Actually I can't use a sector copy except going from a single disk to the Drobo. It won't work the other way since the Drobo file structure is a mystery. My unscientific observation is that the copy from one internal disk to another is no faster than a copy to or from the Drobo to an internal disk.
I am bound by the hardware; I can't afford 10K RPM 2-terabyte drives (if they even make them).
A number of you are suggesting a file syncing solution. But that does not solve my problem. First off, the file syncing solutions I have played with build a map (for want of a better term) of the data first; I have too many little files, so they choke. One of the reasons I use RICHCOPY is that it starts copying immediately; it does not use memory to build a map. Second, I had one of my three Drobo backups fail a couple of weeks ago. My rule is that if I have a backup failure the other two have to stay offline until the new one is built. So I need to copy from one of the three backup single-drive copies I have that I use with the LogicCube.
At the end of the day I have to have a good copy on a single drive because that is what I deliver to my clients. Because my clients have diverse systems I deliver to them on SATA drives.
I rent some cloud space from someone where my data is also stored as the deepest backup but it is expensive to pull it off of there. | Should I use Python or Assembly for a super fast copy program | 0.076772 | 0 | 0 | 3,559 |
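The "profile before optimizing" advice in the answer above can be sketched as a simple throughput measurement. The helper name is made up; a real run would wrap the actual copy routine rather than the placeholder used here:

```python
import time

def measure_throughput(copy_fn, nbytes):
    """Time `copy_fn` and return the effective rate in MB/s."""
    start = time.perf_counter()
    copy_fn()
    elapsed = time.perf_counter() - start
    return nbytes / elapsed / 1024 ** 2

# Placeholder workload: duplicating an 8 MB buffer in memory. Swap in
# the real file-copy call to see whether the disks or the code are slow.
data = b"x" * (8 * 1024 * 1024)
rate = measure_throughput(lambda: bytearray(data), len(data))
```

Comparing the measured rate against the drive's rated transfer speed tells you immediately whether there is anything left for a faster language to win.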
2,982,829 | 2010-06-06T01:54:00.000 | 4 | 0 | 1 | 1 | python,assembly | 2,982,838 | 13 | false | 0 | 0 | I don't believe it will make a discernable difference which language you use for this purpose. The bottleneck here is not your application but the disk performance.
Just because a language is interpreted, it doesn't mean that every single operation in it is slow. As an example, it's a fairly safe bet that the lower-level code in Python will call assembly (or compiled) code to do copying.
Similarly, when you do stuff with collections and other libraries in Java, that's mostly compiled C, not interpreted Java.
There are a couple of things you can do to possibly speed up the process.
Buy faster hard disks (10K RPM rather than 7.5K, lower latency, larger caches, and so forth).
Copying between two physical disks may be faster than copying on a single disk (due to the head movement).
If you're copying across the network, stage it. In other words, copy it fast to another local disk, then slow from there across the net.
You can also stage it in a different way. If you run a nightly (or even weekly) process to keep the copy up to date (only copying changed files) rather than three times a year, you won't find yourself in a situation where you have to copy a massive amount.
Also if you're using the network, run it on the box where the repository is. You don't want to copy all the data from a remote disk to another PC then back to yet another remote disk.
You may also want to be careful with Python. I may be mistaken (and no doubt the Pythonistas will set me straight if so) but I have a vague recollection that its threading may not fully utilise multi-core CPUs. In that case, you'd be better off with another solution.
You may well be better off sticking with your current solution. I suspect a specialised copy program will already be optimised as much as possible since that's what they do. | 9 | 18 | 0 | As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest but I do not believe I am getting close to the limits of the capabilities of my XP machine.
I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies.
I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would, but I only started really learning to program 18 months ago and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages.
Any observations or experiences would be appreciated. Note: I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on which approach will give me more speed. Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk but sometimes I need to go from a disk to Drobo or the reverse. I am thinking that given that I can sector copy a 3/4 full 2 terabyte drive using the LogicCube in under seven hours I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss)
The reason I need to speed it up is I have had two or three cycles where something has happened during copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.
Thanks for the early responses but I don't think it is I/O limitations. I am not going over a network; the drive is plugged into my motherboard with a SATA connection and my Drobo is plugged into a FireWire port, so my thinking is that both connections should allow faster transfer.
Actually I can't use a sector copy except going from a single disk to the Drobo. It won't work the other way since the Drobo file structure is a mystery. My unscientific observation is that the copy from one internal disk to another is no faster than a copy to or from the Drobo to an internal disk.
I am bound by the hardware; I can't afford 10K RPM 2-terabyte drives (if they even make them).
A number of you are suggesting a file syncing solution. But that does not solve my problem. First off, the file syncing solutions I have played with build a map (for want of a better term) of the data first; I have too many little files, so they choke. One of the reasons I use RICHCOPY is that it starts copying immediately; it does not use memory to build a map. Second, I had one of my three Drobo backups fail a couple of weeks ago. My rule is that if I have a backup failure the other two have to stay offline until the new one is built. So I need to copy from one of the three backup single-drive copies I have that I use with the LogicCube.
At the end of the day I have to have a good copy on a single drive because that is what I deliver to my clients. Because my clients have diverse systems I deliver to them on SATA drives.
I rent some cloud space from someone where my data is also stored as the deepest backup but it is expensive to pull it off of there. | Should I use Python or Assembly for a super fast copy program | 0.061461 | 0 | 0 | 3,559 |
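The "only copy changed files" staging idea from the answer above might look like this mtime-based sketch. The function name is illustrative, and real sync tools also compare sizes or checksums:

```python
import os
import shutil
import tempfile

def copy_if_newer(src, dst):
    """Copy src to dst only when dst is missing or older (by mtime)."""
    if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
        shutil.copy2(src, dst)  # copy2 preserves the source mtime
        return True
    return False

# Tiny demo in a scratch directory.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "old.txt")
dst = os.path.join(tmp, "new.txt")
with open(src, "w") as f:
    f.write("account data")

first = copy_if_newer(src, dst)   # dst missing -> copied
second = copy_if_newer(src, dst)  # dst up to date -> skipped
```

Run nightly over the tree, a pass like this touches only the files that changed, so the three-times-a-year full copy never has to happen from scratch.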
2,982,829 | 2010-06-06T01:54:00.000 | 1 | 0 | 1 | 1 | python,assembly | 2,982,852 | 13 | false | 0 | 0 | Right, here the bottleneck is not in the execution of the copying software itself but rather in the disk access.
Going lower level does not mean that you will have better performance. Take the simple example of the open() and fopen() APIs: open() is much lower level and more direct, while fopen() is a library wrapper around the system open() function.
But in reality fopen() has better performance because it adds buffering and optimizes a lot of stuff that is not done in the raw open() function.
Implementing optimizations at the assembly level is much harder and less efficient than in Python. | 9 | 18 | 0 | As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest but I do not believe I am getting close to the limits of the capabilities of my XP machine.
I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies.
I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would, but I only started really learning to program 18 months ago and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages.
Any observations or experiences would be appreciated. Note: I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on which approach will give me more speed. Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk but sometimes I need to go from a disk to Drobo or the reverse. I am thinking that given that I can sector copy a 3/4 full 2 terabyte drive using the LogicCube in under seven hours I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss)
The reason I need to speed it up is I have had two or three cycles where something has happened during copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.
Thanks for the early responses but I don't think it is I/O limitations. I am not going over a network; the drive is plugged into my motherboard with a SATA connection and my Drobo is plugged into a FireWire port, so my thinking is that both connections should allow faster transfer.
Actually I can't use a sector copy except going from a single disk to the Drobo. It won't work the other way since the Drobo file structure is a mystery. My unscientific observation is that the copy from one internal disk to another is no faster than a copy to or from the Drobo to an internal disk.
I am bound by the hardware; I can't afford 10K RPM 2-terabyte drives (if they even make them).
A number of you are suggesting a file syncing solution. But that does not solve my problem. First off, the file syncing solutions I have played with build a map (for want of a better term) of the data first; I have too many little files, so they choke. One of the reasons I use RICHCOPY is that it starts copying immediately; it does not use memory to build a map. Second, I had one of my three Drobo backups fail a couple of weeks ago. My rule is that if I have a backup failure the other two have to stay offline until the new one is built. So I need to copy from one of the three backup single-drive copies I have that I use with the LogicCube.
At the end of the day I have to have a good copy on a single drive because that is what I deliver to my clients. Because my clients have diverse systems I deliver to them on SATA drives.
I rent some cloud space from someone where my data is also stored as the deepest backup but it is expensive to pull it off of there. | Should I use Python or Assembly for a super fast copy program | 0.015383 | 0 | 0 | 3,559 |
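The open()/fopen() buffering point in the answer above has a direct Python analogue: the copy loop itself is a thin wrapper, and the buffer size handed to it matters far more than the implementation language. A minimal sketch, using in-memory streams for illustration:

```python
import io
import shutil

def buffered_copy(fin, fout, bufsize=1024 * 1024):
    """Stream fin to fout in bufsize-byte chunks (1 MB by default)."""
    shutil.copyfileobj(fin, fout, length=bufsize)

# In-memory stand-ins for the real source and destination files.
src = io.BytesIO(b"sector" * 1000)
dst = io.BytesIO()
buffered_copy(src, dst)
```

Raising `bufsize` from the small defaults of naive loops is the kind of optimization fopen()-style buffering performs for you, with no assembly required.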
2,982,829 | 2010-06-06T01:54:00.000 | 2 | 0 | 1 | 1 | python,assembly | 2,982,870 | 13 | false | 0 | 0 | Before you question the copying app, you should most likely question the data path. What are the theoretical limits and what are you achieving? What are the potential bottlenecks? If there is a single data path, you are probably not going to get a significant boost by parallelizing storage tasks. You may even exacerbate it. Most of the benefits you'll get with asynchronous I/O come at the block level - a level lower than the file system.
One thing you could do to boost I/O is decouple the fetch-from-source and store-to-destination portions. Assuming that the source and destination are separate entities, you could theoretically halve the amount of time for the process. But are the standard tools already doing this?
Oh - and on Python and the GIL - with I/O-bound execution, the GIL is really not quite that bad of a penalty. | 9 | 18 | 0 | As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest but I do not believe I am getting close to the limits of the capabilities of my XP machine.
I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies.
I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would, but I only started really learning to program 18 months ago and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages.
Any observations or experiences would be appreciated. Note: I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on which approach will give me more speed. Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk but sometimes I need to go from a disk to Drobo or the reverse. I am thinking that given that I can sector copy a 3/4 full 2 terabyte drive using the LogicCube in under seven hours I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss)
The reason I need to speed it up is I have had two or three cycles where something has happened during copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.
Thanks for the early responses but I don't think it is I/O limitations. I am not going over a network; the drive is plugged into my motherboard with a SATA connection and my Drobo is plugged into a FireWire port, so my thinking is that both connections should allow faster transfer.
Actually I can't use a sector copy except going from a single disk to the Drobo. It won't work the other way since the Drobo file structure is a mystery. My unscientific observation is that the copy from one internal disk to another is no faster than a copy to or from the Drobo to an internal disk.
I am bound by the hardware; I can't afford 10K RPM 2-terabyte drives (if they even make them).
A number of you are suggesting a file syncing solution. But that does not solve my problem. First off, the file syncing solutions I have played with build a map (for want of a better term) of the data first; I have too many little files, so they choke. One of the reasons I use RICHCOPY is that it starts copying immediately; it does not use memory to build a map. Second, I had one of my three Drobo backups fail a couple of weeks ago. My rule is that if I have a backup failure the other two have to stay offline until the new one is built. So I need to copy from one of the three backup single-drive copies I have that I use with the LogicCube.
At the end of the day I have to have a good copy on a single drive because that is what I deliver to my clients. Because my clients have diverse systems I deliver to them on SATA drives.
I rent some cloud space from someone where my data is also stored as the deepest backup but it is expensive to pull it off of there. | Should I use Python or Assembly for a super fast copy program | 0.03076 | 0 | 0 | 3,559 |
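Decoupling the fetch-from-source and store-to-destination portions, as the answer above suggests, can be sketched with a reader thread feeding a bounded queue while the main thread writes. The chunk and queue sizes below are arbitrary illustrative values:

```python
import io
import queue
import threading

def pipelined_copy(fin, fout, chunk=64 * 1024, depth=8):
    """Overlap reads and writes via a bounded producer/consumer queue."""
    buf = queue.Queue(maxsize=depth)

    def reader():
        while True:
            data = fin.read(chunk)
            buf.put(data)  # an empty read (b"") doubles as the EOF sentinel
            if not data:
                break

    t = threading.Thread(target=reader)
    t.start()
    while True:
        data = buf.get()
        if not data:
            break
        fout.write(data)
    t.join()

# In-memory demo; real use would pass two opened binary files.
src = io.BytesIO(b"x" * 300_000)
dst = io.BytesIO()
pipelined_copy(src, dst)
```

Because both threads spend their time blocked in I/O calls, the GIL does not serialize them, which is exactly the answer's point about I/O-bound execution.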
2,982,829 | 2010-06-06T01:54:00.000 | 42 | 0 | 1 | 1 | python,assembly | 2,982,837 | 13 | false | 0 | 0 | Copying files is an I/O bound process. It is unlikely that you will see any speed up from rewriting it in assembly, and even multithreading may just cause things to go slower as different threads requesting different files at the same time will result in more disk seeks.
Using a standard tool is probably the best way to go here. If there is anything to optimize, you might want to consider changing your file system or your hardware. | 9 | 18 | 0 | As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest but I do not believe I am getting close to the limits of the capabilities of my XP machine.
I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies.
I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would, but I only started really learning to program 18 months ago and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages.
Any observations or experiences would be appreciated. Note: I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on which approach will give me more speed. Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk but sometimes I need to go from a disk to Drobo or the reverse. I am thinking that given that I can sector copy a 3/4 full 2 terabyte drive using the LogicCube in under seven hours I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss)
The reason I need to speed it up is I have had two or three cycles where something has happened during copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.
Thanks for the early responses but I don't think it is I/O limitations. I am not going over a network; the drive is plugged into my motherboard with a SATA connection and my Drobo is plugged into a FireWire port, so my thinking is that both connections should allow faster transfer.
Actually I can't use a sector copy except going from a single disk to the Drobo. It won't work the other way since the Drobo file structure is a mystery. My unscientific observation is that the copy from one internal disk to another is no faster than a copy to or from the Drobo to an internal disk.
I am bound by the hardware; I can't afford 10K RPM 2-terabyte drives (if they even make them).
A number of you are suggesting a file syncing solution. But that does not solve my problem. First off, the file syncing solutions I have played with build a map (for want of a better term) of the data first; I have too many little files, so they choke. One of the reasons I use RICHCOPY is that it starts copying immediately; it does not use memory to build a map. Second, I had one of my three Drobo backups fail a couple of weeks ago. My rule is that if I have a backup failure the other two have to stay offline until the new one is built. So I need to copy from one of the three backup single-drive copies I have that I use with the LogicCube.
At the end of the day I have to have a good copy on a single drive because that is what I deliver to my clients. Because my clients have diverse systems I deliver to them on SATA drives.
I rent some cloud space from someone where my data is also stored as the deepest backup but it is expensive to pull it off of there. | Should I use Python or Assembly for a super fast copy program | 1 | 0 | 0 | 3,559 |
2,982,829 | 2010-06-06T01:54:00.000 | 8 | 0 | 1 | 1 | python,assembly | 2,982,847 | 13 | false | 0 | 0 | There are 2 places for slowdown:
Per-file copy is MUCH slower than a disk copy (where you literally clone 100% of each sector's data), especially for 20 million files. You can't fix that one with even the most tuned assembly, unless you switch from cloning files to cloning raw disk data. In the latter case, yes, Assembly is indeed your ticket (or C).
Simply storing 20 million files and recursively finding them may be less efficient in Python. But that's more likely a function of finding a better algorithm and is not likely to be significantly improved by Assembly. Plus, that will NOT be the main contributor to the 50 hours.
In summary - Assembly WILL help if you do a raw disk-sector copy, but will NOT help if you do a filesystem-level copy. | 9 | 18 | 0 | As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest but I do not believe I am getting close to the limits of the capabilities of my XP machine.
I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies.
I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would, but I only started really learning to program 18 months ago and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages.
Any observations or experiences would be appreciated. Note: I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on which approach will give me more speed. Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk but sometimes I need to go from a disk to Drobo or the reverse. I am thinking that given that I can sector copy a 3/4 full 2 terabyte drive using the LogicCube in under seven hours I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss)
The reason I need to speed it up is I have had two or three cycles where something has happened during copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.
Thanks for the early responses but I don't think it is I/O limitations. I am not going over a network; the drive is plugged into my motherboard with a SATA connection and my Drobo is plugged into a FireWire port, so my thinking is that both connections should allow faster transfer.
Actually I can't use a sector copy except going from a single disk to the Drobo. It won't work the other way since the Drobo file structure is a mystery. My unscientific observation is that the copy from one internal disk to another is no faster than a copy to or from the Drobo to an internal disk.
I am bound by the hardware; I can't afford 10K RPM 2-terabyte drives (if they even make them).
A number of you are suggesting a file syncing solution. But that does not solve my problem. First off, the file syncing solutions I have played with build a map (for want of a better term) of the data first; I have too many little files, so they choke. One of the reasons I use RICHCOPY is that it starts copying immediately; it does not use memory to build a map. Second, I had one of my three Drobo backups fail a couple of weeks ago. My rule is that if I have a backup failure the other two have to stay offline until the new one is built. So I need to copy from one of the three backup single-drive copies I have that I use with the LogicCube.
At the end of the day I have to have a good copy on a single drive because that is what I deliver to my clients. Because my clients have diverse systems I deliver to them on SATA drives.
I rent some cloud space from someone where my data is also stored as the deepest backup but it is expensive to pull it off of there. | Should I use Python or Assembly for a super fast copy program | 1 | 0 | 0 | 3,559 |
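The cost of "recursively finding" 20 million files, mentioned in the answer above, is itself significant. A sketch of an enumeration built on os.scandir, which reuses the type information the OS already returned instead of issuing an extra stat() call per entry:

```python
import os
import tempfile

def iter_files(root):
    """Yield every file path under root, depth-first."""
    for entry in os.scandir(root):
        if entry.is_dir(follow_symlinks=False):
            yield from iter_files(entry.path)
        else:
            yield entry.path

# Tiny demo tree in a scratch directory.
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "sub"))
open(os.path.join(tmp, "a.txt"), "w").close()
open(os.path.join(tmp, "sub", "b.txt"), "w").close()

found = sorted(os.path.basename(p) for p in iter_files(tmp))
```

As the answer notes, this is an algorithmic concern, not a language one; the per-entry system calls dominate regardless of whether the loop is Python or assembly.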
2,982,829 | 2010-06-06T01:54:00.000 | 1 | 0 | 1 | 1 | python,assembly | 2,982,913 | 13 | false | 0 | 0 | 1,5 TB in approximately 50 hours gives a throughput of (1,5 * 1024^2) MB / (50 * 60^2) s = 8,7 MB/s. A theoretical 100 mbit/s bandwidth should give you 12,5 MB/s. It seems to me that your firewire connection is a problem. You should look at upgrading drivers, or upgrading to a better firewire/esata/usb interface.
That said, rather than the Python/assembly question, you should look at acquiring a file syncing solution. It shouldn't be necessary to copy that data over and over again. | 9 | 18 | 0 | As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest but I do not believe I am getting close to the limits of the capabilities of my XP machine.
I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies.
I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious, it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would, but I only started really learning to program 18 months ago and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages.
Any observations or experiences would be appreciated. Note, I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on which will give me more speed. Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk but sometimes I need to go from a disk to Drobo or the reverse. I am thinking that given that I can sector copy a 3/4 full 2 terabyte drive using the LogicCube in under seven hours I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss)
The reason I need to speed it up is I have had two or three cycles where something has happened during copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.
Thanks for the early responses, but I don't think it is an I/O limitation. I am not going over a network; the drive is plugged into my motherboard with a SATA connection and my Drobo is plugged into a FireWire port, so my thinking is that both connections should allow faster transfer.
Actually I can't use a sector copy except going from a single disk to the Drobo. It won't work the other way since the Drobo file structure is a mystery. My unscientific observation is that the copy from one internal disk to another is no faster than a copy to or from the Drobo to an internal disk.
I am bound by the hardware; I can't afford 10K RPM 2-terabyte drives (if they even make them).
A number of you are suggesting a file synching solution. But that does not solve my problem. First off, the file synching solutions I have played with build a map (for want of a better term) of the data first; I have too many little files, so they choke. One of the reasons I use RICHCOPY is that it starts copying immediately; it does not use memory to build a map. Second, I had one of my three Drobo backups fail a couple of weeks ago. My rule is that if I have a backup failure, the other two have to stay offline until the new one is built. So I need to copy from one of the three backup single-drive copies I have that I use with the LogicCube.
At the end of the day I have to have a good copy on a single drive because that is what I deliver to my clients. Because my clients have diverse systems I deliver to them on SATA drives.
I rent some cloud space from someone where my data is also stored as the deepest backup, but it is expensive to pull it off of there. | Should I use Python or Assembly for a super fast copy program | 0.015383 | 0 | 0 | 3,559 |
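The multi-threaded Python copy idea mentioned in the question can be sketched with the standard library alone. The directory names here are hypothetical, and on a single spindle the disk, not Python, is usually the bottleneck — threads mainly help hide per-file open/close latency:

```python
import os
import shutil
from concurrent.futures import ThreadPoolExecutor

def copy_tree_threaded(src, dst, workers=4):
    """Recreate src's directory layout under dst, then copy the
    individual files concurrently on a small thread pool."""
    jobs = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for root, _dirs, files in os.walk(src):
            rel = os.path.relpath(root, src)
            target = dst if rel == "." else os.path.join(dst, rel)
            os.makedirs(target, exist_ok=True)   # dirs made before files queue
            for name in files:
                jobs.append(pool.submit(shutil.copy2,
                                        os.path.join(root, name),
                                        os.path.join(target, name)))
    for job in jobs:
        job.result()   # re-raise the first copy error, if any

# copy_tree_threaded("D:/repository", "E:/backup")   # hypothetical paths
```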
2,982,829 | 2010-06-06T01:54:00.000 | 0 | 0 | 1 | 1 | python,assembly | 2,983,371 | 13 | false | 0 | 0 | As already said, it is not the language that makes the difference here; assembly can be cool or fast for computations, but when the processor has to "speak" to peripherals, the limit is set by them. In this case the speed is set by your hard disk — a limit you can hardly change without replacing the drive and waiting for better drives in the future — but also by the way data is organized on the disk, i.e. by the filesystem. AFAIK, the most widely used filesystems are not optimized to handle huge numbers of "small" files quickly; rather, they are optimized to hold "few" huge files.
So, changing the filesystem you're using could increase your copy speed, insofar as it is better suited to your case (and of course the hard disk limits still apply!). If you want to "taste" the real limit of your drive, you should try a "sector by sector" copy, replicating the exact image of your source drive to the destination drive. (But this option has some points to be aware of.) | 9 | 18 | 0 | As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest but I do not believe I am getting close to the limits of the capabilities of my XP machine.
I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies.
I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious, it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would, but I only started really learning to program 18 months ago and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages.
Any observations or experiences would be appreciated. Note, I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on which will give me more speed. Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk but sometimes I need to go from a disk to Drobo or the reverse. I am thinking that given that I can sector copy a 3/4 full 2 terabyte drive using the LogicCube in under seven hours I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss)
The reason I need to speed it up is I have had two or three cycles where something has happened during copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.
Thanks for the early responses, but I don't think it is an I/O limitation. I am not going over a network; the drive is plugged into my motherboard with a SATA connection and my Drobo is plugged into a FireWire port, so my thinking is that both connections should allow faster transfer.
Actually I can't use a sector copy except going from a single disk to the Drobo. It won't work the other way since the Drobo file structure is a mystery. My unscientific observation is that the copy from one internal disk to another is no faster than a copy to or from the Drobo to an internal disk.
I am bound by the hardware; I can't afford 10K RPM 2-terabyte drives (if they even make them).
A number of you are suggesting a file synching solution. But that does not solve my problem. First off, the file synching solutions I have played with build a map (for want of a better term) of the data first; I have too many little files, so they choke. One of the reasons I use RICHCOPY is that it starts copying immediately; it does not use memory to build a map. Second, I had one of my three Drobo backups fail a couple of weeks ago. My rule is that if I have a backup failure, the other two have to stay offline until the new one is built. So I need to copy from one of the three backup single-drive copies I have that I use with the LogicCube.
At the end of the day I have to have a good copy on a single drive because that is what I deliver to my clients. Because my clients have diverse systems I deliver to them on SATA drives.
I rent some cloud space from someone where my data is also stored as the deepest backup, but it is expensive to pull it off of there. | Should I use Python or Assembly for a super fast copy program | 0 | 0 | 0 | 3,559 |
2,982,829 | 2010-06-06T01:54:00.000 | 0 | 0 | 1 | 1 | python,assembly | 3,181,017 | 13 | false | 0 | 0 | Neither. If you want to take advantage of OS features to speed up I/O, you'll need to use some specialized system calls that are most easily accessed in C (or C++). You don't need to know a lot of C to write such a program, but you really need to know the system call interfaces.
In all likelihood, you can solve the problem without writing any code by using an existing tool or tuning the operating system, but if you really do need to write a tool, C is the most straightforward way to do it. | 9 | 18 | 0 | As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest but I do not believe I am getting close to the limits of the capabilities of my XP machine.
I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies.
I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious, it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would, but I only started really learning to program 18 months ago and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages.
Any observations or experiences would be appreciated. Note, I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on which will give me more speed. Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk but sometimes I need to go from a disk to Drobo or the reverse. I am thinking that given that I can sector copy a 3/4 full 2 terabyte drive using the LogicCube in under seven hours I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss)
The reason I need to speed it up is I have had two or three cycles where something has happened during copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.
Thanks for the early responses, but I don't think it is an I/O limitation. I am not going over a network; the drive is plugged into my motherboard with a SATA connection and my Drobo is plugged into a FireWire port, so my thinking is that both connections should allow faster transfer.
Actually I can't use a sector copy except going from a single disk to the Drobo. It won't work the other way since the Drobo file structure is a mystery. My unscientific observation is that the copy from one internal disk to another is no faster than a copy to or from the Drobo to an internal disk.
I am bound by the hardware; I can't afford 10K RPM 2-terabyte drives (if they even make them).
A number of you are suggesting a file synching solution. But that does not solve my problem. First off, the file synching solutions I have played with build a map (for want of a better term) of the data first; I have too many little files, so they choke. One of the reasons I use RICHCOPY is that it starts copying immediately; it does not use memory to build a map. Second, I had one of my three Drobo backups fail a couple of weeks ago. My rule is that if I have a backup failure, the other two have to stay offline until the new one is built. So I need to copy from one of the three backup single-drive copies I have that I use with the LogicCube.
At the end of the day I have to have a good copy on a single drive because that is what I deliver to my clients. Because my clients have diverse systems I deliver to them on SATA drives.
I rent some cloud space from someone where my data is also stored as the deepest backup, but it is expensive to pull it off of there. | Should I use Python or Assembly for a super fast copy program | 0 | 0 | 0 | 3,559 |
2,982,959 | 2010-06-06T03:01:00.000 | 8 | 0 | 0 | 1 | python,debugging,google-app-engine | 17,550,812 | 4 | false | 1 | 0 | In case someone is using the Google App Engine Launcher on Windows: the argument for debug can be set under Edit > Application Settings
In the Extra Command Line Flags, add --log_level=debug | 2 | 25 | 0 | I'm just getting started on building a Python app for Google App Engine. In the localhost environment (on a Mac)
I'm trying to send debug info to the GoogleAppEngineLauncher Log Console via logging.debug(), but it isn't showing up. However, anything sent through, say, logging.info() or logging.error() does show up. I've tried a logging.basicConfig(level=logging.DEBUG) before the logging.debug(), but to no avail.
What am I missing? | Getting logging.debug() to work on Google App Engine/Python | 1 | 0 | 0 | 7,359 |
2,982,959 | 2010-06-06T03:01:00.000 | 1 | 0 | 0 | 1 | python,debugging,google-app-engine | 27,547,085 | 4 | false | 1 | 0 | On a Mac:
1) click Edit > Application Settings
2) then copy and paste the following line into the "Extra Flags:" field
--log_level=debug
3) click Update
your debug logs will now show up in the Log Console | 2 | 25 | 0 | I'm just getting started on building a Python app for Google App Engine. In the localhost environment (on a Mac)
I'm trying to send debug info to the GoogleAppEngineLauncher Log Console via logging.debug(), but it isn't showing up. However, anything sent through, say, logging.info() or logging.error() does show up. I've tried a logging.basicConfig(level=logging.DEBUG) before the logging.debug(), but to no avail.
What am I missing? | Getting logging.debug() to work on Google App Engine/Python | 0.049958 | 0 | 0 | 7,359 |
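The fix in the answers above is the dev server's --log_level=debug flag; the underlying behaviour is plain logging level filtering — records below the logger's effective level are dropped — which can be seen with the stock logging module alone:

```python
import io
import logging

buf = io.StringIO()
logger = logging.getLogger("demo")
logger.addHandler(logging.StreamHandler(buf))

logger.setLevel(logging.INFO)
logger.debug("invisible")    # below INFO, filtered out
logger.info("visible")

logger.setLevel(logging.DEBUG)
logger.debug("now visible")  # threshold lowered, passes through

print(buf.getvalue().split())  # → ['visible', 'now', 'visible']
```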
2,983,649 | 2010-06-06T09:03:00.000 | 1 | 0 | 1 | 0 | python,static-libraries,ctypes | 2,983,728 | 2 | false | 0 | 1 | I can't say for sure there are no modules out there, but the advantages of dynamic libraries (they use less space and can be updated without recompiling dependent programs) are such that you're probably better off recompiling as a shared library. | 1 | 16 | 0 | I'm attempting to write a Python wrapper for poker-eval, a C static library. All the documentation I can find on ctypes indicates that it works on shared/dynamic libraries. Is there a ctypes for static libraries?
I know about Cython, but should I use that, or recompile poker-eval into a dynamic library so that I can use ctypes?
Thanks,
Mike | ctypes for static libraries? | 0.099668 | 0 | 0 | 12,127 |
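To make the recompile-then-ctypes route concrete: with a GNU toolchain the static archive can be rewrapped as a shared object (the library names below are hypothetical), after which ctypes loads it like any other shared library. As a stand-in that runs anywhere, the last lines point ctypes at CPython's own, already-loaded C API:

```python
import ctypes

# Hypothetical recompile step (GNU ld), turning the .a into a .so:
#   gcc -shared -o libpokereval.so \
#       -Wl,--whole-archive libpokereval.a -Wl,--no-whole-archive
# then the shared object loads normally:
#   poker = ctypes.CDLL("./libpokereval.so")

# Stand-in demonstration: call a C function from a library that is
# guaranteed to be loaded — CPython's own C API.
ctypes.pythonapi.Py_GetVersion.restype = ctypes.c_char_p
version = ctypes.pythonapi.Py_GetVersion()
print(version.decode().split()[0])   # e.g. '3.11.4'
```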
2,983,963 | 2010-06-06T11:09:00.000 | 1 | 0 | 1 | 0 | python,multithreading,subprocess | 2,984,037 | 4 | false | 0 | 0 | A nasty hack on Linux is to use the timeout program to run the command. You may opt for a nicer all-Python solution, however. | 1 | 4 | 0 | I want to execute an external program in each thread of a multi-threaded Python program.
Let's say the max running time is set to 1 second. If the started process completes within 1 second, the main program captures its output for further processing. If it doesn't finish within 1 second, the main program just terminates it and starts another new process.
How to implement this? | Run an external program with specified max running time | 0.049958 | 0 | 0 | 1,440 |
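One way to sketch the requested behaviour is a poll-and-kill loop around subprocess.Popen (on Python 3.3+, subprocess.run(cmd, timeout=1) raises subprocess.TimeoutExpired and does this out of the box):

```python
import subprocess
import time

def run_with_limit(cmd, limit=1.0, poll=0.05):
    """Run cmd; return its stdout (bytes) if it exits within `limit`
    seconds, otherwise kill it and return None."""
    child = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    deadline = time.monotonic() + limit
    while child.poll() is None:
        if time.monotonic() > deadline:
            child.kill()          # forceful; terminate() is the polite form
            child.wait()          # reap the child so it doesn't linger
            child.stdout.close()
            return None
        time.sleep(poll)
    out = child.stdout.read()
    child.stdout.close()
    return out
```

Each worker thread can call this independently, since every call manages its own child process.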
2,984,460 | 2010-06-06T14:09:00.000 | 5 | 0 | 1 | 0 | python,oop,functional-programming | 2,984,604 | 6 | false | 0 | 0 | Most answers on StackOverflow are short, concise answers, and the functional aspects of python make writing that kind of answers easy.
Python's OO-features simply aren't needed in 10-20 line answers, so you don't see them around here as much. | 5 | 37 | 0 | I see what seems like a majority of Python developers on StackOverflow endorsing the use of concise functional tools like lambdas, maps, filters, etc., while others say their code is clearer and more maintainable by not using them. What is your preference?
Also, if you are a die-hard functional programmer or hardcore into OO, what other specific programming practices do you use that you think are best for your style?
Thanks in advance for your opinions! | Do you use Python mostly for its functional or object-oriented features? | 0.16514 | 0 | 0 | 31,150 |
2,984,460 | 2010-06-06T14:09:00.000 | 1 | 0 | 1 | 0 | python,oop,functional-programming | 2,984,508 | 6 | false | 0 | 0 | I select Python when I'm taking on a problem that maps well to an OO solution. Python only provides a limited ability to program in a functional manner compared to full blown functional languages.
If I really want functional programming, I use Lisp. | 5 | 37 | 0 | I see what seems like a majority of Python developers on StackOverflow endorsing the use of concise functional tools like lambdas, maps, filters, etc., while others say their code is clearer and more maintainable by not using them. What is your preference?
Also, if you are a die-hard functional programmer or hardcore into OO, what other specific programming practices do you use that you think are best for your style?
Thanks in advance for your opinions! | Do you use Python mostly for its functional or object-oriented features? | 0.033321 | 0 | 0 | 31,150 |
2,984,460 | 2010-06-06T14:09:00.000 | 25 | 0 | 1 | 0 | python,oop,functional-programming | 2,984,471 | 6 | false | 0 | 0 | I use the features of the language that get the job done with the shortest, cleanest code possible. If that means that I have to mix the two, which I do quite often, then that's what gets done. | 5 | 37 | 0 | I see what seems like a majority of Python developers on StackOverflow endorsing the use of concise functional tools like lambdas, maps, filters, etc., while others say their code is clearer and more maintainable by not using them. What is your preference?
Also, if you are a die-hard functional programmer or hardcore into OO, what other specific programming practices do you use that you think are best for your style?
Thanks in advance for your opinions! | Do you use Python mostly for its functional or object-oriented features? | 1 | 0 | 0 | 31,150 |
2,984,460 | 2010-06-06T14:09:00.000 | 6 | 0 | 1 | 0 | python,oop,functional-programming | 2,984,486 | 6 | false | 0 | 0 | Python has only marginal functional programming features so I would be surprised if many people would use it especially for that. For example there is no standard way to do function composition and the standard library's reduce() has been deprecated in favor of explicit loops.
Also, I don't think that map() or filter() are generally endorsed. On the contrary, list comprehensions usually seem to be preferred. | 5 | 37 | 0 | I see what seems like a majority of Python developers on StackOverflow endorsing the use of concise functional tools like lambdas, maps, filters, etc., while others say their code is clearer and more maintainable by not using them. What is your preference?
Also, if you are a die-hard functional programmer or hardcore into OO, what other specific programming practices do you use that you think are best for your style?
Thanks in advance for your opinions! | Do you use Python mostly for its functional or object-oriented features? | 1 | 0 | 0 | 31,150 |
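The preference stated in that answer, side by side — both spellings build the same list:

```python
nums = [1, 2, 3, 4, 5]

# Functional spelling with map()/filter():
via_map = list(map(lambda n: n * n, filter(lambda n: n % 2, nums)))

# The usually-preferred list comprehension:
via_lc = [n * n for n in nums if n % 2]

print(via_map, via_lc)   # → [1, 9, 25] [1, 9, 25]
```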
2,984,460 | 2010-06-06T14:09:00.000 | 10 | 0 | 1 | 0 | python,oop,functional-programming | 2,984,618 | 6 | false | 0 | 0 | I am both a die-hard OOP and functional programmer and these styles work very well together, mostly because they are completely orthogonal. There are plenty of object-oriented, functional languages and Python is one of them.
So basically, decomposing a application into classes is very helpful when designing a system. When you're doing the actual implementation, FP helps to write correct code.
Also, I find it very offensive that you imply that functional programming just means "use folds everywhere". That is probably the biggest and worst misconception about FP. Much has been written on that topic, so I'll just say that the great thing about FP is the idea of combining simple (correct and reusable) functions into new, more and more complex functions. That way it's pretty hard to write "almost correct" code - either the whole thing does exactly what you want, or it breaks completely.
FP in Python mostly revolves around writing generators and their relatives (list comprehensions) and the things in the itertools module. Explicit map/filter/reduce calls are just unneeded. | 5 | 37 | 0 | I see what seems like a majority of Python developers on StackOverflow endorsing the use of concise functional tools like lambdas, maps, filters, etc., while others say their code is clearer and more maintainable by not using them. What is your preference?
Also, if you are a die-hard functional programmer or hardcore into OO, what other specific programming practices do you use that you think are best for your style?
Thanks in advance for your opinions! | Do you use Python mostly for its functional or object-oriented features? | 1 | 0 | 0 | 31,150 |
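The generator-and-itertools style that answer points to, as a tiny lazy pipeline of composed pieces:

```python
from itertools import count, islice

# Compose simple lazy stages: naturals -> evens -> doubled -> take three.
evens = (n for n in count(1) if n % 2 == 0)
doubled = (n * 2 for n in evens)

first_three = list(islice(doubled, 3))
print(first_three)   # → [4, 8, 12]
```

Nothing is computed until islice() pulls values through, so the pipeline works on unbounded streams.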
2,984,888 | 2010-06-06T16:16:00.000 | -1 | 0 | 0 | 0 | python,csv | 2,985,662 | 5 | false | 0 | 0 | Try parsing it as CSV and see if you get an error. | 2 | 34 | 0 | Could someone provide an effective way to check if a file has CSV format using Python ? | Check if file has a CSV format with Python | -0.039979 | 0 | 0 | 32,667 |
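"Try parsing it" can be made concrete with the standard library's csv.Sniffer, which raises csv.Error when it cannot infer a plausible dialect. The delimiter set below is an assumption — widen it to match your data:

```python
import csv

def looks_like_csv(path, sample_size=4096):
    """Heuristic check: let csv.Sniffer try to infer a dialect from the
    start of the file.  True if it finds a plausible delimiter."""
    with open(path, newline="") as f:
        sample = f.read(sample_size)
    try:
        csv.Sniffer().sniff(sample, delimiters=",;\t")
        return True
    except csv.Error:
        return False
```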
2,984,888 | 2010-06-06T16:16:00.000 | -2 | 0 | 0 | 0 | python,csv | 2,984,979 | 5 | false | 0 | 0 | You need to think clearly about what you consider a CSV file to be.
For example, what sort of characters can occur between the commas? Is it text-only? Can it be Unicode characters as well? Should every line have the same number of commas?
There is no strict definition of a CSV file that I'm aware of. Usually it's ASCII text separated by commas and every line has the same number of commas and is terminated by your platform's line terminator.
Anyway, once you answer the questions above you'll be a bit farther on your way to knowing how to detect when a file is a CSV file. | 2 | 34 | 0 | Could someone provide an effective way to check if a file has CSV format using Python ? | Check if file has a CSV format with Python | -0.07983 | 0 | 0 | 32,667 |
2,985,426 | 2010-06-06T18:44:00.000 | 4 | 1 | 0 | 1 | java,python,ubuntu,debian,operating-system | 2,985,450 | 6 | false | 0 | 0 | Both use Debian packages, and Ubuntu is based on Debian but is more user-friendly. Everything you can do on one you can do on the other. I'd recommend Ubuntu if you're new to Linux on the desktop. Though when it comes to servers I'd recommend Debian, as it has less stuff "taken out", basically. | 5 | 10 | 0 | Are there any real differences between them?
I want to program in Java and Python. And of course be a normal user: internet, etc.
Which one will give me less headaches/more satisfaction ?
And which is better for a server machine ?
Thank you | Which os is better for development : Debian or Ubuntu? | 0.132549 | 0 | 0 | 20,812 |
2,985,426 | 2010-06-06T18:44:00.000 | 1 | 1 | 0 | 1 | java,python,ubuntu,debian,operating-system | 2,985,456 | 6 | false | 0 | 0 | In Ubuntu it is a bit easier to install packages for Java development, but it doesn't really matter that much. Remember that Ubuntu is based on Debian, so it works the same. Ubuntu just adds more user-friendly GUIs. | 5 | 10 | 0 | Are there any real differences between them?
I want to program in Java and Python. And of course be a normal user: internet, etc.
Which one will give me less headaches/more satisfaction ?
And which is better for a server machine ?
Thank you | Which os is better for development : Debian or Ubuntu? | 0.033321 | 0 | 0 | 20,812 |
2,985,426 | 2010-06-06T18:44:00.000 | 2 | 1 | 0 | 1 | java,python,ubuntu,debian,operating-system | 2,985,463 | 6 | false | 0 | 0 | Ubuntu is the more user-friendly of the two (I think Ubuntu is actually one of the most newbie-friendly Linux distros), so if you are new to Linux, Ubuntu is the way to go. Otherwise, the packages are mostly the same except for branding, so it's pretty much your choice. | 5 | 10 | 0 | Are there any real differences between them?
I want to program in Java and Python. And of course be a normal user: internet, etc.
Which one will give me less headaches/more satisfaction ?
And which is better for a server machine ?
Thank you | Which os is better for development : Debian or Ubuntu? | 0.066568 | 0 | 0 | 20,812 |
2,985,426 | 2010-06-06T18:44:00.000 | 1 | 1 | 0 | 1 | java,python,ubuntu,debian,operating-system | 2,985,472 | 6 | false | 0 | 0 | Neither is better. They both support the same tools and libraries. They are both Linux. Anything and everything you can do on one you can do on the other. | 5 | 10 | 0 | Are there any real differences between them?
I want to program in Java and Python. And of course be a normal user: internet, etc.
Which one will give me less headaches/more satisfaction ?
And which is better for a server machine ?
Thank you | Which os is better for development : Debian or Ubuntu? | 0.033321 | 0 | 0 | 20,812 |
2,985,426 | 2010-06-06T18:44:00.000 | 2 | 1 | 0 | 1 | java,python,ubuntu,debian,operating-system | 2,985,442 | 6 | false | 0 | 0 | Java and Python would most likely run the same on both.
With Ubuntu you get additional support, an active community, and perhaps a larger user base.
So if and when you face a particular problem, chances are the solution will appear faster with Ubuntu.
(Although, in theory, whatever works on one should work on the other as well.)
I want to program in Java and Python. And of course be a normal user: internet, etc.
Which one will give me less headaches/more satisfaction ?
And which is better for a server machine ?
Thank you | Which os is better for development : Debian or Ubuntu? | 0.066568 | 0 | 0 | 20,812 |
2,985,678 | 2010-06-06T19:53:00.000 | 1 | 0 | 1 | 0 | python,xlwt,pyexcelerator | 2,985,689 | 4 | false | 0 | 0 | The 00.0% number format expects percentages, so multiplying by 100 to display it is the correct behavior. To get the results you want, you could either put the data in the cell as a string in whatever format you choose or you could divide by 100.0 before you store the value in the cell. | 2 | 2 | 0 | Just a quick question: how do I add a percent sign to numbers without modifying the number? I have tried a percent format with myStyleFont.num_format_str = '0.00%' but it multiplies by 100, whereas I just need to append a percent sign.
Ty in advance.
Regards. | Adding percentage in python - xlwt / pyexcelerator | 0.049958 | 0 | 0 | 5,708 |
2,985,678 | 2010-06-06T19:53:00.000 | 5 | 0 | 1 | 0 | python,xlwt,pyexcelerator | 2,986,184 | 4 | true | 0 | 0 | Note carefully: "modifying" and "multiplies with 100" affect the displayed result, they affect neither the value stored in the file nor the value used in formula calculations.
The technique of making an operator be treated as a literal is known as "escaping".
The escape character in Excel number format strings is the backslash.
To do what you want, your format should be r"0.00\%" or "0.00\\%"
The above is a property of Excel, not restricted to Python or xlwt etc. You can test this using one of the UIs (Excel, OpenOffice Calc, Gnumeric).
Note that unless you have a restricted set of users who will read and understand the documentation that you will write [unlikely * 3], you shouldn't do that; it will be rather confusing -- the normal Excel user on seeing 5.00% assumes that the underlying number is 0.05. To see this, in Excel etc type these into A1, A2, A3: 5% then .05 then =A1=A2 ... you will see TRUE in A3.
pyExcelerator is abandonware; why do you mention it???
About myStyleFont.num_format_str = '0.00%' (1) "font" and "num_format_str" are quite independent components of a style aka XF; your variable name "myStyleFont" is confused/confusing. (2) Setting up XFs by poking attributes into objects is old hat in xlwt; use easyxf instead. | 2 | 2 | 0 | Just a quick question: how do I add a percent sign to numbers without modifying the number? I have tried a percent format with myStyleFont.num_format_str = '0.00%' but it multiplies by 100, whereas I just need to append a percent sign.
Ty in advance.
Regards. | Adding percentage in python - xlwt / pyexcelerator | 1.2 | 0 | 0 | 5,708 |
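To make the accepted answer's escaping rule concrete (the xlwt call in the comment is illustrative and assumes xlwt is installed):

```python
# Two Excel number formats.  The backslash escapes '%', so Excel prints
# it as a literal character instead of scaling the stored value by 100.
SCALING = "0.00%"      # cell value 0.05 displays as "5.00%"
LITERAL = r"0.00\%"    # cell value 5.0  displays as "5.00%"

# Illustrative xlwt usage (third-party package, assumed installed):
#   style = xlwt.easyxf(num_format_str=LITERAL)
#   sheet.write(0, 0, 5.0, style)

print(LITERAL)   # → 0.00\%
```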
2,985,725 | 2010-06-06T20:08:00.000 | 4 | 0 | 1 | 0 | python,file,python-3.x | 2,985,733 | 3 | false | 0 | 0 | The problem is that since your lines are not of fixed length, you have to pay attention to line end markers to do your seeking, and that effectively becomes "traversing the file line by line". Thus, any viable approach is still going to be traversing the file, it's merely a matter of what can traverse it fastest. | 2 | 10 | 0 | Let's say that I routinely have to work with files with an unknown, but large, number of lines. Each line contains a set of integers (space, comma, semicolon, or some non-numeric character is the delimiter) in the closed interval [0, R], where R can be arbitrarily large. The number of integers on each line can be variable. Often times I get the same number of integers on each line, but occasionally I have lines with unequal sets of numbers.
Suppose I want to go to the Nth line in the file and retrieve the Kth number on that line (and assume that the inputs N and K are valid --- that is, I am not worried about bad inputs). How do I go about doing this efficiently in Python 3.1.2 for Windows?
I do not want to traverse the file line by line.
I tried using mmap, but while poking around here on SO, I learned that that's probably not the best solution on a 32-bit build because of the 4GB limit. And in truth, I couldn't really figure out how to simply move N lines away from my current position. If I can at least just "jump" to the Nth line then I can use .split() and grab the Kth integer that way.
The nuance here is that I don't just need to grab one line from the file. I will need to grab several lines: they are not necessarily all near each other, the order in which I get them matters, and the order is not always based on some deterministic function.
Any ideas? I hope this is enough information.
Thanks! | Moving to an arbitrary position in a file in Python | 0.26052 | 0 | 0 | 4,225 |
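Although the asker rules out traversing the file on every lookup, a single indexing pass that records each line's byte offset makes every later (N, K) access a constant-time seek — a common compromise. The function names here are mine:

```python
import re

def index_lines(path):
    """One pass over the file, recording the byte offset of each line."""
    offsets = []
    pos = 0
    with open(path, "rb") as f:
        for line in f:
            offsets.append(pos)
            pos += len(line)
    return offsets

def kth_number(path, offsets, n, k):
    """Seek straight to line n (0-based) and return its k-th integer,
    whatever non-numeric characters separate the numbers."""
    with open(path, "rb") as f:
        f.seek(offsets[n])
        line = f.readline().decode("ascii")
    return int(re.findall(r"\d+", line)[k])
```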
2,985,725 | 2010-06-06T20:08:00.000 | 0 | 0 | 1 | 0 | python,file,python-3.x | 2,986,055 | 3 | false | 0 | 0 | Another solution, if the file is potentially going to change a lot, is to go all the way to a proper database. The database engine will create and maintain the indexes for you so you can do very fast searches/queries.
This may be overkill, though. | 2 | 10 | 0 | Let's say that I routinely have to work with files with an unknown, but large, number of lines. Each line contains a set of integers (space, comma, semicolon, or some non-numeric character is the delimiter) in the closed interval [0, R], where R can be arbitrarily large. The number of integers on each line can be variable. Often times I get the same number of integers on each line, but occasionally I have lines with unequal sets of numbers.
Suppose I want to go to the Nth line in the file and retrieve the Kth number on that line (and assume that the inputs N and K are valid --- that is, I am not worried about bad inputs). How do I go about doing this efficiently in Python 3.1.2 for Windows?
I do not want to traverse the file line by line.
I tried using mmap, but while poking around here on SO, I learned that that's probably not the best solution on a 32-bit build because of the 4GB limit. And in truth, I couldn't really figure out how to simply move N lines away from my current position. If I can at least just "jump" to the Nth line then I can use .split() and grab the Kth integer that way.
The nuance here is that I don't just need to grab one line from the file. I will need to grab several lines: they are not necessarily all near each other, the order in which I get them matters, and the order is not always based on some deterministic function.
Any ideas? I hope this is enough information.
Thanks! | Moving to an arbitrary position in a file in Python | 0 | 0 | 0 | 4,225 |
2,986,317 | 2010-06-06T23:14:00.000 | 1 | 0 | 0 | 0 | python,authentication,rest | 2,986,411 | 3 | false | 1 | 0 | Assuming you plan to write your own auth client code, that it isn't event-driven, and that you don't need to validate an HTTPS certificate, I would suggest using Python's built-in urllib2 to call the auth server. This will minimize dependencies, which ought to make deployment and upgrades easier.
That being said, there are more than a few existing auth-related protocols and libraries in the world, some of which might save you some time and security worries over writing code from scratch. For example, if you make your auth server speak OpenID, many off-the-shelf applications and servers (including Apache) will have auth client plugins already made for you.
So far I want to allow people who have a user account on one app to be able to use the same user account on another app. This means I will have to build an authentication server.
I would like some opinions on how to allow an app to talk to the authentication server. Should I use curl? Should I use Python's http libs? All the code will be in Python.
All it's going to do is ask the authentication server if the person is allowed to use that app and the auth server will return a JSON user object. All authorization (roles and resources) will be app independent, so this app will not have to handle that.
Sorry if this seems a bit newbish; this is the first time I have separated authentication from the actual application. | Talking to an Authentication Server | 0.066568 | 0 | 1 | 308 |
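To make the urllib2 suggestion above concrete, here is a minimal sketch of the client side. The endpoint shape and the JSON fields (`allowed`, `id`) are invented for illustration; the stdlib module is urllib2 on Python 2 (the question's era) and urllib.request on Python 3.

```python
import json
from urllib.request import urlopen  # urllib2.urlopen on Python 2

def parse_user(payload):
    """Turn the auth server's JSON reply into a user dict, or fail."""
    user = json.loads(payload)
    if not user.get('allowed'):
        raise PermissionError('user is not allowed to use this app')
    return user

def check_user(base_url, token):
    # Hypothetical endpoint: GET <base_url>/check?token=<token>
    resp = urlopen('%s/check?token=%s' % (base_url, token))
    try:
        return parse_user(resp.read().decode('utf-8'))
    finally:
        resp.close()
```

Keeping the JSON parsing in its own function makes it easy to test without a live auth server.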
2,986,317 | 2010-06-06T23:14:00.000 | 0 | 0 | 0 | 0 | python,authentication,rest | 2,986,610 | 3 | false | 1 | 0 | Your question isn't really a programming problem so much as it is an architecture problem. What I would recommend for your specific situation is to setup an LDAP server for authentication, authorization, and accounting (AAA). Then have your applications use that (every language has modules and libraries for LDAP). It is a reliable, secure, proven, and well-known way of handling such things.
Even if you strictly want to enforce HTTP-based authentication it is easy enough to slap an authentication server in front of your LDAP and call it a day. There's even existing code to do just that so you won't have to re-invent the wheel. | 2 | 0 | 0 | I'm building my startup and I'm thinking ahead for shared use of services.
So far I want to allow people who have a user account on one app to be able to use the same user account on another app. This means I will have to build an authentication server.
I would like some opinions on how to allow an app to talk to the authentication server. Should I use curl? Should I use Python's http libs? All the code will be in Python.
All it's going to do is ask the authentication server if the person is allowed to use that app and the auth server will return a JSON user object. All authorization (roles and resources) will be app independent, so this app will not have to handle that.
Sorry if this seems a bit newbish; this is the first time I have separated authentication from the actual application. | Talking to an Authentication Server | 0 | 0 | 1 | 308 |
2,986,357 | 2010-06-06T23:32:00.000 | 1 | 1 | 1 | 1 | python,version-control,module,easy-install | 2,986,445 | 4 | false | 0 | 0 | Packages installed by easy_install tend to come from snapshots of the developer's version control, generally made when the developer releases an official version. You're therefore going to have to choose between convenient automatic downloads via easy_install and up-to-the-minute code updates via version control. If you pick the latter, you can build and install most packages seen in the python package index directly from a version control checkout by running python setup.py install.
If you don't like the default installation directory, you can install to a custom location instead, and export a PYTHONPATH environment variable whose value is the path of the installed package's parent folder. | 1 | 5 | 0 | I'm newish to the python ecosystem, and have a question about module editing.
I use a bunch of third-party modules, distributed on PyPi. Coming from a C and Java background, I love the ease of easy_install <whatever>. This is a new, wonderful world, but the model breaks down when I want to edit the newly installed module for two reasons:
The egg files may be stored in a folder or archive somewhere crazy on the file system.
Using an egg seems to preclude using the version control system of the originating project, just as using a debian package precludes development from an originating VCS repository.
What is the best practice for installing modules from an arbitrary VCS repository? I want to be able to continue to import foomodule in other scripts. And if I modify the module's source code, will I need to perform any additional commands? | Best practice for installing python modules from an arbitrary VCS repository | 0.049958 | 0 | 0 | 1,655 |
2,986,419 | 2010-06-06T23:53:00.000 | 4 | 0 | 1 | 0 | python,python-module | 2,986,469 | 2 | false | 0 | 0 | You can also run the interpreter with -v option if you just want to see the modules that are imported (and the order they are imported in) | 1 | 3 | 0 | How can I get a list of the modules that have been imported into my process? | Python: what modules have been imported in my process? | 0.379949 | 0 | 0 | 741 |
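Complementing the -v flag (which reports imports as they happen at startup), the already-imported set is available at runtime in the standard sys.modules mapping; a small sketch:

```python
import sys

def imported_modules():
    """Names of every module imported into this process so far."""
    return sorted(sys.modules)
```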
2,986,659 | 2010-06-07T01:44:00.000 | 8 | 0 | 0 | 0 | python,django | 2,987,352 | 3 | true | 1 | 0 | First, large files are pretty common in python. Python is not java, which has one class per file; Python has one module per file.
Next, views, as conventionally used, is a python module. A module need not be a single file. It can be a directory containing many files, plus an __init__.py
And then, views.py is only a convention. You, the application programmer, refer to it; django itself doesn't refer to it anywhere. So you are free to split your views across as many files as you like and point urls.py at the appropriate functions to hand each request to. | 2 | 6 | 0 | I have been reading some django tutorial and it seems like all the view functions have to go in a file called "views.py" and all the models go in "models.py". I fear that I might end up with a lot of view functions in my views.py file and the same is the case with models.py.
Is my understanding of django apps correct?
Django apps lets us separate common functionality into different apps and keep the file size of views and models to a minimum? For example: My project can contain an app for recipes (create, update, view, and search) and a friend app, the comments app, and so on.
Can I still move some of my view functions to a different file? So I only have the CRUD in one single file? | django app organization | 1.2 | 0 | 0 | 1,802 |
2,986,659 | 2010-06-07T01:44:00.000 | 1 | 0 | 0 | 0 | python,django | 2,986,813 | 3 | false | 1 | 0 | They don't have to go in views.py. They have to be referenced there.
views.py can include other files. So, if you feel the need, you can create other files in one app that contain your view functions and just include them in views.py.
The same applies to models.py.
Django apps lets us separate common functionality into different apps and keep the file size of views and models to a minimum? For example: My project can contain an app for recipes (create, update, view, and search) and a friend app, the comments app, and so on.
I don't know about the "to a minimum" part - some apps are just big in views, others big in models. You should strive to partition things well, but sometimes there is just a lot of code. But other than that, this is a fair summary of Django apps, yes. | 2 | 6 | 0 | I have been reading some django tutorial and it seems like all the view functions have to go in a file called "views.py" and all the models go in "models.py". I fear that I might end up with a lot of view functions in my view.py file and the same is the case with models.py.
Is my understanding of django apps correct?
Django apps lets us separate common functionality into different apps and keep the file size of views and models to a minimum? For example: My project can contain an app for recipes (create, update, view, and search) and a friend app, the comments app, and so on.
Can I still move some of my view functions to a different file? So I only have the CRUD in one single file? | django app organization | 0.066568 | 0 | 0 | 1,802 |
2,986,766 | 2010-06-07T02:24:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine,openid,integration | 2,988,807 | 1 | false | 1 | 0 | I understand your question.
You wish to be able to maintain a list of users that have signed up with your service, and also want to record users using OpenID to authenticate.
In order to solve this I would do either of the following:
Create a new user in your users table for each new user logged in under OpenID, and store their OpenID in this table to allow you to join the two.
Move your site to OpenID and change all references to your current users to OpenID users.
I'd probably go with Option 1 if you already have this app in production.
Note: More experienced OpenID users will probably correct me! | 1 | 0 | 0 | How do I integrate local users and OpenID (or Facebook/Twitter) users?
Do you know of a framework that has already done this?
updated
What I mean is: how to deal with a 'local user' and an 'OpenID user',
and how to mix them in one model.
Please suggest a framework that implements both 'local user' and 'OpenID user' accounts. | what should i do after openid (or twitter ,facebook) user login my site ,on gae | 0.197375 | 0 | 0 | 264 |
2,987,168 | 2010-06-07T05:04:00.000 | 0 | 1 | 0 | 1 | python,linux,sockets,port | 42,511,544 | 5 | false | 0 | 0 | One thing that wasn't mentioned: most server applications in Python take the port as a command-line argument. You can parse /proc/pid/cmdline and pull the port number out of it. This avoids the large overhead of running ss or netstat on servers with a ton of connections. | 3 | 23 | 0 | How do I get the ports that a process is listening on using python? The pid of the process is known. | How to obtain ports that a process in listening on? | 0 | 0 | 0 | 21,117 |
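To flesh out the /proc/pid/cmdline idea: the file is simply argv joined with NUL bytes, so the port can often be recovered from the arguments alone. The `--port` flag name below is an assumption about the target application; a sketch:

```python
def parse_cmdline(raw):
    """Split the raw bytes of /proc/<pid>/cmdline into an argv list."""
    return [arg.decode('utf-8') for arg in raw.split(b'\x00') if arg]

def port_from_argv(argv, flag='--port'):
    """Pull a port number out of argv, handling '--port 80' and '--port=80'."""
    for i, arg in enumerate(argv):
        if arg == flag and i + 1 < len(argv):
            return int(argv[i + 1])
        if arg.startswith(flag + '='):
            return int(arg.split('=', 1)[1])
    return None
```

On a live Linux system you would feed it `open('/proc/%d/cmdline' % pid, 'rb').read()`.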
2,987,168 | 2010-06-07T05:04:00.000 | 1 | 1 | 0 | 1 | python,linux,sockets,port | 2,987,231 | 5 | false | 0 | 0 | You can use netstat -lnp, last column will contain pid and process name. In Python you can parse output of this command. | 3 | 23 | 0 | How do I get the ports that a process is listening on using python? The pid of the process is known. | How to obtain ports that a process in listening on? | 0.039979 | 0 | 0 | 21,117 |
2,987,168 | 2010-06-07T05:04:00.000 | 4 | 1 | 0 | 1 | python,linux,sockets,port | 2,987,379 | 5 | false | 0 | 0 | If you don't want to parse the output of a program like netstat or lsof, you can grovel through the /proc filesystem and try to find documentation on the files within. /proc/<pid>/net/tcp might be especially interesting to you. Of course, the format of those files might change between kernel releases, so parsing command output is generally considered more reliable. | 3 | 23 | 0 | How do I get the ports that a process is listening on using python? The pid of the process is known. | How to obtain ports that a process in listening on? | 0.158649 | 0 | 0 | 21,117 |
2,987,429 | 2010-06-07T06:18:00.000 | 0 | 0 | 0 | 0 | python,google-app-engine,thread-safety | 2,989,839 | 4 | false | 1 | 0 | memcache is 'just' a cache, and in its usual guise it's not suitable for an atomic data store, which is what you're trying to use it for. I'd suggest using the GAE datastore instead, which is designed for this sort of issue. | 1 | 0 | 0 | This code should automatically connect players when they enter a game.
But the problem is when two users try to connect at the same time - in this case the 2nd user can easily overwrite the changes made by the 1st user (the 'room_1' variable).
How could I make it thread safe?
def join(userId):
users = memcache.get('room_1')
users.append(userId)
memcache.set('room_1', users)
return users
I'm using Google App Engine (python) and going to implement simple game-server for exchanging peers given by Adobe Stratus. | Read -> change -> save. Thread safe | 0 | 0 | 0 | 249 |
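The standard fix for this read-modify-write race is compare-and-set; App Engine's memcache exposes it through memcache.Client() with gets()/cas(). The sketch below shows the retry loop against a tiny in-memory stand-in client (invented here so the pattern is runnable end to end; the real code would use the memcache client instead).

```python
class FakeCasClient:
    """In-memory stand-in for memcache.Client() with gets()/cas()."""
    def __init__(self):
        self._data, self._version, self._seen = {}, {}, {}

    def gets(self, key):
        # Read the value and remember which version we saw.
        self._seen[key] = self._version.get(key, 0)
        return self._data.get(key)

    def cas(self, key, value):
        # Write only if nobody else wrote since our gets().
        if self._version.get(key, 0) != self._seen.get(key):
            return False
        self._data[key] = value
        self._version[key] = self._version.get(key, 0) + 1
        return True

def join_room(client, user_id, retries=10):
    for _ in range(retries):
        users = client.gets('room_1') or []
        if client.cas('room_1', users + [user_id]):
            return users + [user_id]
    raise RuntimeError('room_1 is too contended, giving up')
```

If cas() fails, another player got in between the read and the write, so the loop simply re-reads and tries again instead of overwriting.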
2,987,980 | 2010-06-07T08:22:00.000 | 2 | 1 | 1 | 0 | python,multithreading,parallel-processing,message-passing | 2,988,386 | 5 | false | 0 | 0 | Depending on how much data you need to process and how many CPUs/machines you intend to use, it is in some cases better to write a part of it in C (or Java/C# if you want to use jython/IronPython)
The speedup you can get from that might do more for your performance than running things in parallel on 8 CPUs. | 1 | 26 | 0 | What are the options for achieving parallelism in Python? I want to perform a bunch of CPU bound calculations over some very large rasters, and would like to parallelise them. Coming from a C background, I am familiar with three approaches to parallelism:
Message passing processes, possibly distributed across a cluster, e.g. MPI.
Explicit shared memory parallelism, either using pthreads or fork(), pipe(), et. al
Implicit shared memory parallelism, using OpenMP.
Deciding on an approach to use is an exercise in trade-offs.
In Python, what approaches are available and what are their characteristics? Is there a clusterable MPI clone? What are the preferred ways of achieving shared memory parallelism? I have heard reference to problems with the GIL, as well as references to tasklets.
In short, what do I need to know about the different parallelization strategies in Python before choosing between them? | Parallelism in Python | 0.07983 | 0 | 0 | 14,722 |
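For the CPU-bound raster case described, the shared-nothing route maps naturally onto the standard multiprocessing module (one worker process per core, so the GIL is not a bottleneck). The worker below is a made-up stand-in for a real per-tile computation; a sketch:

```python
from multiprocessing import Pool

def heavy_calculation(value):
    # Stand-in for a CPU-bound per-tile raster computation.
    return value * value

def run_parallel(values, workers=4):
    with Pool(processes=workers) as pool:
        return pool.map(heavy_calculation, values)

if __name__ == '__main__':
    print(run_parallel(range(8)))
```

For the other two approaches the question lists: mpi4py covers message passing (and clusters), and multiprocessing's shared Array/Value objects cover explicit shared memory; there is no direct OpenMP analogue at the Python level.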
2,988,211 | 2010-06-07T09:11:00.000 | 1 | 0 | 1 | 0 | python,file-io,character | 2,988,272 | 14 | false | 0 | 0 | You should try f.read(1), which is definitely correct and the right thing to do. | 1 | 90 | 0 | Can anyone tell me how can I do this? | How to read a single character at a time from a file in Python? | 0.014285 | 0 | 0 | 190,751 |
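A minimal sketch of what f.read(1) looks like in practice: read(1) returns a single character (a single byte in binary mode), and an empty string at end of file.

```python
def chars(path):
    """Yield the file's characters one at a time via f.read(1)."""
    with open(path) as f:
        while True:
            c = f.read(1)
            if not c:        # empty string means end of file
                return
            yield c
```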
2,988,636 | 2010-06-07T10:20:00.000 | 4 | 0 | 0 | 0 | python,cherrypy | 2,989,432 | 1 | true | 0 | 0 | I got it figured out:
cherrypy.engine.start(); cherrypy.server.wait()
it's the way to go.
Otherwise, I think you can get away with some tricks with
cherrypy.server.bus.states | 1 | 4 | 0 | I am trying to write some unit tests for a small web service written with CherryPy, and I am wondering what's the best way to figure out that the server has started, so I don't get "connection refused" if I try to connect too early to the service? | cherrypy when to know that the server has started | 1.2 | 0 | 1 | 250 |
2,989,388 | 2010-06-07T12:23:00.000 | 1 | 0 | 0 | 1 | python,file-io | 2,989,433 | 2 | false | 0 | 0 | In Linux, you can open a file while another process is writing to it without Python throwing an OSError, so in general, you cannot know for sure whether the other side has finished writing into that file. You can try some hacks, though:
You can check the file size regularly to see whether it increased since the last check. If it hasn't increased in, say, five seconds, you might be safe to assume that the copy has finished. I'm saying might since this is not true in all circumstances. If the other process that is writing the file is blocked for whatever reason, it might temporarily stop writing to the file and resume it later. So this is not 100% fool-proof, but might work for local file copies if the system is never under a heavy load that would stall the writing process.
You can check the output of fuser (this is a shell command), which will list the process IDs for all the files that are holding a file handle to a given file name. If this list includes any process other than yours, you can assume that the copying process hasn't finished yet. However, you will have to make sure that fuser is installed on the target system in order to make it work. | 1 | 0 | 0 | In Linux, how can we know if a file has completed copying before reading it? In Windows, an OSError is raised. | File copy completion? | 0.099668 | 0 | 0 | 995 |
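The size-polling hack from the first suggestion can be sketched as follows; the quiet-period and timeout values are arbitrary, and as noted it is a heuristic, not a guarantee:

```python
import os
import time

def wait_until_stable(path, quiet=5.0, poll=0.5, timeout=300):
    """Return True once the file's size has stopped growing for `quiet` seconds."""
    deadline = time.time() + timeout
    last_size, last_change = -1, time.time()
    while time.time() < deadline:
        size = os.path.getsize(path)
        if size != last_size:
            last_size, last_change = size, time.time()
        elif time.time() - last_change >= quiet:
            return True
        time.sleep(poll)
    return False
```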
2,989,398 | 2010-06-07T12:24:00.000 | 2 | 1 | 1 | 0 | python,logging | 3,060,995 | 2 | false | 0 | 0 | Here is my solution that came out of this discussion. Thanks to everyone for suggestions.
Usage:
>>> import logging
>>> logging.basicConfig(level=logging.DEBUG)
>>> from hierlogger import hierlogger as logger
>>> def main():
... logger().debug("test")
...
>>> main()
DEBUG:main:test
By default it will name the logger as <module>.<class>.<method>. You can also control the depth by providing a parameter:
3 - module.class.method default
2 - module.class
1 - module only
Logger instances are also cached to prevent calculating logger name on each call.
I hope someone will enjoy it.
The code:
import logging
import inspect

class NullHandler(logging.Handler):
    def emit(self, record):
        pass

def hierlogger(level=3):
    callerFrame = inspect.stack()[1]
    caller = callerFrame[0]
    lname = '__heirlogger' + str(level) + '__'
    if lname not in caller.f_locals:
        loggerName = str()
        if level >= 1:
            try:
                loggerName += inspect.getmodule(inspect.stack()[1][0]).__name__
            except:
                pass
        if 'self' in caller.f_locals and (level >= 2):
            loggerName += (('.' if len(loggerName) > 0 else '') +
                           caller.f_locals['self'].__class__.__name__)
        if callerFrame[3] != '<module>' and level >= 3:
            loggerName += ('.' if len(loggerName) > 0 else '') + callerFrame[3]
        caller.f_locals[lname] = logging.getLogger(loggerName)
        caller.f_locals[lname].addHandler(NullHandler())
    return caller.f_locals[lname] | 1 | 6 | 0 | I'm writing a python package/module and would like the logging messages to mention what module/class/function they come from. I.e., if I run this code:
import mymodule.utils.worker as worker
w = worker.Worker()
w.run()
I'd like the logging messages to look like this:
2010-06-07 15:15:29 INFO mymodule.utils.worker.Worker.run <pid/threadid>: Hello from worker
How can I accomplish this?
Thanks. | logger chain in python | 0.197375 | 0 | 0 | 3,330 |
2,989,823 | 2010-06-07T13:17:00.000 | 0 | 0 | 1 | 0 | python,multiprocessing,file-descriptor | 2,989,840 | 4 | true | 0 | 0 | There isn't a way that I know of to share file descriptors between processes.
If a way exists, it is most likely OS specific.
My guess is that you need to share data on another level. | 1 | 8 | 0 | I am using multiprocessing module, and using pools to start multiple workers. But the file descriptors which are opened at the parent process are closed in the worker processes. I want them to be open..! Is there any way to pass file descriptors to be shared across parent and children? | How to pass file descriptors from parent to child in python? | 1.2 | 0 | 0 | 6,758 |
2,990,301 | 2010-06-07T14:22:00.000 | 0 | 0 | 1 | 0 | python,fastcgi | 2,990,364 | 5 | false | 0 | 0 | When you use a module in Python, it (generally) gets compiled if it hasn't been already. For example, if you have a Django app deployed with just .py files, they'll get compiled (and output as .pyc files) as the modules are imported by the app. | 3 | 0 | 0 | Does python compile down to some byte code or is it rendered on the fly each time like php/asp?
From my reading, it has its own byte code format, so I figured it was like Java/.NET, where it compiles into an intermediate language/byte code.
So is it more efficient in that respect than PHP? | python on the web, does it compile down to bytecode or is it more like php? | 0 | 0 | 0 | 169 |
2,990,301 | 2010-06-07T14:22:00.000 | 1 | 0 | 1 | 0 | python,fastcgi | 2,990,377 | 5 | false | 0 | 0 | Given a language X, and a way for the server to be aware of it (a module or whatever) or a proper "intermediate" CGI program mX, this mX can be programmed so that it directly interprets plain-text scripts in X (like PHP), or bytecode-compiled code (originally written in X). So, provided the proper mX exists, it could be either option. But I think the most common one is the same as PHP and ASP.
Dealing with bytecode can be more efficient than interpreting scripts (even though modern interpreters are not implemented in the naive way and use "tricks" to boost performance) | 3 | 0 | 0 | Does python compile down to some byte code or is it rendered on the fly each time like php/asp?
From my reading, it has its own byte code format, so I figured it was like Java/.NET, where it compiles into an intermediate language/byte code.
So is it more efficient in that respect than PHP? | python on the web, does it compile down to bytecode or is it more like php? | 0.039979 | 0 | 0 | 169 |
2,990,301 | 2010-06-07T14:22:00.000 | 0 | 0 | 1 | 0 | python,fastcgi | 2,990,401 | 5 | false | 0 | 0 | Python modules are 'compiled' to .pyc files when they are imported. This isn't the same as compiling Java or .NET, though; what's left has an almost 1-1 correspondence with your source. It means the file doesn't have to be parsed next time, but that's about all.
You can use the compile or compileall modules to pre-compile a bundle of scripts, or to compile a script which wouldn't otherwise be compiled. I don't think that running a script from the command line (or from CGI) would use the .pyc though. | 3 | 0 | 0 | Does python compile down to some byte code or is it rendered on the fly each time like php/asp?
From my reading, it has its own byte code format, so I figured it was like Java/.NET, where it compiles into an intermediate language/byte code.
So is it more efficient in that respect than PHP? | python on the web, does it compile down to bytecode or is it more like php? | 0 | 0 | 0 | 169 |
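The compile/compileall step mentioned above can be seen directly with the stdlib py_compile module, which writes the same .pyc the interpreter would produce on first import; a sketch (the returned path is the Python 3 behaviour):

```python
import os
import py_compile
import tempfile

def compile_to_pyc(source_text):
    """Write source to a temp .py file and byte-compile it."""
    fd, path = tempfile.mkstemp(suffix='.py')
    with os.fdopen(fd, 'w') as f:
        f.write(source_text)
    return py_compile.compile(path)  # path of the generated .pyc
```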
2,990,995 | 2010-06-07T15:52:00.000 | 3 | 0 | 0 | 0 | python,serialization,pickle,object-persistence | 2,991,030 | 3 | false | 0 | 0 | Pickle (cPickle) can handle any (picklable) Python object. So as long as you're not trying to pickle a thread or file handle or something like that, you're OK. | 1 | 3 | 1 | I'm doing a project with a reasonably big DataBase. It's not a proper DB file, but a class with a format as follows:
DataBase.Nodes.Data=[[] for i in range(1,1000)] e.g. this DataBase is altogether something like a few thousand rows. First question - is the way I'm doing it efficient, or is it better to use SQL or some other "proper" DB, which I've actually never used?
And the main question - I'd like to save my DataBase class with all records, and then re-open it with Python in another session. Is that possible, and what tool should I use? cPickle - it seems to be only for strings; any other?
In MATLAB there's a very useful feature named "save workspace" - it saves all your variables to a file that you can open in another session - this would be very useful in Python! | How to save big "database-like" class in python | 0.197375 | 1 | 0 | 306 |
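To answer the "save workspace" part concretely: pickle handles whole class instances, not just strings. The DataBase class below is a made-up stand-in for the one in the question; a sketch:

```python
import pickle

class DataBase(object):
    def __init__(self):
        # Stand-in for the question's DataBase.Nodes.Data structure.
        self.nodes = [[] for _ in range(1000)]

def save(db, path):
    with open(path, 'wb') as f:
        pickle.dump(db, f, protocol=pickle.HIGHEST_PROTOCOL)

def load(path):
    with open(path, 'rb') as f:
        return pickle.load(f)
```

The class definition must be importable when you load in the new session, since pickle stores the class's name rather than its code.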
2,991,554 | 2010-06-07T17:07:00.000 | 2 | 1 | 0 | 0 | c#,java,python | 2,991,586 | 10 | false | 0 | 0 | Java and C# are statically typed languages, while Python is a dynamically typed language. That's a huge difference.
The syntax of Java and C# is similar (but I would not call it "almost identical" as Justin Niessner says). | 9 | 6 | 0 | I know it is a kind of broad question but any answer are appreciated. | How Similar are Java, C#, and Python? | 0.039979 | 0 | 0 | 29,161 |
2,991,554 | 2010-06-07T17:07:00.000 | 7 | 1 | 0 | 0 | c#,java,python | 2,991,562 | 10 | false | 0 | 0 | C# and Java have almost identical syntax and very similar libraries. There are differences that you have to be aware of (Type Erasure in Java, for example).
Python is a completely different animal. It is a dynamic language (where the other two aren't). Python winds up being closer in style to something like Ruby. | 9 | 6 | 0 | I know it is a kind of broad question but any answer are appreciated. | How Similar are Java, C#, and Python? | 1 | 0 | 0 | 29,161 |
2,991,554 | 2010-06-07T17:07:00.000 | 11 | 1 | 0 | 0 | c#,java,python | 2,991,567 | 10 | false | 0 | 0 | Python is a dynamic language where Java and C# are really not. It is totally different than the other two. There are ways to accomplishing things in Python that do not translate well to the others and vice versa.
Java and C# look the same, but they have differences between the two under the sheets. Being an expert in one, does not make you an expert in the other by any stretch of the imagination. The syntax is similar and libraries are too, so it would be easier to get up to speed in one or the other, but there are subtleties that can trip you up. | 9 | 6 | 0 | I know it is a kind of broad question but any answer are appreciated. | How Similar are Java, C#, and Python? | 1 | 0 | 0 | 29,161 |
2,991,554 | 2010-06-07T17:07:00.000 | -2 | 1 | 0 | 0 | c#,java,python | 2,991,564 | 10 | false | 0 | 0 | They are not similar at ALL. They all take widely different approaches to OOP, syntax, and static/dynamic typing. | 9 | 6 | 0 | I know it is a kind of broad question but any answer are appreciated. | How Similar are Java, C#, and Python? | -0.039979 | 0 | 0 | 29,161 |
2,991,554 | 2010-06-07T17:07:00.000 | 1 | 1 | 0 | 0 | c#,java,python | 2,991,569 | 10 | false | 0 | 0 | Java and c# are pretty similar in terms of syntax and are mostly strongly typed (C# is getting more dynamic with every version), Python is a dynamic language | 9 | 6 | 0 | I know it is a kind of broad question but any answer are appreciated. | How Similar are Java, C#, and Python? | 0.019997 | 0 | 0 | 29,161 |
2,991,554 | 2010-06-07T17:07:00.000 | 1 | 1 | 0 | 0 | c#,java,python | 2,991,581 | 10 | false | 0 | 0 | Java and C# are very similar and are syntactically similar to C/C++. They also use braces to mark code blocks.
Python is completely different. Although imperative like Java and C#, Python uses indentation to define blocks of code.
Java and C# are also compiled languages, whereas Python is interpreted and dynamic.
Python, Ruby, and Groovy are somewhat similar languages. | 9 | 6 | 0 | I know it is a kind of broad question but any answer are appreciated. | How Similar are Java, C#, and Python? | 0.019997 | 0 | 0 | 29,161 |
2,991,554 | 2010-06-07T17:07:00.000 | 1 | 1 | 0 | 0 | c#,java,python | 2,992,318 | 10 | false | 0 | 0 | C# and Java are the two languages you listed that are most similar. Python has a very different syntax, and uses a slightly different programming model. Both C# and Java are Object Oriented languages at their core, with increasing nods to Dynamic Typing. Python began as a Dynamically Typed scripting language and has been picking up more and more Object Oriented features over the years.
The C# class library (.NET Framework) is theoretically multi-platform, though it's heavily weighted towards the Windows platform, and any other OS compatibility is largely an afterthought. The .NET framework currently has two "official" frameworks for building windowed applications (Windows Forms, and WPF) and two "official" frameworks for building web applications (ASP.NET, and ASP.NET MVC). Windows Forms is similar to Java Swing, but the other four frameworks are very different from much of what is found in the Java or Python worlds. There are many language features in C# that are different or lacking in Java, such as Delegates.
The Java class library is pretty solidly multi-platform. Its officially supported desktop and web frameworks (Swing and J2EE) are generally regarded as slow, and difficult to use. However, there is a very lively open source community which has built several competing frameworks that are very powerful and versatile. Java as a language is very slow to introduce new language features, though it is runtime-compatible with several other languages that run on the Java platform (Groovy, Jython, Scala, etc..). Java is the language which has had the most run-time optimizations put into it, so an application written in Java is almost certainly going to be faster than an application written in C# or Python.
Python is an interpreted language (in general), and is pretty solidly multi-platform. Python has no "official" desktop or web frameworks, though desktop applications can be written using GTK or Qt support, both of which are multi-platform. Django has become a de-facto standard for Python web development, and is regarded as a very powerful and expressive framework. Python is at this point fully Object Oriented, and is notable for its powerful tools for working with collections/arrays/lists. As an interpreted language, Python will be significantly slower than either C# or Java. | 9 | 6 | 0 | I know it is a kind of broad question but any answer are appreciated. | How Similar are Java, C#, and Python? | 0.019997 | 0 | 0 | 29,161 |
2,991,554 | 2010-06-07T17:07:00.000 | 1 | 1 | 0 | 0 | c#,java,python | 2,991,604 | 10 | false | 0 | 0 | C# and Java are easy to move between, although I don't know many people who are experts in both. C#'s syntax is based off of Java, so they read very, very similarly. They both run cross-platform; Java on the JVM, C# on .NET or Mono. They're both OOP, and widely used for web development. I'd use whichever the team was more familiar with.
Python's off to the side there. It's also used frequently as a scripting language. It can use classes and object orientation, but isn't forced to. It's not as well supported for web work. I'd use this for a different set of tasks than C#/Java. | 9 | 6 | 0 | I know it is a kind of broad question but any answer are appreciated. | How Similar are Java, C#, and Python? | 0.019997 | 0 | 0 | 29,161 |
2,991,554 | 2010-06-07T17:07:00.000 | -1 | 1 | 0 | 0 | c#,java,python | 2,991,639 | 10 | false | 0 | 0 | Python was made to be simpler, more readable, more flexible and more object oriented than what existed before - i.e. Java, Perl, etc. It's actually closer to Java than it is to Ruby. Ruby is more like Smalltalk. Think of Python as Java without the stuff that mostly gets in your way, makes things awkward to do, slows you down or clutters the essence of your logic. So no semicolons, and no curly braces for scoping. No static variable declarations - or variables at all, really; they're identifiers that point to objects instead.
There's also a standard style guide for Python unlike other languages. Indentation is used to indicate scope and inconsistent indentation is a syntax error.
It also includes some often used things built into the language: lists, dictionaries, sets, generators etc.
Java is nice for those who are familiar with C/C++ syntax, are set in their ways, like that syntax, and find it readable. Ruby and Python are for those who preferred Pascal or Smalltalk to C, who like Lisp, etc. | 9 | 6 | 0 | I know it is a kind of broad question but any answer are appreciated. | How Similar are Java, C#, and Python? | -0.019997 | 0 | 0 | 29,161 |
2,991,852 | 2010-06-07T17:47:00.000 | 1 | 0 | 0 | 0 | python,networking | 2,991,942 | 4 | true | 0 | 0 | Disclaimer: I have little experience with network applications.
That being said, raw sockets aren't terribly difficult to wrap your head around or use, especially if you're not too worried about optimization. That takes more thought, of course. But using GTK and raw sockets should be fairly straightforward. Especially since you've used the twisted framework, which, IIRC, just abstracts some of the more nitty-gritty details of socket management. | 2 | 1 | 0 | I'm writing an application that sends files over the network. I want to develop a custom protocol so as not to limit myself in terms of feature richness (HTTP wouldn't be appropriate; the nearest thing is maybe the BitTorrent protocol).
I've tried with twisted, I've built a good app but there's a bug in twisted that makes my GUI blocking, so I've to switch to another framework/strategy.
What do you suggest? Using raw sockets and using gtk mainloop (there are select-like functions in the toolkit) is too much difficult?
It's viable running two mainloops in different threads?
Asking for suggestions | networking application and GUI in python | 1.2 | 0 | 1 | 1,158 |
2,991,852 | 2010-06-07T17:47:00.000 | 1 | 0 | 0 | 0 | python,networking | 2,991,935 | 4 | false | 0 | 0 | Two threads: one for the GUI, one for sending/receiving data. Tkinter would be a perfectly fine toolkit for this. You don't need twisted or any other external libraries or toolkits -- what comes out of the box is sufficient to get the job done. | 2 | 1 | 0 | I'm writing an application that sends files over network, I want to develop a custom protocol to not limit myself in term on feature richness (http wouldn't be appropriate, the nearest thing is the bittorrent protocol maybe).
I've tried with twisted, I've built a good app but there's a bug in twisted that makes my GUI blocking, so I've to switch to another framework/strategy.
What do you suggest? Using raw sockets and using gtk mainloop (there are select-like functions in the toolkit) is too much difficult?
It's viable running two mainloops in different threads?
Asking for suggestions | networking application and GUI in python | 0.049958 | 0 | 1 | 1,158 |
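The two-thread pattern suggested in the answer can be sketched with the standard library alone: a worker thread does the blocking network I/O and hands data to the GUI thread through a queue, which the GUI polls from its event loop. All names here are illustrative, and the network work is simulated:

```python
import queue
import threading

def network_worker(outbox: queue.Queue) -> None:
    # Stand-in for a socket recv/send loop; we just emit two chunks
    # so the hand-off pattern is visible.
    for chunk in (b"hello", b"world"):
        outbox.put(chunk)
    outbox.put(None)  # sentinel: tell the GUI thread we're done

def gui_poll(outbox: queue.Queue) -> list:
    # In a real app this would run from the toolkit's idle/timer callback
    # (e.g. Tkinter's root.after) using get_nowait, never blocking the
    # event loop for long.
    received = []
    while True:
        chunk = outbox.get(timeout=1)
        if chunk is None:
            break
        received.append(chunk)
    return received

q = queue.Queue()
threading.Thread(target=network_worker, args=(q,), daemon=True).start()
print(gui_poll(q))  # [b'hello', b'world']
```

The queue is the only shared object, so the GUI code never touches sockets and the network code never touches widgets.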
2,991,910 | 2010-06-07T17:58:00.000 | 3 | 0 | 0 | 0 | python,user-interface,portability | 2,991,933 | 2 | false | 0 | 1 | Write a core library that handles the functionality and provides hooks for progress notification. Then write the interfaces as separate applications or libraries that use the core library. | 1 | 2 | 0 | The substance of an app is more important to me than its apperance, yet GUI always seems to dominate a disproportionate percentage of programmer time, development and target resource requirements/constraints.
Ideally I'd like an application architecture that will permit me to develop an app
using a lightweight reference GUI/kit and focus on non gui aspects to produce
a quality app which is GUI enabled/friendly.
I would want APP and the GUI to be sufficiently decoupled to maximize the ease
for you GUI experts to plug the app into to some target GUI design/framework/context.
e.g. targets such as: termcap GUI, web app GUI framework, desktop GUI, thin client GUI.
In short: How do I mostly ignore the GUI, but avoid painting you into a corner when I don't even know who you are yet? | How to ignore GUI as much as possible without rendering APP less GUI developer friendly | 0.291313 | 0 | 0 | 141 |
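One hedged sketch of the decoupling the answer describes: the core library exposes plain functions plus an optional progress hook, and each front end (termcap, web, desktop) supplies its own callback. The function and callback names are illustrative, not from any particular framework:

```python
def process_items(items, on_progress=None):
    """Core logic: no GUI imports, just an optional progress hook."""
    results = []
    for i, item in enumerate(items, start=1):
        results.append(item * 2)           # stand-in for the real work
        if on_progress is not None:
            on_progress(i, len(items))     # any front end can hook in here
    return results

# A terminal front end just prints; a GUI would update a progress bar instead.
process_items([1, 2, 3], on_progress=lambda done, total: print(f"{done}/{total}"))
```

Because the core never imports a toolkit, plugging it into a different GUI later is a matter of writing a new callback and a thin wrapper.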
2,992,057 | 2010-06-07T18:22:00.000 | 0 | 0 | 0 | 1 | python,encryption,passwords | 2,992,104 | 4 | false | 0 | 0 | At first, I think you could store MD5 hashes of these passwords instead;
it would give more safety. | 1 | 1 | 0 | Where we work we need to remember about 10 long passwords which need to change every so often. I would like to create a utility which can potentially save these passwords in an encrypted file so that we can keep track of them.
I can think of some sort of dictionary passwd = {'host1':'pass1', 'host2':'pass2'}, etc, but I don't know what to do about encryption (absolutely zero experience in the topic).
So, my question is really two questions:
Is there a Linux-based utility which lets you do that?
If you were to program it in Python, how would you go about it?
A perk of approach two, would be for the software to update the ssh public keys after the password has been changed (you know the pain of updating ~15 tokens once you change your password).
As can be expected, I have zero control over the actual network configuration and the management of scp keys. I can only hope to provide a simple utility to me and my very few coworkers so that, if we need to, we can retrieve a password on demand.
Cheers. | Python-based password tracker (or dictionary) | 0 | 0 | 0 | 313 |
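For reference, the MD5 approach from the first answer looks like the sketch below using the standard-library hashlib. One caveat worth stating plainly: hashing is one-way, so it lets you *verify* a password but not retrieve it, which only partly fits the question's goal (and MD5 is considered weak today). The dictionary keys are illustrative:

```python
import hashlib

def md5_hex(password: str) -> str:
    # One-way hash: good for checking that a typed password is correct,
    # but the original string cannot be recovered from the digest.
    return hashlib.md5(password.encode("utf-8")).hexdigest()

stored = {"host1": md5_hex("pass1")}
print(stored["host1"] == md5_hex("pass1"))  # True: verification works
```

To actually retrieve passwords on demand, reversible encryption (or an existing Linux password manager) would be needed rather than hashing.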
2,993,393 | 2010-06-07T21:43:00.000 | 3 | 1 | 0 | 0 | c++,python,reference,weak | 2,993,422 | 2 | true | 0 | 0 | If you call PyWeakref_GetObject on the weak reference it should return either Py_None or NULL, I forget which. But you should check if it's returning one of those and that will tell you that the referenced object is no longer alive. | 1 | 7 | 0 | I am passing some weakrefs from Python into C++ class, but C++ destructors are actively trying to access the ref when the real object is already dead, obviously it crashes...
Is there any Python C API approach to find out if a Python reference is still alive, or any other known workaround for this?
Thanks | Python - how to check if weak reference is still available | 1.2 | 0 | 0 | 1,004 |
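The C API check the answer describes has a direct Python-level analogue, which makes the behavior easy to see: calling a weak reference yields the referent while it is alive and None once it is dead. The class name below is illustrative:

```python
import weakref

class Node:
    pass

obj = Node()
ref = weakref.ref(obj)
print(ref() is not None)  # True: referent still alive

del obj                   # drop the only strong reference
print(ref() is None)      # True: the weakref is now dead
```

In C++ code holding a weak reference, the equivalent discipline is to check the result of `PyWeakref_GetObject` before touching the referent, rather than assuming it still exists.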
2,993,805 | 2010-06-07T23:09:00.000 | 3 | 0 | 1 | 1 | python,command-line | 2,993,839 | 2 | true | 0 | 0 | Outputting \b will move the output cursor left 1 cell, and outputting \r will return it to column 0. Make sure to flush the output often though. | 1 | 1 | 0 | I am essentially building a timer. I have a python script that monitors for an event and then prints out the seconds that have elapsed since that event.
Instead of an ugly stream of numbers printed to the command line, I would like to display only the current elapsed time "in-place"-- so that only one number is visible at any given time.
Is there a simple way to do this?
If possible I'd like to use built-in python modules. I'm on Windows, so simpler the better. (E.g. no X11). | Display constantly updating information in-place in command-line window using python? | 1.2 | 0 | 0 | 638 |
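A minimal sketch of the `\r` technique from the answer, using only built-in modules (the loop count and sleep are placeholders for the real event-driven timer):

```python
import sys
import time

def render(elapsed: int) -> str:
    # \r returns the cursor to column 0, so each write overwrites the
    # previous value in place instead of printing a new line.
    return "\rElapsed: %d s" % elapsed

start = time.time()
for _ in range(3):                        # a real timer would loop until the event
    sys.stdout.write(render(int(time.time() - start)))
    sys.stdout.flush()                    # no newline, so flush explicitly
    time.sleep(0.1)
sys.stdout.write("\n")
```

`\b` works similarly but only moves the cursor one cell left, so `\r` is usually simpler when the whole value is rewritten each tick.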
2,994,398 | 2010-06-08T02:17:00.000 | 2 | 0 | 0 | 0 | python,image,opencv,identification | 2,994,438 | 1 | true | 0 | 0 | Your question is difficult to answer without more clarification about the types of images you are analyzing and your purpose.
The tone of the post seems that you are interested in tinkering -- that's fine. If you want to tinker, one example application might be iris identification using wavelet analysis. You can also try motion tracking; I've done that in OpenCV using the sample projects, and it is kind of interesting. You can try image segmentation for the purpose of scene analysis; take an outdoor photo and segment the image according to texture and/or color.
There is no hard number for how large your training set must be. It is highly application dependent. A few hundred images may suffice. | 1 | 1 | 1 | What are some fast and somewhat reliable ways to extract information about images? I've been tinkering with OpenCV and this seems so far to be the best route plus it has Python bindings.
So to be more specific I'd like to determine what I can about what's in an image. So for example the haar face detection and full body detection classifiers are great - now I can tell that most likely there are faces and / or people in the image as well as about how many.
Okay - what else - how about whether there are any buildings, and if so, what do they seem to be - huts, office buildings, etc.? Is there sky visible, grass, trees and so forth?
From what I've read about training classifiers to detect objects, it seems like a rather laborious process 10,000 or so wrong images and 5,000 or so correct samples to train a classifier.
I'm hoping that there are some decent ones around already instead of having to do this all myself for a bunch of different objects - or is there some other way to go about this sort of thing? | Extracting Information from Images | 1.2 | 0 | 0 | 1,546 |
2,995,041 | 2010-06-08T05:51:00.000 | 1 | 1 | 1 | 0 | python,plugins,gedit | 2,995,380 | 8 | false | 0 | 0 | What I do is keep a file called python_temp.py. I have a shortcut to it in my dock. I use it as a scratch pad. Whenever I want to quickly run some code, I copy the code, click the shortcut in the doc, paste in the text and hit f5 to run. Quick, easy, simple, flexible. | 3 | 7 | 0 | I'm just starting out learning python with GEdit plus various plugins as my IDE.
Visual Studio/F# has a feature which permits the highlighting on a piece of text in the code window which then, on a keypress, gets executed in the F# console.
Is there a similar facility/plugin which would enable this sort of behaviour for GEdit/Python? I do have various execution type plugins (Run In Python,Better Python Console) but they don't give me this particular behaviour - or at least I'm not sure how to configure them to give me this. I find it useful because in learning python, I have some test code I want to execute particular individual lines or small segments of code (rather then a complete file) to try and understand what they are doing (and the copy/paste can get a bit tiresome)
... or perhaps there is a better way to do code exploration?
Many thx
Simon | GEdit/Python execution plugin? | 0.024995 | 0 | 0 | 23,890 |
2,995,041 | 2010-06-08T05:51:00.000 | 1 | 1 | 1 | 0 | python,plugins,gedit | 18,914,523 | 8 | false | 0 | 0 | The closest to a decent IDE...
Install gedit-developer-plugins (through synaptic || apt-get) and don't forget to enable (what you need) from gEdit's plugins (Edit->Preferences [tab] plugins) and happy coding | 3 | 7 | 0 | I'm just starting out learning python with GEdit plus various plugins as my IDE.
Visual Studio/F# has a feature which permits the highlighting on a piece of text in the code window which then, on a keypress, gets executed in the F# console.
Is there a similar facility/plugin which would enable this sort of behaviour for GEdit/Python? I do have various execution type plugins (Run In Python,Better Python Console) but they don't give me this particular behaviour - or at least I'm not sure how to configure them to give me this. I find it useful because in learning python, I have some test code I want to execute particular individual lines or small segments of code (rather then a complete file) to try and understand what they are doing (and the copy/paste can get a bit tiresome)
... or perhaps there is a better way to do code exploration?
Many thx
Simon | GEdit/Python execution plugin? | 0.024995 | 0 | 0 | 23,890 |
2,995,041 | 2010-06-08T05:51:00.000 | 1 | 1 | 1 | 0 | python,plugins,gedit | 2,995,332 | 8 | false | 0 | 0 | I installed iPython console in gedit and do most of my simple scripting in it, but gedit is a very simple editor, so it'll not have some advance feature like an IDE
But if you want code exploring, or auto completion, I recommend a real IDE like Eclipse.
If you just want a editor, KomodoEdit is fine. | 3 | 7 | 0 | I'm just starting out learning python with GEdit plus various plugins as my IDE.
Visual Studio/F# has a feature which permits the highlighting on a piece of text in the code window which then, on a keypress, gets executed in the F# console.
Is there a similar facility/plugin which would enable this sort of behaviour for GEdit/Python? I do have various execution type plugins (Run In Python,Better Python Console) but they don't give me this particular behaviour - or at least I'm not sure how to configure them to give me this. I find it useful because in learning python, I have some test code I want to execute particular individual lines or small segments of code (rather then a complete file) to try and understand what they are doing (and the copy/paste can get a bit tiresome)
... or perhaps there is a better way to do code exploration?
Many thx
Simon | GEdit/Python execution plugin? | 0.024995 | 0 | 0 | 23,890 |
2,996,110 | 2010-06-08T09:27:00.000 | 31 | 1 | 1 | 0 | python,scripting,module | 2,997,044 | 2 | false | 0 | 0 | Any Python module may be executed as a script. The only significant difference is that when imported as a module the filename is used as the basis for the module name whereas if you execute it as a script the module is named __main__.
This distinction makes it possible to have different behaviour when imported by enclosing script specific code in a block guarded by if __name__=="__main__". This has been known to cause confusion when a user attempts to import the main module under its own name rather than importing __main__.
A minor difference between scripts and modules is that when you import a module the system will attempt to use an existing .pyc file (provided it exists and is up to date and for that version of Python) and if it has to compile from a .py file it will attempt to save a .pyc file. When you run a .py file as script it does not attempt to load a previously compiled module, nor will it attempt to save the compiled code. For this reason it may be worth keeping scripts small to minimise startup time. | 2 | 51 | 0 | Think the title summarizes the question :-) | What is the difference between a module and a script in Python? | 1 | 0 | 0 | 32,140 |
2,996,110 | 2010-06-08T09:27:00.000 | 62 | 1 | 1 | 0 | python,scripting,module | 2,996,170 | 2 | true | 0 | 0 | A script is generally a directly executable piece of code, run by itself. A module is generally a library, imported by other pieces of code.
Note that there's no internal distinction -- both are executable and importable, although library code often won't do anything (or will just run its unit tests) when executed directly and importing code designed to be a script will cause it to execute, hence the common if __name__ == "__main__" test. | 2 | 51 | 0 | Think the title summarizes the question :-) | What is the difference between a module and a script in Python? | 1.2 | 0 | 0 | 32,140 |
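The `if __name__ == "__main__"` test mentioned in the answer can be shown in a minimal sketch; the file name and function names are illustrative:

```python
# mylib.py -- usable both as an importable module and as a script
def double(x):
    return x * 2

def main():
    # Script behavior lives here, kept out of module import side effects.
    print(double(21))

if __name__ == "__main__":   # true only when run directly, not on import
    main()
```

Running `python mylib.py` executes `main()`, while `import mylib` from other code only defines `double` without printing anything.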
2,996,854 | 2010-06-08T11:29:00.000 | 3 | 0 | 0 | 0 | .net,wpf,visual-studio-2010,ironpython | 3,000,618 | 1 | true | 0 | 1 | Currently we don't have support for double clicking and adding an event handler. For the time being you'll need to wire it up by hand. We are going to spend some time on improving the designer experience so that this should eventually work. | 1 | 1 | 0 | I installed VS2010 and IronPython Tools. When I start a VB.NET WPF project everything works fine. But when I start a WPF IronPython project, it creates a button by default which fills the whole window, and when you try to add an event to that control or another control dragged from the toolbox, you just can't do it. You double click on them, but the event is never added to the source code. Has anyone had this problem? | Can't add events for controls in WPF IronPython VS2010 | 1.2 | 0 | 0 | 355 |
2,997,697 | 2010-06-08T13:31:00.000 | 1 | 0 | 1 | 0 | python,module,sap-ase | 2,997,750 | 3 | false | 0 | 0 | If you can get Sybase to use a virtual environment (I know nothing about Sybase, sorry), perhaps you could install the module using virtualenv, which generally doesn't require root access or SA approval. | 1 | 3 | 0 | I need to use the Sybase Python module but our SA's won't install because it's not in the repo's. I've downloaded it and placed it on the box and would just like to 'import' or 'include' the module without installing it first. - Is this possible? From the looks of it (Sybase ASE) it needs some type of compilation before use. Is it possible for this type of work around? | Importing Python modules without installing - Sybase ASE | 0.066568 | 0 | 0 | 1,885 |
2,997,764 | 2010-06-08T13:39:00.000 | 0 | 0 | 0 | 0 | python,django | 2,998,022 | 1 | false | 1 | 0 | Within the templates folder, there should be a 404.html. Remove that, and django defaults to the standard 404 page! | 1 | 0 | 0 | I'm using django-lfs with the default django app. It appears django-lfs overrides the default 404 template. How can I avoid this? | Avoid 404 page override | 0 | 0 | 0 | 87 |
2,997,869 | 2010-06-08T13:51:00.000 | 0 | 0 | 1 | 0 | python,regex | 2,997,898 | 3 | false | 0 | 0 | You need to escape the + as it has a special meaning in regexp (one or more a's).
search a\+ instead of a+ | 1 | 6 | 0 | This should be easy, but I've managed to stump 2 people so far at work & I've been at it for over 3 hours now, so here goes.
I need to replace a+ with aplus (along with a few other cases) with the Python re module.
eg. "I passed my a+ exam." needs to become "I passed my aplus exam."
Just using \ba+ works fine most of the time, but fails in the case of a+b, so I can't use it, it needs to match a+ as a distinct word. I've tried \ba+\b but that fails because I assume the + is a word boundary.
I've also tried \ba+\W which does work, but is greedy and eats up the space (or any other non-alpha char that would be there).
Any suggestions please? | Matching a+ in a regex | 0 | 0 | 0 | 753 |
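Putting the answers together, one sketch is to escape the `+` and use a negative lookahead instead of a trailing `\b` (which fails because `+` is a non-word character), so whole-word `a+` is replaced without consuming the following space or touching `a+b`:

```python
import re

# \b requires a word boundary before the "a"; \+ matches a literal plus;
# (?!\w) rejects matches followed by a word character, so "a+b" is skipped
# without consuming the character after the match the way \W would.
pattern = re.compile(r"\ba\+(?!\w)")

print(pattern.sub("aplus", "I passed my a+ exam."))  # I passed my aplus exam.
print(pattern.sub("aplus", "compute a+b here"))      # compute a+b here
```

Because the lookahead is zero-width, punctuation and spacing around the replaced word are preserved exactly.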
2,998,215 | 2010-06-08T14:27:00.000 | 20 | 0 | 1 | 0 | python,compiled,interpreted-language,pyc | 2,998,549 | 12 | false | 0 | 0 | Python (at least the most common implementation of it) follows a pattern of compiling the original source to byte codes, then interpreting the byte codes on a virtual machine. This means (again, the most common implementation) is neither a pure interpreter nor a pure compiler.
The other side of this is, however, that the compilation process is mostly hidden -- the .pyc files are basically treated like a cache; they speed things up, but you normally don't have to be aware of them at all. It automatically invalidates and re-loads them (re-compiles the source code) when necessary based on file time/date stamps.
About the only time I've seen a problem with this was when a compiled bytecode file somehow got a timestamp well into the future, which meant it always looked newer than the source file. Since it looked newer, the source file was never recompiled, so no matter what changes you made, they were ignored... | 6 | 1,238 | 0 | Python is an interpreted language. But why does my source directory contain .pyc files, which are identified by Windows as "Compiled Python Files"? | If Python is interpreted, what are .pyc files? | 1 | 0 | 0 | 597,924 |
2,998,215 | 2010-06-08T14:27:00.000 | 1,098 | 0 | 1 | 0 | python,compiled,interpreted-language,pyc | 2,998,544 | 12 | false | 0 | 0 | I've been given to understand that
Python is an interpreted language...
This popular meme is incorrect, or, rather, constructed upon a misunderstanding of (natural) language levels: a similar mistake would be to say "the Bible is a hardcover book". Let me explain that simile...
"The Bible" is "a book" in the sense of being a class of (actual, physical objects identified as) books; the books identified as "copies of the Bible" are supposed to have something fundamental in common (the contents, although even those can be in different languages, with different acceptable translations, levels of footnotes and other annotations) -- however, those books are perfectly well allowed to differ in a myriad of aspects that are not considered fundamental -- kind of binding, color of binding, font(s) used in the printing, illustrations if any, wide writable margins or not, numbers and kinds of builtin bookmarks, and so on, and so forth.
It's quite possible that a typical printing of the Bible would indeed be in hardcover binding -- after all, it's a book that's typically meant to be read over and over, bookmarked at several places, thumbed through looking for given chapter-and-verse pointers, etc, etc, and a good hardcover binding can make a given copy last longer under such use. However, these are mundane (practical) issues that cannot be used to determine whether a given actual book object is a copy of the Bible or not: paperback printings are perfectly possible!
Similarly, Python is "a language" in the sense of defining a class of language implementations which must all be similar in some fundamental respects (syntax, most semantics except those parts of those where they're explicitly allowed to differ) but are fully allowed to differ in just about every "implementation" detail -- including how they deal with the source files they're given, whether they compile the sources to some lower level forms (and, if so, which form -- and whether they save such compiled forms, to disk or elsewhere), how they execute said forms, and so forth.
The classical implementation, CPython, is often called just "Python" for short -- but it's just one of several production-quality implementations, side by side with Microsoft's IronPython (which compiles to CLR codes, i.e., ".NET"), Jython (which compiles to JVM codes), PyPy (which is written in Python itself and can compile to a huge variety of "back-end" forms including "just-in-time" generated machine language). They're all Python (=="implementations of the Python language") just like many superficially different book objects can all be Bibles (=="copies of The Bible").
If you're interested in CPython specifically: it compiles the source files into a Python-specific lower-level form (known as "bytecode"), does so automatically when needed (when there is no bytecode file corresponding to a source file, or the bytecode file is older than the source or compiled by a different Python version), usually saves the bytecode files to disk (to avoid recompiling them in the future). OTOH IronPython will typically compile to CLR codes (saving them to disk or not, depending) and Jython to JVM codes (saving them to disk or not -- it will use the .class extension if it does save them).
These lower level forms are then executed by appropriate "virtual machines" also known as "interpreters" -- the CPython VM, the .Net runtime, the Java VM (aka JVM), as appropriate.
So, in this sense (what do typical implementations do), Python is an "interpreted language" if and only if C# and Java are: all of them have a typical implementation strategy of producing bytecode first, then executing it via a VM/interpreter.
More likely the focus is on how "heavy", slow, and high-ceremony the compilation process is. CPython is designed to compile as fast as possible, as lightweight as possible, with as little ceremony as feasible -- the compiler does very little error checking and optimization, so it can run fast and in small amounts of memory, which in turns lets it be run automatically and transparently whenever needed, without the user even needing to be aware that there is a compilation going on, most of the time. Java and C# typically accept more work during compilation (and therefore don't perform automatic compilation) in order to check errors more thoroughly and perform more optimizations. It's a continuum of gray scales, not a black or white situation, and it would be utterly arbitrary to put a threshold at some given level and say that only above that level you call it "compilation"!-) | 6 | 1,238 | 0 | Python is an interpreted language. But why does my source directory contain .pyc files, which are identified by Windows as "Compiled Python Files"? | If Python is interpreted, what are .pyc files? | 1 | 0 | 0 | 597,924 |
2,998,215 | 2010-06-08T14:27:00.000 | 7 | 0 | 1 | 0 | python,compiled,interpreted-language,pyc | 62,559,744 | 12 | false | 0 | 0 | tldr; it's a converted code from the source code, which the python VM interprets for execution.
Bottom-up understanding: the final stage of any program is to run/execute the program's instructions on the hardware/machine. So here are the stages preceding execution:
Executing/running on CPU
Converting bytecode to machine code.
Machine code is the final stage of conversion.
Instructions to be executed on CPU are given in machine code. Machine code can be executed directly by CPU.
Converting Bytecode to machine code.
Bytecode is a medium stage. It could be skipped for efficiency, but sacrificing portability.
Converting Source code to bytecode.
Source code is a human readable code. This is what is used when working on IDEs (code editors) such as Pycharm.
Now the actual plot. There are two approaches when carrying out any of these stages: convert [or execute] the code all at once (aka compile) or convert [or execute] the code line by line (aka interpret).
For example, we could compile a source code to bytecode, compile bytecode to machine code, interpret machine code for execution.
Some implementations of languages skip stage 3 for efficiency, i.e. compile source code into machine code and then interpret machine code for execution.
Some implementations skip all middle steps and interpret the source code directly for execution.
Modern languages often involve both compiling and interpreting.
Java, for example, compiles source code to bytecode [that is how Java source is stored, as bytecode], compiles bytecode to machine code [using the JVM], and interprets machine code for execution. [Thus the JVM is implemented differently for different OSs, but the same Java source code can be executed on any OS that has a JVM installed.]
Python, for example, compiles source code to bytecode [usually found as .pyc files accompanying the .py source files], compiles bytecode to machine code [done by a virtual machine such as the PVM, and the result is an executable], and interprets the machine code/executable for execution.
When can we say that a language is interpreted or compiled?
The answer is by looking into the approach used in execution. If it executes the machine code all at once (== compile), then it's a compiled language. On the other hand, if it executes the machine code line-by-line (==interpret) then it's an interpreted language.
Therefore, JAVA and Python are interpreted languages.
A confusion might occur because of the third stage, that's converting bytecode to machine code. Often this is done using a software called a virtual machine. The confusion occurs because a virtual machine acts like a machine, but it's actually not! Virtual machines are introduced for portability, having a VM on any REAL machine will allow us to execute the same source code. The approach used in most VMs [that's the third stage] is compiling, thus some people would say it's a compiled language. For the importance of VMs, we often say that such languages are both compiled and interpreted. | 6 | 1,238 | 0 | Python is an interpreted language. But why does my source directory contain .pyc files, which are identified by Windows as "Compiled Python Files"? | If Python is interpreted, what are .pyc files? | 1 | 0 | 0 | 597,924 |
2,998,215 | 2010-06-08T14:27:00.000 | 66 | 0 | 1 | 0 | python,compiled,interpreted-language,pyc | 2,998,248 | 12 | false | 0 | 0 | These are created by the Python interpreter when a .py file is imported, and they contain the "compiled bytecode" of the imported module/program, the idea being that the "translation" from source code to bytecode (which only needs to be done once) can be skipped on subsequent imports if the .pyc is newer than the corresponding .py file, thus speeding startup a little. But it's still interpreted. | 6 | 1,238 | 0 | Python is an interpreted language. But why does my source directory contain .pyc files, which are identified by Windows as "Compiled Python Files"? | If Python is interpreted, what are .pyc files? | 1 | 0 | 0 | 597,924 |
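The compile-to-bytecode step these answers describe can be triggered explicitly with the standard library's py_compile module, which does the same thing the interpreter does implicitly on import (the file name and contents below are illustrative):

```python
import pathlib
import py_compile
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp) / "example.py"
    src.write_text("x = 1 + 1\n")
    # Compile the source to bytecode and write a .pyc file, exactly as
    # an import would; py_compile.compile returns the path it wrote.
    pyc = py_compile.compile(str(src))
    print(pathlib.Path(pyc).suffix)  # .pyc
```

On Python 3 the file lands in a `__pycache__` directory next to the source, tagged with the interpreter version so stale bytecode is detected and recompiled.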
2,998,215 | 2010-06-08T14:27:00.000 | 210 | 0 | 1 | 0 | python,compiled,interpreted-language,pyc | 2,998,750 | 12 | false | 0 | 0 | There is no such thing as an interpreted language. Whether an interpreter or a compiler is used is purely a trait of the implementation and has absolutely nothing whatsoever to do with the language.
Every language can be implemented by either an interpreter or a compiler. The vast majority of languages have at least one implementation of each type. (For example, there are interpreters for C and C++ and there are compilers for JavaScript, PHP, Perl, Python and Ruby.) Besides, the majority of modern language implementations actually combine both an interpreter and a compiler (or even multiple compilers).
A language is just a set of abstract mathematical rules. An interpreter is one of several concrete implementation strategies for a language. Those two live on completely different abstraction levels. If English were a typed language, the term "interpreted language" would be a type error. The statement "Python is an interpreted language" is not just false (because being false would imply that the statement even makes sense, even if it is wrong), it just plain doesn't make sense, because a language can never be defined as "interpreted."
In particular, if you look at the currently existing Python implementations, these are the implementation strategies they are using:
IronPython: compiles to DLR trees which the DLR then compiles to CIL bytecode. What happens to the CIL bytecode depends upon which CLI VES you are running on, but Microsoft .NET, GNU Portable.NET and Novell Mono will eventually compile it to native machine code.
Jython: interprets Python sourcecode until it identifies the hot code paths, which it then compiles to JVML bytecode. What happens to the JVML bytecode depends upon which JVM you are running on. Maxine will directly compile it to un-optimized native code until it identifies the hot code paths, which it then recompiles to optimized native code. HotSpot will first interpret the JVML bytecode and then eventually compile the hot code paths to optimized machine code.
PyPy: compiles to PyPy bytecode, which then gets interpreted by the PyPy VM until it identifies the hot code paths which it then compiles into native code, JVML bytecode or CIL bytecode depending on which platform you are running on.
CPython: compiles to CPython bytecode which it then interprets.
Stackless Python: compiles to CPython bytecode which it then interprets.
Unladen Swallow: compiles to CPython bytecode which it then interprets until it identifies the hot code paths which it then compiles to LLVM IR which the LLVM compiler then compiles to native machine code.
Cython: compiles Python code to portable C code, which is then compiled with a standard C compiler
Nuitka: compiles Python code to machine-dependent C++ code, which is then compiled with a standard C compiler
You might notice that every single one of the implementations in that list (plus some others I didn't mention, like tinypy, Shedskin or Psyco) has a compiler. In fact, as far as I know, there is currently no Python implementation which is purely interpreted, there is no such implementation planned and there never has been such an implementation.
Not only does the term "interpreted language" not make sense, even if you interpret it as meaning "language with interpreted implementation", it is clearly not true. Whoever told you that, obviously doesn't know what he is talking about.
In particular, the .pyc files you are seeing are cached bytecode files produced by CPython, Stackless Python or Unladen Swallow. | 6 | 1,238 | 0 | Python is an interpreted language. But why does my source directory contain .pyc files, which are identified by Windows as "Compiled Python Files"? | If Python is interpreted, what are .pyc files? | 1 | 0 | 0 | 597,924 |
2,998,215 | 2010-06-08T14:27:00.000 | 15 | 0 | 1 | 0 | python,compiled,interpreted-language,pyc | 34,282,133 | 12 | false | 0 | 0 | Python's *.py file is just a text file in which you write some lines of code. When you try to execute this file using say "python filename.py"
This command invokes Python Virtual Machine. Python Virtual Machine has 2 components: "compiler" and "interpreter". Interpreter cannot directly read the text in *.py file, so this text is first converted into a byte code which is targeted to the PVM (not hardware but PVM). PVM executes this byte code. *.pyc file is also generated, as part of running it which performs your import operation on file in shell or in some other file.
If this *.pyc file is already generated then every next time you run/execute your *.py file, system directly loads your *.pyc file which won't need any compilation(This will save you some machine cycles of processor).
Once the *.pyc file is generated, there is no need of *.py file, unless you edit it. | 6 | 1,238 | 0 | Python is an interpreted language. But why does my source directory contain .pyc files, which are identified by Windows as "Compiled Python Files"? | If Python is interpreted, what are .pyc files? | 1 | 0 | 0 | 597,924 |
3,001,185 | 2010-06-08T20:50:00.000 | 6 | 0 | 0 | 0 | python,user-interface,desktop,pyqt,wsgi | 3,003,086 | 3 | true | 1 | 0 | Use something like CherryPy or paste.httpserver. You can use wsgiref's server, and it generally works okay locally, but if you are doing Ajax the single-threaded nature of wsgiref can cause some odd results, or if you ever do a subrequest you'll get a race condition. But for most cases it'll be fine. It might be useful to you not to have an embedded threaded server (both CherryPy and paste.httpserver are threaded), in which case wsgiref would be helpful (all requests will run from the same thread).
Note that if you use CherryPy or paste.httpserver all requests will automatically happen in subthreads (those packages do the thread spawning for you), and you probably will not be able to directly touch the GUI code from your web code (since GUI code usually doesn't like to be handled by threads). For any of them the server code blocks, so you need to spawn a thread to start the server in. Twisted can run in your normal GUI event loop, but unless that's important it adds a lot of complexity.
Do not use BaseHTTPServer or SimpleHTTPServer; they are needlessly complicated, and in all cases where you might use them you should use wsgiref instead. Every single case, since wsgiref has a sane API (WSGI) while those servers have awkward APIs. | 1 | 6 | 0 | The desktop app should start the web server on launch and should shut it down on close.
Assuming that the desktop is the only client allowed to connect to the web server, what is the best way to write this?
Both the web server and the desktop run in a blocking loop of their own. So, should I be using threads or multiprocessing? | what is the recommended way of running a embedded web server within a desktop app (say wsgi server with pyqt) | 1.2 | 0 | 0 | 2,601 |
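A runnable sketch of the wsgiref-in-a-background-thread pattern the accepted answer describes for the simple local case (app and names are made up; the answer's caveat about wsgiref being single-threaded still applies for Ajax-heavy use):

```python
import threading
import urllib.request
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Minimal WSGI app; a real desktop app would dispatch on PATH_INFO.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from the embedded server"]

# Port 0 asks the OS for any free port; a daemon thread dies with the GUI
# process instead of blocking shutdown.
server = make_server("127.0.0.1", 0, app)
threading.Thread(target=server.serve_forever, daemon=True).start()

body = urllib.request.urlopen(
    "http://127.0.0.1:%d/" % server.server_port
).read()
server.shutdown()
print(body)
```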
3,001,827 | 2010-06-08T22:36:00.000 | 0 | 0 | 0 | 1 | python,c,networking,network-protocols,inter-process-communicat | 60,171,476 | 6 | false | 0 | 0 | If both applications run on the same computer, use a socket and serialize your objects to JSON. Otherwise, use a web service with JSON or XML. You can find JSON and XML parsers in both languages. | 2 | 6 | 0 | I have very little idea what I'm doing here, I've never done anything like this before, but a friend and I are writing competing chess programs and they need to be able to communicate to each other.
He'll be writing mainly in C, the bulk of mine will be in Python, and I can see a few options:
Alternately write to a temp file, or successive temp files. As the communication won't be in any way bulky this could work, but it seems like an ugly workaround to me: the programs would have to keep polling for changed/new files.
Find some way of manipulating pipes i.e. mine.py| ./his . This seems like a bit of a dead end.
Use sockets. But I don't know what I'd be doing, so could someone give me a pointer to some reading material? I'm not sure if there are OS-independent, language independent methods. Would there have to be some kind of supervisor server program to administrate?
Use some kind of HTTP protocol, which seems like overkill. I don't mind the programs having to run on the same machine.
What do people recommend, and where can I start reading? | OS-independent Inter-program communication between Python and C | 0 | 0 | 0 | 3,514 |
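The socket-plus-JSON suggestion from the first answer can be sketched on the Python side like this (a connected socket pair stands in for the two programs; the move format is made up):

```python
import json
import socket

# In the real setup the C program would connect to a localhost TCP port;
# socketpair() gives us both ends in one process for illustration.
py_side, c_side = socket.socketpair()

move = {"from": "e2", "to": "e4"}
# Newline-delimited JSON is easy to produce and parse from C as well.
py_side.sendall((json.dumps(move) + "\n").encode("utf-8"))

received = json.loads(c_side.makefile().readline())
py_side.close()
c_side.close()
print(received)
```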
3,001,827 | 2010-06-08T22:36:00.000 | 0 | 0 | 0 | 1 | python,c,networking,network-protocols,inter-process-communicat | 3,099,077 | 6 | false | 0 | 0 | Sockets with a client/server model...
Basically, you and your friend are creating different implementations of the client.
The local client shows a visual representation of the game and stores the state of the pieces (position, killed/not-killed) and the rules about what the pieces can/can't do (which moves can be made with which pieces and whether the board's state is in check).
The remote server stores state about the players (whose turn it is, points earned, whether the game is won or not), and a listing of moves that have occurred.
When you make a move, your client validates the move against the rules of the game, then sends a message to the server that says, I've made this move, your turn.
The other client sees that a turn has been made, pulls the last move from the server, calculates where the movement took place, validates the move against the game rules, and replays the action locally. After that's all done, it then allows the user to make the next move (or not, if the game is over).
The most important part of client/server gaming communication is this: send as little data to the server, and store as little state on it, as possible. That way you can play locally, or across the world, with little or no latency. As long as your client is running under the same set of rules as your opponent's client, everything should work.
If you want to ensure that no one can cheat by hacking their version of the client, you can make the position and rule calculations all be done on the server and just make the clients nothing but simple playback mechanisms.
The reason why sockets are the best communication medium are:
cross-process communication is almost as difficult as cross-node communication
networking is widely supported on all systems
there's little or no barrier-of-entry to using this remotely if you choose
the networking is robust, flexible, and proven
That's part of the reason why many major systems, like databases, use sockets as both a network and a local communication medium. | 2 | 6 | 0 | I have very little idea what I'm doing here, I've never done anything like this before, but a friend and I are writing competing chess programs and they need to be able to communicate to each other.
He'll be writing mainly in C, the bulk of mine will be in Python, and I can see a few options:
Alternately write to a temp file, or successive temp files. As the communication won't be in any way bulky this could work, but it seems like an ugly workaround to me: the programs would have to keep polling for changed/new files.
Find some way of manipulating pipes i.e. mine.py| ./his . This seems like a bit of a dead end.
Use sockets. But I don't know what I'd be doing, so could someone give me a pointer to some reading material? I'm not sure if there are OS-independent, language independent methods. Would there have to be some kind of supervisor server program to administrate?
Use some kind of HTTP protocol, which seems like overkill. I don't mind the programs having to run on the same machine.
What do people recommend, and where can I start reading? | OS-independent Inter-program communication between Python and C | 0 | 0 | 0 | 3,514 |
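The "store as little state as possible on the server" idea from this answer can be sketched without any networking at all: the server keeps only turn order and the move history, trusting the clients' rule checks (class and move names are made up):

```python
class MoveServer:
    """Keeps only whose turn it is and the list of moves played."""

    def __init__(self):
        self.moves = []
        self.turn = "white"

    def submit(self, player, move):
        # In this thin design the clients do the rule checking;
        # the server only enforces turn order and records history.
        if player != self.turn:
            raise ValueError("not your turn")
        self.moves.append((player, move))
        self.turn = "black" if player == "white" else "white"

server = MoveServer()
server.submit("white", "e2e4")
server.submit("black", "e7e5")
print(server.moves, server.turn)
```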
3,002,999 | 2010-06-09T03:55:00.000 | 23 | 0 | 0 | 1 | python,google-app-engine,google-cloud-datastore | 3,003,170 | 2 | true | 1 | 0 | Assign each entity a random number and store it in the entity. Then query for ten records whose random number is greater than (or less than) some other random number.
This isn't totally random, however, since entities with nearby random numbers will tend to show up together. If you want to beat this, do ten queries based around ten random numbers, but this will be less efficient. | 1 | 21 | 0 | I have a datastore with around 1,000,000 entities in a model. I want to fetch 10 random entities from this.
I am not sure how to do this; can someone help? | Fetching a random record from the Google App Engine Datastore? | 1.2 | 0 | 0 | 5,002 |
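A pure-Python simulation of the stored-random-number trick from the accepted answer (in App Engine the lookup would be a datastore query on a stored random property; bisect over a sorted list stands in for that indexed query here):

```python
import bisect
import random

# Each "entity" gets a random key at write time; keeping them sorted makes
# the "first key >= r" query cheap, like an indexed datastore query.
keys = sorted(random.random() for _ in range(1000))

def pick_random(keys):
    i = bisect.bisect_left(keys, random.random())
    return keys[i % len(keys)]  # wrap around if r falls past the last key

picks = [pick_random(keys) for _ in range(10)]
print(picks)
```

As the answer notes, entities with nearby random keys cluster together, so ten independent picks like this are not perfectly uniform.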
3,003,450 | 2010-06-09T06:12:00.000 | 1 | 0 | 0 | 1 | python,networking,twisted | 3,011,987 | 2 | false | 0 | 0 | Seconding what Jean-Paul said, if you need more fine-grained TCP connection management, just use reactor.callLater. We have exactly that implementation on a Twisted/wxPython trading platform, and it works a treat. You might also want to tweak the behaviour of the ReconnectingClientFactory in order to achieve the results I understand you're looking for. | 1 | 4 | 0 | I wrote a server based on Twisted, and I encountered a problem: some of the clients are disconnected non-gracefully. For example, the user pulls out the network cable.
After a while, the client on Windows (which is also written in Twisted) is disconnected and its connectionLost is called. But on the Linux server side, my Twisted connectionLost is never triggered, even when the server tries to write data to the client after the connection is lost. Why can't Twisted detect those non-graceful disconnections (even when writing data to the client) on Linux? How can I make Twisted detect them? Because Twisted can't detect non-graceful disconnects, I have lots of zombie users on my server.
---- Update ----
I thought it might be a property of sockets on Unix-like operating systems. So, what is the socket behavior on Unix-like systems for handling a situation like this?
Thanks.
Victor Lin. | How to detect non-graceful disconnect of Twisted on Linux? | 0.099668 | 0 | 0 | 1,538 |
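One common mitigation for exactly this symptom — an assumption on my part, not something the answers above state — is enabling TCP keepalive so the kernel eventually probes the dead peer (in Twisted you could set this on the socket returned by the transport, or better, use an application-level heartbeat/ping protocol). A minimal sketch of setting the option:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask the kernel to probe idle connections; without probes (or writes
# that eventually time out), a peer whose network cable was pulled is
# never noticed on an idle connection.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
enabled = sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
sock.close()
print(enabled)
```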
3,005,063 | 2010-06-09T10:45:00.000 | 3 | 0 | 0 | 0 | python,pyqt4,qtextedit | 3,005,405 | 2 | true | 1 | 0 | What you're asking for is called a fixed-width font. As James Hopkin remarked, HTML text in <tt> or <pre> tags is rendered with a fixed-width font.
However, what you describe sounds like a table. HTML has direct support for that, with <table>, <tr> (row) and <td> (data/cell). Don't bother with fixed-width fonts; just put your A and the Z in the second <td> of their rows. | 2 | 2 | 0 | Is there a way to set a fixed size for the characters in HTML?
That means, say …
First row, 8th character is “Z”
Second row’s 8th character is “A”
I want to print this out; when printed, the “Z” has to be exactly on top of the “A”
*Note: I'm using the insertHtml method in QTextEdit() | Is there a way to set a fixed width for the characters in HTML? | 1.2 | 0 | 0 | 396 |
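A sketch of the table suggestion as plain string building (row contents are made up); passing the resulting string to QTextEdit's insertHtml should render the Z directly above the A regardless of font:

```python
# Rows of (leading text, character that must line up vertically).
rows = [("1234567", "Z"), ("abcdefg", "A")]

cells = "".join(
    "<tr><td>%s</td><td>%s</td></tr>" % (text, ch) for text, ch in rows
)
html = "<table>%s</table>" % cells
print(html)
```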
3,005,063 | 2010-06-09T10:45:00.000 | 0 | 0 | 0 | 0 | python,pyqt4,qtextedit | 3,005,297 | 2 | false | 1 | 0 | Put the text in <tt> tags (or in <pre> tags around a whole paragraph of text). | 2 | 2 | 0 | Is there a way to set a fixed size for the characters in HTML?
That means, say …
First row, 8th character is “Z”
Second row’s 8th character is “A”
I want to print this out; when printed, the “Z” has to be exactly on top of the “A”
*Note: I'm using the insertHtml method in QTextEdit() | Is there a way to set a fixed width for the characters in HTML? | 0 | 0 | 0 | 396 |
3,005,522 | 2010-06-09T11:54:00.000 | 4 | 0 | 0 | 0 | python,clipboard,pygtk,monitor | 3,010,018 | 1 | true | 0 | 1 | Without a proper notification API, such as WM_DrawClipboard messages, you would probably have to resort to a polling loop. And then you will cause major conflicts with other apps that are trying to use this shared resource.
Do not resort to a polling loop. | 1 | 6 | 0 | How can I make a simple clipboard monitor in Python using the PyGTK GUI?
I found the gtk.clipboard class, but I couldn't find any way to get the "signals" that trigger an event when the clipboard content has changed.
Any ideas? | PyGTK: how to make a clipboard monitor? | 1.2 | 0 | 0 | 1,505 |
3,005,632 | 2010-06-09T12:11:00.000 | 0 | 0 | 1 | 0 | python,windows,passwords,encryption | 3,005,689 | 2 | false | 0 | 0 | If you want to be able to get the password back (though you should hash it instead), you could always salt it for extra measure. But that wouldn't be much help if the user can get the salt out of the executable.
Really the best way would be to not let them access the database at all. Use a web service or a server on your DB machine. | 1 | 0 | 0 | I have a script running on a remote machine. db info is stored in a configuration file. I want to be able to encrypt the password in the conf text so that no one can just read the file and gain access to the database. This is my current set up:
The sensitive info in my conf file is encoded with the base64 module. The main script then decodes it. I have compiled the script using py2exe to make the code a bit harder to see.
My question is:
Is there a better way of doing this? I know that base64 is encoding, not real encryption. Is there a way to encode using a key? I also know that py2exe can be reverse engineered very easily and the key could be found. Any other thoughts?
I am also running this script on a Windows machine, so any modules that are suggested should run in a Windows environment with ease. I know there are several other posts on this topic, but I have not found one with a Windows solution, or at least one that is well explained. | encrypting passwords in a python conf file on a windows platform | 0 | 0 | 0 | 1,330 |
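Where the goal is *verifying* a secret rather than recovering it, a salted key derivation beats reversible base64; a sketch with the standard hashlib (password and names are made up). Note this doesn't fit a DB connection password that must be sent in plaintext — which is why the first answer suggests not letting clients touch the database at all:

```python
import hashlib
import os

def hash_password(password, salt=None):
    # A random per-password salt defeats precomputed lookup tables.
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, 100000)
    return salt, digest

salt, stored = hash_password("s3cret")
_, candidate = hash_password("s3cret", salt)  # same salt -> same digest
_, wrong = hash_password("wrong", salt)
print(candidate == stored, wrong == stored)
```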
3,005,640 | 2010-06-09T12:12:00.000 | 0 | 0 | 0 | 0 | python,django,facebook | 6,583,766 | 1 | false | 1 | 0 | This is possible. Using the access token provided for your Page, you can publish to it as you would with a user. If you want to post FROM the USER, you need to use the current user's access token; if you want to post FROM the PAGE, you can publish to it using the Page's access token. | 1 | 1 | 0 | I'm currently working on a web page whose visitors should be able to create an event in my web page's name. There is a Page on Facebook for the web page, which should be the owner of the user-created event. Is this possible? All users are authenticated using Facebook Connect, but since the event won't be created in their name, I don't know how much help that is. The Python SDK will be used, since the event will be implemented server-side.
/ D | Create event for another owner using Facebook Graph API | 0 | 0 | 1 | 700 |
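For illustration only: posting as the Page meant sending the Page's access token with the request. The endpoint path and field names below are assumptions based on the Graph API of that era, and the ids are hypothetical; this sketch only constructs the request, it makes no network call:

```python
import urllib.parse

page_id = "1234567890"            # hypothetical Page id
page_token = "PAGE_ACCESS_TOKEN"  # hypothetical token for the Page, not the user

payload = {
    "name": "Launch party",
    "start_time": "2010-07-01T19:00:00",
    "access_token": page_token,   # posting AS the Page
}
url = "https://graph.facebook.com/%s/events" % page_id
body = urllib.parse.urlencode(payload)
print(url)
print(body)
```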
3,006,769 | 2010-06-09T14:28:00.000 | 3 | 1 | 1 | 0 | python,optimization | 3,006,810 | 5 | false | 0 | 0 | As a general strategy, it's best to keep this data in an in-memory cache if it's static, and relatively small. Then, the 10k calls will read an in-memory cache rather than a file. Much faster.
If you are modifying the data, the alternative might be a database like SQLite, or embedded MS SQL Server (and there are others, too!).
It's not clear what kind of data this is. Is it simple config/properties data? Sometimes you can find libraries to handle the loading/manipulation/storage of this data, and they usually have their own internal in-memory cache; all you need to do is call one or two functions.
Without more information about the files (how big are they?) and the data (how is it formatted and structured?), it's hard to say more. | 4 | 1 | 0 | In my program I have a method which requires about 4 files to be open each time it is called, as I need to read some data from them. All this data from the files I have been storing in lists for manipulation.
I need to call this method approximately 10,000 times, which is making my program very slow.
Is there a better way of handling these files? Is storing all the data in lists time-consuming, and what are better alternatives to lists?
I can give some code, but my previous question was closed because the code only confused everyone; it is part of a big program that would need to be explained completely to be understood. So I am not giving any code here; please suggest approaches, treating this as a general question.
Thanks in advance | how to speed up the code? | 0.119427 | 0 | 0 | 209 |
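The in-memory-cache strategy from the first answer can be sketched as a memoized loader (file name and contents are made up): the disk is hit only on the first call per file, and the 10,000 later calls are served from a dict.

```python
import os
import tempfile

_cache = {}

def load_lines(path):
    # Hit the disk only on the first call per file; later calls
    # return the cached list from memory.
    if path not in _cache:
        with open(path) as f:
            _cache[path] = f.read().splitlines()
    return _cache[path]

path = os.path.join(tempfile.mkdtemp(), "data.txt")
with open(path, "w") as f:
    f.write("a\nb\nc\n")

first = load_lines(path)
second = load_lines(path)  # no disk access this time
print(first, first is second)
```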
3,006,769 | 2010-06-09T14:28:00.000 | 2 | 1 | 1 | 0 | python,optimization | 3,006,800 | 5 | false | 0 | 0 | Opening, closing, and reading a file 10,000 times is always going to be slow. Can you open the file once, do 10,000 operations on the list, then close the file once? | 4 | 1 | 0 | In my program I have a method which requires about 4 files to be open each time it is called, as I need to read some data from them. All this data from the files I have been storing in lists for manipulation.
I need to call this method approximately 10,000 times, which is making my program very slow.
Is there a better way of handling these files? Is storing all the data in lists time-consuming, and what are better alternatives to lists?
I can give some code, but my previous question was closed because the code only confused everyone; it is part of a big program that would need to be explained completely to be understood. So I am not giving any code here; please suggest approaches, treating this as a general question.
Thanks in advance | how to speed up the code? | 0.07983 | 0 | 0 | 209 |
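The restructuring this answer suggests — hoist the file read out of the hot loop — can be sketched like this (the file contents and per-call work are made up; io.StringIO stands in for a real file):

```python
import io

def process(record, data):
    # Stand-in for the real per-call work on one record.
    return record in data

# Instead of opening the file inside the 10,000-iteration loop,
# read it once up front and loop over the in-memory result.
fake_file = io.StringIO("alpha\nbeta\ngamma\n")
data = set(fake_file.read().splitlines())  # one read, O(1) lookups

hits = sum(process(r, data) for r in ["beta", "delta"] * 5000)
print(hits)
```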
3,006,769 | 2010-06-09T14:28:00.000 | 0 | 1 | 1 | 0 | python,optimization | 3,006,875 | 5 | false | 0 | 0 | Open the files in the method that calls the one you want to run, and pass the data as parameters to that method. | 4 | 1 | 0 | In my program I have a method which requires about 4 files to be open each time it is called, as I need to read some data from them. All this data from the files I have been storing in lists for manipulation.
I need to call this method approximately 10,000 times, which is making my program very slow.
Is there a better way of handling these files? Is storing all the data in lists time-consuming, and what are better alternatives to lists?
I can give some code, but my previous question was closed because the code only confused everyone; it is part of a big program that would need to be explained completely to be understood. So I am not giving any code here; please suggest approaches, treating this as a general question.
Thanks in advance | how to speed up the code? | 0 | 0 | 0 | 209 |
3,006,769 | 2010-06-09T14:28:00.000 | 0 | 1 | 1 | 0 | python,optimization | 3,006,895 | 5 | false | 0 | 0 | If the files are structured like configuration files, it might be good to use the ConfigParser library; if they have some other structured format, it would be better to store all this data as JSON or XML and perform any necessary operations on it. | 4 | 1 | 0 | In my program I have a method which requires about 4 files to be open each time it is called, as I need to read some data from them. All this data from the files I have been storing in lists for manipulation.
I need to call this method approximately 10,000 times, which is making my program very slow.
Is there a better way of handling these files? Is storing all the data in lists time-consuming, and what are better alternatives to lists?
I can give some code, but my previous question was closed because the code only confused everyone; it is part of a big program that would need to be explained completely to be understood. So I am not giving any code here; please suggest approaches, treating this as a general question.
Thanks in advance | how to speed up the code? | 0 | 0 | 0 | 209 |
3,007,678 | 2010-06-09T16:11:00.000 | 2 | 1 | 1 | 0 | python,optimization | 3,008,037 | 4 | false | 0 | 0 | So it seems you don't want to speed up the compile but want to speed up the execution.
If that is the case, my mantra is "do less." Save off results and keep them around, don't re-read the same file(s) over and over again. Read a lot of data out of the file at once and work with it.
On files specifically, your performance will be pretty miserable if you're reading a little bit of data out of each file and switching between a number of files while doing it. Just read each file in completely, one at a time, and then work with them. | 1 | 0 | 0 | I want to speed up my code compilation. I have searched the internet and heard that psyco is a very good tool to improve speed, but I have searched and could not find a site to download it from.
I haven't installed any additional libraries or modules in my Python to date.
Can a psyco user tell me where to download psyco, and describe its installation and usage procedures?
I use Windows Vista and Python 2.6; does psyco work on this setup? | how to speed up code? | 0.099668 | 0 | 0 | 1,091 |
3,008,509 | 2010-06-09T18:01:00.000 | 0 | 0 | 1 | 0 | python,installation | 13,910,180 | 9 | false | 1 | 0 | Maybe your installer is i386 and your computer is AMD64. Try to find the right package! | 3 | 57 | 0 | I can't download and install any Python Windows modules. I wanted to experiment with the scrapy framework and stackless, but I am unable to install them due to the error "Python version 2.6 required, which was not found in the registry".
Trying to install it to
Windows 7, 64 bit machine | Python version 2.6 required, which was not found in the registry | 0 | 0 | 0 | 72,470 |