4,098,509 | 2010-11-04T15:51:00.000 | 1 | 1 | 0 | 0 | java,c++,python,storage,simulation | 4,098,941 | 6 | false | 0 | 0 | Using D-Bus format to send the information may be to your advantage. The format is standard, binary, and D-Bus is implemented in multiple languages, and can be used to send both over the network and inter-process on the same machine. | 4 | 2 | 1 | I am about to start collecting large amounts of numeric data in real-time (for those interested, the bid/ask/last or 'tape' for various stocks and futures). The data will later be retrieved for analysis and simulation. That's not hard at all, but I would like to do it efficiently and that brings up a lot of questions. I don't need the best solution (and there are probably many 'bests' depending on the metric, anyway). I would just like a solution that a computer scientist would approve of. (Or not laugh at?)
(1) Optimize for disk space, I/O speed, or memory?
For simulation, the overall speed is important. We want the I/O (really, I) speed of the data just faster than the computational engine, so we are not I/O limited.
(2) Store text, or something else (binary numeric)?
(3) Given a set of choices from (1)-(2), are there any standout language/library combinations to do the job-- Java, Python, C++, or something else?
I would classify this code as "write and forget", so more points for efficiency over clarity/compactness of code. I would very, very much like to stick with Python for the simulation code (because the sims do change a lot and need to be clear). So bonus points for good Pythonic solutions.
Edit: this is for a Linux system (Ubuntu)
Thanks | Collecting, storing, and retrieving large amounts of numeric data | 0.033321 | 0 | 0 | 2,032 |
4,098,509 | 2010-11-04T15:51:00.000 | 0 | 1 | 0 | 0 | java,c++,python,storage,simulation | 4,098,550 | 6 | false | 0 | 0 | If you are just storing, then use system tools. Don't write your own. If you need to do some real-time processing of the data before it is stored, then that's something completely different. | 4 | 2 | 1 | I am about to start collecting large amounts of numeric data in real-time (for those interested, the bid/ask/last or 'tape' for various stocks and futures). The data will later be retrieved for analysis and simulation. That's not hard at all, but I would like to do it efficiently and that brings up a lot of questions. I don't need the best solution (and there are probably many 'bests' depending on the metric, anyway). I would just like a solution that a computer scientist would approve of. (Or not laugh at?)
(1) Optimize for disk space, I/O speed, or memory?
For simulation, the overall speed is important. We want the I/O (really, I) speed of the data just faster than the computational engine, so we are not I/O limited.
(2) Store text, or something else (binary numeric)?
(3) Given a set of choices from (1)-(2), are there any standout language/library combinations to do the job-- Java, Python, C++, or something else?
I would classify this code as "write and forget", so more points for efficiency over clarity/compactness of code. I would very, very much like to stick with Python for the simulation code (because the sims do change a lot and need to be clear). So bonus points for good Pythonic solutions.
Edit: this is for a Linux system (Ubuntu)
Thanks | Collecting, storing, and retrieving large amounts of numeric data | 0 | 0 | 0 | 2,032 |
4,098,509 | 2010-11-04T15:51:00.000 | 1 | 1 | 0 | 0 | java,c++,python,storage,simulation | 4,098,613 | 6 | false | 0 | 0 | Actually, this is quite similar to what I'm doing, which is monitoring changes players make to the world in a game. I'm currently using an SQLite database with Python.
At the start of the program, I load the disk database into memory for fast writes. Each change is put into two lists: one for the memory database and one for the disk database. Every x or so updates, the memory database is updated and a counter is incremented. When the counter reaches 5, it is reset, the list of changes for the disk is flushed to the disk database, and the list is cleared. I have found this works well if I also set the journal mode to WAL (write-ahead logging). This method can sustain about 100-300 updates a second if I update memory every 100 updates and flush to disk every 5 memory updates. You should probably choose binary, since, unless you have faults in your data sources, it is the most logical choice | 4 | 2 | 1 | I am about to start collecting large amounts of numeric data in real-time (for those interested, the bid/ask/last or 'tape' for various stocks and futures). The data will later be retrieved for analysis and simulation. That's not hard at all, but I would like to do it efficiently and that brings up a lot of questions. I don't need the best solution (and there are probably many 'bests' depending on the metric, anyway). I would just like a solution that a computer scientist would approve of. (Or not laugh at?)
(1) Optimize for disk space, I/O speed, or memory?
For simulation, the overall speed is important. We want the I/O (really, I) speed of the data just faster than the computational engine, so we are not I/O limited.
(2) Store text, or something else (binary numeric)?
(3) Given a set of choices from (1)-(2), are there any standout language/library combinations to do the job-- Java, Python, C++, or something else?
I would classify this code as "write and forget", so more points for efficiency over clarity/compactness of code. I would very, very much like to stick with Python for the simulation code (because the sims do change a lot and need to be clear). So bonus points for good Pythonic solutions.
Edit: this is for a Linux system (Ubuntu)
Thanks | Collecting, storing, and retrieving large amounts of numeric data | 0.033321 | 0 | 0 | 2,032 |
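The batched-write pattern described in the SQLite answer above can be sketched with Python's built-in sqlite3 module. The table name, payload column, and batch size of 5 are illustrative assumptions, not details from the answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory stand-in for the real disk database
conn.execute("CREATE TABLE changes (id INTEGER PRIMARY KEY, payload TEXT)")

pending = []  # buffered changes waiting to be flushed to disk

def record_change(payload, batch_size=5):
    """Buffer a change; flush to the database once a full batch accumulates."""
    pending.append((payload,))
    if len(pending) >= batch_size:
        flush()

def flush():
    """Write all buffered changes in one transaction and clear the buffer."""
    if pending:
        conn.executemany("INSERT INTO changes (payload) VALUES (?)", pending)
        conn.commit()
        del pending[:]

for i in range(12):
    record_change("change %d" % i)
flush()  # flush the remainder at shutdown
count = conn.execute("SELECT COUNT(*) FROM changes").fetchone()[0]
```

Batching inserts into one transaction is what makes this fast; a commit per row would dominate the cost.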
4,098,509 | 2010-11-04T15:51:00.000 | 3 | 1 | 0 | 0 | java,c++,python,storage,simulation | 4,098,582 | 6 | false | 0 | 0 | Optimizing for disk space and IO speed is the same thing - these days, CPUs are so fast compared to IO that it's often overall faster to compress data before storing it (you may actually want to do that). I don't really see memory playing a big role (though you should probably use a reasonably-sized buffer to ensure you're doing sequential writes).
Binary is more compact (and thus faster). Given the amount of data, I doubt whether being human-readable has any value. The only advantage of a text format would be that it's easier to figure out and correct if it gets corrupted or you lose the parsing code. | 4 | 2 | 1 | I am about to start collecting large amounts of numeric data in real-time (for those interested, the bid/ask/last or 'tape' for various stocks and futures). The data will later be retrieved for analysis and simulation. That's not hard at all, but I would like to do it efficiently and that brings up a lot of questions. I don't need the best solution (and there are probably many 'bests' depending on the metric, anyway). I would just like a solution that a computer scientist would approve of. (Or not laugh at?)
(1) Optimize for disk space, I/O speed, or memory?
For simulation, the overall speed is important. We want the I/O (really, I) speed of the data just faster than the computational engine, so we are not I/O limited.
(2) Store text, or something else (binary numeric)?
(3) Given a set of choices from (1)-(2), are there any standout language/library combinations to do the job-- Java, Python, C++, or something else?
I would classify this code as "write and forget", so more points for efficiency over clarity/compactness of code. I would very, very much like to stick with Python for the simulation code (because the sims do change a lot and need to be clear). So bonus points for good Pythonic solutions.
Edit: this is for a Linux system (Ubuntu)
Thanks | Collecting, storing, and retrieving large amounts of numeric data | 0.099668 | 0 | 0 | 2,032 |
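A hedged sketch of the binary approach using Python's struct module. The record layout (millisecond timestamp plus bid/ask/last as doubles) is an assumption for illustration, not something the question specifies:

```python
import io
import struct

# Assumed record layout: int64 timestamp, then bid/ask/last as doubles.
TICK = struct.Struct("<qddd")  # 32 bytes per record, little-endian

def write_ticks(fileobj, ticks):
    """Append fixed-width binary records; sequential writes stay I/O friendly."""
    for tick in ticks:
        fileobj.write(TICK.pack(*tick))

def read_ticks(fileobj):
    """Stream records back without loading the whole file into memory."""
    while True:
        chunk = fileobj.read(TICK.size)
        if len(chunk) < TICK.size:
            break
        yield TICK.unpack(chunk)

buf = io.BytesIO()  # stands in for a real file opened in binary mode
write_ticks(buf, [(1000, 1.25, 1.26, 1.255)])
buf.seek(0)
records = list(read_ticks(buf))
```

Fixed-width records also make random access trivial: record i starts at byte offset i * TICK.size.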
4,099,817 | 2010-11-04T18:14:00.000 | 1 | 1 | 0 | 0 | python,windows,excel | 4,103,210 | 2 | false | 0 | 0 | "Python functions which can be called from a number of excel worksheets"
And you're not blaming Excel for randomly running Python modules? Why not? How have you proven that Excel is behaving properly? | 1 | 3 | 0 | We have an application based on Excel 2003 and Python 2.4 on Windows XP 32bit. The application consists of a large collection of Python functions which can be called from a number of excel worksheets.
We've noticed an anomalous behavior: sometimes, in the middle of one of these calls, the Python interpreter will start hunting around for modules which almost certainly are already loaded and in memory.
We know this because we were able to hook-up Sysinternal's Process Monitor to the process and observe that from time to time the process (when called) starts hunting around a bunch of directories and eggs for certain .py files.
The obvious thing to try is to see if the Python search path had become modified; however, we found this not to be the case. It's exactly what we'd expect. The odd thing is that:
The occasions on which this searching behavior was triggered appear to be random, i.e. it did not happen every time or with any noticeable pattern.
The behavior did not affect the result of the function. It returned the same value irrespective of whether this file searching behavior was triggered.
The folders that were being scanned were non-existent (e.g. J:/python-eggs) on a machine where the J drive contained no such folder. Naturally, Procmon reports that this generated a file-not-found error.
It's all very mysterious, so I don't expect anybody to be able to provide a definitive answer as to what might be going wrong. I would appreciate any suggestions about how this problem might be debugged.
Thanks!
Answers to comments
All the things that are being searched for are actual, known Python files which exist in the main project .egg file. The odd thing is that at the time they are being searched for, those particular modules have already been imported. They must be in memory in order for the process to work.
Yes, this affects performance because sometimes this searching behavior tries to hit network drives. Also, by searching eggs which couldn't possibly contain certain modules, the process gets interrupted by the corporate-mandated virus scanner. That slows down what would normally be a harmless and instant interruption.
This is stock Python 2.4.4. No modifications. | Odd python search-path behavior, what's going wrong here? | 0.099668 | 0 | 0 | 111 |
4,100,532 | 2010-11-04T19:34:00.000 | 1 | 0 | 1 | 0 | python,performance,runtime | 4,100,562 | 3 | false | 0 | 0 | Two things:
Code in separate modules is compiled into bytecode the first time it is run and saved as a precompiled .pyc file, so it doesn't have to be recompiled on the next run as long as the source hasn't been modified since. This might result in a small performance advantage, but only at program startup.
Also, Python stores variables etc. a bit more efficiently if they are placed inside functions instead of at the top level of a file. But I don't think that's what you're referring to here, is it? | 3 | 1 | 0 | I thought I once read on SO that Python will compile and run slightly more quickly if commonly called code is placed into methods or separate files. Does putting Python code in methods have an advantage over separate files or vice versa? Could someone explain why this is? I'd assume it has to do with memory allocation and garbage collection or something. | Will Python be faster if I put commonly called code into separate methods or files? | 0.066568 | 0 | 0 | 150 |
4,100,532 | 2010-11-04T19:34:00.000 | 4 | 0 | 1 | 0 | python,performance,runtime | 4,100,574 | 3 | true | 0 | 0 | It doesn't matter. Don't structure your program around code speed; structure it around coder speed. If you write something in Python and it's too slow, find the bottleneck with cProfile and speed it up. How do you speed it up? You try things and profile them. In general, function call overhead in critical loops is high. Byte compiling your code takes a very small amount of time and only needs to be done once. | 3 | 1 | 0 | I thought I once read on SO that Python will compile and run slightly more quickly if commonly called code is placed into methods or separate files. Does putting Python code in methods have an advantage over separate files or vice versa? Could someone explain why this is? I'd assume it has to do with memory allocation and garbage collection or something. | Will Python be faster if I put commonly called code into separate methods or files? | 1.2 | 0 | 0 | 150 |
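The profile-first advice in the accepted answer above can be tried directly with the standard library's cProfile; hot_loop below is just a stand-in for whatever function you suspect is slow:

```python
import cProfile
import io
import pstats

def hot_loop():
    # Placeholder for the code you suspect is the bottleneck.
    total = 0
    for i in range(100000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
result = hot_loop()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()  # top five entries by cumulative time
```

The report shows per-function call counts and times, which is far more actionable than guessing about file layout.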
4,100,532 | 2010-11-04T19:34:00.000 | 2 | 0 | 1 | 0 | python,performance,runtime | 4,100,592 | 3 | false | 0 | 0 | No. Regardless of where you put your code, it has to be parsed once and compiled if necessary. Distinction between putting code in methods or different files might have an insignificant performance difference, but you shouldn't worry about it.
About the only language right now that you have to worry about structuring "right" is Javascript. Because it has to be downloaded from net to client's computer. That's why there are so many compressors and obfuscators for it. Stuff like this isn't done with Python because it's not needed. | 3 | 1 | 0 | I thought I once read on SO that Python will compile and run slightly more quickly if commonly called code is placed into methods or separate files. Does putting Python code in methods have an advantage over separate files or vice versa? Could someone explain why this is? I'd assume it has to do with memory allocation and garbage collection or something. | Will Python be faster if I put commonly called code into separate methods or files? | 0.132549 | 0 | 0 | 150 |
4,100,628 | 2010-11-04T19:45:00.000 | 6 | 0 | 0 | 1 | python,pygtk | 4,100,722 | 2 | true | 0 | 0 | Calling urllib2 from the main thread blocks the Gtk event loop and consequently freezes the user interface. This is not specific to urllib2, but happens with any longer running function (e.g. subprocess.call).
Either use the asynchronous IO facilities from glib or call urllib2 in a separate thread to avoid this issue. | 2 | 1 | 0 | I'm using PyGTK for a small app I've been developing. The usage of urllib2 through a proxy will freeze my GUI. Is there any way to prevent that?
My code that actually does the work is separate from the GUI, so I was thinking maybe of using subprocess to call the Python file. However, how would that work if I were to convert the app to an exe file?
Thanks | urllib2 freezes GUI | 1.2 | 0 | 0 | 310 |
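A minimal sketch of the separate-thread approach from the accepted answer, shown here with Python 3's queue module (under the question's Python 2 it would be Queue and urllib2). The fetch function is a stand-in for the real urllib2 call, and in a PyGTK app the GUI side would poll the response queue from a glib timeout rather than block:

```python
import queue
import threading

def fetch(url):
    # Stand-in for urllib2.urlopen(url).read(); the real call may block on the proxy.
    return "response for %s" % url

def worker(requests, responses):
    """Run blocking fetches off the GUI thread."""
    while True:
        item = requests.get()
        if item is None:  # sentinel: shut the worker down
            break
        req_id, url = item
        responses.put((req_id, fetch(url)))

requests = queue.Queue()
responses = queue.Queue()
thread = threading.Thread(target=worker, args=(requests, responses))
thread.daemon = True
thread.start()

requests.put((1, "http://example.com"))   # the GUI thread posts a request...
req_id, body = responses.get(timeout=5)   # ...and later collects the result
requests.put(None)
thread.join()
```

Only the worker blocks on the network; the GUI loop never waits, so it stays responsive.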
4,100,628 | 2010-11-04T19:45:00.000 | 0 | 0 | 0 | 1 | python,pygtk | 4,101,507 | 2 | false | 0 | 0 | I'd consider using the multiprocess module, creating a pair of Queue objects ... one for the GUI controller or other components to send requests to the urllib2 process; the other for returning the results.
A single pair of Queue objects would be sufficient for a simple design (just two processes). The urllib2 process simply consumes requests from its request queue and posts responses to the results queue. The process on the other side can operate asynchronously, posting requests and, from anywhere in the event loop (or from a separate thread), pulling responses out and posting them back to a dictionary or dispatching a callback function (probably also maintained in a dictionary).
(For example, I might have the request model create a callback-handling object, store it in a dictionary using the object's ID as the key, and post a tuple of that ID and the URL to the request queue, then have the response processing pull IDs and response text off the response queue so that the event-handling loop can dispatch the response to the .callback() method of the object which was stored in the dictionary to begin with. The responses could be URL text results, but handling for Exception objects could also be implemented (perhaps dispatched to an .errback() method in our hypothetical callback object's interface). Naturally, if our main GUI is multi-threaded, we have to ensure coherent access to this dictionary. However, there should be relatively low contention on it, since all access to this dictionary is non-blocking.)
More complex designs are possible. A pool of urllib2-handling processes could all share one pair of Queue objects (the beauty of these queues is that they handle all the locking and coherency details for us; multiple producers/consumers are supported).
If the GUI needed to be fanned out into multiple processes that could share the same urllib2 process or pool, then it would be time to look at a message bus (Spread or AMQP, for example). Shared memory and the multiprocessing locking primitives could also be used, but that would involve quite a bit more effort. | 2 | 1 | 0 | I'm using PyGTK for a small app I've been developing. The usage of urllib2 through a proxy will freeze my GUI. Is there any way to prevent that?
My code that actually does the work is separate from the GUI, so I was thinking maybe of using subprocess to call the Python file. However, how would that work if I were to convert the app to an exe file?
Thanks | urllib2 freezes GUI | 0 | 0 | 0 | 310 |
4,101,130 | 2010-11-04T20:36:00.000 | 1 | 0 | 0 | 1 | c++,python,c,binding | 4,101,176 | 2 | false | 0 | 0 | One way would be to
re-factor your command line utility so that command line handling is separated and the actual functionality is exposed as shared archive.
Then you could expose those function using cython.
Write your complete command line utility in python that exploits those functions.
This makes distribution hard though.
What you are doing is still the best way. | 2 | 2 | 0 | I'm interested in writing a python binding or wrapper for an existing command line utility that I use on Linux, so that I can access its features in my python programs. Is there a standard approach to doing this that someone could point me to?
At the moment, I have wrapped the command line executable in a subprocess.Popen call, which works but feels quite brittle, and I'd like to make the integration between the two sides much more stable so that it works in places other than my own computer! | How to write Python bindings for command line applications | 0.099668 | 0 | 0 | 510 |
4,101,130 | 2010-11-04T20:36:00.000 | 5 | 0 | 0 | 1 | c++,python,c,binding | 4,101,158 | 2 | true | 0 | 0 | If you must use a command line interface, then subprocess.Popen is your best bet. Remember that you can use shell=True to let it pick the path variables, you can use os.path.join to use OS-dependent path separators etc.
If, however, your command line utility has shared libraries, look at ctypes, which allows you to connect directly to those libraries and expose functionality directly. | 2 | 2 | 0 | I'm interested in writing a python binding or wrapper for an existing command line utility that I use on Linux, so that I can access its features in my python programs. Is there a standard approach to doing this that someone could point me to?
At the moment, I have wrapped the command line executable in a subprocess.Popen call, which works but feels quite brittle, and I'd like to make the integration between the two sides much more stable so that it works in places other than my own computer! | How to write Python bindings for command line applications | 1.2 | 0 | 0 | 510 |
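If the tool does ship a shared library, a ctypes sketch looks like the following. libm's sqrt is used purely as a placeholder for whatever symbols the real library exports:

```python
import ctypes
import ctypes.util

# find_library resolves the platform-specific name (libm.so, libm.dylib, ...).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declaring argument and return types avoids silent truncation to int.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

value = libm.sqrt(9.0)
```

Compared with subprocess.Popen, this avoids spawning a process per call, but it only works when the functionality is actually exposed as a library rather than baked into the executable.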
4,101,815 | 2010-11-04T22:01:00.000 | 0 | 0 | 0 | 1 | python,matlab,amazon-web-services,hadoop,mapreduce | 4,101,917 | 2 | false | 0 | 0 | The following is not exactly an answer to your Hadoop question, but I couldn't resist not asking why you don't execute your processing jobs on the Grid resources? There are proven solutions for executing compute intensive workflows on the Grid. And as far as I know matlab runtime environment is usually available on these resources. You may also consider using the Grid especially if you are in academia.
Good luck | 1 | 1 | 1 | I am writing and distributed image processing application using hadoop streaming, python, matlab, and elastic map reduce. I have compiled a binary executable of my matlab code using the matlab compiler. I am wondering how I can incorporate this into my workflow so the binary is part of the processing on Amazon's elastic map reduce?
It looks like I have to use the Hadoop Distributed Cache?
The code is very complicated (and not written by me) so porting it to another language is not possible right now.
Thanks | Hadoop/Elastic Map Reduce with binary executable? | 0 | 0 | 0 | 1,138 |
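One common pattern with Hadoop streaming is for the Python mapper to shell out to the compiled binary with subprocess. In the sketch below, 'cat' is only a placeholder for the MATLAB executable, which you would ship to the worker nodes (e.g. via the distributed cache) under your own path:

```python
import subprocess

def run_binary(executable, record):
    """Pipe one input record through an external binary and capture its output."""
    proc = subprocess.Popen(
        [executable],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )
    out, _ = proc.communicate(record.encode())
    if proc.returncode != 0:
        raise RuntimeError("binary failed with code %d" % proc.returncode)
    return out.decode()

# 'cat' simply echoes its input; substitute the path to the compiled binary.
result = run_binary("cat", "key\tvalue\n")
```

For throughput you would normally start the binary once per mapper and stream all records through it, rather than spawning a process per record as this minimal version does.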
4,102,409 | 2010-11-04T23:41:00.000 | 0 | 0 | 0 | 0 | java,python,database,algorithm,unique | 4,102,448 | 8 | false | 0 | 0 | If there is only one source of IDs (that is: you don't need to coordinate multiple independent sources on different machines) you can do the following:
Calculate the maximum number of bits that a number may have so that it doesn't exceed the information contained in an 8-symbol string of 0-9A-Z. This would be floor(log2(36^8)) = 41 bits.
Have a counter (with 41 bits) start at zero
Return transform(counter++) for each ID request
The transform function has to be bijective and can be an arbitrarily long sequence of the following operations (which are all bijective themselves when they are calculated modulo 2^41):
xor with a fixed value
rotate left or right by a fixed value
reorder the bits by a fixed mapping (a generalization of the rotation above)
add or subtract a fixed value
When you are finished with that, you only need another function encode(number) to transform the number to base36. | 4 | 2 | 0 | I've been through answers to a few similar questions asked on SO, but could not find what I was looking for.
Is there a more efficient way to generate 8 character unique IDs, base 36 (0-9A-Z), than generating a unique ID and querying the DB to see if it already exists and repeating until you get a unique ID that has not been used?
Other solutions I found use time, but this is perhaps too easy to guess and may not work well in distributed systems. Consider these IDs to be promo codes. | 8 Character Random Code | 0 | 0 | 0 | 2,769 |
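The counter-plus-bijection idea above can be sketched as follows. The xor key and rotation amount are arbitrary illustrative choices; both operations are invertible modulo 2^41, so distinct counters always yield distinct IDs:

```python
import string

ALPHABET = string.digits + string.ascii_uppercase  # 0-9A-Z, base 36
BITS = 41                       # floor(log2(36**8)), as derived above
MASK = (1 << BITS) - 1
XOR_KEY = 0x1A2B3C4D5E & MASK   # illustrative fixed key

def transform(n):
    """Bijectively scramble a 41-bit counter value."""
    n = (n ^ XOR_KEY) & MASK
    return ((n << 7) | (n >> (BITS - 7))) & MASK  # rotate left by 7

def untransform(n):
    n = ((n >> 7) | (n << (BITS - 7))) & MASK     # rotate right by 7
    return (n ^ XOR_KEY) & MASK

def encode_base36(n, width=8):
    chars = []
    for _ in range(width):
        n, rem = divmod(n, 36)
        chars.append(ALPHABET[rem])
    return "".join(reversed(chars))

def make_id(counter):
    return encode_base36(transform(counter))
```

Since 2^41 < 36^8, every transformed value still fits in eight base-36 symbols. As noted in another answer, a fixed scramble like this is obscurity rather than security: anyone who recovers the transform can predict all codes.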
4,102,409 | 2010-11-04T23:41:00.000 | 10 | 0 | 0 | 0 | java,python,database,algorithm,unique | 4,102,438 | 8 | false | 0 | 0 | One option is to do it the other way round: generate a huge number of them in the database whenever you need to, then either fetch a single one from the DB when you need one, or reserve a whole bunch of them for your particular process (i.e. mark them as "potentially used" in the database) and then dole them out from memory. | 4 | 2 | 0 | I've been through answers to a few similar questions asked on SO, but could not find what I was looking for.
Is there a more efficient way to generate 8 character unique IDs, base 36 (0-9A-Z), than generating a unique ID and querying the DB to see if it already exists and repeating until you get a unique ID that has not been used?
Other solutions I found use time, but this is perhaps too easy to guess and may not work well in distributed systems. Consider these IDs to be promo codes. | 8 Character Random Code | 1 | 0 | 0 | 2,769 |
4,102,409 | 2010-11-04T23:41:00.000 | 7 | 0 | 0 | 0 | java,python,database,algorithm,unique | 4,103,718 | 8 | true | 0 | 0 | I question that your "inefficient" approach is actually inefficient. Consider this:
There are 36^8 == 2,821,109,907,456 (2.8 Trillion) possible IDs.
If you have N existing IDs, the chance of a new randomly generated ID colliding is N in ~2.8 trillion.
Unless N is in the hundreds of billions, your "generate a unique ID and query the DB to see if it already exists" algorithm will almost always terminate in one cycle.
With careful design, you should be able to generate a guaranteed unique ID in one database request, almost all of the time ... unless you have an awfully large number of existing IDs. (And if you do, just add another couple of characters to the ID and the problem goes away again.)
If you want to, you can reduce the average number of database operations to less than one per ID by generating the IDs in batches, but there are potential complications, especially if you need to record the number of IDs that are actually in use.
But, if you have at most 150,000 IDs (I assume, generated over a long period of time) then creating the IDs in batches is not worth the effort ... unless you are doing a bulk upload operation. | 4 | 2 | 0 | I've been through answers to a few similar questions asked on SO, but could not find what I was looking for.
Is there a more efficient way to generate 8 character unique IDs, base 36 (0-9A-Z), than generating a unique ID and querying the DB to see if it already exists and repeating until you get a unique ID that has not been used?
Other solutions I found use time, but this is perhaps too easy to guess and may not work well in distributed systems. Consider these IDs to be promo codes. | 8 Character Random Code | 1.2 | 0 | 0 | 2,769 |
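The generate-and-check loop defended in the accepted answer is tiny in code. Here a set stands in for the database uniqueness query, and random.SystemRandom makes the codes hard to guess, which matters for promo codes:

```python
import random
import string

ALPHABET = string.digits + string.ascii_uppercase  # 0-9A-Z
_rng = random.SystemRandom()  # OS entropy: unpredictable, suitable for promo codes

def random_id(length=8):
    return "".join(_rng.choice(ALPHABET) for _ in range(length))

def new_unique_id(existing, max_attempts=10):
    """Retry on collision; with 36**8 possible IDs a retry is vanishingly rare."""
    for _ in range(max_attempts):
        candidate = random_id()
        if candidate not in existing:  # stand-in for the DB existence check
            existing.add(candidate)
            return candidate
    raise RuntimeError("could not find a free ID")

issued = set()
code = new_unique_id(issued)
```

In a real system the check-and-insert would be a single atomic operation (e.g. an INSERT under a unique constraint, retried on conflict) to avoid a race between checking and claiming the ID.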
4,102,409 | 2010-11-04T23:41:00.000 | 1 | 0 | 0 | 0 | java,python,database,algorithm,unique | 4,102,452 | 8 | false | 0 | 0 | Unfortunately, 8 base 36 digits is a bit small. It's only 2 million million possible IDs, so if you generate 1.4 million randomly you have about a half chance of a collision.
You could possibly use a PRNG with a large period, and map its current state to your ID space via some bijection. A 41 bit LFSR wouldn't be uncrackable, but might be reasonably OK if the thing you're protecting isn't all that valuable. You could distribute somewhat without having to access the DB all the time, by providing different nodes with a different position to start the cycle.
The trouble with any such deterministic method, of course, is that once it's broken it's completely broken, and you can no longer trust any IDs. So doling numbers out of a database is probably the way to go, and distribute by doling them out in batches of a thousand or whatever.
If you had a larger ID space, then you could use more secure techniques, for example the ID could consist of something to identify the source, an incrementing serial number for that source, and an HMAC using a key unique to the source. | 4 | 2 | 0 | I've been through answers to a few similar questions asked on SO, but could not find what I was looking for.
Is there a more efficient way to generate 8 character unique IDs, base 36 (0-9A-Z), than generating a unique ID and querying the DB to see if it already exists and repeating until you get a unique ID that has not been used?
Other solutions I found use time, but this is perhaps too easy to guess and may not work well in distributed systems. Consider these IDs to be promo codes. | 8 Character Random Code | 0.024995 | 0 | 0 | 2,769 |
4,103,085 | 2010-11-05T02:15:00.000 | 15 | 1 | 0 | 1 | python,notepad++,nppexec | 4,106,339 | 2 | true | 0 | 0 | Notepad++ >nppexec >follow $(current directory) | 1 | 6 | 0 | Using windows for the first time in quite awhile and have picked up notepad++ and am using the nppexec plugin to run python scripts. However, I noticed that notepad++ doesn't pick up the directory that my script is saved in. For example, I place "script.py" in 'My Documents' however os.getcwd() prints "Program Files \ Notepad++"
Does anyone know how to change this behavior? Not exactly used to it in Mac. | Getting NppExec to understand path of the current file in Notepad++ (for Python scripts) | 1.2 | 0 | 0 | 4,494 |
4,103,107 | 2010-11-05T02:21:00.000 | 0 | 0 | 1 | 0 | python | 72,317,998 | 3 | false | 0 | 0 | try pandas!
select
C# my_collection.Select(my_object => my_object.my_property)
pandas my_collection['my_property']
or:
C# my_collection.Select(x => x.my_property + 2)
python my_collection['my_property'].apply(lambda x: x + 2)
where
C#: my_collection.Where(x => x.my_property == 1)
pandas: my_collection[my_collection['my_property']==1] | 1 | 25 | 0 | I'm quite new to python, and happen to have used C# for some time now. I saw that there was a filter method to use with the collections, which seems to be the equivalent of the LINQ's where clause.
I wondered, is there also an equivalent for the LINQ's select statement in python?
Example: my_collection.select(my_object => my_object.my_property) would return a collection of the my_property of each object in my_collection. | Python's equivalent of C# LINQ's select | 0 | 0 | 0 | 15,691 |
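Outside pandas, plain Python covers Select and Where with comprehensions or map/filter. The namedtuple collection below is just illustrative data:

```python
from collections import namedtuple

Item = namedtuple("Item", ["my_property"])
my_collection = [Item(1), Item(2), Item(3)]

# LINQ: my_collection.Select(x => x.my_property)
selected = [x.my_property for x in my_collection]

# LINQ: my_collection.Select(x => x.my_property + 2)
shifted = [x.my_property + 2 for x in my_collection]

# LINQ: my_collection.Where(x => x.my_property == 1)
matching = [x for x in my_collection if x.my_property == 1]

# map/filter return lazy iterators, similar to LINQ's deferred execution
lazy = map(lambda x: x.my_property,
           filter(lambda x: x.my_property > 1, my_collection))
```

List comprehensions evaluate eagerly; wrapping the same expression in a generator (round brackets) or using map/filter defers the work until iteration, like LINQ.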
4,104,202 | 2010-11-05T07:44:00.000 | 8 | 1 | 1 | 0 | python,vim | 4,104,215 | 1 | false | 0 | 0 | vim supports scripting in various languages, Python being one of them. See :h python for more details. | 1 | 4 | 0 | I am going to build vim and see that it supports the pythoninterp feature by
--enable-pythoninterp. What is it? Since I am a big Python fan, I'd like to know more about it.
And also, what's the --with-python-config-dir=PATH for? | What is the vim feature: --enable-pythoninterp | 1 | 0 | 0 | 3,212 |
4,104,751 | 2010-11-05T09:36:00.000 | 6 | 0 | 0 | 1 | python,google-app-engine | 4,105,029 | 1 | true | 1 | 0 | Reduce the set of libraries you require in order to serve requests as much as you can.
For expensive libraries that are only used in some places, put the import statement inside the function that uses them. This way, the library is only imported the first time it's needed.
If your framework supports it, do just-in-time importing of handlers, so you don't have to import them all when your app starts up.
Look forward to reserved instances / warmup requests, coming soon! | 1 | 3 | 0 | After a period of inactivity the first request takes about 5 to 10 secs to come through.
Are there any best-practice solutions to overcome this problem?
I'm using the Python version of App Engine. | Best Practice for dealing with app engine cold start problem | 1.2 | 0 | 0 | 364 |
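The function-local import trick from the second bullet above looks like this; json is only a stand-in for whichever heavyweight library is rarely needed:

```python
import sys

def make_report(data):
    # Imported on first call only, so cold start skips the cost entirely;
    # later calls find the module cached in sys.modules and pay almost nothing.
    import json
    return json.dumps(data)

first = make_report({"status": "ok"})
loaded = "json" in sys.modules  # the module is loaded only once it's needed
```

The trade-off is that the first request to hit the function absorbs the import cost instead of startup doing so.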
4,104,898 | 2010-11-05T09:59:00.000 | 2 | 1 | 1 | 0 | python | 4,105,223 | 7 | false | 0 | 0 | Assuming you do have to remember their order and that the numbers are in the range of 1 to 1,000,000, it would only take 20 bits or 2½ bytes to write each one since 1,000,000 is 0xF4240 in hexadecimal. You'd have to pack them together to not waste any space with this approach, but by doing so it would only take 2.5 * 1,000,000 bytes. | 4 | 4 | 0 | What is the most compact way to write 1,000,000 ints (0, 1, 2...) to file using Python without zipping etc? My answer is: 1,000,000 * 3 bytes using struct module, but it seems like interviewer expected another answer...
Edit. Numbers from 1 to 1,000,000 in random order (so transform like 5, 6, 7 -> 5-7 can be applied in rare case). You can use any writing method you know, but the resulting file should have minimum size. | Python: Write 1,000,000 ints to file | 0.057081 | 0 | 0 | 1,746 |
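A sketch of the tighter encoding the interviewer may have had in mind: squeeze each value into 20 bits (enough for any number up to 1,000,000) instead of struct's 3-byte minimum, so one million values occupy exactly 2,500,000 bytes. The bit-twiddling below is one illustrative way to do the packing:

```python
def pack_20bit(numbers):
    """Pack values below 2**20 into a byte string, 20 bits apiece."""
    buf = bytearray()
    acc = nbits = 0
    for n in numbers:
        acc = (acc << 20) | n
        nbits += 20
        while nbits >= 8:
            nbits -= 8
            buf.append((acc >> nbits) & 0xFF)
    if nbits:  # pad the final partial byte with zero bits
        buf.append((acc << (8 - nbits)) & 0xFF)
    return bytes(buf)

def unpack_20bit(data, count):
    # For brevity acc is never trimmed; mask it periodically for huge inputs.
    acc = nbits = 0
    out = []
    for byte in bytearray(data):
        acc = (acc << 8) | byte
        nbits += 8
        if nbits >= 20 and len(out) < count:
            nbits -= 20
            out.append((acc >> nbits) & 0xFFFFF)
    return out

packed = pack_20bit([0, 1, 999999, 123456])
values = unpack_20bit(packed, 4)
```

Four 20-bit values pack into exactly ten bytes, versus twelve bytes at three bytes each.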
4,104,898 | 2010-11-05T09:59:00.000 | 0 | 1 | 1 | 0 | python | 4,105,172 | 7 | false | 0 | 0 | I would only write the start and end of the given range, in this case 1 and 1,000,000, because nowhere has the interviewer mentioned order is important. | 4 | 4 | 0 | What is the most compact way to write 1,000,000 ints (0, 1, 2...) to file using Python without zipping etc? My answer is: 1,000,000 * 3 bytes using struct module, but it seems like interviewer expected another answer...
Edit. Numbers from 1 to 1,000,000 in random order (so transform like 5, 6, 7 -> 5-7 can be applied in rare case). You can use any writing method you know, but the resulting file should have minimum size. | Python: Write 1,000,000 ints to file | 0 | 0 | 0 | 1,746 |
4,104,898 | 2010-11-05T09:59:00.000 | 2 | 1 | 1 | 0 | python | 4,105,217 | 7 | true | 0 | 0 | Well, your solution takes three bytes (= 24 bits) per integer. Theoretically, 20 bits are enough (since 2^19 < 1,000,000 < 2^20).
EDIT: Oops, just noticed Neil’s comment stating the same. I’m making this answer CW since it really belongs to him. | 4 | 4 | 0 | What is the most compact way to write 1,000,000 ints (0, 1, 2...) to file using Python without zipping etc? My answer is: 1,000,000 * 3 bytes using struct module, but it seems like interviewer expected another answer...
Edit. Numbers from 1 to 1,000,000 in random order (so transform like 5, 6, 7 -> 5-7 can be applied in rare case). You can use any writing method you know, but the resulting file should have minimum size. | Python: Write 1,000,000 ints to file | 1.2 | 0 | 0 | 1,746 |
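The 20-bit packing the accepted answer describes has to be done by hand, since the struct module only deals in whole bytes. A hypothetical pack/unpack pair (Python 3; function names are made up for illustration):

```python
def pack_20bit(values):
    """Pack each value (0 <= v < 2**20) into 20 bits, MSB-first."""
    buf = bytearray()
    acc = 0       # bit accumulator
    nbits = 0     # number of bits currently held in acc
    for v in values:
        acc = (acc << 20) | (v & 0xFFFFF)
        nbits += 20
        while nbits >= 8:         # flush whole bytes as they become available
            nbits -= 8
            buf.append((acc >> nbits) & 0xFF)
    if nbits:                     # zero-pad the final partial byte
        buf.append((acc << (8 - nbits)) & 0xFF)
    return bytes(buf)


def unpack_20bit(data, count):
    """Read `count` 20-bit values back out of the packed bytes."""
    out = []
    acc = 0
    nbits = 0
    for byte in data:
        acc = (acc << 8) | byte
        nbits += 8
        if nbits >= 20:
            nbits -= 20
            out.append((acc >> nbits) & 0xFFFFF)
            if len(out) == count:
                break
    return out
```

For one million values this comes to 2,500,000 bytes, matching the 2.5-bytes-per-int figure given in the first answer.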
4,104,898 | 2010-11-05T09:59:00.000 | 0 | 1 | 1 | 0 | python | 4,105,238 | 7 | false | 0 | 0 | What is the most compact way to write 1,000,000 ints (0, 1, 2...) to file using Python without zipping etc
If you interpret the 1,000,000 ints as "I didn't specify that they have to be different", you can just use a for loop to write 0 one million times. | 4 | 4 | 0 | What is the most compact way to write 1,000,000 ints (0, 1, 2...) to file using Python without zipping etc? My answer is: 1,000,000 * 3 bytes using struct module, but it seems like interviewer expected another answer...
Edit. Numbers from 1 to 1,000,000 in random order (so transform like 5, 6, 7 -> 5-7 can be applied in rare case). You can use any writing method you know, but the resulting file should have minimum size. | Python: Write 1,000,000 ints to file | 0 | 0 | 0 | 1,746 |
4,105,804 | 2010-11-05T12:25:00.000 | 1 | 1 | 0 | 0 | import,ironpython | 4,111,764 | 4 | false | 0 | 1 | You can re-direct all I/O to the database using the PlatformAdaptationLayer. To do this you'll need to implement a ScriptHost which provides the PAL. Then when you create the ScriptRuntime you set the HostType to your host type and it'll be used for the runtime. On the PAL you then override OpenInputFileStream and return a stream object which has the content from the database (you could just use a MemoryStream here after reading from the DB).
If you want to still provide access to file I/O you can always fall back to FileStream's for "files" you can't find. | 1 | 9 | 0 | I am loading an IronPython script from a database and executing it. This works fine for simple scripts, but imports are a problem. How can I intercept these import calls and then load the appropriate scripts from the database?
EDIT: My main application is written in C# and I'd like to intercept the calls on the C# side without editing the Python scripts.
EDIT: From the research I've done, it looks like creating your own PlatformAdaptationLayer is the way you're supposed to implement this, but it doesn't work in this case. I've created my own PAL and in my testing, my FileExists method gets called for every import in the script. But for some reason it never calls any overload of the OpenInputFileStream method. Digging through the IronPython source, once FileExists returns true, it tries to locate the file itself on the path. So this looks like a dead end. | Custom IronPython import resolution | 0.049958 | 0 | 0 | 2,738 |
4,106,178 | 2010-11-05T13:21:00.000 | 0 | 0 | 1 | 0 | python,serialization | 29,203,797 | 3 | false | 0 | 0 | A program that produces some statistics, but not enough of them to justify a DB; using one would be overkill.
For example, benchmarking a program to choose the best algorithm.
Upon completion it draws a graph. Now you might not like the way the graph is drawn. You pickle the results, then unpickle in another script (perhaps after a couple of subsequent benchmark runs) and fine-tune the visualization as you wish. | 2 | 1 | 0 | In layman's terms, what is "Serialization", and why do I need it?
I read the Wikipedia entry, but still don't understand it.
Why do I need to convert data into a sequence of bits to store it in a file? I am specifically concerned with using Python's pickle module to do serialization.
Thanks for the time! | Pickle Python Serialization | 0 | 0 | 0 | 986 |
4,106,178 | 2010-11-05T13:21:00.000 | 2 | 0 | 1 | 0 | python,serialization | 4,106,205 | 3 | true | 0 | 0 | You can save the state of a program (of specific objects). Imagine you have a program which runs for many hours or even days. Using pickle you can save the state of the calculation, kill the program and resume the calculation later if you want to.
You could even email the saved objects to other people, who then can resume the calculation or view your results.
I sometimes pickle user preferences or (in a quiz) which questions were asked the last time and what answers were given. | 2 | 1 | 0 | In layman's terms, what is "Serialization", and why do I need it?
I read the Wikipedia entry, but still don't understand it.
Why do I need to convert data into a sequence of bits to store it in a file? I am specifically concerned with using Python's pickle module to do serialization.
Thanks for the time! | Pickle Python Serialization | 1.2 | 0 | 0 | 986 |
4,107,576 | 2010-11-05T15:50:00.000 | 1 | 0 | 0 | 0 | javascript,python,ajax,dojo | 5,576,563 | 2 | false | 1 | 0 | My guess would be you are serving these two apps locally via 2 different ports, which is making dojo try to execute a cross-domain XHR call.
You need to serve the JSON from the same origin (protocol, hostname, and port) to make a successful XHR call. I do this by using nginx locally, and configuring it to serve the database requests from my Dojo application by forwarding them to CouchDB. | 1 | 1 | 0 | I have exposed a simple RESTful JSON url via CherryPy (Python web framework). I have a second application (using Pylons) which needs to reach a URL exposed by CherryPy. Both are being served via localhost. Both URLs resolve just fine when using a browser directly.
But, when a DOJO script running from the initial Pylons request invokes the JSON url from CherryPy, it fails. I open LiveHeaders in Firefox and find that DOJO is first sending an HTTP "OPTIONS" request. CherryPy refuses the OPTIONS request with a 405, Method Not Allowed and it all stops.
If I drop this same page into the CherryPy application, all is well.
What is the best way to resolve this on my localhost dev platform? .... and will this occur in Prod? | DOJO AJAX Request asking for OPTIONS | 0.099668 | 0 | 1 | 2,060 |
4,107,644 | 2010-11-05T15:57:00.000 | 0 | 0 | 0 | 0 | python,django | 4,107,663 | 2 | false | 1 | 0 | Put the poll page in its own view, connect to the view via urls.py, and set up your frame or iframe to source from that URL. | 1 | 0 | 0 | Can you take things like the poll app from the tutorial and display them in an iframe or frameset? The tutorial is great and the app is very nice, but, how often do you go to a site with a whole page dedicated to a poll? I was trying to think about how you do it using the urls.py file, but couldn't wrap my head around it. Just wondering if anyone has done this or knows of any tutorials that cover this issue? Thanks. | Django Question | 0 | 0 | 0 | 101 |
4,107,888 | 2010-11-05T16:27:00.000 | 0 | 1 | 0 | 0 | python,xml,sha1 | 4,233,195 | 1 | false | 0 | 0 | Take a look at M2Crypto, it's probably the best and most complete crypto library for Python. | 1 | 0 | 0 | Can someone recommend a library for calculating SHA1WithRSAEncryption in Python?
Context: I'm trying to do some message authentication. I've looked at PyXMLDSig, but it seemed to expect the certificates as separate files. As a first step to better understanding the problem space, I wanted to calculate the digest values "by hand".
I've looked around and seen Java implementations, but not Python ones. (Jython isn't really an option for my environment.)
Thanks in advance. | sha1WithRSAEncryption in Python | 0 | 0 | 0 | 378 |
4,108,214 | 2010-11-05T17:04:00.000 | 1 | 0 | 0 | 0 | python,django,django-admin,django-nonrel | 6,106,025 | 2 | false | 1 | 0 | UPDATE - turn off the server, run python2.5 manage.py syncdb, and add a fresh superuser. Must already have included django.contrib.admin to INSTALLED_APPS
This is not at all the answer. Completely different symptoms. I will try to remember to post here when I figure it out. | 1 | 1 | 0 | Bit of a strange one. I've created a super user for django admin for my app, which is just a new django nonrel project with admin enabled. I try and access the /admin whilst running the development server, but when I type in the (correct) username and password it tells me they are not correct.
Deploying the project to Google App Engine, the login works fine. Why would it work fine on Googles servers, but not on the development server? | Django Nonrel - Can't log into Admin Panel on development server | 0.099668 | 0 | 0 | 1,404 |
4,108,341 | 2010-11-05T17:19:00.000 | 11 | 0 | 1 | 0 | python,list,range | 4,108,362 | 3 | false | 0 | 0 | range(10) is built in. | 1 | 17 | 0 | Is there a function I can call that returns a list of ascending numbers? I.e., function(10) would return [0,1,2,3,4,5,6,7,8,9]? | Generating an ascending list of numbers of arbitrary length in python | 1 | 0 | 0 | 69,702 |
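A sketch for completeness (the ascending name is made up; in Python 2, which this question predates 3's changes to, range(10) returns the list directly):

```python
def ascending(n):
    # range(n) yields 0 .. n-1; list() makes the list explicit
    # (in Python 2, range(n) already returns a list).
    return list(range(n))

print(ascending(10))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```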
4,108,852 | 2010-11-05T18:22:00.000 | 4 | 0 | 0 | 0 | python,django | 4,108,925 | 3 | true | 1 | 0 | With Django it's a bit more than introspection actually. The models use a metaclass to register themselves, I will spare you the complexities of everything involved but the admin does not introspect the models as you browse through it.
Instead, the registering process creates a _meta object on the model with all the data needed for the admin and ORM. You can see the ModelBase metaclass in django/db/models/base.py, as you can see in the __new__ function it walks through all the fields to add them to the _meta object. The _meta object itself is generated dynamically using the Meta class definition on the model.
You can see the result with print SomeModel._meta or print SomeModel._meta.fields | 1 | 0 | 0 | I don't really want to know Django I am actually more interested in the administrator. The thing that interests me is how they introspect the models to create the administrator back-end.
I browsed through the Django source code and found a little info but since it's such a big project I was wondering if there are smaller examples of how they do it?
This is just a personal project to get to understand Python better. I thought that learning about introspecting objects would be a good way to do this. | Django's introspecting administrator: How does it work? | 1.2 | 0 | 0 | 91 |
4,109,532 | 2010-11-05T19:57:00.000 | 1 | 0 | 0 | 0 | python,django | 4,109,559 | 3 | false | 1 | 0 | It's for any type of settings, but it's better to put local settings in a separate file so that version upgrades don't clobber them. Have the global settings file detect the presence of the local settings file and then either import everything from it or just execfile() it. | 1 | 1 | 0 | I'm fairly new to Python and Django, and I'm working on a webapp now that will be run on multiple servers. Each server has its own little configuration details (commands, file paths, etc.) that I would like to just be able to store in a settings file, and then have a different copy of the file on each system.
I know that in Django, there's a settings file. However, is that only for Django-related things? Or am I supposed to put this type of stuff in there too? | Where should I put configuration details? [Python] | 0.066568 | 0 | 0 | 189 |
4,109,848 | 2010-11-05T20:36:00.000 | 1 | 0 | 1 | 0 | python,pickle | 4,110,149 | 3 | false | 0 | 0 | It sounds like you know you can't pickle the handle, and you're ok with that, you just want to pickle the part that can be pickled. As your object stands now, it can't be pickled because it has the handle. Do I have that right? If so, read on.
The pickle module will let your class describe its own state to pickle, for exactly these cases. You want to define your own __getstate__ method. The pickler will invoke it to get the state to be pickled, only if the method is missing does it go ahead and do the default thing of trying to pickle all the attributes. | 2 | 3 | 0 | I would like to create a class that describes a file resource and then pickle it. This part is straightforward. To be concrete, let's say that I have a class "A" that has methods to operate on a file. I can pickle this object if it does not contain a file handle. I want to be able to create a file handle in order to access the resource described by "A". If I have an "open()" method in class "A" that opens and stores the file handle for later use, then "A" is no longer pickleable. (I add here that opening the file includes some non-trivial indexing which cannot be cached--third party code--so closing and reopening when needed is not without expense). I could code class "A" as a factory that can generate file handles to the described file, but that could result in multiple file handles accessing the file contents simultaneously. I could use another class "B" to handle the opening of the file in class "A", including locking, etc. I am probably overthinking this, but any hints would be appreciated. | Design of a python pickleable object that describes a file | 0.066568 | 0 | 0 | 1,641 |
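The __getstate__ approach from the answer above can be sketched like this (class, attribute, and path names are hypothetical; __setstate__ is included so the handle comes back as None and can be reopened lazily):

```python
import pickle

class IndexedFile:
    """Hypothetical descriptor for a file resource; the open handle
    (and the expensive index built when opening) is not pickled."""

    def __init__(self, path):
        self.path = path
        self._handle = None        # file objects can't be pickled

    def open(self):
        if self._handle is None:
            self._handle = open(self.path)
        return self._handle

    def __getstate__(self):
        state = self.__dict__.copy()
        state["_handle"] = None    # drop the unpicklable handle
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)  # handle restored as None; reopen lazily

a = IndexedFile("data.bin")        # hypothetical path; never opened here
restored = pickle.loads(pickle.dumps(a))
```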
4,109,848 | 2010-11-05T20:36:00.000 | 1 | 0 | 1 | 0 | python,pickle | 4,109,958 | 3 | false | 0 | 0 | If you open a pointer to a file, pickle it, then attempt to reconstitute it later, there is no guarantee the file will still be available for opening.
To elaborate, the file pointer really represents a connection to the file. Just like a database connection, you can't "pickle" the other end of the connection, so this won't work.
Is it possible to keep the file pointer around in memory in its own process instead? | 2 | 3 | 0 | I would like to create a class that describes a file resource and then pickle it. This part is straightforward. To be concrete, let's say that I have a class "A" that has methods to operate on a file. I can pickle this object if it does not contain a file handle. I want to be able to create a file handle in order to access the resource described by "A". If I have an "open()" method in class "A" that opens and stores the file handle for later use, then "A" is no longer pickleable. (I add here that opening the file includes some non-trivial indexing which cannot be cached--third party code--so closing and reopening when needed is not without expense). I could code class "A" as a factory that can generate file handles to the described file, but that could result in multiple file handles accessing the file contents simultaneously. I could use another class "B" to handle the opening of the file in class "A", including locking, etc. I am probably overthinking this, but any hints would be appreciated. | Design of a python pickleable object that describes a file | 0.066568 | 0 | 0 | 1,641 |
4,110,992 | 2010-11-05T23:37:00.000 | 3 | 0 | 0 | 1 | python,ubuntu,urllib2,ubuntu-10.10 | 4,112,300 | 2 | true | 0 | 0 | 5 seconds sounds suspiciously like the DNS resolving timeout.
A hunch: it may be cycling through the DNS servers in your /etc/resolv.conf; if one of them is broken, the default timeout is 5 seconds on Linux, after which it will try the next one, looping back to the top when it has tried them all.
If you have multiple DNS servers listed in resolv.conf, try removing all but one. If this fixes it; then after that see why you're being assigned incorrect resolving servers. | 1 | 1 | 0 | I am experiencing strange behavior with urllib2.urlopen() on Ubuntu 10.10. The first request to a url goes fast but the second takes a long time to connect. I think between 5 and 10 seconds. On windows this just works normal?
Does anybody have an idea what could cause this issue?
Thanks, Onno | Strange urllib2.urlopen() behavior on Ubuntu 10.10 | 1.2 | 0 | 1 | 427 |
4,111,049 | 2010-11-05T23:48:00.000 | 2 | 0 | 0 | 0 | python,tkinter,clipboard,toolbar | 4,111,218 | 2 | false | 0 | 1 | You don't have to maintain a big framework, you can create a single binding on the root widget for <FocusIn> and put all the logic in that binding. Or, use focus_class and bind to the class all.
Binding on the root will only affect children of the root, binding to all will affect all widgets in the entire app. That only matters if you have more than one toplevel widget. | 2 | 0 | 0 | I'm looking for suggestions on how one might implement a toolbar that provides edit cut, copy, paste commands using the Tkinter framework. I understand how to build a toolbar and bind the toolbar commands, but I'm confused over how the toolbar button bound commands will know which widget to apply the cut, copy, or paste action because the widget with edit activity will lose focus when the toolbar button is clicked. My first thought was to have each widget with potential edit activity set a global variable when the widget gains focus and have other widgets (without edit activity, eg. buttons, sliders, checkbox/radiobox, etc) clear this global variable. But this sounds complicated to maintain unless I build a framework of widgets that inherit this behavior.
Is there a simpler way to go about this or am I on the right track? | Python/Tkinter: Building a toolbar that provides edit cut, copy, paste commands | 0.197375 | 0 | 0 | 1,053 |
4,111,049 | 2010-11-05T23:48:00.000 | 1 | 0 | 0 | 0 | python,tkinter,clipboard,toolbar | 4,111,334 | 2 | true | 0 | 1 | You can tell the toolbar buttons to not take the focus; it's a configuration option and no UI guidelines I've ever seen have had toolbar buttons with focus. (Instead, the functionality is always available through some other keyboard-activatable mechanism, e.g., a hotkey combo.) | 2 | 0 | 0 | I'm looking for suggestions on how one might implement a toolbar that provides edit cut, copy, paste commands using the Tkinter framework. I understand how to build a toolbar and bind the toolbar commands, but I'm confused over how the toolbar button bound commands will know which widget to apply the cut, copy, or paste action because the widget with edit activity will lose focus when the toolbar button is clicked. My first thought was to have each widget with potential edit activity set a global variable when the widget gains focus and have other widgets (without edit activity, eg. buttons, sliders, checkbox/radiobox, etc) clear this global variable. But this sounds complicated to maintain unless I build a framework of widgets that inherit this behavior.
Is there a simpler way to go about this or am I on the right track? | Python/Tkinter: Building a toolbar that provides edit cut, copy, paste commands | 1.2 | 0 | 0 | 1,053 |
4,111,244 | 2010-11-06T00:39:00.000 | 9 | 0 | 1 | 0 | python,django,django-models | 4,111,263 | 7 | false | 1 | 0 | The list of installed applications is defined in settings.INSTALLED_APPS. It contains a tuple of strings, so you can iterate on it to access each application's name.
However, I'm not sure what you mean by each application's attributes and fields. | 1 | 48 | 0 | In my Django website, I'm creating a class that interact dynamically with other applications installed in the website. I have to do a manipulation on each field of each application.
So I want to save the name of all installed applications in a list and get the attributes of each one. There is a way to do that using an iterator or something else ? | Get a list of all installed applications in Django and their attributes | 1 | 0 | 0 | 53,093 |
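A sketch of walking that tuple (the INSTALLED_APPS contents here are hypothetical; in a running project you would read settings.INSTALLED_APPS via django.conf, and, if memory serves, django.db.models.get_models(app) gave each app's model classes in Django of this era):

```python
# A hypothetical INSTALLED_APPS tuple as it would appear in settings.py;
# in a running Django process you would use: from django.conf import settings
INSTALLED_APPS = (
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "myproject.polls",
)

# Each entry is a dotted module path; the trailing component is the short
# app label that the admin and ORM use.
app_labels = [path.rsplit(".", 1)[-1] for path in INSTALLED_APPS]
print(app_labels)  # ['auth', 'contenttypes', 'polls']
```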
4,112,235 | 2010-11-06T06:46:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine | 4,112,279 | 2 | false | 1 | 0 | I have been handling something similar by building a custom automatic retry dispatcher on the client. Whenever an ajax call to the server fails, the client will retry it.
This works very well if your page is ajaxy. If your app spits entire HTML pages then you can use a two pass process: first send an empty page containing only an ajax request. Then, when AppEngine receives that ajax request, it outputs the same HTML you had before. If the ajax call succeeds it fills the DOM with the result. If it fails, it retries once. | 2 | 1 | 0 | Sometimes, with requests that do a lot, Google AppEngine returns an error. I have been handling this by some trickery: memcaching intermediate processed data and just requesting the page again. This often works because the memcached data does not have to be recalculated and the request finishes in time.
However... this hack requires seeing an error, going back, and clicking again. Obviously less than ideal.
Any suggestions?
inb4: "optimize your process better", "split your page into sub-processes", and "use taskqueue".
Thanks for any thoughts.
Edit - To clarify:
Long wait for requests is ok because the function is administrative. I'm basically looking to run a data-mining function. I'm searching over my datastore and modifying a bunch of objects. I think the correct answer is that AppEngine may not be the right tool for this. I should be exporting the data to a computer where I can run functions like this on my own. It seems AppEngine is really intended for serving with lighter processing demands. Maybe the quota/pricing model should offer the option to increase processing timeouts and charge extra. | Better ways to handle AppEngine requests that time out? | 0.099668 | 0 | 0 | 264 |
4,112,235 | 2010-11-06T06:46:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine | 4,117,235 | 2 | true | 1 | 0 | If interactive user requests are hitting the 30 second deadline, you have bigger problems: your user has almost certainly given up and left anyway.
What you can do depends on what your code is doing. There's a lot to be optimized by batching datastore operations, or reducing them by changing how you model your data; you can offload work to the Task Queue; for URLFetches, you can execute them in parallel. Tell us more about what you're doing and we may be able to provide more concrete suggestions. | 2 | 1 | 0 | Sometimes, with requests that do a lot, Google AppEngine returns an error. I have been handling this by some trickery: memcaching intermediate processed data and just requesting the page again. This often works because the memcached data does not have to be recalculated and the request finishes in time.
However... this hack requires seeing an error, going back, and clicking again. Obviously less than ideal.
Any suggestions?
inb4: "optimize your process better", "split your page into sub-processes", and "use taskqueue".
Thanks for any thoughts.
Edit - To clarify:
Long wait for requests is ok because the function is administrative. I'm basically looking to run a data-mining function. I'm searching over my datastore and modifying a bunch of objects. I think the correct answer is that AppEngine may not be the right tool for this. I should be exporting the data to a computer where I can run functions like this on my own. It seems AppEngine is really intended for serving with lighter processing demands. Maybe the quota/pricing model should offer the option to increase processing timeouts and charge extra. | Better ways to handle AppEngine requests that time out? | 1.2 | 0 | 0 | 264 |
4,114,055 | 2010-11-06T16:47:00.000 | 1 | 0 | 1 | 0 | python,arguments | 4,114,069 | 2 | true | 0 | 0 | You'd better have two different methods if the inner code is different in the two cases. | 1 | 1 | 0 | For example, I want to have a method that, depending on the first argument passed, can take either an int or a char as the second argument.
The way I thought of doing it is to have an if at the start of the method that checks what the first argument is; it can be one of 4 types. At this point, if it's, say, type 1 or 2, which expects an int as the second argument, it completes the code within the if. I then have an elif checking whether the first argument is of type 3 or 4, in which case it goes into that block and completes the code there. The else will throw an exception or handle the issue accordingly.
Is this the right way to do it? | What would be the proper way of handling arguments for a method that could be ints or chars in python? | 1.2 | 0 | 0 | 103 |
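A sketch of the if/elif dispatch the question describes (the kind numbers, type checks, and return values are placeholders, not a real API):

```python
def handle(kind, value):
    """Hypothetical dispatcher: kinds 1 and 2 take an int second argument,
    kinds 3 and 4 take a single-character string."""
    if kind in (1, 2):
        if not isinstance(value, int):
            raise TypeError("kind %d expects an int" % kind)
        return value * 2                  # placeholder int handling
    elif kind in (3, 4):
        if not (isinstance(value, str) and len(value) == 1):
            raise TypeError("kind %d expects a single character" % kind)
        return value.upper()              # placeholder char handling
    else:
        raise ValueError("unknown kind: %r" % kind)
```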
4,114,417 | 2010-11-06T18:11:00.000 | 4 | 0 | 1 | 0 | python,git,permissions,hook,restrict | 4,114,690 | 2 | true | 0 | 0 | I think it would be possible to use such a script, but a hook is not the right place for rights management; it should rather be done on the git server side. For example, in gitosis you do this configuration in the gitosis-admin repository, in the file gitosis.conf.
Security managed by a hook can be easily broken; only the server can keep track of these things. Please check the documentation of your server for details on how to restrict access.
I'm pretty sure that this can be easily done with a git commit hook, but I don't know python and this seems like such a generic problem, that somebody must have written it already. Do you know where I could find such a script? Or if you have one lying around, please paste it here, for lazy people like me. | How to write a git hook to restrict writing to branch? | 1.2 | 0 | 0 | 5,437 |
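As the accepted answer notes, enforcement really belongs on the server (gitosis/gitolite). For completeness, a server-side update hook, which git invokes with the refname, old rev, and new rev, could apply a policy like this sketch (the usernames are hypothetical, and how you identify the pusher depends on your access setup, e.g. the SSH account in os.environ["USER"]):

```python
# Hypothetical set of reviewers allowed to move master.
ALLOWED_MASTER_PUSHERS = {"alice", "bob"}

def is_push_allowed(refname, user):
    """Policy check for a server-side 'update' hook.

    git calls the update hook once per ref with <refname> <old-sha> <new-sha>;
    exiting non-zero rejects the push for that ref.
    """
    if refname == "refs/heads/master":
        return user in ALLOWED_MASTER_PUSHERS
    return True  # every other branch is writable by anyone
```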
4,114,722 | 2010-11-06T19:17:00.000 | 5 | 0 | 0 | 0 | python,html,parsing | 4,115,108 | 5 | false | 1 | 0 | html5lib cannot parse half of what's "out there"
That sounds extremely implausible. html5lib uses exactly the same algorithm that's also implemented in recent versions of Firefox, Safari and Chrome. If that algorithm broke half the web, I think we would have heard. If you have particular problems with it, do file bugs. | 2 | 15 | 0 | I'm trying to parse some html in Python. There were some methods that actually worked before... but nowadays there's nothing I can actually use without workarounds.
beautifulsoup has problems after SGMLParser went away
html5lib cannot parse half of what's "out there"
lxml is trying to be "too correct" for typical html (attributes and tags cannot contain unknown namespaces, or an exception is thrown, which means almost no page with Facebook connect can be parsed)
What other options are there these days? (if they support xpath, that would be great) | Python html parsing that actually works | 0.197375 | 0 | 1 | 3,978 |
4,114,722 | 2010-11-06T19:17:00.000 | 1 | 0 | 0 | 0 | python,html,parsing | 4,114,746 | 5 | false | 1 | 0 | I think the problem is that most HTML is ill-formed. XHTML tried to fix that, but it never really caught on enough - especially as most browsers do "intelligent workarounds" for ill-formed code.
Even a few years ago I tried to parse HTML for a primitive spider-type app, and found the problems too difficult. I suspect writing your own might be on the cards, although we can't be the only people with this problem! | 2 | 15 | 0 | I'm trying to parse some html in Python. There were some methods that actually worked before... but nowadays there's nothing I can actually use without workarounds.
beautifulsoup has problems after SGMLParser went away
html5lib cannot parse half of what's "out there"
lxml is trying to be "too correct" for typical html (attributes and tags cannot contain unknown namespaces, or an exception is thrown, which means almost no page with Facebook connect can be parsed)
What other options are there these days? (if they support xpath, that would be great) | Python html parsing that actually works | 0.039979 | 0 | 1 | 3,978 |
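For what it's worth, even the standard library's tolerant parser (html.parser in Python 3, the HTMLParser module in Python 2) accepts namespaced tag soup such as Facebook's <fb:...> elements without raising, unlike a strict XML parser. A sketch:

```python
from html.parser import HTMLParser  # the HTMLParser module in Python 2

class LinkCollector(HTMLParser):
    """Collects href attributes while shrugging off tag soup."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

p = LinkCollector()
# The namespaced <fb:login-button> tag would make a strict XML parser bail;
# the tolerant HTML parser just reports it as an ordinary start tag.
p.feed('<a href="/x">x</a><fb:login-button show-faces="true"></fb:login-button>')
print(p.links)  # ['/x']
```

No XPath support, of course; this only shows that lenient parsing of real-world markup is possible without third-party libraries.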
4,115,033 | 2010-11-06T20:34:00.000 | 0 | 0 | 1 | 0 | python,thread-safety,queue | 4,115,106 | 3 | false | 0 | 0 | Why can't you just add the final step to the queue ? | 2 | 0 | 0 | how can i update a shared variable between different threading.Thread in python?
let's say that i have 5 threads working down a Queue.Queue(). after the queue is done i want to do another operation but i want it to happen only once.
is it possible to share and update a variable between the threads. so when Queue.empty() is True this event gets fired but if one of the threads is doing it i don't want the others to do that too because i would get wrong results.
EDIT
i have a queue which reflects files on the filesystem.
the files are uploaded to a site by the threads and while each thread is uploading the file it updates a set() of keywords i got from the files.
when the queue is empty i need to contact the site and tell it to update the keyword counts. right now each thread does this and i get an update for each thread which is bad.
i also tried to empty the set but it doesn't work.
keywordset = set()
hkeywordset = set()

def worker():
    while queue:
        if queue.empty():
            if len(keywordset) or len(hkeywordset):
                # as soon as the queue is empty we send the keywords and hkeywords to the
                # imageapp so it can start updating
                apiurl = update_cols_url
                if apiurl[-1] != '/':
                    apiurl = apiurl + '/'
                try:
                    keywords = []
                    data = dict(keywords=list(keywordset), hkeywords=list(hkeywordset))
                    post = dict(data=simplejson.dumps(data))
                    post = urllib.urlencode(post)
                    urllib2.urlopen(apiurl, post)
                    hkeywordset.clear()
                    keywordset.clear()
                    print 'sent keywords and hkeywords to imageapp...'
                except Exception, e: print e
        # we get the task from the Queue and process the file based on the action
        task = queue.get()
        print str(task)
        try:
            reindex = task['reindex']
        except:
            reindex = False
        data = updater.process_file(task['filename'], task['action'], task['fnamechange'], reindex)
        # we parse the image's keywords and hkeywords and add them to the sets above
        # for later processing
        try:
            for keyword in data['keywords']:
                keywordset.add(keyword)
        except: pass
        try:
            for hkw in data['hkeywords']:
                hkeywordset.add(hkw)
        except: pass
        queue.task_done()

for i in range(num_worker_threads):
    t = threading.Thread(target=worker)
    t.daemon = True
    t.start()

while 1:
    line = raw_input('type \'q\' to stop filewatcher... or \'qq\' to force quit...\n').strip()
this is what i was trying basically. but of course the queue.empty() part gets executed by as many threads as i have. | python threading and shared variables | 0 | 0 | 0 | 3,372 |
4,115,033 | 2010-11-06T20:34:00.000 | 0 | 0 | 1 | 0 | python,thread-safety,queue | 4,115,111 | 3 | false | 0 | 0 | Have another queue where you place this event after first queue is empty.
Or have special thread for this event. | 2 | 0 | 0 | how can i update a shared variable between different threading.Thread in python?
let's say that i have 5 threads working down a Queue.Queue(). after the queue is done i want to do another operation but i want it to happen only once.
is it possible to share and update a variable between the threads. so when Queue.empty() is True this event gets fired but if one of the threads is doing it i don't want the others to do that too because i would get wrong results.
EDIT
i have a queue which reflects files on the filesystem.
the files are uploaded to a site by the threads and while each thread is uploading the file it updates a set() of keywords i got from the files.
when the queue is empty i need to contact the site and tell it to update the keyword counts. right now each thread does this and i get an update for each thread which is bad.
i also tried to empty the set but it doesn't work.
keywordset = set()
hkeywordset = set()

def worker():
    while queue:
        if queue.empty():
            if len(keywordset) or len(hkeywordset):
                # as soon as the queue is empty we send the keywords and hkeywords to the
                # imageapp so it can start updating
                apiurl = update_cols_url
                if apiurl[-1] != '/':
                    apiurl = apiurl + '/'
                try:
                    keywords = []
                    data = dict(keywords=list(keywordset), hkeywords=list(hkeywordset))
                    post = dict(data=simplejson.dumps(data))
                    post = urllib.urlencode(post)
                    urllib2.urlopen(apiurl, post)
                    hkeywordset.clear()
                    keywordset.clear()
                    print 'sent keywords and hkeywords to imageapp...'
                except Exception, e: print e
        # we get the task from the Queue and process the file based on the action
        task = queue.get()
        print str(task)
        try:
            reindex = task['reindex']
        except:
            reindex = False
        data = updater.process_file(task['filename'], task['action'], task['fnamechange'], reindex)
        # we parse the image's keywords and hkeywords and add them to the sets above
        # for later processing
        try:
            for keyword in data['keywords']:
                keywordset.add(keyword)
        except: pass
        try:
            for hkw in data['hkeywords']:
                hkeywordset.add(hkw)
        except: pass
        queue.task_done()

for i in range(num_worker_threads):
    t = threading.Thread(target=worker)
    t.daemon = True
    t.start()

while 1:
    line = raw_input('type \'q\' to stop filewatcher... or \'qq\' to force quit...\n').strip()
this is what i was trying basically. but of course the part of queue.empty() gets exectued as many times as threads i have. | python threading and shared variables | 0 | 0 | 0 | 3,372 |
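A small Python 3 sketch of the suggested fix — protecting the "queue is empty" action with a lock-guarded flag so only one worker performs it. The queue size, worker count, and the finalize action are made up for illustration:

```python
import threading
import queue

q = queue.Queue()
for i in range(20):
    q.put(i)

finish_lock = threading.Lock()
finished = False
finalize_calls = []          # stands in for "contact the site once"

def finalize_once():
    # Only the first worker to grab the lock flips the flag; later callers
    # see finished == True and return without repeating the work.
    global finished
    with finish_lock:
        if finished:
            return
        finished = True
    finalize_calls.append('update keyword counts')

def worker():
    while True:
        try:
            task = q.get(timeout=0.1)
        except queue.Empty:
            finalize_once()
            return
        # ... process/upload the file here ...
        q.task_done()

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(finalize_calls))   # 1 -- the empty-queue action ran exactly once
```

All five workers race to call finalize_once(), but the flag check inside the lock guarantees the side effect happens a single time.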
4,119,698 | 2010-11-07T21:07:00.000 | 2 | 0 | 1 | 0 | python,optimization,linked-list,set | 4,121,220 | 6 | false | 0 | 0 | A dictionary is the collection you want in this case because it has O(1) find and delete. There is a cost you will incur, which is generating a key for each object when you want to add/remove, but it'll be significantly faster than the O(n) approach of scanning a list. Generating a key for your objects is correct in this situation. If you have a primary key (did they come from a DB?) that will reduce the hash function to a property lookup, and you'll achieve near-perfect performance.
You seem to think that using a dictionary as a data structure in this case is a bad thing - it isn't at all. The purpose of a dictionary is to quickly find items in a collection. This is what you need, use it. | 4 | 4 | 0 | Suppose I have profiled my program, and the vast majority of runtime is spent in method 'remove' of 'list' objects. The program manipulates a collection of collections, and the collections do not need to be ordered. What would be the most straightforward way to implement these collections in python (preferably using standard python collections) so that collection.remove(item) is inexpensive both when collection is the outer collection and item is an inner collection and when collection is an inner collection and item is just an immutable object.
The problem with using sets here is that sets cannot contain mutable collections, so the inner sets would have to be frozensets, but then removing items is no longer so cheap.
The best solution I've come upon so far was suggested by someone as an answer here that apparently was deleted shortly after. They suggested using a dict. This would work, but you would have to generate arbitrary id's for each item then, so it's a bit awkward. Another alternative is to used a linked list, but that would be awkward too, since linked lists aren't part of the standard library. | What type of collection of mutable objects will allow me to quickly remove items in python? | 0.066568 | 0 | 0 | 200 |
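A sketch of that dict-based approach (the class and its name are hypothetical, not from the answers): keying each item by its id() gives O(1) add/remove and allows mutable members, which set() forbids:

```python
class Collection:
    """Unordered collection of (possibly mutable) items with O(1) add and
    remove, backed by a dict keyed on each item's id()."""

    def __init__(self):
        self._items = {}

    def add(self, item):
        self._items[id(item)] = item

    def remove(self, item):
        del self._items[id(item)]

    def __contains__(self, item):
        return id(item) in self._items

    def __iter__(self):
        return iter(self._items.values())


lst = [1, 2, 3]
inner = Collection()
inner.add(lst)          # mutable items are fine, unlike with set()
outer = Collection()
outer.add(inner)        # collections can nest
outer.remove(inner)     # O(1), no linear scan as with list.remove()
print(inner in outer)   # False
```

Because id() is derived from the object itself, no arbitrary IDs have to be generated and stored alongside the items.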
4,119,698 | 2010-11-07T21:07:00.000 | 2 | 0 | 1 | 0 | python,optimization,linked-list,set | 4,121,026 | 6 | false | 0 | 0 | They suggested using a dict. This would work, but you would have to generate arbitrary id's for each item then, so it's a bit awkward.
You delete them by instance? Using a dict approach, you can always use id() as their "arbitrary" ID?
One dict for groups with their id() as key, an inner dict keyed on each individual's id(). And another global dict of individuals with their id() as key.
It's not clear if an individual can be in multiple groups... If so, you would need to verify whether the individual is in any group before deleting it. | 4 | 4 | 0 | Suppose I have profiled my program, and the vast majority of runtime is spent in method 'remove' of 'list' objects. The program manipulates a collection of collections, and the collections do not need to be ordered. What would be the most straightforward way to implement these collections in python (preferably using standard python collections) so that collection.remove(item) is inexpensive both when collection is the outer collection and item is an inner collection and when collection is an inner collection and item is just an immutable object.
The problem with using sets here is that sets cannot contain mutable collections, so the inner sets would have to be frozensets, but then removing items is no longer so cheap.
The best solution I've come upon so far was suggested by someone as an answer here that apparently was deleted shortly after. They suggested using a dict. This would work, but you would have to generate arbitrary id's for each item then, so it's a bit awkward. Another alternative is to used a linked list, but that would be awkward too, since linked lists aren't part of the standard library. | What type of collection of mutable objects will allow me to quickly remove items in python? | 0.066568 | 0 | 0 | 200 |
4,119,698 | 2010-11-07T21:07:00.000 | -1 | 0 | 1 | 0 | python,optimization,linked-list,set | 4,120,881 | 6 | false | 0 | 0 | Why not have something like a master list of sets and then another set that contains the indices to the list for the set you want to keep track of? Sure it might be a little extra work, but you should be able to abstract it out into a class. | 4 | 4 | 0 | Suppose I have profiled my program, and the vast majority of runtime is spent in method 'remove' of 'list' objects. The program manipulates a collection of collections, and the collections do not need to be ordered. What would be the most straightforward way to implement these collections in python (preferably using standard python collections) so that collection.remove(item) is inexpensive both when collection is the outer collection and item is an inner collection and when collection is an inner collection and item is just an immutable object.
The problem with using sets here is that sets cannot contain mutable collections, so the inner sets would have to be frozensets, but then removing items is no longer so cheap.
The best solution I've come upon so far was suggested by someone as an answer here that apparently was deleted shortly after. They suggested using a dict. This would work, but you would have to generate arbitrary id's for each item then, so it's a bit awkward. Another alternative is to used a linked list, but that would be awkward too, since linked lists aren't part of the standard library. | What type of collection of mutable objects will allow me to quickly remove items in python? | -0.033321 | 0 | 0 | 200 |
4,119,698 | 2010-11-07T21:07:00.000 | 1 | 0 | 1 | 0 | python,optimization,linked-list,set | 4,119,787 | 6 | false | 0 | 0 | If you are spending a lot of time remove-ing elements from a list, perhaps you should consider filtering it instead? In other words. make a large initial list and then subsequent generators consuming elements in the list. | 4 | 4 | 0 | Suppose I have profiled my program, and the vast majority of runtime is spent in method 'remove' of 'list' objects. The program manipulates a collection of collections, and the collections do not need to be ordered. What would be the most straightforward way to implement these collections in python (preferably using standard python collections) so that collection.remove(item) is inexpensive both when collection is the outer collection and item is an inner collection and when collection is an inner collection and item is just an immutable object.
The problem with using sets here is that sets cannot contain mutable collections, so the inner sets would have to be frozensets, but then removing items is no longer so cheap.
The best solution I've come upon so far was suggested by someone as an answer here that apparently was deleted shortly after. They suggested using a dict. This would work, but you would have to generate arbitrary id's for each item then, so it's a bit awkward. Another alternative is to used a linked list, but that would be awkward too, since linked lists aren't part of the standard library. | What type of collection of mutable objects will allow me to quickly remove items in python? | 0.033321 | 0 | 0 | 200 |
4,120,169 | 2010-11-07T22:58:00.000 | 0 | 1 | 1 | 0 | c++,visual-studio-2010,static,linker,boost-python | 4,121,910 | 2 | true | 0 | 0 | Which libraries are linked depends on the settings of your project. There are two possibilities: you can build against
statically
dynamically
linked versions of the C runtime libs. Depending on which option is selected, Boost sends the proper #pragma to the linker. These options need to be set consistently in all projects which constitute your program. So go to "properties -> c++ -> code generation" (or similar, I am just guessing, don't have VS up and running right now) and be sure that the right option is set (consistently). Of course, you must have compiled the Boost libraries in the required format before... | 2 | 1 | 0 | I got a VS10 project. I want to build some C++ code so I can use it in Python. I followed the Boost tutorial and got it working. However, VS keeps linking boost-python-vc100-mt-gd-1_44.lib, but it's just a wrapper which calls boost-python-vc100-mt-gd-1_44.dll. That's why I need to copy the .dll with my .dll(.pyd) file. So I want to link boost::python statically into that .dll(.pyd) file. But I just can't find any configuration option in VS or in the compiler and linker manual. The weirdest thing is I've got one older project using boost::filesystem with the very same config, but that project links against libboost-filesystem-*.lib, which is a static lib, so it's OK. I've been googling for a couple of hours without any success and it drives me crazy.
Thanks for any help or suggestion. | MSVC - boost::python static linking to .dll (.pyd) | 1.2 | 0 | 0 | 4,303 |
4,120,169 | 2010-11-07T22:58:00.000 | 1 | 1 | 1 | 0 | c++,visual-studio-2010,static,linker,boost-python | 4,146,530 | 2 | false | 0 | 0 | You probably don't want to do that. Statically linked Boost python has a number of problems and quirks when there are more then one boost python based library imported. "But I only have one" you say. Can you guarantee that your users won't have another? That you might want to use another in the future? Stick with the DLL. Distributing another DLL is really not that big a deal. Just put it side-by-side in the same directory. | 2 | 1 | 0 | I got a VS10 project. I want to build some C++ code so I can use it in python. I followed the boost tutorial and got it working. However VS keeps to link boost-python-vc100-mt-gd-1_44.lib but it's just a wrapper which calls boost-python-vc100-mt-gd-1_44.dll. That's why I need to copy the .dll with my .dll(.pyd) file. So I want to link boost:python statically to that .dll(.pyd) file. But I just can't find any configuration option in VS or in the compiler and linker manual. The weirdest thing is I've got one older project using boost::filesystem with the very same config but that project links against libboost-filesystem-*.lib which is static lib so it's ok. I've been googling for couple of hours without any success and it drivers me crazy.
Thanks for any help or suggestion. | MSVC - boost::python static linking to .dll (.pyd) | 0.099668 | 0 | 0 | 4,303 |
4,120,705 | 2010-11-08T01:35:00.000 | -2 | 0 | 1 | 0 | python,list,dictionary | 4,120,742 | 7 | false | 0 | 0 | I know nothing about Python, but I guess you can traverse a list and remove entries by key from the dictionary? | 1 | 7 | 0 | How do you remove all elements from the dictionary whose key is a element of a list? | Remove all elements from the dictionary whose key is an element of a list | -0.057081 | 0 | 0 | 6,938 |
4,121,751 | 2010-11-08T06:38:00.000 | 1 | 0 | 1 | 0 | python,string,escaping,backslash | 4,121,817 | 4 | false | 0 | 0 | Prefixing a string literal with r (stands for "raw") disables escape-sequence processing, so backslashes inside the literal are kept as-is. For example:
print r'\b\n\\'
will output
\b\n\\
Have I understood the question correctly? | 1 | 1 | 0 | I'm getting some content from Twitter API, and I have a little problem, indeed I sometimes get a tweet ending with only one backslash.
More precisely, I'm using simplejson to parse Twitter stream.
How can I escape this backslash?
From what I have read, such a raw string shouldn't exist...
Even if I add one backslash (with two, in fact) I still get an error, as I suspected (since I have an odd number of backslashes).
Any idea?
I can just forget about these tweets too, but I'm still curious about that.
Thanks : ) | [Python]How to deal with a string ending with one backslash? | 0.049958 | 0 | 1 | 1,886 |
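To illustrate the distinction the answers are circling (the sample string is invented): the r prefix only affects source-code literals, while a backslash that arrives in data is already a single real character and needs no extra escaping — and if the text must go back into JSON, the library does the escaping:

```python
import json

# In source code, '\\' is two characters that denote one backslash.
# A backslash read from the Twitter API is already that one character.
tweet = 'just testing\\'
print(tweet[-1])                # prints a single backslash

# Re-embedding into JSON: json.dumps escapes the backslash for you,
# and json.loads round-trips it back unchanged.
encoded = json.dumps(tweet)
decoded = json.loads(encoded)
print(decoded == tweet)  # True
```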
4,122,794 | 2010-11-08T10:00:00.000 | 34 | 0 | 0 | 0 | python,excel,csv | 4,122,980 | 2 | true | 0 | 0 | You're using open('file.csv', 'w')--try open('file.csv', 'wb').
The Python csv module requires output files be opened in binary mode. | 2 | 15 | 1 | I have a csv file which contains rows from a sqlite3 database. I wrote the rows to the csv file using python.
When I open the csv file with MS Excel, a blank row appears below every row, but the file in Notepad is fine (without any blanks).
Does anyone know why this is happening and how I can fix it?
Edit: I used the strip() function for all the attributes before writing a row.
Thanks. | Csv blank rows problem with Excel | 1.2 | 1 | 0 | 7,209 |
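A sketch of the fix and its modern equivalent (file name arbitrary): on Python 2 the csv module needed binary mode ('wb'); on Python 3 the same doubled-line-ending problem is avoided with newline='', which stops the text layer from adding a second \r to csv's own \r\n terminators:

```python
import csv
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'file.csv')

# Python 2 era: open(path, 'wb'). Python 3 equivalent shown here.
with open(path, 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['id', 'name'])
    writer.writerow([1, 'alice'])

with open(path, 'rb') as f:
    raw = f.read()
print(raw)  # b'id,name\r\n1,alice\r\n' -- exactly one line ending per row
```

Without newline='' (or 'wb' on Python 2/Windows), each row would end in \r\r\n, which Excel renders as a blank row.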
4,122,794 | 2010-11-08T10:00:00.000 | 0 | 0 | 0 | 0 | python,excel,csv | 4,122,816 | 2 | false | 0 | 0 | the first that comes into my mind (just an idea) is that you might have used "\r\n" as row delimiter (which is shown as one linebrak in notepad) but excel expects to get only "\n" or only "\r" and so it interprets this as two line-breaks. | 2 | 15 | 1 | I have a csv file which contains rows from a sqlite3 database. I wrote the rows to the csv file using python.
When I open the csv file with MS Excel, a blank row appears below every row, but the file in Notepad is fine (without any blanks).
Does anyone know why this is happening and how I can fix it?
Edit: I used the strip() function for all the attributes before writing a row.
Thanks. | Csv blank rows problem with Excel | 0 | 1 | 0 | 7,209 |
4,122,940 | 2010-11-08T10:17:00.000 | 1 | 0 | 0 | 0 | python,caching,postgresql,nlp,tokenize | 4,151,273 | 2 | false | 0 | 0 | I store tokenized text in a MySQL database. While I don't always like the overhead of communication with the database, I've found that there are lots of processing tasks that I can ask the database to do for me (like search the dependency parse tree for complex syntactic patterns). | 1 | 2 | 0 | I have a simple question. I'm doing some light crawling so new content arrives every few days. I've written a tokenizer and would like to use it for some text mining purposes. Specifically, I'm using Mallet's topic modeling tool and one of the pipe is to tokenize the text into tokens before further processing can be done. With the amount of text in my database, it takes a substantial amount of time tokenizing the text (I'm using regex here).
As such, is it the norm to store the tokenized text in the db, so that tokenized data is readily available and tokenizing can be skipped when I need it for other text mining purposes such as topic modeling or POS tagging? What are the cons of this approach? | Storing tokenized text in the db? | 0.099668 | 0 | 0 | 894 |
4,126,041 | 2010-11-08T16:55:00.000 | 4 | 0 | 0 | 1 | python,scheduled-tasks | 4,126,085 | 2 | false | 0 | 0 | The OS provides a tool called 'cron' that's for exactly this purpose. You shouldn't need to modify your script at all to make use of it.
At a terminal command prompt, type man cron for more info. | 1 | 2 | 0 | I run Mac OS X.
So I have completed a Python script that essentially parses a few sites online and uploads a particular file to an online server. Essentially, I wish to run this script automatically from my computer about 20 times a day. Is there a solution to schedule this script to run at fixed time points every day? Does this require compiling the Python code into a .exe file?
Thanks a lot! | Scheduling Tasks | 0.379949 | 0 | 0 | 1,544 |
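No compilation to a .exe is needed — the script stays a plain .py file. A crontab sketch (paths and times are placeholders; add it with crontab -e):

```
# m    h    dom mon dow  command
# run at minute 0 and 30 of every hour from 07:00 to 16:30 = 20 runs/day
0,30   7-16 *   *   *    /usr/bin/python /path/to/script.py >> /tmp/script.log 2>&1
```

The >> redirection keeps a log of each run; 2>&1 folds errors into the same file so failures are visible.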
4,126,247 | 2010-11-08T17:19:00.000 | 2 | 0 | 0 | 0 | python,frameworks,zope,zope.interface | 4,126,734 | 1 | true | 0 | 0 | You can upload Zope the same way you would upload anything else, but it's not suitable for deploying on many shared webhosts, as many of them would not like you running the Zope process due to the amount of resources it can consume.
You are best off trying to find a webhost that supports Zope, or using a VPS. | 1 | 0 | 0 | Hey, I'd like to know how to upload my Zope site to my FTP. I have a domain, and I'd like to upload it the way I upload normal files to my FTP.
Thanks. | How to upload zope site on my ftp? | 1.2 | 0 | 0 | 175 |
4,128,555 | 2010-11-08T22:07:00.000 | 0 | 1 | 1 | 0 | java,c++,python,build,scons | 4,129,038 | 2 | false | 0 | 0 | I once checked out CMake (for C++), I liked it very much. It's easy to use yet powerful and quite similar to Make syntax. It also has Java support. | 1 | 2 | 0 | Let's say I have a bunch of small targets in different programming languages (C++, Java, Python, etc), with inter programming language dependencies (Java project depends on a C++, Python depends on C++). How can one build/compile them?
I tried scons and more recently gyp. I don't remember what issues I had with scons. Gyp has a very ugly language definition plus I had to hack ant scripts in order to build my java targets. | How to build/compile C++, Java and Python projects? | 0 | 0 | 0 | 664 |
4,129,658 | 2010-11-09T01:19:00.000 | 3 | 1 | 0 | 0 | python,unicode | 4,129,665 | 3 | true | 0 | 0 | That's URL-encoded UTF-8. URL-decode it, then decode it as UTF-8. | 2 | 0 | 0 | What would be this encoding's name?
smb://nas/music/_lib/v/voivod/voivod-rrr%C3%B6%C3%B6%C3%B6aaarrr/01%20-%20voivod%20-%20rrr%C3%B6%C3%B6%C3%B6aaarrr%20-%20korg%C3%BCll_the_exterminator.mp3
I would like to convert such string to unicode using Python. How would I do that? | unknown encoding to unicode | 1.2 | 0 | 0 | 791 |
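A sketch of the two steps the answer describes (Python 3 shown; on Python 2, the era of the question, the equivalent is urllib.unquote(s).decode('utf-8')):

```python
from urllib.parse import unquote

s = 'voivod-rrr%C3%B6%C3%B6%C3%B6aaarrr'
decoded = unquote(s)   # percent-decodes; the bytes are read as UTF-8 by default
print(decoded)         # voivod-rrröööaaarrr
```

Each %C3%B6 pair is the UTF-8 encoding of ö, so decoding recovers the original Unicode text.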
4,129,658 | 2010-11-09T01:19:00.000 | 0 | 1 | 0 | 0 | python,unicode | 4,129,716 | 3 | false | 0 | 0 | Try urllib.unquote(). | 2 | 0 | 0 | What would be this encoding's name?
smb://nas/music/_lib/v/voivod/voivod-rrr%C3%B6%C3%B6%C3%B6aaarrr/01%20-%20voivod%20-%20rrr%C3%B6%C3%B6%C3%B6aaarrr%20-%20korg%C3%BCll_the_exterminator.mp3
I would like to convert such string to unicode using Python. How would I do that? | unknown encoding to unicode | 0 | 0 | 0 | 791 |
4,129,697 | 2010-11-09T01:28:00.000 | 1 | 0 | 0 | 0 | python,matplotlib,pyqt,vispy | 4,129,787 | 4 | false | 0 | 1 | I recommend using matplotlib in interactive mode: if you call .show() once, it will pop up in its own window; if you don't, it exists only in memory and can be written to a file when you're done with it. | 1 | 20 | 0 | I have a complicated algorithm that updates 3 histograms that are stored in arrays. I want to debug my algorithm, so I was thinking of showing the arrays as histograms in a user interface. What is the easiest way to do this? (Rapid application development is more important than optimized code.)
I have some experience with Qt (in C++) and some experience with matplotlib.
(I'm going to leave this question open for a day or two because it's hard for me to evaluate the solutions without a lot more experience that I don't have. Hopefully, the community's votes will help choose the best answer.) | How do I display real-time graphs in a simple UI for a python program? | 0.049958 | 0 | 0 | 37,324 |
4,129,858 | 2010-11-09T02:01:00.000 | 1 | 0 | 0 | 0 | python,flash,html5-video,screen-capture,image-capture | 4,134,375 | 2 | false | 1 | 0 | What about capturing it inside Flash and sending it as BiteArray to the server? | 1 | 4 | 0 | Part of a web application I am developing requires the ability to capture still images from a Flash or HTML5 video playing with in a browser.
Is there a Python library out there that could help me along with this task?
UPDATE
Actually, users of this web app will also have to have the ability to
Draw a crop box on top of the Flash/HTML5 video player
Be able to resize that box if necessary
Capture the image with in the crop box frame
Have that image be saves and sent to the server
Also, this video image crop/capture tool will also have to be restricted to the perimeter of the video frame. I don't want users getting confused and potentially capturing an image outside of the video frame because all we are concerned about is the content of the video. | Recommendations for a Python library that can capture still images from a Flash/HTML5 video? | 0.099668 | 0 | 0 | 835 |
4,130,199 | 2010-11-09T03:14:00.000 | 0 | 0 | 1 | 0 | python,excel,worksheet-function | 7,522,661 | 2 | false | 0 | 0 | I don't know about Excel functions, but for Excel add-ins it's possible.
Add an add-in to your Excel application, go to the Visual Basic Editor, double-click the add-in, and use these passwords.
Add-in -- Password
Internet Assistant VBA -- Weezaarde!?
Autosave Addin -- Wildebeest!!
Analysis Toolpak VBA -- Wildebeest!!
Solver Add-in -- Wildebeest!!
All forms are designed in Excel form sheets, as used in older versions of Excel.
[]'s | 2 | 0 | 0 | I wish to manually code VLOOKUP in python, so is it possible to see the VBA code behind VLOOKUP?
I remember a presentation in my school by a guest speaker showing that the Excel Functions are just macro/vba codes. Can someone please show me the way to view the code for Excel Worksheet functions? | Accessing Vlookup Macro Code in Excel? | 0 | 0 | 0 | 1,169 |
4,130,199 | 2010-11-09T03:14:00.000 | 0 | 0 | 1 | 0 | python,excel,worksheet-function | 4,131,708 | 2 | false | 0 | 0 | I do not know of a way to view the XL code for VLOOKUP (it's almost certainly written in C). The basic algorithms used in VLOOKUP are linear search for unsorted data and binary search for sorted data. There are many examples available on the web of implementing these algorithms. | 2 | 0 | 0 | I wish to manually code VLOOKUP in python, so is it possible to see the VBA code behind VLOOKUP?
I remember a presentation in my school by a guest speaker showing that the Excel Functions are just macro/vba codes. Can someone please show me the way to view the code for Excel Worksheet functions? | Accessing Vlookup Macro Code in Excel? | 0 | 0 | 0 | 1,169 |
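Since Excel's real implementation isn't viewable, here is a rough Python sketch of VLOOKUP's observable behavior (an approximation, not Excel's source): binary search for the sorted/approximate case, linear search for exact matches:

```python
from bisect import bisect_right

def vlookup(value, table, col_index, range_lookup=True):
    """Approximate VLOOKUP: table is a list of row tuples whose first
    column is the lookup key; col_index is 1-based, as in Excel.
    range_lookup=True assumes the key column is sorted and returns the
    row with the largest key <= value (binary search); False does an
    exact match via linear search."""
    if range_lookup:
        keys = [row[0] for row in table]
        i = bisect_right(keys, value)
        if i == 0:
            raise LookupError('#N/A')      # value below the smallest key
        return table[i - 1][col_index - 1]
    for row in table:
        if row[0] == value:
            return row[col_index - 1]
    raise LookupError('#N/A')

table = [(1, 'one'), (3, 'three'), (5, 'five')]
print(vlookup(3, table, 2))          # three  (exact hit)
print(vlookup(4, table, 2))          # three  (approximate: largest key <= 4)
print(vlookup(5, table, 2, False))   # five   (exact linear search)
```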
4,130,355 | 2010-11-09T03:49:00.000 | 0 | 0 | 0 | 0 | python,macos,matplotlib,fink | 53,662,588 | 11 | false | 0 | 0 | Simply aliasing a new command to launch python in ~/.bash_profile will do the trick.
alias vpython3=/Library/Frameworks/Python.framework/Versions/3.6(replace with your own python version)/bin/python3
then 'source ~/.bash_profile' and use vpython3 to launch python3.
Explanation: Python is actually installed as a framework on Mac by default, but using virtualenv links your python3 command to the created virtual environment instead of the framework directory above (run 'which python3' in a terminal and you'll see that). Presumably Matplotlib has to find the bin/, include/, lib/, etc. inside the Python framework. | 3 | 62 | 1 | I am getting this error:
/sw/lib/python2.7/site-packages/matplotlib/backends/backend_macosx.py:235:
UserWarning: Python is not installed as a framework. The MacOSX
backend may not work correctly if Python is not installed as a
framework. Please see the Python documentation for more information on
installing Python as a framework on Mac OS X
I installed python27 using fink and it's using the default matplotlib is using macosx framework. | python matplotlib framework under macosx? | 0 | 0 | 0 | 25,060 |
4,130,355 | 2010-11-09T03:49:00.000 | 18 | 0 | 0 | 0 | python,macos,matplotlib,fink | 4,131,726 | 11 | true | 0 | 0 | There are two ways Python can be built and installed on Mac OS X. One is as a traditional flat Unix-y shared library. The other is known as a framework install, a file layout similar to other frameworks on OS X where all of the component directories (include, lib, bin) for the product are installed as subdirectories under the main framework directory. The Fink project installs Pythons using the Unix shared library method. Most other distributors, including the Apple-supplied Pythons in OS X, the python.org installers, and the MacPorts project, install framework versions of Python. One of the advantages of a framework installation is that it will work properly with various OS X API calls that require a window manager connection (generally GUI-related interfaces) because the Python interpreter is packaged as an app bundle within the framework.
If you do need the functions in matplotlib that require the GUI functions, the simplest approach may be to switch to MacPorts which also packages matplotlib (port py27-matplotlib) and its dependencies. If so, be careful not to mix packages between Fink and MacPorts. It's best to stick with one or the other unless you are really careful. Adjust your shell path accordingly; it would be safest to remove all of the Fink packages and install MacPorts versions. | 3 | 62 | 1 | I am getting this error:
/sw/lib/python2.7/site-packages/matplotlib/backends/backend_macosx.py:235:
UserWarning: Python is not installed as a framework. The MacOSX
backend may not work correctly if Python is not installed as a
framework. Please see the Python documentation for more information on
installing Python as a framework on Mac OS X
I installed python27 using fink and it's using the default matplotlib is using macosx framework. | python matplotlib framework under macosx? | 1.2 | 0 | 0 | 25,060 |
4,130,355 | 2010-11-09T03:49:00.000 | 31 | 0 | 0 | 0 | python,macos,matplotlib,fink | 33,873,802 | 11 | false | 0 | 0 | Optionally you could use the Agg backend which requires no extra installation of anything. Just put backend : Agg into ~/.matplotlib/matplotlibrc | 3 | 62 | 1 | I am getting this error:
/sw/lib/python2.7/site-packages/matplotlib/backends/backend_macosx.py:235:
UserWarning: Python is not installed as a framework. The MacOSX
backend may not work correctly if Python is not installed as a
framework. Please see the Python documentation for more information on
installing Python as a framework on Mac OS X
I installed python27 using fink and it's using the default matplotlib is using macosx framework. | python matplotlib framework under macosx? | 1 | 0 | 0 | 25,060 |
4,130,813 | 2010-11-09T05:36:00.000 | 0 | 1 | 0 | 1 | python,google-app-engine,search,full-text-search,full-text-indexing | 5,072,790 | 2 | true | 1 | 0 | GAE has announced plans to offer full-text searching natively in the Datastore soon. | 1 | 7 | 0 | What should I do for fast, full-text searching on App Engine with as little work as possible (and as little Java — I’m doing Python.)? | How should I do full-text searching on App Engine? | 1.2 | 0 | 0 | 1,246 |
4,131,096 | 2010-11-09T06:38:00.000 | 3 | 0 | 0 | 0 | python,django | 4,131,110 | 3 | false | 1 | 0 | I'm partial to the #django channel on Freenode's IRC server. A few big names in the Django community hang around there. ( irc://irc.freenode.net/#django since SO's Markdown processor doesn't like irc:// in URLs) | 1 | 1 | 0 | Are there communities where expert Django developers (ideally looking for jobs) like to hang out? Stackoverflow excluded :) | Where are some good places to reach great Django developers? | 0.197375 | 0 | 0 | 179 |
4,131,120 | 2010-11-09T06:42:00.000 | 2 | 0 | 0 | 0 | python,django,admin | 4,131,152 | 2 | false | 1 | 0 | Let's say you create a model called Entry, i.e. an extremely simple blog. You write a view to show all the entries on the front page. Now how do you put those entries on the webpage? How do you edit them?
Enter the admin. You register your model with the admin, create a superuser and log in to your running webapp. It's there, with a fully functional interface for creating the entries. | 2 | 0 | 0 | On the website, it says this:
One of the most powerful parts of
Django is the automatic admin
interface. It reads metadata in your
model to provide a powerful and
production-ready interface that
content producers can immediately use
to start adding content to the site.
In this document, we discuss how to
activate, use and customize Django’s
admin interface.
So what? I still don't understand what the Admin interface is used for. Is it like a PHPMYADMIN? Why would I ever need this? | What are the best uses for the Django Admin app? | 0.197375 | 0 | 0 | 170 |
4,131,120 | 2010-11-09T06:42:00.000 | 1 | 0 | 0 | 0 | python,django,admin | 4,131,237 | 2 | false | 1 | 0 | Some of the uses I can think of -
Editing data or Adding data. If you have any sort of data entry tasks, the admin app handles it like a breeze. Django’s admin especially shines when non-technical users need to be able to enter data.
If you have understood above point, then this makes it possible for programmers to work along with designers and content producers!
Permissions - An admin interface can be used to give permissions, create groups with similar permissions, make more than one administrators etc. (i.e. if you have a login kinda site).
Inspecting data models - when I have defined a new model, I call it up in the admin and enter some dummy data.
Managing acquired data - basically what a moderator does in case of auto-generated content sites.
Block out buggy features - Also if you tweak it a little, you can create an interface wherein say some new feature you coded is buggy. You could disable it from admin interface.
Think of the power this gives in a big organization where everyone need not know programming. | 2 | 0 | 0 | On the website, it says this:
One of the most powerful parts of
Django is the automatic admin
interface. It reads metadata in your
model to provide a powerful and
production-ready interface that
content producers can immediately use
to start adding content to the site.
In this document, we discuss how to
activate, use and customize Django’s
admin interface.
So what? I still don't understand what the Admin interface is used for. Is it like a PHPMYADMIN? Why would I ever need this? | What are the best uses for the Django Admin app? | 0.099668 | 0 | 0 | 170 |
4,131,327 | 2010-11-09T07:24:00.000 | 2 | 0 | 0 | 0 | python,ajax,dojo,flask | 4,131,349 | 1 | true | 1 | 0 | No any special secure actions required. Consider ajax request as any other client request. | 1 | 1 | 0 | Hi I am trying to secure a server function being used for an Ajax request, so that the function is not accessed for any sort of malicious activity. I have done the following till now:-
I am checking whether a valid session is present while the function is being called.
I am using POST rather than GET
I look for specific headers by using request.is_xhr else I induce a redirect.
I have compressed the javascript using dojo shrinksafe(..i am using dojo..)
What else can and should be done here. Need your expert advice on this.
(NB-I am using Flask and Dojo) | Handling and securing server functions in an ajax request..python | 1.2 | 0 | 1 | 280 |
4,131,582 | 2010-11-09T08:12:00.000 | 5 | 0 | 1 | 0 | python | 4,153,989 | 3 | false | 0 | 0 | Because self is just a parameter to a function, like any other parameter. For example, the following call:
a = A()
a.x()
essentially gets converted to:
a = A()
A.x(a)
Not making self a reserved word has also had the fortunate result that, for class methods, you can rename the first parameter to something else (normally cls). And of course for static methods, the first parameter has no relationship to the instance it is called on, e.g.:
class A:
    def method(self):
        pass

    @classmethod
    def class_method(cls):
        pass

    @staticmethod
    def static_method():
        pass

class B(A):
    pass

b = B()
b.method()          # self is b
b.class_method()    # cls is B
b.static_method()   # no parameter passed | 1 | 13 | 0 | As far as I know, self is just a very powerful convention and it's not really a reserved keyword in Python. Java and C# have this as a keyword. I really find it weird that they didn't make a reserved keyword for it in Python. Is there any reason behind this? | Why is self only a convention and not a real Python keyword? | 0.321513 | 0 | 0 | 2,989 |
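The bound-call/explicit-call equivalence described in the answer can be checked directly (tiny made-up class):

```python
class A:
    def x(self):
        return ('called on', id(self))

a = A()
# a.x() is just sugar for A.x(a); self is an ordinary first parameter.
print(a.x() == A.x(a))  # True
```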
4,131,768 | 2010-11-09T08:37:00.000 | 0 | 0 | 1 | 0 | python,persistence,repr | 4,131,799 | 2 | false | 0 | 0 | How to get it from the database is generally uninteresting. Return the way to recreate the object from scratch, e.g. SomeModel(field1, field2, ...). | 1 | 2 | 0 | What would be the best way to handle the __repr__() function for an object that is made persistent? For example, one that is representing a row in a database (relational or object).
According to Python docs, __repr__() should return a string that would re-create the object with eval() such that (roughly) eval(repr(obj)) == obj, or bracket notation for inexact representations. Usually this would mean dumping all the data that can't be regenerated by the object into the string. However, for persistent objects recreating the object could be as simple as retrieving the data from the database.
So, for such objects then, all object data or just the primary key in the __repr__() string? | Persistent objects and __repr__ | 0 | 0 | 0 | 130 |
4,132,207 | 2010-11-09T09:37:00.000 | 1 | 0 | 1 | 0 | python,set,memory-management | 4,132,256 | 6 | false | 0 | 0 | Try using the array module. | 1 | 4 | 0 | Setup
Python 2.6
Ubuntu x64
I have a set of unique integers with values between 1 and 50 million. New integers are added at random e.g. numberset.add(random.randint(1, 50000000)). I need to be able to quickly add new integers and quickly check if an integer is already present.
Problem
After a while, the set grows too large for my low memory system and I experience MemoryErrors.
Question
How can I achieve this while using less memory? What's the fastest way to do this using the disk without reconfiguring the system e.g. swapfiles? Should I use a database file like sqlite? Is there a library that will compress the integers in memory? | Massive Python set of integers using too much memory | 0.033321 | 0 | 0 | 2,296 |
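One low-memory alternative to a set here is a bit array: membership for integers in a fixed range [0, N) needs only N/8 bytes, about 6 MB for 50 million values. This is a sketch of that idea (not from any of the posted answers), using a plain bytearray:

```python
class BitSet:
    """Membership test for integers in [0, size), one bit per value."""
    def __init__(self, size):
        self.bits = bytearray((size + 7) // 8)   # ~6 MB for 50 million

    def add(self, n):
        self.bits[n >> 3] |= 1 << (n & 7)

    def __contains__(self, n):
        return bool(self.bits[n >> 3] & (1 << (n & 7)))

numbers = BitSet(50000000)
numbers.add(1234567)
```

Both add and membership are O(1), and memory use is fixed up front regardless of how many integers are stored.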
4,132,493 | 2010-11-09T10:09:00.000 | 3 | 1 | 1 | 1 | python,performance | 4,132,499 | 4 | false | 0 | 0 | Sure. Use one of the variants that uses a JITer, such as IronPython, Jython, or PyPy. | 2 | 2 | 0 | There was the Unladen Swallow project that aims to get a faster python, but it seems to be stopped :
Is there a way to get a faster python, I mean faster than C-Python, without the use of psyco ? | When a faster python? | 0.148885 | 0 | 0 | 312 |
4,132,493 | 2010-11-09T10:09:00.000 | 1 | 1 | 1 | 1 | python,performance | 4,132,523 | 4 | false | 0 | 0 | I saw pypy to be very fast on some tests : have a look | 2 | 2 | 0 | There was the Unladen Swallow project that aims to get a faster python, but it seems to be stopped :
Is there a way to get a faster python, I mean faster than C-Python, without the use of psyco ? | When a faster python? | 0.049958 | 0 | 0 | 312 |
4,133,327 | 2010-11-09T11:51:00.000 | 0 | 0 | 0 | 0 | python,c,numpy,hdf5,pytables | 30,009,455 | 3 | false | 0 | 0 | HDF5 takes care of binary compatibility of structures for you. You simply have to tell it what your structs consist of (dtype) and you'll have no problems saving/reading record arrays - this is because the type system is basically 1:1 between numpy and HDF5. If you use H5py I'm confident to say the IO should be fast enough provided you use all native types and large batched reads/writes - the entire dataset of allowable. After that it depends on chunking and what filters (shuffle, compression for example) - it's also worth noting sometimes those can speed up by greatly reducing file size so always look at benchmarks. Note that the the type and filter choices are made on the end creating the HDF5 document.
If you're trying to parse HDF5 yourself, you're doing it wrong. Use the C++ and C apis if you're working in C++/C. There are examples of so called "compound types" on the HDF5 groups website. | 1 | 2 | 1 | when I used NumPy I stored it's data in the native format *.npy. It's very fast and gave me some benefits, like this one
I could read *.npy from C code as
simple binary data (I mean *.npy is
binary-compatible with C structures)
Now I'm dealing with HDF5 (PyTables at this moment). As I read in the tutorial, they are using NumPy serializer to store NumPy data, so I can read these data from C as from simple *.npy files?
Is HDF5's NumPy data binary-compatible with C structures too?
UPD :
I have a Matlab client reading from hdf5, but I don't want to read hdf5 from C++ because reading binary data from *.npy is many times faster, so what I really need is a way of reading hdf5 from C++ (binary compatibility)
So I'm already using two ways of transferring data - *.npy (read from C++ as bytes, from Python natively) and hdf5 (accessed from Matlab)
And if possible, I want to use only one way - hdf5, but to do this I have to find a way to make hdf5 binary-compatible with C++ structures. Please help: if there is some way to turn off compression in hdf5, or something else that makes hdf5 binary-compatible with C++ structures, tell me where I can read about it... | HDF5 : storing NumPy data | 0 | 0 | 0 | 6,260 |
4,133,816 | 2010-11-09T12:50:00.000 | 7 | 0 | 1 | 0 | python,md5,hashlib | 4,133,902 | 2 | false | 0 | 0 | Why do you think it's inefficient to make a new one? It's a small object, and objects are created and destroyed all the time. Use a new one, and don't worry about it. | 1 | 21 | 0 | How do you flush (or reset) and reuse an instance of hashlib.md5 in python? If I am performing multiple hashing operations in a script, it seems inefficient to use a new instance of hashlib.md5 each time, but from the python documentation I don't see any way to flush or reset the instance. | How to reuse an instance of hashlib.md5 | 1 | 0 | 0 | 8,207 |
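In line with the accepted point above, the idiomatic pattern is simply to construct a fresh md5 object per hashing operation; there is nothing to flush. A small sketch:

```python
import hashlib

def md5_hex(data):
    # A new hash object per call is cheap and avoids any reset logic.
    return hashlib.md5(data).hexdigest()

digest = md5_hex(b"abc")
```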
4,135,836 | 2010-11-09T16:10:00.000 | 0 | 0 | 0 | 0 | python,xml,serialization,pickle | 4,136,375 | 2 | true | 0 | 0 | So what you're looking for is a python library that spits out arbitrary XML for your objects? You don't need to control the format, so you can't be bothered to actually write something that iterates over the relevant properties of your data and generates the XML using one of the existing tools?
This seems like a bad idea. Arbitrary XML serialization doesn't sound like a good way to move forward. Any format that includes all of pickle's features is going to be ugly, verbose, and very nasty to use. It will not be simple. It will not translate well into Java.
What does your data look like?
If you tell us precisely what aspects of pickle you need (and why lxml.objectify doesn't fulfill those), we will be better able to help you.
Have you considered using JSON for your serialization? It's easy to parse, natively supports python-like data structures, and has wide-reaching support. As an added bonus, it doesn't open your code to all kinds of evil exploits the way the native pickle module does.
Honestly, you need to bite the bullet and define a format, and build a serializer using the standard XML tools, if you absolutely must use XML. Consider JSON. | 1 | 4 | 0 | For a while I've been using a package called "gnosis-utils" which provides an XML pickling service for Python. This class works reasonably well, however it seems to have been neglected by it's developer for the last four years.
At the time we originally selected Gnosis, it was the only XML serialization tool for Python. The advantage of Gnosis was that it provided a set of classes whose function was very similar to the built-in Python XML pickler. It produced XML which Python developers found easy to read, but non-Python developers found confusing.
Now that the project has grown we have a new requirement: we need to be able to exchange XML with our colleagues who prefer Java or .NET. These non-Python developers will not be using Python - they intend to produce XML directly, hence we need to simplify the format of the XML.
So are there any alternatives to Gnosis. Our requirements:
Must work on Python 2.4 / Windows x86 32bit
Output must be XML, as simple as possible
API must resemble Pickle as closely as possible
Performance is not hugely important
Of course we could simply adapt Gnosis, however we'd prefer to simply use a component which already provides the functions we require (assuming that it exists). | XML object serialization in python, are there any alternatives to Gnosis? | 1.2 | 0 | 1 | 2,017 |
4,136,800 | 2010-11-09T17:49:00.000 | 5 | 0 | 0 | 0 | python,performance,sqlite | 4,136,841 | 3 | true | 0 | 0 | SQLite does not run in a separate process. So you don't actually have any extra overhead from IPC. But IPC overhead isn't that big, anyway, especially over e.g., UNIX sockets. If you need multiple writers (more than one process/thread writing to the database simultaneously), the locking overhead is probably worse, and MySQL or PostgreSQL would perform better, especially if running on the same machine. The basic SQL supported by all three of these databases is the same, so benchmarking isn't that painful.
You generally don't have to do the same type of debugging on SQL statements as you do on your own implementation. SQLite works, and is fairly well debugged already. It is very unlikely that you'll ever have to debug "OK, that row exists, why doesn't the database find it?" and track down a bug in index updating. Debugging SQL is completely different than procedural code, and really only ever happens for pretty complicated queries.
As for debugging your code, you can fairly easily centralize your SQL calls and add tracing to log the queries you are running, the results you get back, etc. The Python SQLite interface may already have this (not sure, I normally use Perl). It'll probably be easiest to just make your existing Table class a wrapper around SQLite.
I would strongly recommend not reinventing the wheel. SQLite will have far fewer bugs, and save you a bunch of time. (You may also want to look into Firefox's fairly recent switch to using SQLite to store history, etc., I think they got some pretty significant speedups from doing so.)
Also, SQLite's well-optimized C implementation is probably quite a bit faster than any pure Python implementation. | 3 | 4 | 0 | I noticed that a significant part of my (pure Python) code deals with tables. Of course, I have class Table which supports the basic functionality, but I end up adding more and more features to it, such as queries, validation, sorting, indexing, etc.
I am starting to wonder if it's a good idea to remove my class Table, and refactor the code to use a regular relational database that I will instantiate in-memory.
Here's my thinking so far:
Performance of queries and indexing would improve but communication between Python code and the separate database process might be less efficient than between Python functions. I assume that is too much overhead, so I would have to go with sqlite which comes with Python and lives in the same process. I hope this means it's a pure performance gain (at the cost of non-standard SQL definition and limited features of sqlite).
With SQL, I will get a lot more powerful features than I would ever want to code myself. Seems like a clear advantage (even with sqlite).
I won't need to debug my own implementation of tables, but debugging mistakes in SQL is hard since I can't put breakpoints or easily print out interim state. I don't know how to judge the overall impact on my code reliability and debugging time.
The code will be easier to read, since instead of calling my own custom methods I would write SQL (everyone who needs to maintain this code knows SQL). However, the Python code to deal with database might be uglier and more complex than the code that uses pure Python class Table. Again, I don't know which is better on balance.
Any corrections to the above, or anything else I should think about? | Pros and cons of using sqlite3 vs custom table implementation | 1.2 | 1 | 0 | 903 |
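To make the trade-off concrete, this is a hypothetical sketch of the in-memory approach with the stdlib sqlite3 module (table and column names are made up, not from the question):

```python
import sqlite3

# ":memory:" keeps the database inside the Python process - no separate
# server process and no IPC overhead, as the accepted answer notes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT, qty INTEGER)")
conn.execute("CREATE INDEX idx_items_name ON items (name)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [("apple", 3), ("pear", 5)])

rows = conn.execute("SELECT qty FROM items WHERE name = ?",
                    ("pear",)).fetchall()
```

Queries, indexing, and sorting then come for free, at the cost of expressing the logic in SQL rather than in methods on a custom Table class.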
4,136,800 | 2010-11-09T17:49:00.000 | 4 | 0 | 0 | 0 | python,performance,sqlite | 4,136,876 | 3 | false | 0 | 0 | You could try to make a sqlite wrapper with the same interface as your class Table, so that you keep your code clean and you get the sqlite performences. | 3 | 4 | 0 | I noticed that a significant part of my (pure Python) code deals with tables. Of course, I have class Table which supports the basic functionality, but I end up adding more and more features to it, such as queries, validation, sorting, indexing, etc.
I am starting to wonder if it's a good idea to remove my class Table, and refactor the code to use a regular relational database that I will instantiate in-memory.
Here's my thinking so far:
Performance of queries and indexing would improve but communication between Python code and the separate database process might be less efficient than between Python functions. I assume that is too much overhead, so I would have to go with sqlite which comes with Python and lives in the same process. I hope this means it's a pure performance gain (at the cost of non-standard SQL definition and limited features of sqlite).
With SQL, I will get a lot more powerful features than I would ever want to code myself. Seems like a clear advantage (even with sqlite).
I won't need to debug my own implementation of tables, but debugging mistakes in SQL is hard since I can't put breakpoints or easily print out interim state. I don't know how to judge the overall impact on my code reliability and debugging time.
The code will be easier to read, since instead of calling my own custom methods I would write SQL (everyone who needs to maintain this code knows SQL). However, the Python code to deal with database might be uglier and more complex than the code that uses pure Python class Table. Again, I don't know which is better on balance.
Any corrections to the above, or anything else I should think about? | Pros and cons of using sqlite3 vs custom table implementation | 0.26052 | 1 | 0 | 903 |
4,136,800 | 2010-11-09T17:49:00.000 | 0 | 0 | 0 | 0 | python,performance,sqlite | 4,136,862 | 3 | false | 0 | 0 | If you're doing database work, use a database, if your not, then don't. Using tables, it sound's like you are. I'd recommend using an ORM to make it more pythonic. SQLAlchemy is the most flexible (though it's not strictly just an ORM). | 3 | 4 | 0 | I noticed that a significant part of my (pure Python) code deals with tables. Of course, I have class Table which supports the basic functionality, but I end up adding more and more features to it, such as queries, validation, sorting, indexing, etc.
I am starting to wonder if it's a good idea to remove my class Table, and refactor the code to use a regular relational database that I will instantiate in-memory.
Here's my thinking so far:
Performance of queries and indexing would improve but communication between Python code and the separate database process might be less efficient than between Python functions. I assume that is too much overhead, so I would have to go with sqlite which comes with Python and lives in the same process. I hope this means it's a pure performance gain (at the cost of non-standard SQL definition and limited features of sqlite).
With SQL, I will get a lot more powerful features than I would ever want to code myself. Seems like a clear advantage (even with sqlite).
I won't need to debug my own implementation of tables, but debugging mistakes in SQL is hard since I can't put breakpoints or easily print out interim state. I don't know how to judge the overall impact on my code reliability and debugging time.
The code will be easier to read, since instead of calling my own custom methods I would write SQL (everyone who needs to maintain this code knows SQL). However, the Python code to deal with database might be uglier and more complex than the code that uses pure Python class Table. Again, I don't know which is better on balance.
Any corrections to the above, or anything else I should think about? | Pros and cons of using sqlite3 vs custom table implementation | 0 | 1 | 0 | 903 |
4,137,509 | 2010-11-09T19:17:00.000 | 0 | 0 | 0 | 0 | python-3.x,readlines | 14,191,615 | 3 | false | 0 | 0 | As it is about two years from asking the question, you probably already know the reason. Basically, Python 3 strings are Unicode strings. To make them abstract you need to tell Python what encoding is used for the file.
Python 2 strings are actually byte sequences and Python feels fine to read whatever bytes from the file. Some of the characters are interpreted (newlines, tabs,...), but the rest is left untouched.
Python 3 open() is similar to Python 2 codecs.open().
... the time has come ... to close the question by accepting one of the answers. | 1 | 2 | 0 | Python 3.1.2 (r312:79147, Nov 9 2010, 09:41:54)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> open("/home/madsc13ntist/test_file.txt", "r").readlines()[6]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.1/codecs.py", line 300, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xae in position 2230: unexpected code byte
and yet...
Python 2.4.3 (#1, Sep 8 2010, 11:37:47)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> open("/home/madsc13ntist/test_file.txt", "r").readlines()[6]
'2010-06-14 21:14:43 613 xxx.xxx.xxx.xxx 200 TCP_NC_MISS 4198 635 GET http www.thelegendssportscomplex.com 80 /thumbnails/t/sponsors/145x138/007.gif - - - DIRECT www.thelegendssportscomplex.com image/gif http://www.thelegendssportscomplex.com/ "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; InfoPath.1; MS-RTC LM 8)" OBSERVED "Sports/Recreation" - xxx.xxx.xxx.xxx xxx.xxx.xxx.xxx\r\n'
does anyone have any idea why .readlines()[6] doesn't work for python-3 but does work in 2.4?
also... I thought 0xAE was ® | python3: readlines() indices issue? | 0 | 0 | 0 | 1,413 |
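The usual Python 3 fix is to tell open() which encoding the log file actually uses: 0xAE is indeed ® in Latin-1, but it is not a valid byte sequence in UTF-8, which Python 3 assumes by default here. A small demonstration (the byte string simulates the log line from the question):

```python
# A byte string containing the 0xAE (Latin-1 (R)) byte from the traceback.
raw = b"registered\xae mark"

# Decoding as UTF-8 fails, just as readlines() did above:
utf8_failed = False
try:
    raw.decode("utf-8")
except UnicodeDecodeError:
    utf8_failed = True

# Decoding with the real encoding succeeds. For a file, the equivalent is
# open(path, encoding="latin-1") or open(path, errors="replace").
text = raw.decode("latin-1")
```

Python 2's readlines() never hit this because its str type is raw bytes; no decoding happened at all.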
4,137,859 | 2010-11-09T19:52:00.000 | 1 | 0 | 1 | 0 | php,python,programming-languages | 4,137,916 | 3 | false | 0 | 0 | It is a design decision the authors of the languages chose to make. Generally spoken (and this is of course not always the case) the nicer the syntax a language has the slower it tends to be. Case in point: Ruby. | 2 | 0 | 0 | I don't mean for this question to be about Python vs PHP but about languages in general. I use Python and PHP as examples because I know them.
In Python we can do mytoken = mystring.split(mydelimiter)[1], accessing the list returned by str.split without ever assigning it to a list.
In PHP we must put the array in memory before accessing it, as in $mytokenarray = explode($mydelimiter, $mystring); $mytoken = $mytokenarray[1];. As far as I know it is not possible to do this in one statement as in Python.
What is going on behind this difference? | Why are functions indexable in Python but not PHP? | 0.066568 | 0 | 0 | 203 |
4,137,859 | 2010-11-09T19:52:00.000 | 1 | 0 | 1 | 0 | php,python,programming-languages | 24,936,359 | 3 | false | 0 | 0 | This feature is now possible with PHP as of version 5.4. | 2 | 0 | 0 | I don't mean for this question to be about Python vs PHP but about languages in general. I use Python and PHP as examples because I know them.
In Python we can do mytoken = mystring.split(mydelimiter)[1], accessing the list returned by str.split without ever assigning it to a list.
In PHP we must put the array in memory before accessing it, as in $mytokenarray = explode($mydelimiter, $mystring); $mytoken = $mytokenarray[1];. As far as I know it is not possible to do this in one statement as in Python.
What is going on behind this difference? | Why are functions indexable in Python but not PHP? | 0.066568 | 0 | 0 | 203 |
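The Python side of the comparison, as a runnable check (the string and delimiter are illustrative):

```python
mystring = "alpha,beta,gamma"
# split() returns a list, and the returned value can be indexed directly,
# with no intermediate variable needed:
mytoken = mystring.split(",")[1]
```

As the second answer notes, PHP gained the same ability (function array dereferencing, e.g. explode(",", $s)[1]) in version 5.4.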
4,137,917 | 2010-11-09T19:57:00.000 | 1 | 1 | 1 | 0 | python | 4,138,407 | 1 | true | 0 | 0 | A quick fix to your problem is to add the full path of the egg to a .pth file in a directory on sys.path (in your case, site-packages). | 1 | 0 | 0 | I needed to make a change to a 3rd party library, so I edited the files in the egg (which is not zipped). The egg lives in site-packages in a virtualenv. Everything works fine on my dev machine, but when I copied the egg to another machine, the module can no longer be found for import.
I'm sure I went about this the wrong way, but I'm hoping there's a way to fix it. | Copied python egg no longer works | 1.2 | 0 | 0 | 89 |
4,138,504 | 2010-11-09T20:52:00.000 | 0 | 0 | 0 | 1 | python,mysql,permissions,configuration-files | 4,139,191 | 2 | false | 0 | 0 | Are you sure that file isn't hardcoded in some other portion of the build process? Why not just add it to your $PATH for the duration of the build?
Does the script need to write that file for some reason? Does the build script use su or sudo to attempt to become some other user? Are you absolutely sure about both the permissions and the fact that you ran the script as root?
It's a really weird thing if you still can't get to it. Are you using a chroot or a virtualenv? | 2 | 3 | 0 | The context: I'm working on some Python scripts on an Ubuntu server. I need to use some code written in Python 2.7 but our server has Python 2.5. We installed 2.7 as a second instance of Python so we wouldn't break anything reliant on 2.5. Now I need to install the MySQLdb package. I assume I can't do this the easy way by running apt-get install python-mysqldb because it will likely just reinstall to python 2.5, so I am just trying to install it manually.
The Problem: In the MySQL-python-1.2.3 directory I try to run python2.7 setup.py build and get an error that states:
sh: /etc/mysql/my.cnf: Permission denied
along with a Traceback that says setup.py couldn't find the file.
Note that the setup.py script looks for a mysql_config file in the $PATH directories by default, but the mysql config file for our server is /etc/mysql/my.cnf, so I changed the package's site.cfg file to match. I checked the permissions for the file, which are -rw-r--r--. I tried running the script as root and got the same error.
Any suggestions? | Trouble installing MySQLdb for second version of Python | 0 | 1 | 0 | 429 |
4,138,504 | 2010-11-09T20:52:00.000 | 2 | 0 | 0 | 1 | python,mysql,permissions,configuration-files | 4,139,563 | 2 | false | 0 | 0 | As far as I'm aware, there is a very significant difference between "mysql_config" and "my.cnf".
"mysql_config" is usually located in the "bin" folder of your MySQL install and when executed, spits out various filesystem location information about your install.
"my.cnf" is a configuration script used by MySQL itself.
In short, when the script asks for "mysql_config", it should be taken to literally mean the executable file with a name of "mysql_config" and not the textual configuration file you're feeding it. MYSQLdb needs the "mysql_config" file so that it knows which libraries to use. That's it. It does not read your MySQL configuration directly.
The errors you are experiencing can be put down to;
It's trying to open the wrong file and running into permission trouble.
Even after it has tried to open that file, it still can't find the "mysql_config" file.
From here, you need to locate your MySQL installation's "bin" folder and check that it contains "mysql_config". Then you can edit the folder path into the "site.cfg" file and you should be good to go. | 2 | 3 | 0 | The context: I'm working on some Python scripts on an Ubuntu server. I need to use some code written in Python 2.7 but our server has Python 2.5. We installed 2.7 as a second instance of Python so we wouldn't break anything reliant on 2.5. Now I need to install the MySQLdb package. I assume I can't do this the easy way by running apt-get install python-mysqldb because it will likely just reinstall to python 2.5, so I am just trying to install it manually.
The Problem: In the MySQL-python-1.2.3 directory I try to run python2.7 setup.py build and get an error that states:
sh: /etc/mysql/my.cnf: Permission denied
along with a Traceback that says setup.py couldn't find the file.
Note that the setup.py script looks for a mysql_config file in the $PATH directories by default, but the mysql config file for our server is /etc/mysql/my.cnf, so I changed the package's site.cfg file to match. I checked the permissions for the file, which are -rw-r--r--. I tried running the script as root and got the same error.
Any suggestions? | Trouble installing MySQLdb for second version of Python | 0.197375 | 1 | 0 | 429 |
4,138,886 | 2010-11-09T21:35:00.000 | 4 | 0 | 0 | 1 | python,linux,unix,cross-compiling | 4,139,691 | 1 | true | 0 | 0 | The standard freeze tool (from Tools/freeze) can be used to make fully-standalone binaries on Unix, including all extension modules and builtins (and omitting anything that is not directly or indirectly imported). | 1 | 2 | 0 | Anybody know how this can be done? I took a look at cx_Freeze, but it seems that it doesn't compile everything necessary into one binary (i.e., the python builtins aren't present). | How to create Unix and Linux binaries from Python code | 1.2 | 0 | 0 | 3,222 |
4,139,167 | 2010-11-09T22:06:00.000 | 0 | 1 | 1 | 0 | python,module | 30,371,608 | 4 | false | 0 | 0 | Before importing any user defined module,specify the path to the directory containg that module
sys.path.append("path to your directory") | 1 | 0 | 0 | I'm going through the Python tutorial, and I got to the section on modules.
I created a fibo.py file in Users/Me/code/Python (s
Now I'm back in the interpreter and I can't seem to import the module, because I don't understand how to import a relative (or absolute) path.
I'm also thoroughly confused by how and if to modify PYTHONPATH and/or sys.path.
All the other 'import module' questions on here seem to be | Importing a module in Python | 0 | 0 | 0 | 6,865 |
4,140,396 | 2010-11-10T01:15:00.000 | 5 | 0 | 0 | 0 | python,user-interface,listbox,tkinter,ttk | 4,141,054 | 1 | true | 0 | 1 | you cannot. The widget doesn't support that.
you can't disable certain items, the widget doesn't support a state attribute. That being said, you can monitor the selection and do the appropriate thing if the user selects something that is disabled, and use the item foreground to denote disabled-ness.
You will need to bind to keypress events and manage the behavior yourself. It's not particularly difficult, just a little tedious.
the text widget might be your best bet, though you'll have to add bindings to mimic the default bindings of the listbox.
Bottom line: Tkinter provides nothing that directly supports what you want to do, but the building blocks are all there. You'll just have to build it yourself. | 1 | 4 | 0 | I'm studying the Tkinter Listbox widget and have been unable to find solutions for the following functionality:
How can I create non-selectable horizontal separator items, eg. separators equivalent to the Tkinter Menu widget's .add_separator()? (Using chars like dashes and underscores looks awful).
How can I disable a specific item? I tried using .itemconfig( index, state='disabled' ) without success.
How can I enable keyboard navigation, eg. when a user's keyboard input automatically scrolls one forward to the closest item that begins with the text the user typed? Must I bind(<KeyPress>, ...) and manage this behavior myself?
Would some of the above functionality be easier to implement using a Text widget or the ttk.Treeview widget? | Tkinter: Listbox separators, disabled items, keyboard navigation? | 1.2 | 0 | 0 | 2,343 |
4,140,619 | 2010-11-10T02:05:00.000 | 1 | 0 | 0 | 1 | python,perl,bash | 4,140,643 | 9 | false | 0 | 0 | Read in the text file, create a hash with the current file name, so files['1500000704'] = 'SH103239' and so on. Then go through the files in the current directory, grab the new filename from the hash, and rename it. | 1 | 7 | 0 | I have a folder full of image files such as
1500000704_full.jpg
1500000705_full.jpg
1500000711_full.jpg
1500000712_full.jpg
1500000714_full.jpg
1500000744_full.jpg
1500000745_full.jpg
1500000802_full.jpg
1500000803_full.jpg
I need to rename the files based on a lookup from a text file which has entries such as,
SH103239 1500000704
SH103240 1500000705
SH103241 1500000711
SH103242 1500000712
SH103243 1500000714
SH103244 1500000744
SH103245 1500000745
SH103252 1500000802
SH103253 1500000803
SH103254 1500000804
So, I want the image files to be renamed,
SH103239_full.jpg
SH103240_full.jpg
SH103241_full.jpg
SH103242_full.jpg
SH103243_full.jpg
SH103244_full.jpg
SH103245_full.jpg
SH103252_full.jpg
SH103253_full.jpg
SH103254_full.jpg
What is the easiest way to do this? Can anyone write me a quick command or script which can do this for me, please? I have a lot of these image files and manual renaming isn't feasible.
I am on ubuntu but depending on the tool I can switch to windows if need be. Ideally I would love to have it in bash script so that I can learn more or simple perl or python.
Thanks
EDIT: Had to Change the file names | Bulk renaming of files based on lookup | 0.022219 | 0 | 0 | 5,950 |
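The first answer's hash-lookup approach can be sketched in Python like this (paths and the lookup-file name are illustrative): build an old-ID-to-new-ID mapping from the text file, then rename every matching image in the directory.

```python
import os

def load_mapping(lookup_path):
    # Each lookup line is "NEWID OLDID", e.g. "SH103239 1500000704"
    mapping = {}
    with open(lookup_path) as f:
        for line in f:
            new_id, old_id = line.split()
            mapping[old_id] = new_id
    return mapping

def rename_images(directory, mapping, suffix="_full.jpg"):
    for name in os.listdir(directory):
        if not name.endswith(suffix):
            continue
        old_id = name[:-len(suffix)]
        if old_id in mapping:
            os.rename(os.path.join(directory, name),
                      os.path.join(directory, mapping[old_id] + suffix))
```

Files whose IDs don't appear in the lookup file are left untouched, which matters here since the lookup contains entries (like 1500000804) with no corresponding image.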
4,140,656 | 2010-11-10T02:13:00.000 | 1 | 1 | 1 | 1 | python,asynchronous,libevent,gevent | 4,140,840 | 2 | true | 0 | 0 | The biggest issue is that without threads, a block for one client will cause a block for all clients. For example, if one client requests a resource (file on disk, paged-out memory, etc.) that requires the OS to block the requesting process, then all clients will have to wait. A multithreaded server can block just the one client and continue to serve others.
That said, if the above scenario is unlikely (that is, all clients will request the same resources), then event-driven is the way to go. | 2 | 9 | 0 | I am writing now writing some evented code (In python using gevent) and I use the nginx as a web server and I feel both are great. I was told that there is a trade off with events but was unable to see it. Can someone please shed some light?
James | Why shouldn't I use async (evented) IO | 1.2 | 0 | 0 | 1,703 |
4,140,656 | 2010-11-10T02:13:00.000 | 9 | 1 | 1 | 1 | python,asynchronous,libevent,gevent | 4,291,204 | 2 | false | 0 | 0 | The only difficulty of evented programming is that you mustn't block, ever. This can be hard to achieve if you use some libraries that were designed with threads in mind. If you don't control these libraries, a fork() + message ipc is the way to go. | 2 | 9 | 0 | I am writing now writing some evented code (In python using gevent) and I use the nginx as a web server and I feel both are great. I was told that there is a trade off with events but was unable to see it. Can someone please shed some light?
James | Why shouldn't I use async (evented) IO | 1 | 0 | 0 | 1,703 |