Q: Python data structure/object to model static multidimensional table

I'm just getting back into coding after a few-year hiatus and I'm trying to model multi-tiered static forms in a way that lets me grab and perform operations on a specific form level or an entire sub-tree. Example form hierarchy:

MyForm
    Question 1
    Part 1
        Question 1.1
    Part 2
        Question 2.1
        SubPart 1
            Question 2.1.1
            Question 2.1.2
    Question 2

Each Question will have multiple attributes (question text, whether it's a required field, etc.) and Questions can be at any level of the hierarchy. I'd like to be able to do something like this:

>>> MyForm.getQuestionObjects()
[Question1, Question1_1, Question2_1, Question2_1_1, Question2_1_2, Question2]
>>> MyForm.Part2.getQuestionObjects()
[Question2_1, Question2_1_1, Question2_1_2]

and/or stuff like:

>>> # Get questions (return class members)
>>> MyForm.SubPart1.getQuestions()
(('2.1.1 text', otherAttributes), ('2.1.2 text', otherAttributes))
>>> # Get questions -- but replace an attribute on 2.1.2
>>> MyForm.Part2.getQuestions(replace_attr('Question_2_1_2', 'text', 'New text'))
(('2.1.1 text', otherAttributes), ('New text', otherAttributes))

I keep trying to do this with nested/inner classes, which are a big headache and not well supported in Python. But even if I can figure out a solution using nested classes, I keep wondering whether there's a much better way of storing this form info to make it easier for non-coders to edit (probably a plain-text template), and then loading the data at run time, since it's static and I'll need it in memory quite often. The form data won't be updated more than, say, once per month.

Regardless of how I store the data, I'd like to figure out a good data structure to represent, traverse, and operate on it. Is there a way to make a tiered-attributes object like this? Could I do something like multidimensional named tuples? Any other ideas? Thanks for any comments.

A: I'd store such hierarchical data in XML. You can use the xml.etree.ElementTree standard module to load such an XML file into a hierarchical data structure in Python, make changes to it, then save it back to a file. This way you don't have to bother with the actual data structure, since it is built by ElementTree automatically. See xml.etree.ElementTree in the Python Manual. (There are other mature solutions in Python for loading an XML file into various data structures; just pick whichever is easiest for your task.)

A: There's nothing headachey or ill-supported about nested classes in Python; it's just that they don't do anything. Don't expect to get a Java-inner-class-style link back to an owner instance automatically: nested classes are nothing but normal classes whose class object happens to be stored in a property of another class. They don't help you here.

    Is there a way to make a tiered-attributes object like this?

Certainly, but you'd probably be better off extending Python's existing sequence classes to get the benefits of all the existing operations on them. For example, a form 'part' might simply be a list which also has a title:

class FormPart(list):
    def __init__(self, title, *args):
        list.__init__(self, *args)
        self.title = title
    def __repr__(self):
        return 'FormPart(%r, %s)' % (self.title, list.__repr__(self))

Now you can say form = FormPart('My form', [question, formpart, ...]) and access the questions and form parts inside it using normal list indexing and slicing.

Next, a question might be an immutable thing like a tuple, but perhaps you want the items in it to have nice property names. So add that to tuple (note the operator import, which the original snippet relied on implicitly):

import operator

class FormQuestion(tuple):
    def __new__(cls, title, details='', answers=()):
        return tuple.__new__(cls, (title, details, answers))
    def __repr__(self):
        return 'FormQuestion%s' % tuple.__repr__(self)

    title = property(operator.itemgetter(0))
    details = property(operator.itemgetter(1))
    answers = property(operator.itemgetter(2))

Now you can define your data like:

form = FormPart('MyForm', [
    FormQuestion('Question 1', 'Why?', ('Because', 'Why not?')),
    FormPart('Part 1', [
        FormQuestion('Question 1.1', details='just guess'),
    ]),
    FormPart('Part 2', [
        FormQuestion('Question 2.1'),
        FormPart('SubPart 1', [
            FormQuestion('Question 2.1.1', answers=('Yes',)),
        ]),
    ]),
    FormQuestion('Question 2'),
])

(Note the trailing comma in ('Yes',): without it the parentheses are mere grouping and you'd store a bare string instead of a one-element tuple.)

And access it:

>>> form[0]
FormQuestion('Question 1', 'Why?', ('Because', 'Why not?'))
>>> form[1].title
'Part 1'
>>> form[2][1]
FormPart('SubPart 1', [FormQuestion('Question 2.1.1', '', ('Yes',))])

Now for your hierarchy-walking you can define on FormPart:

    def getQuestions(self):
        for child in self:
            for descendant in child.getQuestions():
                yield descendant

and on FormQuestion:

    def getQuestions(self):
        yield self

Now you've got a descendant generator returning FormQuestions:

>>> list(form[1].getQuestions())
[FormQuestion('Question 1.1', 'just guess', ())]
>>> list(form.getQuestions())
[FormQuestion('Question 1', 'Why?', ('Because', 'Why not?')), FormQuestion('Question 1.1', 'just guess', ()), FormQuestion('Question 2.1', '', ()), FormQuestion('Question 2.1.1', '', ('Yes',)), FormQuestion('Question 2', '', ())]

A: Thought I'd share a bit of what I've learned from doing this using ElementTree -- specifically the lxml implementation of ElementTree and lxml.objectify, with some XPath. The XML could also be simplified to <part> and <question> tags with names stored as attributes.

questions.xml:

<myform>
  <question1>Question 1</question1>
  <part1 name="Part 1">
    <question1_1>Question 1.1</question1_1>
  </part1>
  <part2 name="Part 2">
    <question2_1 attribute="stuff">Question 2.1</question2_1>
    <subpart1 name="SubPart 1">
      <question2_1_1>Question 2.1.1</question2_1_1>
      <question2_1_2>Question 2.1.2</question2_1_2>
    </subpart1>
  </part2>
  <question2>Question 2</question2>
</myform>

questions.py:

from lxml import etree
from lxml import objectify
# Objectify adds some Python-object-like syntax and other features.
# Important note: find()/findall() in objectify use ETXPath, which supports
# any XPath expression. The union operator, starts-with(), and local-name()
# expressions below don't work with etree.findall.

# Using etree features
tree = objectify.parse('questions.xml')
root = tree.getroot()

# Dump root to see nodes and attributes (dump() prints directly)
etree.dump(root)

# Pretty-print XML
print etree.tostring(root, pretty_print=True)

# Get part2 & all of its children
part2_and_children = root.findall(".//part2 | //part2//*")

# Get all Part 2 children
part2_children = root.findall(".//*[@name='Part 2']//*[starts-with(local-name(), 'question')]")

# Get the attribute (name, value) pairs for Question 2.1
list_of_dict_of_attributes = root.find(".//question2_1")[0].items()

# Access nodes like Python objects
# Get all part2 question children
part2_question_children = root.part2.findall(".//*[starts-with(local-name(), 'question')]")

# Get text of Question 2.1
text2_1 = root.part2.question2_1.text

# Get the attribute (name, value) pairs for Question 2.1
q2_1_attrs = root.part2.question2_1[0].items()
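As a complement to the lxml examples, the stdlib-only ElementTree approach recommended in the first answer can be sketched like this (a minimal sketch; the XML is a trimmed copy of the questions.xml layout above, and since the stdlib's limited XPath has no local-name() or union operator, the question filter is done in Python):

```python
import xml.etree.ElementTree as ET

# A trimmed-down version of the questions.xml shown above.
XML = """\
<myform>
  <part2 name="Part 2">
    <question2_1 attribute="stuff">Question 2.1</question2_1>
    <subpart1 name="SubPart 1">
      <question2_1_1>Question 2.1.1</question2_1_1>
    </subpart1>
  </part2>
</myform>
"""

root = ET.fromstring(XML)
part2 = root.find("part2")

# Walk the whole part2 subtree, keeping elements whose tag looks like a
# question; iter() yields the element itself plus all descendants.
questions = [el for el in part2.iter() if el.tag.startswith("question")]
texts = [el.text for el in questions]   # ['Question 2.1', 'Question 2.1.1']
attrs = questions[0].attrib             # {'attribute': 'stuff'}
```

The same pattern scales to the full form: any "get questions under this part" operation becomes an `iter()` walk plus a tag filter.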
Q: how to simulate the concept of object identity in Haskell

I am considering the design of an interpreter for a Python-like object-oriented language in Haskell. One particular problem I am facing is related to the concept of object identity. If we consider Python's id(object) function, the definition suggests that it returns the "identity" of an object:

    This is an integer (or long integer) which is guaranteed to be unique and constant for this object during its lifetime. Two objects with non-overlapping lifetimes may have the same id() value. (Implementation note: this is the address of the object.)

What is the general approach to implement a concept like this in Haskell?

A: I assume that your interpreter will work within the State monad. Probably the state will consist of a collection of live objects and whatnot. What you can do is keep track of a list of available (unused) ids (represented as Ints) and annotate every Object with an Int, namely its id. This id is taken from the list of available ids and assigned at creation time, so no two live Object instances can have the same id.

Observe that I talked about Ints. You could go for Integers, but that may eventually become less efficient. Going with Ints does mean that you'll have to add the ids of destroyed objects back to the pool (list) of available ids (otherwise you'd run out, eventually). Thus I envision something like this:

data Object a = Obj Int a

instance Eq (Object a) where
    Obj i _ == Obj j _ = i == j

type InterpreterState a = State [Int] a

createObject :: a -> InterpreterState (Object a)
createObject a = do
    (i:is) <- get
    put is
    return $ Obj i a

destroyObject :: Object a -> InterpreterState ()
destroyObject (Obj i a) = modify (i:)

(Note that InterpreterState will be much more complex in your case, and createObject and destroyObject should also add the object to / remove it from the state. But that's beside the point here.)

A: The answer to this question depends on how you are going to implement the objects you want ids for. What should it look like when two variables contain the same object? You basically need to store references to "mutable" objects in the variables; the question is how exactly you do this. If you just associated simple values with variable names, changes made through one variable would never be reflected in another, and there would be no such thing as object identity.

So the variables need to hold references to the actual current values of the objects. This could look like this:

data VariableContent = Int | String | ObjRef Int | ...
data ObjStore = ObjStore [(Int, Object)]
data ProgramState = ProgramState ObjStore VariableStore ...

Here each ObjRef refers to a value in the ObjStore that can be accessed by the Int id stored in the ObjRef, and this Int would be the right thing for the id(object) function to return. In general, the id function strongly depends on how you actually implement object references.
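For comparison, the free-id-pool bookkeeping from the first answer can be sketched outside Haskell too. Here is the same idea in Python (the class and method names are mine, purely illustrative, not from either answer):

```python
import itertools

class IdPool:
    """Hand out unique object ids; recycle the ids of destroyed objects,
    mirroring the createObject/destroyObject pair above."""
    def __init__(self):
        self._fresh = itertools.count()  # never-used ids: 0, 1, 2, ...
        self._free = []                  # ids returned by destroyed objects

    def acquire(self):
        # Prefer a recycled id; otherwise mint a fresh one.
        return self._free.pop() if self._free else next(self._fresh)

    def release(self, obj_id):
        self._free.append(obj_id)

pool = IdPool()
a = pool.acquire()   # 0
b = pool.acquire()   # 1 -- two live objects never share an id
pool.release(a)      # a's lifetime ends
c = pool.acquire()   # reuses 0, like Python ids of non-overlapping lifetimes
```

Using a lazy counter for fresh ids means the pool never pre-computes an id list, while the free list gives exactly the recycling behavior the Haskell `[Int]` state provides.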
Q: paste.httpserver and slowdown with HTTP/1.1 Keep-Alive; tested with httperf and ab

I have a web server based on paste.httpserver as an adapter between HTTP and WSGI. When I do performance measurements with httperf, I can do over 1,000 requests per second if I start a new connection each time using --num-conn. If I instead reuse the connection using --num-call then I get about 11 requests per second, 1/100th of the speed. If I try ab I get a timeout. My tests are:

% ./httperf --server localhost --port 8080 --num-conn 100
...
Request rate: 1320.4 req/s (0.8 ms/req)
...

and

% ./httperf --server localhost --port 8080 --num-call 100
...
Request rate: 11.2 req/s (89.4 ms/req)
...

Here's a simple reproducible server:

from paste import httpserver

def echo_app(environ, start_response):
    n = 10000
    start_response("200 Ok", [("Content-Type", "text/plain"),
                              ("Content-Length", str(n))])
    return ["*" * n]

httpserver.serve(echo_app, protocol_version="HTTP/1.1")

It's a multi-threaded server, which is hard to profile. Here's a variation which is single-threaded:

from paste import httpserver

class MyHandler(httpserver.WSGIHandler):
    sys_version = None
    server_version = "MyServer/0.0"
    protocol_version = "HTTP/1.1"

    def log_request(self, *args, **kwargs):
        pass


def echo_app(environ, start_response):
    n = 10000
    start_response("200 Ok", [("Content-Type", "text/plain"),
                              ("Content-Length", str(n))])
    return ["*" * n]

# WSGIServerBase is single-threaded
server = httpserver.WSGIServerBase(echo_app, ("localhost", 8080), MyHandler)
server.handle_request()

Profiling that with

% python2.6 -m cProfile -o paste.prof paste_slowdown.py

and hitting it with

% httperf --client=0/1 --server=localhost --port=8080 --uri=/ \
    --send-buffer=4096 --recv-buffer=16384 --num-conns=1 --num-calls=500

I get a profile like:

>>> p = pstats.Stats("paste.prof")
>>> p.strip_dirs().sort_stats("cumulative").print_stats()
Sun Nov 22 21:31:57 2009    paste.prof

         109749 function calls in 46.570 CPU seconds

   Ordered by: cumulative time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000   46.571   46.571 {execfile}
        1    0.001    0.001   46.570   46.570 paste_slowdown.py:2(<module>)
        1    0.000    0.000   46.115   46.115 SocketServer.py:250(handle_request)
        1    0.000    0.000   44.675   44.675 SocketServer.py:268(_handle_request_noblock)
        1    0.000    0.000   44.675   44.675 SocketServer.py:301(process_request)
        1    0.000    0.000   44.675   44.675 SocketServer.py:318(finish_request)
        1    0.000    0.000   44.675   44.675 SocketServer.py:609(__init__)
        1    0.000    0.000   44.675   44.675 httpserver.py:456(handle)
        1    0.001    0.001   44.675   44.675 BaseHTTPServer.py:325(handle)
      501    0.006    0.000   44.674    0.089 httpserver.py:440(handle_one_request)
     2001    0.020    0.000   44.383    0.022 socket.py:373(readline)
      501   44.354    0.089   44.354    0.089 {method 'recv' of '_socket.socket' objects}
        1    1.440    1.440    1.440    1.440 {select.select}
     ....

You can see that nearly all the time is spent in recv.

I decided to bail on httperf and write my own HTTP/1.1-with-keep-alive request and send it using netcat:

GET / HTTP/1.1
Location: localhost
Connection: Keep-Alive
Content-Length: 0

GET / HTTP/1.1
Location: localhost
Connection: Keep-Alive
Content-Length: 0

 ... repeat 97 more times, to have 99 keep-alives in total ...

GET / HTTP/1.1
Location: localhost
Connection: Close
Content-Length: 0

which I sent with

nc localhost 8080 < ~/src/send_to_paste.txt

Total time for 100 requests was 0.03 seconds, so it's very good performance.

This suggests that httperf is doing something wrong (but it's a widely used and respected piece of code), so I tried ab:

% ab -n 100 -k localhost:8080/
This is ApacheBench, Version 1.3d <$Revision: 1.73 $> apache-1.3
Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright (c) 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)...
Server timed out

: Operation now in progress

Instrumenting the server, it handles one request and is then waiting for the second. Any idea of what's going on?

A: After some effort, it seems to be either Nagle's algorithm or delayed ACK, or the interaction between them. It goes away if I do something like:

server.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

How did I track it down? First, I instrumented every recv in socket.py, so I could figure out which recv was waiting. I saw that about 5 recvs out of 11 had a delay of almost 200 ms, and I couldn't figure out why. I then used Wireshark to watch the messages and noticed that it was actually the send from the server to the client which had the delay. That meant something in the TCP layer was holding back the server's outgoing messages. A friend suggested the obvious, and I searched for "200ms socket delay" and found descriptions of this problem.

The Paste trac report is at http://trac.pythonpaste.org/pythonpaste/ticket/392, along with a patch that enables TCP_NODELAY when the handler uses HTTP/1.1.
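For reference, here is the TCP_NODELAY fix in isolation on a plain stdlib socket rather than paste's server object (a minimal sketch; it just sets the option and reads it back):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Disable Nagle's algorithm: small writes go out immediately instead of
# being buffered, which sidesteps the ~200 ms Nagle/delayed-ACK interaction
# described in the answer above.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Read the option back to confirm it took effect (nonzero when set).
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
```

The same two-argument pattern works on `server.socket` in the paste server, which is exactly what the linked patch does for HTTP/1.1 handlers.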
Q: Accessing xrange internal structure

I'm trying to use ctypes to extract data from internal Python structures. Namely, I'm trying to read the fields in an xrange:

typedef struct {
    PyObject_HEAD
    long    start;
    long    step;
    long    len;
} rangeobject;

Is there any standard way of getting at such fields within Python itself?

A: You can access the data you need without ctypes:

>>> obj = xrange(1, 11, 2)
>>> obj.__reduce__()[1]
(1, 11, 2)
>>> len(obj)
5

Note that the __reduce__() method exists exactly for serialization; see the pickle documentation for more information.

Update: But sure, you can access the internal data with ctypes too:

from ctypes import *

PyObject_HEAD = [
    ('ob_refcnt', c_size_t),
    ('ob_type', c_void_p),
]

class XRangeType(Structure):
    _fields_ = PyObject_HEAD + [
        ('start', c_long),
        ('step', c_long),
        ('len', c_long),
    ]

range_obj = xrange(1, 11, 2)

c_range_obj = cast(c_void_p(id(range_obj)), POINTER(XRangeType)).contents
print c_range_obj.start, c_range_obj.step, c_range_obj.len

A: The ctypes module isn't meant for accessing Python internals. ctypes lets you deal with C libraries in C terms, but coding in Python. You probably want a C extension, which in many ways is the opposite of ctypes: with a C extension, you deal with Python code in Python terms, but code in C.

UPDATED: Since you want pure Python, why do you need to access the internals of a built-in xrange object? xrange is very simple: create your own in Python, and do what you want with it.
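Following the second answer's closing suggestion, a pure-Python xrange-alike whose fields are plainly accessible might look like this (a sketch written for modern Python; it is not the C rangeobject the ctypes code above peeks into, just the same three fields computed directly):

```python
class XRange(object):
    """Minimal xrange-alike exposing start/step/len as plain attributes."""
    def __init__(self, start, stop, step=1):
        if step == 0:
            raise ValueError("step must not be zero")
        self.start = start
        self.step = step
        # Element count: ceiling division toward the stop bound, clamped at 0,
        # handled separately for positive and negative steps.
        if step > 0:
            self.len = max(0, (stop - start + step - 1) // step)
        else:
            self.len = max(0, (stop - start + step + 1) // step)

    def __len__(self):
        return self.len

    def __getitem__(self, i):
        # Supporting __getitem__ also makes the object iterable via the
        # old sequence protocol (list(), for-loops, etc.).
        if not 0 <= i < self.len:
            raise IndexError(i)
        return self.start + i * self.step

r = XRange(1, 11, 2)   # mirrors the xrange(1, 11, 2) from the first answer
```

With this, `r.start`, `r.step`, and `r.len` are ordinary attributes, and no ctypes pointer-casting is needed.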
Accessing xrange internal structure
I'm trying to use ctypes to extract data from internal python structures. Namely, I'm trying to read the 4 fields in an xrange: typedef struct { PyObject_HEAD long start; long step; long len; } rangeobject; Is there any standard way of getting at such fields within python itself?
[ "You can access data you need without ctypes:\n>>> obj = xrange(1,11,2)\n>>> obj.__reduce__()[1]\n(1, 11, 2)\n>>> len(obj)\n5\n\nNote, that __reduce__() method is exactly for serialization. Read this chapter in documentation for more information.\nUpdate: But sure you can access internal data with ctypes too:\nfrom ctypes import *\n\nPyObject_HEAD = [\n ('ob_refcnt', c_size_t),\n ('ob_type', c_void_p),\n]\n\nclass XRangeType(Structure):\n _fields_ = PyObject_HEAD + [\n ('start', c_long),\n ('step', c_long),\n ('len', c_long),\n ]\n\nrange_obj = xrange(1, 11, 2)\n\nc_range_obj = cast(c_void_p(id(range_obj)), POINTER(XRangeType)).contents\nprint c_range_obj.start, c_range_obj.step, c_range_obj.len\n\n", "The ctypes module isn't meant for accessing Python internals. ctypes lets you deal with C libraries in C terms, but coding in Python.\nYou probably want a C extension, which in many ways is the opposite of ctypes. With a C extension, you deal with Python code in Python terms, but code in C.\nUPDATED: Since you want pure Python, why do you need to access the internals of a built-in xrange object? xrange is very simple: create your own in Python, and do what you want with it.\n" ]
[ 5, 0 ]
[]
[]
[ "cpython", "ctypes", "python", "xrange" ]
stackoverflow_0001794346_cpython_ctypes_python_xrange.txt
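For readers on Python 3, where `xrange` no longer exists, the data the question is after is exposed directly as attributes of `range`, and the `__reduce__()` trick from the first answer still works the same way. A quick sketch:

```python
r = range(1, 11, 2)  # Python 3 equivalent of xrange(1, 11, 2)

# start/stop/step are public attributes; no ctypes needed.
print(r.start, r.stop, r.step)   # 1 11 2
print(len(r))                    # 5

# __reduce__ still returns the constructor arguments, as in the answer.
print(r.__reduce__()[1])         # (1, 11, 2)
```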
Q: python code to convert mail from pst to eml format Is there any python code to convert outlook pst mails to eml format. Please also suggest for any such code in some other language. Thank you. A: import subprocess def convert_pst_to_mbox(pstfilename, outputfolder): subprocess.call(['readpst', '-o', outputfolder, '-r', pstfilename]) Of course, you must install libpst utilities for that to work.
python code to convert mail from pst to eml format
Is there any python code to convert outlook pst mails to eml format? Please also suggest any such code in some other language. Thank you.
[ "import subprocess\n\ndef convert_pst_to_mbox(pstfilename, outputfolder):\n subprocess.call(['readpst', '-o', outputfolder, '-r', pstfilename])\n\nOf course, you must install libpst utilities for that to work.\n" ]
[ 1 ]
[]
[]
[ "eml", "pst", "python" ]
stackoverflow_0001795202_eml_pst_python.txt
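The `subprocess` call in the answer above fails with a bare `FileNotFoundError` when libpst is missing. A slightly more defensive sketch, using the same `readpst` flags as the answer (the error message wording here is illustrative):

```python
import shutil
import subprocess

def convert_pst_to_mbox(pstfilename, outputfolder):
    # Locate the readpst binary from the libpst utilities before calling it.
    readpst = shutil.which('readpst')
    if readpst is None:
        raise RuntimeError('readpst not found; install the libpst utilities')
    # Same flags as the answer: -o output folder, -r recursive structure.
    # check_call raises CalledProcessError if readpst exits nonzero.
    subprocess.check_call([readpst, '-o', outputfolder, '-r', pstfilename])
```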
Q: Python nested lists and recursion problem I posted this question under an alter yesterday not realising my account was still active after 9 months, sorry for the double post, i've fixed an error in my example pointed out by jellybean and i'll elaborate further on the context of the problem. I'm trying to process a first order logic formula represented as nested lists and strings in python so that that its in disjunctive normal form, i.e [ '&', ['|', 'a', 'b'], ['|', 'c', 'd'] ] turns into [ '|', [ '|', ['&', 'a', 'c'], ['&', 'b', 'c'] ], [ '|', ['&', 'a', 'd'], ['&', 'b', 'd'] ] ]` where | is or and & is and. currently im using a recursive implementation which does several passes over a formula until it can't find any nested 'or' symbols inside a list argument for 'ands'. Its being used to process a set of nested formulas represented as strings and lists for universal computational tree logic so it will not only have |s and &s but temporal operators. This is my implementation, performDNF(form) is the entry point. Right now it performs a single pass over the formula with dnfDistributivity() which works for smaller inputs but when you use larger inputs the while loop checking function (checkDistributivity()) finds no |s inside of &s and terminates. Help anyone, this is driving me mad. 
def dnfDistributivity(self, form): if isinstance(form, type([])): if len(form) == 3: if form[0] == '&': if form[1][0] == '|': form = [ '|', ['&', form[2], form[1][1]], ['&', form[2], form[1][2]] ] elif form[2][0] == '|': form = [ '|', ['&', form[1], form[2][1]], ['&', form[1], form[2][2]] ] form[1] = self.dnfDistributivity(form[1]) form[2] = self.dnfDistributivity(form[2]) elif len(form) == 2: form[1] = self.dnfDistributivity(form[1]) return form def checkDistributivity(self, form, result = 0): if isinstance(form, type([])): if len(form) == 3: if form[0] == '&': print "found &" if isinstance(form[1], type([])): if form[1][0] == '|': return 1 elif isinstance(form[2], type([])): if form[2][0] == '|': return 1 else: result = self.checkDistributivity(form[1], result) print result if result != 1: result = self.checkDistributivity(form[2], result) print result elif len(form) == 2: result = self.checkDistributivity(form[1], result) print result return result def performDNF(self, form): while self.checkDistributivity(form): form = self.dnfDistributivity(self.dnfDistributivity(form)) return form A: First, two general remarks about your code: Use return True instead of return 1. Use isinstance(form, list) instead of isinstance(form, type([])). Second, some other observations: I assume you also want to get rid of double negations. Currently your code doesn't do that. Likewise, you'll need to apply one of DeMorgan's laws to push negations to the leaves. Aside from that, I think the readability of this code can be improved greatly. I'll give an alternative implementation which I believe to be correct. Let me know whether the code below works for you; I didn't go crazy with creating expressions, so I may have missed an edge case. Lastly, I will focus only on the regular propositional connectives. It should be clear how to apply transformations involving CTL-specific connectives. 
Create a class Op which represents an operator (connective): class Op(list): def __init__(self, *args): super().__init__(args) The arguments to __init__ are the operands. This code uses super as defined in PEP 3135 and works only in Python 3.x In Python 2.x, you'll have to use super as defined in PEP 367: class Op(list): def __init__(self, *args): super(Op, self).__init__(args) Create simple subclasses of Op for each operator. For debugging purposes you may want to implement a custom __str__ method: class Neg(Op): def __str__(self): return '!(%s)' % tuple(self) class And(Op): def __str__(self): return '(%s) & (%s)' % tuple(self) class Or(Op): def __str__(self): return '(%s) | (%s)' % tuple(self) class AX(Op): def __str__(self): return 'AX (%s)' % tuple(self) ... Now the formula !(a & b) can be created as Neg(And('a', 'b')). Create very simple functions which apply a certain transformation once. This will keep the implementation clean. Annotate these functions which some information on how they should be applied. A preorder function should be applied from top to bottom: first transform the root of the expression tree, then recurse. A postorder function should be applied to an expression after it has been recursively applied to subexpressions. Use isinstance to check the type of connectives. 
We start easy: the function removeDoubleNeg removes double negations: @expressionTransformation('postorder') def removeDoubleNeg(expr): if isinstance(expr, Neg) and isinstance(expr[0], Neg): return expr[0][0] Next, let's define one of DeMorgan's laws: @expressionTransformation('preorder') def deMorgan(expr): if isinstance(expr, Neg) and isinstance(expr[0], And): return Or(Neg(expr[0][0]), Neg(expr[0][1])) And now the function which this question is all about: @expressionTransformation('preorder', 'postorder') def distribute(expr): if isinstance(expr, And): if isinstance(expr[0], Or): return Or(And(expr[0][0], expr[1]), And(expr[0][1], expr[1])) if isinstance(expr[1], Or): return Or(And(expr[0], expr[1][0]), And(expr[0], expr[1][1])) Wow! That's a lot less code! Okay, so how does this work? Observe that any naive implementation of an expression transformation f will involve boilerplate code: Test whether the argument is a connective (as opposed to a constant or variable). Attempt to apply f to the root of the expression tree. Recurse. Return the result. Depending on f, step 1 and 2 may need to be reversed (postorder instead of preorder). Still, every implementation of f will look alike. You will want to avoid boilerplate code, especially if you plan to define many more transformations. It is the lack of this boilerplate that made the functions defined in the previous step so concise (and thus easy to debug!). The decorators returned by the function expressionTransformation solved this problem. 
Its implementation is as follows: from functools import wraps def expressionTransformation(*args): def wrap(f): @wraps(f) def recursiveTransformation(expr): if not isinstance(expr, Op): return expr if 'postorder' in args: expr[:] = map(recursiveTransformation, expr) res = f(expr) expr = expr if res is None else res if 'preorder' in args: expr[:] = map(recursiveTransformation, expr) return expr return recursiveTransformation return wrap What happens here is the following: The function expressionTransformation returns a decorator (named wrap) which receives the transformation function f as its argument. wrap returns a recursive function recursiveTransformation which applies f to its argument expr only if this argument is a connective. Depending on the arguments args supplied to expressionTransformation, f will be applied before or after (or before and after) applying f to the subexpressions. The assumption is made that f may return None if no transformation is made. The function functools.wraps is used to copy certain properties of f, such as its name, to recursiveTransformation. This functionality is non-essential. (Note that there are more efficient way to create preorder and postorder transformations than using the tests 'postorder' in args and 'preorder' in args over and over, but I chose this for clarity.) That's all. We can now easily combine these functions (note that this function should not be decorated): def toDNF(expr): return distribute(removeDoubleNeg(deMorgan(expr))) You can test the code with statements like these: toDNF(AX(And(Or('a', 'b'), And(Or('c', 'd'), Or('e', 'f'))))) toDNF(Neg(And(Or(Neg(Neg('a')), 'b'), And(Or('c', 'd'), Or('e', 'f'))))) A: You have: elif len(form) == 2: result = self.checkDistributivity(form[1], result) print result Shouldn't that be: elif len(form) == 2: result_1 = self.checkDistributivity(form[1], result) result_2 = self.checkDistributivity(form[2], result) if result_1 or result_2: return 1
Python nested lists and recursion problem
I posted this question under an alternate account yesterday, not realising my account was still active after 9 months; sorry for the double post. I've fixed an error in my example pointed out by jellybean and I'll elaborate further on the context of the problem. I'm trying to process a first-order logic formula, represented as nested lists and strings in Python, so that it's in disjunctive normal form, i.e. [ '&', ['|', 'a', 'b'], ['|', 'c', 'd'] ] turns into [ '|', [ '|', ['&', 'a', 'c'], ['&', 'b', 'c'] ], [ '|', ['&', 'a', 'd'], ['&', 'b', 'd'] ] ] where | is or and & is and. Currently I'm using a recursive implementation which does several passes over a formula until it can't find any nested 'or' symbols inside a list argument for 'and's. It's being used to process a set of nested formulas represented as strings and lists for universal computational tree logic, so it will not only have |s and &s but temporal operators. This is my implementation; performDNF(form) is the entry point. Right now it performs a single pass over the formula with dnfDistributivity(), which works for smaller inputs, but with larger inputs the while-loop checking function (checkDistributivity()) finds no |s inside of &s and terminates. Help anyone, this is driving me mad.
def dnfDistributivity(self, form): if isinstance(form, type([])): if len(form) == 3: if form[0] == '&': if form[1][0] == '|': form = [ '|', ['&', form[2], form[1][1]], ['&', form[2], form[1][2]] ] elif form[2][0] == '|': form = [ '|', ['&', form[1], form[2][1]], ['&', form[1], form[2][2]] ] form[1] = self.dnfDistributivity(form[1]) form[2] = self.dnfDistributivity(form[2]) elif len(form) == 2: form[1] = self.dnfDistributivity(form[1]) return form def checkDistributivity(self, form, result = 0): if isinstance(form, type([])): if len(form) == 3: if form[0] == '&': print "found &" if isinstance(form[1], type([])): if form[1][0] == '|': return 1 elif isinstance(form[2], type([])): if form[2][0] == '|': return 1 else: result = self.checkDistributivity(form[1], result) print result if result != 1: result = self.checkDistributivity(form[2], result) print result elif len(form) == 2: result = self.checkDistributivity(form[1], result) print result return result def performDNF(self, form): while self.checkDistributivity(form): form = self.dnfDistributivity(self.dnfDistributivity(form)) return form
[ "First, two general remarks about your code:\n\nUse return True instead of return 1.\nUse isinstance(form, list) instead of isinstance(form, type([])).\n\nSecond, some other observations:\n\nI assume you also want to get rid of double negations. Currently your code doesn't do that.\nLikewise, you'll need to apply one of DeMorgan's laws to push negations to the leaves.\n\nAside from that, I think the readability of this code can be improved greatly. I'll give an alternative implementation which I believe to be correct. Let me know whether the code below works for you; I didn't go crazy with creating expressions, so I may have missed an edge case. Lastly, I will focus only on the regular propositional connectives. It should be clear how to apply transformations involving CTL-specific connectives.\n\nCreate a class Op which represents an operator (connective):\nclass Op(list):\n def __init__(self, *args):\n super().__init__(args)\n\nThe arguments to __init__ are the operands. This code uses super as defined in PEP 3135 and works only in Python 3.x In Python 2.x, you'll have to use super as defined in PEP 367:\nclass Op(list):\n def __init__(self, *args):\n super(Op, self).__init__(args)\n\nCreate simple subclasses of Op for each operator. For debugging purposes you may want to implement a custom __str__ method:\nclass Neg(Op):\n def __str__(self):\n return '!(%s)' % tuple(self)\nclass And(Op):\n def __str__(self):\n return '(%s) & (%s)' % tuple(self)\nclass Or(Op):\n def __str__(self):\n return '(%s) | (%s)' % tuple(self)\nclass AX(Op):\n def __str__(self):\n return 'AX (%s)' % tuple(self)\n...\n\nNow the formula !(a & b) can be created as Neg(And('a', 'b')).\nCreate very simple functions which apply a certain transformation once. This will keep the implementation clean. Annotate these functions which some information on how they should be applied. 
A preorder function should be applied from top to bottom: first transform the root of the expression tree, then recurse. A postorder function should be applied to an expression after it has been recursively applied to subexpressions. Use isinstance to check the type of connectives.\n\nWe start easy: the function removeDoubleNeg removes double negations:\n@expressionTransformation('postorder')\ndef removeDoubleNeg(expr):\n if isinstance(expr, Neg) and isinstance(expr[0], Neg):\n return expr[0][0]\n\nNext, let's define one of DeMorgan's laws:\n@expressionTransformation('preorder')\ndef deMorgan(expr):\n if isinstance(expr, Neg) and isinstance(expr[0], And):\n return Or(Neg(expr[0][0]), Neg(expr[0][1]))\n\nAnd now the function which this question is all about:\n@expressionTransformation('preorder', 'postorder')\ndef distribute(expr):\n if isinstance(expr, And):\n if isinstance(expr[0], Or):\n return Or(And(expr[0][0], expr[1]), And(expr[0][1], expr[1]))\n if isinstance(expr[1], Or):\n return Or(And(expr[0], expr[1][0]), And(expr[0], expr[1][1]))\n\nWow! That's a lot less code!\n\nOkay, so how does this work? Observe that any naive implementation of an expression transformation f will involve boilerplate code:\n\nTest whether the argument is a connective (as opposed to a constant or variable).\nAttempt to apply f to the root of the expression tree.\nRecurse.\nReturn the result.\n\nDepending on f, step 1 and 2 may need to be reversed (postorder instead of preorder). Still, every implementation of f will look alike. You will want to avoid boilerplate code, especially if you plan to define many more transformations. It is the lack of this boilerplate that made the functions defined in the previous step so concise (and thus easy to debug!). The decorators returned by the function expressionTransformation solved this problem. 
Its implementation is as follows:\nfrom functools import wraps\ndef expressionTransformation(*args):\n def wrap(f):\n @wraps(f)\n def recursiveTransformation(expr):\n if not isinstance(expr, Op):\n return expr\n if 'postorder' in args:\n expr[:] = map(recursiveTransformation, expr)\n res = f(expr)\n expr = expr if res is None else res \n if 'preorder' in args:\n expr[:] = map(recursiveTransformation, expr)\n return expr\n return recursiveTransformation\n return wrap\n\nWhat happens here is the following:\n\nThe function expressionTransformation returns a decorator (named wrap) which receives the transformation function f as its argument.\nwrap returns a recursive function recursiveTransformation which applies f to its argument expr only if this argument is a connective.\nDepending on the arguments args supplied to expressionTransformation, f will be applied before or after (or before and after) applying f to the subexpressions.\nThe assumption is made that f may return None if no transformation is made.\n\nThe function functools.wraps is used to copy certain properties of f, such as its name, to recursiveTransformation. This functionality is non-essential.\n(Note that there are more efficient way to create preorder and postorder transformations than using the tests 'postorder' in args and 'preorder' in args over and over, but I chose this for clarity.)\nThat's all. 
We can now easily combine these functions (note that this function should not be decorated):\ndef toDNF(expr):\n return distribute(removeDoubleNeg(deMorgan(expr)))\n\nYou can test the code with statements like these:\ntoDNF(AX(And(Or('a', 'b'), And(Or('c', 'd'), Or('e', 'f')))))\ntoDNF(Neg(And(Or(Neg(Neg('a')), 'b'), And(Or('c', 'd'), Or('e', 'f')))))\n\n\n", "You have:\n elif len(form) == 2:\n result = self.checkDistributivity(form[1], result)\n print result\n\nShouldn't that be:\n elif len(form) == 2:\n result_1 = self.checkDistributivity(form[1], result)\n result_2 = self.checkDistributivity(form[2], result) \n if result_1 or result_2:\n return 1\n\n" ]
[ 3, 0 ]
[]
[]
[ "boolean_logic", "python", "recursion" ]
stackoverflow_0001793603_boolean_logic_python_recursion.txt
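For comparison with the check-then-rewrite loop in the question, distribution can also be written as a single recursive pass over the same list representation that keeps re-distributing until no | remains under an &. This is a minimal sketch, not the accepted answer's Op-class design; note the grouping of the result may differ from the question's example while still being a valid DNF:

```python
def distribute(form):
    """Push '&' inside '|' until no disjunction sits under a conjunction."""
    if not isinstance(form, list):
        return form  # atomic proposition
    if len(form) == 3:
        op, a, b = form[0], distribute(form[1]), distribute(form[2])
        if op == '&':
            if isinstance(a, list) and a[0] == '|':
                # (x | y) & b  ->  (x & b) | (y & b)
                return distribute(['|', ['&', a[1], b], ['&', a[2], b]])
            if isinstance(b, list) and b[0] == '|':
                # a & (x | y)  ->  (a & x) | (a & y)
                return distribute(['|', ['&', a, b[1]], ['&', a, b[2]]])
        return [op, a, b]
    # unary operator such as a temporal modality or negation
    return [form[0], distribute(form[1])]

print(distribute(['&', ['|', 'a', 'b'], ['|', 'c', 'd']]))
```

For the question's input this yields ['|', ['|', ['&', 'a', 'c'], ['&', 'a', 'd']], ['|', ['&', 'b', 'c'], ['&', 'b', 'd']]], which is equivalent to the question's target up to grouping and operand order.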
Q: When I send a post request in python...how do I Have a "check"? For example... I would have a dictionary with the parameters to sent to the POST. params = {'text':'how are you?', 'subject':'hi'} then I would have opener.open('theurl',urllib.urlencode(params)) The question is...those parameters work well with text-boxes, since I just put the value in there. How about radio buttons? How do I signify which is "checked"? A: Radio buttons has values too <input type="radio" name="music" value="Rock" checked="checked"> Rock<br> <input type="radio" name="music" value="Pop"> Pop<br> <input type="radio" name="music" value="Metal"> Metal<br> for that case {"music":"Rock"} in params
When I send a post request in python...how do I
Have a "check"? For example... I would have a dictionary with the parameters to sent to the POST. params = {'text':'how are you?', 'subject':'hi'} then I would have opener.open('theurl',urllib.urlencode(params)) The question is...those parameters work well with text-boxes, since I just put the value in there. How about radio buttons? How do I signify which is "checked"?
[ "Radio buttons has values too\n<input type=\"radio\" name=\"music\" value=\"Rock\" checked=\"checked\"> Rock<br>\n<input type=\"radio\" name=\"music\" value=\"Pop\"> Pop<br>\n<input type=\"radio\" name=\"music\" value=\"Metal\"> Metal<br>\n\nfor that case {\"music\":\"Rock\"} in params\n" ]
[ 2 ]
[]
[]
[ "http", "post", "python", "send" ]
stackoverflow_0001796109_http_post_python_send.txt
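On Python 3 the `urllib.urlencode` call from the question lives in `urllib.parse`. A quick sketch of encoding the question's text fields together with the radio-button value from the answer:

```python
from urllib.parse import urlencode

# Radio buttons POST like any other field: the name maps to the value
# of whichever button was checked.
params = {'text': 'how are you?', 'subject': 'hi', 'music': 'Rock'}
body = urlencode(params)  # dict insertion order is preserved on 3.7+
print(body)  # text=how+are+you%3F&subject=hi&music=Rock
```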
Q: In Google App Engine, what happens when I change the Class related to a persisted object? My model class looks as follows: from google.appengine.ext import db class SnapShotBase(db.Model): ''' The base class from which all entity snapshots will inherit. ''' version = db.IntegerProperty() def __init__(self): pass Imagine I already have persisted instances of this class in my bigtable datastore. If I were to ADD a field to this class, would it break deserialization? Or would the new properties simply remain empty? A: Model instances aren't stored using standard serialization such as Pickle. The properties (such as 'version' in your example) are encoded and stored as a Protocol Buffer, and when you load an entity from the datastore, the Protocol Buffer is decoded and used to build a new Model instance. As a result, you can modify your object however you like. Adding new properties will cause them to have their default value for any entities that were stored before they were added, or to throw an error if the new property is required and no default is supplied. Removing fields will simply cause them to no longer show up on your model instances. One warning, however: You shouldn't be overriding init in your model classes, as you do above. Doing so is likely to break construction of entities from the datastore. If you need to modify the construction behaviour, I'd suggest using a factory method (or function) instead.
In Google App Engine, what happens when I change the Class related to a persisted object?
My model class looks as follows: from google.appengine.ext import db class SnapShotBase(db.Model): ''' The base class from which all entity snapshots will inherit. ''' version = db.IntegerProperty() def __init__(self): pass Imagine I already have persisted instances of this class in my bigtable datastore. If I were to ADD a field to this class, would it break deserialization? Or would the new properties simply remain empty?
[ "Model instances aren't stored using standard serialization such as Pickle. The properties (such as 'version' in your example) are encoded and stored as a Protocol Buffer, and when you load an entity from the datastore, the Protocol Buffer is decoded and used to build a new Model instance.\nAs a result, you can modify your object however you like. Adding new properties will cause them to have their default value for any entities that were stored before they were added, or to throw an error if the new property is required and no default is supplied. Removing fields will simply cause them to no longer show up on your model instances.\nOne warning, however: You shouldn't be overriding init in your model classes, as you do above. Doing so is likely to break construction of entities from the datastore. If you need to modify the construction behaviour, I'd suggest using a factory method (or function) instead.\n" ]
[ 3 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001795668_google_app_engine_python.txt
Q: Validating Uploaded Files in Django A Django app that I am working has an Event model. An Event may have associated photos, static html files and pdf files. I would like to allow trusted users to upload these files, but I am wary about security, especially having read the following in the Django docs (link). Note that whenever you deal with uploaded files, you should pay close attention to where you're uploading them and what type of files they are, to avoid security holes. Validate all uploaded files so that you're sure the files are what you think they are. For example, if you blindly let somebody upload files, without validation, to a directory that's within your Web server's document root, then somebody could upload a CGI or PHP script and execute that script by visiting its URL on your site. Don't allow that. How can I validate the different types of files? I would be interested to hear anyone's experience of dealing with this kind of thing, or links for further reading. I have a gut feeling that html files may be too risky, in which case I'll restrict upload permissions to the administrator. A: All the answers are focusing on validating files. This is pretty much impossible. The Django devs aren't asking you to validate whether files can be executed as cgi files. They are just telling you not to put them in a place where they will be executed. You should put all Django stuff in a specially Django directory. That Django code directory should not contain static content. Don't put user files in the Django source repository. If you are using Apache2, check out the basic cgi tutorial: http://httpd.apache.org/docs/2.0/howto/cgi.html Apache2 might be setup to run any files in the ScriptAlias folder. Don't put user files in the /cgi-bin/ or /usr/local/apache2/cgi-bin/ folders. Apache2 might be set to server cgi files, depending on the AddHandler cgi-script settings. Don't let the users submit files with extensions like .cgi or .pl. 
However, you do need to sanitize user submitted files so they are safe to run on other clients' machines. Submitted HTML is unsafe to other users. It won't hurt your server. Your server will just spit it back at whoever requests it. Get a HTML sanitizer. Also, SVG may be unsafe. It's had bugs in the past. SVG is an XML document with javascript in it, so it can be malicious. PDF is ... tricky. You could convert it to an image (if you really had to), or provide an image preview (and let users download at their own risk), but it would be a pain for people trying to use it. Consider a white-list of files that are OK. A virus embedded in a gif, jpeg or png file will just look like a corrupt picture (or fail to display). If you want to be paranoid, convert them all to a standard format using PIL (hey, you could also check sizes). Sanitized HTML should be OK (stripping out script tags isn't rocket science). If the sanitization is sucking cycles (or you're just cautious), you could put it on a separate server, I guess. A: For images you might be able to just use Python Imaging Library (PIL). Image.open(filepath) If the file is not an image, an exception will be thrown. I'm pretty new to Python/Django so someone else might have a better way of validating images. A: The first thing you want to do with the uploaded content is store it in a directory which is not directly accessible for downloading. If your app exists in ~/www/ consider putting your data in '~/data/`. The second thing, you need to determine what kind of file the user uploaded, and then create rules for each file type. You can't trust the file based on the extension, so use something like Fileinfo. Then for each mime type, create a validator. ImageMagick can validate image files. For higher security, you may have to run a virus scanner over files like pdf's and flash files. For html, you may want to consider limit to a subset of tags. 
I can't find a Python equivalent of the Fileinfo module, though it's always possible to exec /usr/bin/file -i. Most system that allow uploads then create a content name or id. They then use mod_rewrite to parse the URL, and find the content on disk. Once the content is found, it's returned to the user using sendfile, or something similar. For example, until the content is approved, maybe only the user who uploaded it is allowed to view it. A: This is a little bit specific to your hosting environment, but here is what I do: Serve all user uploaded content with Nginx instead of apache, and serve it all as static content (it will not run any of the php or cgi, even if the users upload it) A: you can validate html files with BeautifulSoup A: 'trusted users' is a subjective term. Is it people that you know in person or only someone who has created an account on your app? Don't give access to your filesystem to people that you don't know in person. Giving the ability to someone to upload a file is in any case a bit dangerous and I think that it should be avoided. I was facing a similar problem last week with the automatic upload of html code and I've decided to store it in the database. I think that in most cases, you can use the database rather than the file system. One problem with the validation is that you'll have to write a new validator for any type of files. It can be a limitation in the future and be a big task in some cases. So, I would recommend to reconsider a database-based design.
Validating Uploaded Files in Django
A Django app that I am working has an Event model. An Event may have associated photos, static html files and pdf files. I would like to allow trusted users to upload these files, but I am wary about security, especially having read the following in the Django docs (link). Note that whenever you deal with uploaded files, you should pay close attention to where you're uploading them and what type of files they are, to avoid security holes. Validate all uploaded files so that you're sure the files are what you think they are. For example, if you blindly let somebody upload files, without validation, to a directory that's within your Web server's document root, then somebody could upload a CGI or PHP script and execute that script by visiting its URL on your site. Don't allow that. How can I validate the different types of files? I would be interested to hear anyone's experience of dealing with this kind of thing, or links for further reading. I have a gut feeling that html files may be too risky, in which case I'll restrict upload permissions to the administrator.
[ "All the answers are focusing on validating files. This is pretty much impossible.\nThe Django devs aren't asking you to validate whether files can be executed as cgi files. They are just telling you not to put them in a place where they will be executed.\nYou should put all Django stuff in a specially Django directory. That Django code directory should not contain static content. Don't put user files in the Django source repository.\nIf you are using Apache2, check out the basic cgi tutorial: http://httpd.apache.org/docs/2.0/howto/cgi.html\nApache2 might be setup to run any files in the ScriptAlias folder. Don't put user files in the /cgi-bin/ or /usr/local/apache2/cgi-bin/ folders.\nApache2 might be set to server cgi files, depending on the AddHandler cgi-script settings. Don't let the users submit files with extensions like .cgi or .pl.\nHowever, you do need to sanitize user submitted files so they are safe to run on other clients' machines. Submitted HTML is unsafe to other users. It won't hurt your server. Your server will just spit it back at whoever requests it. Get a HTML sanitizer.\nAlso, SVG may be unsafe. It's had bugs in the past. SVG is an XML document with javascript in it, so it can be malicious. \nPDF is ... tricky. You could convert it to an image (if you really had to), or provide an image preview (and let users download at their own risk), but it would be a pain for people trying to use it. \nConsider a white-list of files that are OK. A virus embedded in a gif, jpeg or png file will just look like a corrupt picture (or fail to display). If you want to be paranoid, convert them all to a standard format using PIL (hey, you could also check sizes). Sanitized HTML should be OK (stripping out script tags isn't rocket science). 
If the sanitization is sucking cycles (or you're just cautious), you could put it on a separate server, I guess.\n", "For images you might be able to just use Python Imaging Library (PIL).\nImage.open(filepath)\n\nIf the file is not an image, an exception will be thrown. I'm pretty new to Python/Django so someone else might have a better way of validating images.\n", "The first thing you want to do with the uploaded content is store it in a directory which is not directly accessible for downloading. If your app exists in ~/www/ consider putting your data in '~/data/`.\nThe second thing, you need to determine what kind of file the user uploaded, and then create rules for each file type. \nYou can't trust the file based on the extension, so use something like Fileinfo. Then for each mime type, create a validator. ImageMagick can validate image files. For higher security, you may have to run a virus scanner over files like pdf's and flash files. For html, you may want to consider limit to a subset of tags.\nI can't find a Python equivalent of the Fileinfo module, though it's always possible to exec /usr/bin/file -i. Most system that allow uploads then create a content name or id. They then use mod_rewrite to parse the URL, and find the content on disk. Once the content is found, it's returned to the user using sendfile, or something similar. For example, until the content is approved, maybe only the user who uploaded it is allowed to view it.\n", "This is a little bit specific to your hosting environment, but here is what I do:\nServe all user uploaded content with Nginx instead of apache, and serve it all as static content (it will not run any of the php or cgi, even if the users upload it)\n", "you can validate html files with BeautifulSoup\n", "'trusted users' is a subjective term. Is it people that you know in person or only someone who has created an account on your app? 
Don't give access to your filesystem to people that you don't know in person.\nGiving the ability to someone to upload a file is in any case a bit dangerous and I think that it should be avoided. I was facing a similar problem last week with the automatic upload of html code and I've decided to store it in the database. I think that in most cases, you can use the database rather than the file system.\nOne problem with the validation is that you'll have to write a new validator for any type of files. It can be a limitation in the future and be a big task in some cases.\nSo, I would recommend to reconsider a database-based design.\n" ]
[ 19, 14, 6, 5, 1, 1 ]
[]
[]
[ "django", "file_upload", "python", "security" ]
stackoverflow_0001745743_django_file_upload_python_security.txt
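The white-list idea in the answers above (accept only known image types and verify the content, not just the extension) can be sketched with a magic-byte check. This is a dependency-free sketch, not the original poster's code: the byte table is a minimal assumption covering only PNG/JPEG/GIF, and `sniff_image` is a name of my own.

```python
# Leading "magic" bytes for a small white-list of image formats.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
}

def sniff_image(data):
    """Return the detected image type from the leading bytes, or None.

    Unlike an extension check, this looks at the file contents, so a
    renamed HTML or script file will not pass as an image.
    """
    for magic, kind in MAGIC.items():
        if data.startswith(magic):
            return kind
    return None

print(sniff_image(b"\x89PNG\r\n\x1a\n\x00\x00\x00\x0d"))        # png
print(sniff_image(b"<html><script>alert(1)</script></html>"))   # None
```

A check like this is only a first gate; for real safety you would still re-encode the image (e.g. with PIL) before serving it.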
Q: Point Django at different Python version Django application requires a later version of Python. I just installed it to 2.5 (from 2.4) and now when I do a python at the command line, it says 2.5.2. Having said that, Django still says Python Version: 2.4.3. How do I correct this? I've rebooted / restarted / redeployed to no avail. A: You have it backwards. Django is added to Python's environment. When you install a new Python, you must reinstall everything -- including Django -- for the new Python. Once you have the new Django in the new Python, your PATH settings determine which Python you're using. The version of Python (and the PYTHONPATH) determine which Django you'll use. A: How are you running Django? If it's via the development server, you can explicitly choose the version of Python you're using - python2.5 manage.py runserver. However, if you're using mod_python or mod_wsgi these are fixed to the version of Python that was used when they were compiled. If you need to change this, you'll need to recompile, or find a packaged version for your distribution that uses the updated Python. A: django uses /usr/bin/env python IIRC. So, wherever your path points (which python) is the python that gets used. dependent upon your system, you can point your path to the python that you want to use. However, on some OS (CentOS for example) this is a bad idea.
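The point made in the answers above is that Django reports whatever interpreter actually launched it, not whatever `python` resolves to in your shell. A quick sketch to settle the question is to print the interpreter from inside the project (for example at the top of settings.py, or in a `manage.py shell` session):

```python
import sys

# The interpreter executing this code is the one Django will report.
# If this prints 2.4 while `python` at your shell prints 2.5, then the
# server (mod_python/mod_wsgi) or your PATH is still using the old one.
print(sys.executable)
print("Python %d.%d" % sys.version_info[:2])
```

Running the development server explicitly as `python2.5 manage.py runserver` changes what this prints; a compiled-in mod_python/mod_wsgi interpreter does not.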
Point Django at different Python version
Django application requires a later version of Python. I just installed it to 2.5 (from 2.4) and now when I do a python at the command line, it says 2.5.2. Having said that, Django still says Python Version: 2.4.3. How do I correct this? I've rebooted / restarted / redeployed to no avail.
[ "You have it backwards.\nDjango is added to Python's environment.\nWhen you install a new Python, you must reinstall everything -- including Django -- for the new Python.\nOnce you have the new Django in the new Python, your PATH settings determine which Python you're using. \nThe version of Python (and the PYTHONPATH) determine which Django you'll use.\n", "How are you running Django? If it's via the development server, you can explicitly choose the version of Python you're using - python2.5 manage.py runserver.\nHowever, if you're using mod_python or mod_wsgi these are fixed to the version of Python that was used when they were compiled. If you need to change this, you'll need to recompile, or find a packaged version for your distribution that uses the updated Python.\n", "django uses /usr/bin/env python IIRC.\nSo, wherever your path points (which python) is the python that gets used. dependent upon your system, you can point your path to the python that you want to use. However, on some OS (CentOS for example) this is a bad idea.\n" ]
[ 6, 4, 0 ]
[]
[]
[ "django", "path", "python" ]
stackoverflow_0001796105_django_path_python.txt
Q: Python SSH paramiko issue - ssh from inside of ssh session import paramiko client = paramiko.SSHClient() client.load_system_host_keys() ip = '192.168.100.6' client.connect(ip, username='root', password='mima') i, o, e = client.exec_command('apt-get install sl -y --force-yes') print o.read(), e.read() client.close() I used this example and it works fine, but after logging in to server1 I want to log in to server2, i.e. nested SSH. A: can't you call the ssh command from inside of your client.exec_command? like: client.exec_command('ssh user@host2 "apt-get install sl -y --force-yes"') A: You exec the command "ssh" in the client, and not apt-get. You can't really start a paramiko session on the client as long as your python program isn't there. The software you start using ssh must live on that machine. Perhaps first scp a copy of your software, and start that using a parameter like -recursive_lvl = 1 ?
Python SSH paramiko issue - ssh from inside of ssh session
import paramiko client = paramiko.SSHClient() client.load_system_host_keys() ip = '192.168.100.6' client.connect(ip, username='root', password='mima') i, o, e = client.exec_command('apt-get install sl -y --force-yes') print o.read(), e.read() client.close() I used this example and it works fine, but after logging in to server1 I want to log in to server2, i.e. nested SSH.
[ "can't you call the ssh command from inside of your client.exec_command?\nlike:\nclient.exec_command('ssh user@host2 \"apt-get install sl -y --force-yes\"')\n\n", "You exec the command \"ssh\" in the client, and not apt-get.\nYou can't really start a paramiko session on the client as long as your python program isn't there. The software you start using ssh must live on that machine.\nPerhaps first scp a copy of your software, and start that using a parameter like -recursive_lvl = 1 ?\n" ]
[ 4, 0 ]
[]
[]
[ "paramiko", "python", "ssh" ]
stackoverflow_0001796441_paramiko_python_ssh.txt
Q: sorl.thumbnail : 'thumbnail' is not a valid tag library? I am trying to install sorl.thumbnail but am getting the following error message: 'thumbnail' is not a valid tag library: Could not load template library from django.templatetags.thumbnail, No module named PIL This error popped up in this question as well need help solving sorl-thumbnail error: "'thumbnail' is not a valid tag library:" but the solution offered there is no good for me. The solution was to append the project folder to all imports in the sorl files. I want to keep my apps separate from the project they are in for obvious reasons. I have placed the sorl folder in my project folder, placed 'sorl.thumbnaills' under installed apps, and finally placed {% load thumbnail %} in base.html $python2.5 >>>import PIL >>>import sorl These work. Using python2.5 on ubuntu 9.04 with django 1.1 with appengine-patch To try some other things out, I placed this in my settings.py file: import sys sys.path.append("/home/danielle/bu3/mysite/sorl/thumbnail") But that didn't work either. Some more help would be appreciated ... how should I change my path? 
current path (without above mentioned import): ['/home/danielle/bu3/mysite', '/home/danielle/bu3/mysite/common', '/home/danielle/bu3/mysite/common/appenginepatch/appenginepatcher/lib', '/home/danielle/bu3/mysite/common/zip-packages/django-1.1.zip', '/home/danielle/bu3/mysite/common/appenginepatch', '/usr/local/google_appengine', '/usr/local/google_appengine/lib/antlr3', '/usr/local/google_appengine/lib/yaml/lib', '/usr/local/google_appengine/lib/django', '/usr/local/google_appengine/lib/webob', '/home/danielle/bu3/mysite', '/usr/lib/python2.5/site-packages/setuptools-0.6c9-py2.5.egg', '/usr/lib/python2.5/site-packages/ZopeSkel-2.10-py2.5.egg', '/usr/lib/python2.5/site-packages/virtualenv-1.3.2-py2.5.egg', '/usr/lib/python2.5/site-packages/pip-0.3.1-py2.5.egg', '/usr/lib/python2.5/site-packages/virtualenvwrapper-1.12-py2.5.egg', '/usr/lib/python2.5/site-packages/PyYAML-3.08-py2.5-linux-i686.egg', '/usr/lib/python2.5/site-packages/xlutils-1.3.0-py2.5.egg', '/usr/lib/python2.5/site-packages/errorhandler-1.0.0-py2.5.egg', '/usr/lib/python2.5/site-packages/xlwt-0.7.1-py2.5.egg', '/usr/lib/python2.5/site-packages/xlrd-0.7.0-py2.5.egg', '/usr/lib/python2.5/site-packages/Fabric-0.0.9-py2.5.egg', '/usr/lib/python2.5/site-packages/multitask-0.2.0-py2.5.egg', '/usr/lib/python2.5/site-packages/logilab.pylintinstaller-0.15.2-py2.5.egg', '/usr/lib/python2.5/site-packages/pylint-0.15.2-py2.5.egg', '/usr/lib/python2.5/site-packages/clonedigger-1.0.9_beta-py2.5.egg', '/usr/lib/python2.5/site-packages/yolk-0.4.1-py2.5.egg', '/usr/lib/python2.5/site-packages/MySQL_python-1.2.3c1-py2.5-linux-i686.egg', '/usr/lib/python2.5', '/usr/lib/python2.5/plat-linux2', '/usr/lib/python2.5/lib-tk', '/usr/lib/python2.5/lib-dynload', '/usr/local/lib/python2.5/site-packages', '/usr/lib/python2.5/site-packages', '/usr/lib/python2.5/site-packages/Numeric', '/usr/lib/python2.5/site-packages/PIL', '/usr/lib/python2.5/site-packages/gst-0.10', '/var/lib/python-support/python2.5', 
'/usr/lib/python2.5/site-packages/gtk-2.0', '/var/lib/python-support/python2.5/gtk-2.0', '/usr/lib/python2.5/site-packages/wx-2.8-gtk2-unicode'] A: Is is a typo in your question? You have mis-spelled 'thumbnails' - for the installed apps you have two l's, i.e. 'sorl.thumbnaills' rather than 'sorl.thumbnails' if you run sync.db does it return an error? A: (Editing this, since I didn't read carefully enough) django.templatetags.thumbnail is not, I think, where your thumbnail templatetags should be loading from ... I would think, if you put it in your project folder, it would be myproject.sorl.thumbnail.templatetags.thumbnail. As for the the: No module named PIL Seems that it can't load PIL, even though import PIL works, did you manually install the Python Imaging Library (PIL) - which is usually not present by default on most systems I know. Have you tried creating a symlink to on your /usr/lib/python2.6/site-packages/ path and attempting to utilize sorl that way? I am using it on Ubuntu without a problem. A: It seems i only made the typo here on stackoverflow, in settings i have: INSTALLED_APPS = ( 'jquery', 'blueprintcss', 'django.contrib.auth', 'django.contrib.sessions', 'django.contrib.admin', 'django.contrib.webdesign', 'django.contrib.flatpages', 'django.contrib.redirects', 'django.contrib.sites', 'appenginepatcher', 'ragendja', 'myapp', 'registration', 'mediautils', 'site_nav', 'pages', 'sorl.thumbnail', ) I assume the order doesn't matter. I'm trying to run this on app engine so i haven't needed to do a syncdb as that doesn't do anything on app engine.
sorl.thumbnail : 'thumbnail' is not a valid tag library?
I am trying to install sorl.thumbnail but am getting the following error message: 'thumbnail' is not a valid tag library: Could not load template library from django.templatetags.thumbnail, No module named PIL This error popped up in this question as well need help solving sorl-thumbnail error: "'thumbnail' is not a valid tag library:" but the solution offered there is no good for me. The solution was to append the project folder to all imports in the sorl files. I want to keep my apps separate from the project they are in for obvious reasons. I have placed the sorl folder in my project folder, placed 'sorl.thumbnaills' under installed apps, and finally placed {% load thumbnail %} in base.html $python2.5 >>>import PIL >>>import sorl These work. Using python2.5 on ubuntu 9.04 with django 1.1 with appengine-patch To try some other things out, I placed this in my settings.py file: import sys sys.path.append("/home/danielle/bu3/mysite/sorl/thumbnail") But that didn't work either. Some more help would be appreciated ... how should I change my path? 
current path (without above mentioned import): ['/home/danielle/bu3/mysite', '/home/danielle/bu3/mysite/common', '/home/danielle/bu3/mysite/common/appenginepatch/appenginepatcher/lib', '/home/danielle/bu3/mysite/common/zip-packages/django-1.1.zip', '/home/danielle/bu3/mysite/common/appenginepatch', '/usr/local/google_appengine', '/usr/local/google_appengine/lib/antlr3', '/usr/local/google_appengine/lib/yaml/lib', '/usr/local/google_appengine/lib/django', '/usr/local/google_appengine/lib/webob', '/home/danielle/bu3/mysite', '/usr/lib/python2.5/site-packages/setuptools-0.6c9-py2.5.egg', '/usr/lib/python2.5/site-packages/ZopeSkel-2.10-py2.5.egg', '/usr/lib/python2.5/site-packages/virtualenv-1.3.2-py2.5.egg', '/usr/lib/python2.5/site-packages/pip-0.3.1-py2.5.egg', '/usr/lib/python2.5/site-packages/virtualenvwrapper-1.12-py2.5.egg', '/usr/lib/python2.5/site-packages/PyYAML-3.08-py2.5-linux-i686.egg', '/usr/lib/python2.5/site-packages/xlutils-1.3.0-py2.5.egg', '/usr/lib/python2.5/site-packages/errorhandler-1.0.0-py2.5.egg', '/usr/lib/python2.5/site-packages/xlwt-0.7.1-py2.5.egg', '/usr/lib/python2.5/site-packages/xlrd-0.7.0-py2.5.egg', '/usr/lib/python2.5/site-packages/Fabric-0.0.9-py2.5.egg', '/usr/lib/python2.5/site-packages/multitask-0.2.0-py2.5.egg', '/usr/lib/python2.5/site-packages/logilab.pylintinstaller-0.15.2-py2.5.egg', '/usr/lib/python2.5/site-packages/pylint-0.15.2-py2.5.egg', '/usr/lib/python2.5/site-packages/clonedigger-1.0.9_beta-py2.5.egg', '/usr/lib/python2.5/site-packages/yolk-0.4.1-py2.5.egg', '/usr/lib/python2.5/site-packages/MySQL_python-1.2.3c1-py2.5-linux-i686.egg', '/usr/lib/python2.5', '/usr/lib/python2.5/plat-linux2', '/usr/lib/python2.5/lib-tk', '/usr/lib/python2.5/lib-dynload', '/usr/local/lib/python2.5/site-packages', '/usr/lib/python2.5/site-packages', '/usr/lib/python2.5/site-packages/Numeric', '/usr/lib/python2.5/site-packages/PIL', '/usr/lib/python2.5/site-packages/gst-0.10', '/var/lib/python-support/python2.5', 
'/usr/lib/python2.5/site-packages/gtk-2.0', '/var/lib/python-support/python2.5/gtk-2.0', '/usr/lib/python2.5/site-packages/wx-2.8-gtk2-unicode']
[ "Is is a typo in your question? You have mis-spelled 'thumbnails' - for the installed apps you have two l's, i.e.\n'sorl.thumbnaills'\nrather than \n'sorl.thumbnails'\n\nif you run sync.db does it return an error?\n", "(Editing this, since I didn't read carefully enough)\ndjango.templatetags.thumbnail is not, I think, where your thumbnail templatetags should be loading from ... I would think, if you put it in your project folder, it would be myproject.sorl.thumbnail.templatetags.thumbnail. \nAs for the the:\nNo module named PIL\nSeems that it can't load PIL, even though import PIL works, did you manually install the Python Imaging Library (PIL) - which is usually not present by default on most systems I know.\nHave you tried creating a symlink to on your /usr/lib/python2.6/site-packages/ path and attempting to utilize sorl that way? I am using it on Ubuntu without a problem. \n", "It seems i only made the typo here on stackoverflow, in settings i have:\nINSTALLED_APPS = (\n'jquery',\n'blueprintcss',\n'django.contrib.auth',\n'django.contrib.sessions',\n'django.contrib.admin',\n'django.contrib.webdesign',\n'django.contrib.flatpages',\n'django.contrib.redirects',\n'django.contrib.sites',\n'appenginepatcher',\n'ragendja',\n'myapp',\n'registration',\n'mediautils',\n'site_nav',\n'pages',\n'sorl.thumbnail',\n)\n\nI assume the order doesn't matter. I'm trying to run this on app engine so i haven't needed to do a syncdb as that doesn't do anything on app engine.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "django", "google_app_engine", "python", "sorl_thumbnail" ]
stackoverflow_0001687530_django_google_app_engine_python_sorl_thumbnail.txt
Q: How do I create a jinja2 extension? I am trying to write an extension for jinja2. I have written this code: http://dumpz.org/12996/ But I get the exception: 'NoneType' object is not iterable. Where is the bug? What should parse return? Also, what should _media accept and return? A: You're using a CallBlock, which indicates that you want your extension to act as a block. E.g. {% mytest arg1 arg2 %} stuff in here {% endmytest %} nodes.CallBlock expects that you pass it a list of nodes representing the body (the inner statements) for your extension. Currently this is where you're passing None - hence your error. Once you've parsed your arguments, you need to proceed to parse the body of the block. Fortunately, it's easy. You can simply do: body = parser.parse_statements(['name:endmytest'], drop_needle=True) and then return a new node. The CallBlock receives a method to be called (in this case _mytestfunc) that provides the logic for your extension. body = parser.parse_statements(['name:endmytest'], drop_needle=True) return nodes.CallBlock(self.call_method('_mytestfunc', args),[], [], body).set_lineno(lineno) Alternatively, if you don't want your extension to be a block tag, e.g. {% mytest arg1 arg2 %} you shouldn't use nodes.CallBlock, you should just use nodes.Call instead, which doesn't take a body parameter. So just do: return self.call_method('_mytestfunc', args) self.call_method is simply a handy wrapper function that creates a Call node for you. I've spent a few days writing Jinja2 extensions and it's tricky. There's not much documentation (other than the code). The coffin GitHub project has a few examples of extensions here.
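Pulling the answer's pieces together, a minimal working block extension might look like the sketch below. The tag name `upper` and the method name `_render` are my own choices, not from the original code; the parsing pattern (consume the tag token, `parse_statements` up to the end tag, wrap the body in a `nodes.CallBlock`) is the one the answer describes.

```python
from jinja2 import Environment, nodes
from jinja2.ext import Extension

class UpperBlock(Extension):
    """{% upper %} ... {% endupper %} upper-cases its rendered body."""
    tags = {"upper"}

    def parse(self, parser):
        # First token is the tag name itself; consuming it gives us the line number.
        lineno = next(parser.stream).lineno
        # Parse everything up to {% endupper %}; drop_needle eats the end tag.
        body = parser.parse_statements(["name:endupper"], drop_needle=True)
        # CallBlock(call, args, defaults, body): the body is what was missing
        # (passing None here is what raises "'NoneType' object is not iterable").
        call = self.call_method("_render", [])
        return nodes.CallBlock(call, [], [], body).set_lineno(lineno)

    def _render(self, caller):
        # caller() renders the block body at runtime.
        return caller().upper()

env = Environment(extensions=[UpperBlock])
print(env.from_string("{% upper %}hello{% endupper %}").render())  # HELLO
```

For a non-block tag, the answer's alternative applies: return `self.call_method(...)` (a `nodes.Call`) directly and skip the body parsing.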
How do I create a jinja2 extension?
I am trying to write an extension for jinja2. I have written this code: http://dumpz.org/12996/ But I get the exception: 'NoneType' object is not iterable. Where is the bug? What should parse return? Also, what should _media accept and return?
[ "You're using a CallBlock, which indicates that you want your extension to act as a block. E.g.\n{% mytest arg1 arg2 %}\nstuff\nin\nhere\n{% endmytest %}\n\nnodes.CallBlock expects that you pass it a list of nodes representing the body (the inner statements) for your extension. Currently this is where you're passing None - hence your error.\nOnce you've parsed your arguments, you need to proceed to parse the body of the block. Fortunately, it's easy. You can simply do:\nbody = parser.parse_statements(['name:endmytest'], drop_needle=True) \n\nand then return a new node. The CallBlock receives a method to be called (in this case _mytestfunc) that provides the logic for your extension.\nbody = parser.parse_statements(['name:endmytest'], drop_needle=True) \nreturn nodes.CallBlock(self.call_method('_mytestfunc', args),[], [], body).set_lineno(lineno)\n\nAlternatively, if you don't want your extension to be a block tag, e.g.\n{% mytest arg1 arg2 %}\n\nyou shouldn't use nodes.CallBlock, you should just use nodes.Call instead, which doesn't take a body parameter. So just do: \nreturn self.call_method('_mytestfunc', args) \n\nself.call_method is simply a handy wrapper function that creates a Call node for you.\nI've spent a few days writing Jinja2 extensions and it's tricky. There's not much documentation (other than the code). The coffin GitHub project has a few examples of extensions here.\n" ]
[ 11 ]
[]
[]
[ "jinja2", "python", "templates" ]
stackoverflow_0001521909_jinja2_python_templates.txt
Q: Interpolation in SciPy: Finding X that produces Y Is there a better way to find which X gives me the Y I am looking for in SciPy? I just began using SciPy and I am not too familiar with each function. import numpy as np import matplotlib.pyplot as plt from scipy import interpolate x = [70, 80, 90, 100, 110] y = [49.7, 80.6, 122.5, 153.8, 163.0] tck = interpolate.splrep(x,y,s=0) xnew = np.arange(70,111,1) ynew = interpolate.splev(xnew,tck,der=0) plt.plot(x,y,'x',xnew,ynew) plt.show() t,c,k=tck yToFind = 140 print interpolate.sproot((t,c-yToFind,k)) #Lowers the spline at the abscissa A: The UnivariateSpline class in scipy makes doing splines much more pythonic. x = [70, 80, 90, 100, 110] y = [49.7, 80.6, 122.5, 153.8, 163.0] f = interpolate.UnivariateSpline(x, y, s=0) xnew = np.arange(70,111,1) plt.plot(x,y,'x',xnew,f(xnew)) To find x at y then do: yToFind = 140 yreduced = np.array(y) - yToFind freduced = interpolate.UnivariateSpline(x, yreduced, s=0) freduced.roots() I thought interpolating x in terms of y might work but it takes a somewhat different route. It might be closer with more points. A: If all you need is linear interpolation, you could use the interp function in numpy. A: I may have misunderstood your question, if so I'm sorry. I don't think you need to use SciPy. NumPy has a least squares function. #!/usr/bin/env python from numpy.linalg.linalg import lstsq def find_coefficients(data, exponents): X = tuple((tuple((pow(x,p) for p in exponents)) for (x,y) in data)) y = tuple(((y) for (x,y) in data)) x, resids, rank, s = lstsq(X,y) return x if __name__ == "__main__": data = tuple(( (1.47, 52.21), (1.50, 53.12), (1.52, 54.48), (1.55, 55.84), (1.57, 57.20), (1.60, 58.57), (1.63, 59.93), (1.65, 61.29), (1.68, 63.11), (1.70, 64.47), (1.73, 66.28), (1.75, 68.10), (1.78, 69.92), (1.80, 72.19), (1.83, 74.46) )) print find_coefficients(data, range(3)) This will return [ 128.81280358 -143.16202286 61.96032544]. 
>>> x=1.47 # the first of the input data >>> 128.81280358 + -143.16202286*x + 61.96032544*(x**2) 52.254697219095988 0.04 out, not bad
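If the spline machinery is more than the problem needs, the same "which x gives this y" lookup can be done with plain piecewise-linear interpolation, in the spirit of the `numpy.interp` suggestion above. This dependency-free sketch (the helper name `x_for_y` is my own) walks each segment and returns every crossing of the target y:

```python
def x_for_y(xs, ys, target):
    """Return every x where the piecewise-linear interpolant of the
    (xs, ys) samples crosses `target`. xs must be strictly increasing."""
    hits = []
    for i in range(len(xs) - 1):
        x0, x1 = xs[i], xs[i + 1]
        y0, y1 = ys[i], ys[i + 1]
        # The segment brackets the target iff target lies between y0 and y1.
        if min(y0, y1) <= target <= max(y0, y1) and y0 != y1:
            # Invert the line through (x0, y0)-(x1, y1).
            hits.append(x0 + (target - y0) * (x1 - x0) / (y1 - y0))
    return hits

xs = [70, 80, 90, 100, 110]
ys = [49.7, 80.6, 122.5, 153.8, 163.0]
print(x_for_y(xs, ys, 140))  # one crossing, between x=90 and x=100
```

Because the data is monotonic here, this agrees closely with the spline-root answer; on non-monotonic data it returns one x per crossing, just like `sproot`/`roots()`.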
Interpolation in SciPy: Finding X that produces Y
Is there a better way to find which X gives me the Y I am looking for in SciPy? I just began using SciPy and I am not too familiar with each function. import numpy as np import matplotlib.pyplot as plt from scipy import interpolate x = [70, 80, 90, 100, 110] y = [49.7, 80.6, 122.5, 153.8, 163.0] tck = interpolate.splrep(x,y,s=0) xnew = np.arange(70,111,1) ynew = interpolate.splev(xnew,tck,der=0) plt.plot(x,y,'x',xnew,ynew) plt.show() t,c,k=tck yToFind = 140 print interpolate.sproot((t,c-yToFind,k)) #Lowers the spline at the abscissa
[ "The UnivariateSpline class in scipy makes doing splines much more pythonic.\nx = [70, 80, 90, 100, 110]\ny = [49.7, 80.6, 122.5, 153.8, 163.0]\nf = interpolate.UnivariateSpline(x, y, s=0)\nxnew = np.arange(70,111,1)\n\nplt.plot(x,y,'x',xnew,f(xnew))\n\nTo find x at y then do:\nyToFind = 140\nyreduced = np.array(y) - yToFind\nfreduced = interpolate.UnivariateSpline(x, yreduced, s=0)\nfreduced.roots()\n\nI thought interpolating x in terms of y might work but it takes a somewhat different route. It might be closer with more points.\n", "If all you need is linear interpolation, you could use the interp function in numpy.\n", "I may have misunderstood your question, if so I'm sorry. I don't think you need to use SciPy. NumPy has a least squares function.\n#!/usr/bin/env python\n\nfrom numpy.linalg.linalg import lstsq\n\n\n\ndef find_coefficients(data, exponents):\n X = tuple((tuple((pow(x,p) for p in exponents)) for (x,y) in data))\n y = tuple(((y) for (x,y) in data))\n x, resids, rank, s = lstsq(X,y)\n return x\n\nif __name__ == \"__main__\":\n data = tuple((\n (1.47, 52.21),\n (1.50, 53.12),\n (1.52, 54.48),\n (1.55, 55.84),\n (1.57, 57.20),\n (1.60, 58.57),\n (1.63, 59.93),\n (1.65, 61.29),\n (1.68, 63.11),\n (1.70, 64.47),\n (1.73, 66.28),\n (1.75, 68.10),\n (1.78, 69.92),\n (1.80, 72.19),\n (1.83, 74.46)\n ))\n print find_coefficients(data, range(3))\n\nThis will return [ 128.81280358 -143.16202286 61.96032544].\n>>> x=1.47 # the first of the input data\n>>> 128.81280358 + -143.16202286*x + 61.96032544*(x**2)\n52.254697219095988\n\n0.04 out, not bad\n" ]
[ 18, 3, 0 ]
[]
[]
[ "interpolation", "numpy", "python", "scientific_computing", "scipy" ]
stackoverflow_0001029207_interpolation_numpy_python_scientific_computing_scipy.txt
Q: How to make these dynamically typed functions type-safe? Is there any programming language (or type system) in which you could express the following Python-functions in a statically typed and type-safe way (without having to use casts, runtime-checks etc)? #1: # My function - What would its type be? def Apply(x): return x(x) # Example usage print Apply(lambda _: 42) #2: white = None black = None def White(): for x in xrange(1, 10): print ("White move #%s" % x) yield black def Black(): for x in xrange(1, 10): print ("Black move #%s" % x) yield white white = White() black = Black() # What would the type of the iterator objects be? for it in white: it = it.next() A: 1# This is not typeable with a finite type. This means that very few (if any) programming languages will be able to type this. However, as you have demonstrated, there is a specific type for x that allows the function to be typed: x :: t -> B Where B is some concrete type. This results in apply being typed as: apply :: (t -> B) -> B Note that Hindley-Milner will not derive this type. 2# This is easy to represent in Haskell (left as an exercise to the reader...) A: I found a Haskell solution for #1 using Rank-N-Types (just for GHCi) {-# LANGUAGE RankNTypes #-} apply :: (forall a . a -> r) -> r apply x = x x apply $ const 42 -- Yields 42 A: When it comes to example #1, you would have to specify the return type of Apply(), and then all functions x that you pass also must return this. Most statically typed languages would not be able to do that safely without checks, as the x function you pass in can return whatever. In example #2 the type of the iterator objects are that they are iterators. If you mean what they return, they return iterators. I don't see why that wouldn't be possible in a static system, but maybe I'm missing something.
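For example #2, a runnable Python 3 rendering makes the iterator type concrete: each generator yields the other generator, so in `typing` terms the type is recursive, `Game = Iterator["Game"]`. This is a sketch of my own; the names `side` and `other` are mine, and the late-bound dict stands in for the module-level `white`/`black` globals in the question.

```python
from typing import Iterator

# Recursive type: a game iterator yields game iterators.
Game = Iterator["Game"]

def side(name, rounds=3):
    for n in range(1, rounds + 1):
        print("%s move #%d" % (name, n))
        yield other[name]  # hand control to the opposing generator

other = {}
white = side("White")
black = side("Black")
other["White"], other["Black"] = black, white  # wire up the mutual references

it = white
for _ in range(4):   # prints White and Black moves alternately
    it = next(it)
```

This matches the first answer's point: the type is expressible, but only as a recursive (non-finite when unfolded) type, which is exactly what makes example #1's `x(x)` hard for Hindley-Milner.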
How to make these dynamically typed functions type-safe?
Is there any programming language (or type system) in which you could express the following Python-functions in a statically typed and type-safe way (without having to use casts, runtime-checks etc)? #1: # My function - What would its type be? def Apply(x): return x(x) # Example usage print Apply(lambda _: 42) #2: white = None black = None def White(): for x in xrange(1, 10): print ("White move #%s" % x) yield black def Black(): for x in xrange(1, 10): print ("Black move #%s" % x) yield white white = White() black = Black() # What would the type of the iterator objects be? for it in white: it = it.next()
[ "1#\nThis is not typeable with a finite type. This means that very few (if any) programming languages will be able to type this.\nHowever, as you have demonstrated, there is a specific type for x that allows the function to be typed:\nx :: t -> B\n\nWhere B is some concrete type. This results in apply being typed as:\napply :: (t -> B) -> B\n\nNote that Hindley-Milner will not derive this type.\n2#\nThis is easy to represent in Haskell (left as an exercise to the reader...)\n", "I found a Haskell solution for #1 using Rank-N-Types (just for GHCi)\n{-# LANGUAGE RankNTypes #-}\napply :: (forall a . a -> r) -> r\napply x = x x\n\napply $ const 42 -- Yields 42\n\n", "When it comes to example #1, you would have to specify the return type of Apply(), and then all functions x that you pass also must return this. Most statically typed languages would not be able to do that safely without checks, as the x function you pass in can return whatever.\nIn example #2 the type of the iterator objects are that they are iterators. If you mean what they return, they return iterators. I don't see why that wouldn't be possible in a static system, but maybe I'm missing something.\n" ]
[ 4, 2, 0 ]
[]
[]
[ "language_agnostic", "python", "type_theory" ]
stackoverflow_0001079120_language_agnostic_python_type_theory.txt
Q: Strange numpy.float96 behaviour What am I missing: In [66]: import numpy as np In [67]: np.float(7.0 / 8) Out[67]: 0.875 #OK In [68]: np.float32(7.0 / 8) Out[68]: 0.875 #OK In [69]: np.float96(7.0 / 8) Out[69]: -2.6815615859885194e+154 #WTF In [70]: sys.version Out[70]: '2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)]' Edit. On cygwin the above code works OK: $ python Python 2.5.2 (r252:60911, Dec 2 2008, 09:26:14) [GCC 3.4.4 (cygming special, gdc 0.12, using dmd 0.125)] on cygwin Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as np >>> np.float(7.0 / 8) 0.875 >>> np.float96(7.0 / 8) 0.875 For the completeness, I checked this code in plain python (not Ipython): C:\temp>python Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as np >>> np.float(7.0 / 8) 0.875 >>> np.float96(7.0 / 8) -2.6815615859885194e+154 >>> EDIT I saw three bug reports on Numpy's trac site (976, 902, and 884), but this one doesn't seem to be related to string representation. Therefore I have opened a new bug (1263). Will update here the progress A: This works fine for me: In [1]: import numpy as np In [2]: np.float(7.0/8) Out[2]: 0.875 In [3]: np.float96(7.0/8) Out[3]: 0.875 What Numpy are you using? I'm using Python 2.6.2 and Numpy 1.3.0 and I'm on 64 bit Vista. I tried this same thing on another computer that is running 32 bit XP with Python 2.5.2 and Numpy 1.2.1 and to my surprise I get: In [2]: np.float96(7.0/8) Out[2]: -2.6815615859885194e+154 After some investigation, installing Python 2.6.3 and Numpy 1.3.0 on 32 bit XP, I've found: In [2]: np.float96(7.0/8) Out[2]: 0.875 So it must be a bug in either the old version of Numpy or a bug in the old version of Python... 
A: The problem is caused by incompatibilities between mingw compiler (the one used for the official numpy binary) and the MS runtime (the one printf is coming from). MS compiler consider long double and double to be equivalent types, and so does the MS C runtime (printf included). Mingw, for some reason, define long double as big enough to hold 80 bits extended precision numbers, but of course the MS printf does not know about it, and cannot print long double correctly. We circumvented around some problems by using our own formatting functions, but I think the real fix is to force long double to be a synonym to double when built with mingw. This will be done for numpy 1.5.0, I think. A: There were a few fixes for long double formatting issues on Windows in 1.3.0; at least http://projects.scipy.org/numpy/changeset/6219 http://projects.scipy.org/numpy/changeset/6218 http://projects.scipy.org/numpy/changeset/6217
Strange numpy.float96 behaviour
What am I missing: In [66]: import numpy as np In [67]: np.float(7.0 / 8) Out[67]: 0.875 #OK In [68]: np.float32(7.0 / 8) Out[68]: 0.875 #OK In [69]: np.float96(7.0 / 8) Out[69]: -2.6815615859885194e+154 #WTF In [70]: sys.version Out[70]: '2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)]' Edit. On cygwin the above code works OK: $ python Python 2.5.2 (r252:60911, Dec 2 2008, 09:26:14) [GCC 3.4.4 (cygming special, gdc 0.12, using dmd 0.125)] on cygwin Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as np >>> np.float(7.0 / 8) 0.875 >>> np.float96(7.0 / 8) 0.875 For the completeness, I checked this code in plain python (not Ipython): C:\temp>python Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as np >>> np.float(7.0 / 8) 0.875 >>> np.float96(7.0 / 8) -2.6815615859885194e+154 >>> EDIT I saw three bug reports on Numpy's trac site (976, 902, and 884), but this one doesn't seem to be related to string representation. Therefore I have opened a new bug (1263). Will update here the progress
[ "This works fine for me:\nIn [1]: import numpy as np\n\nIn [2]: np.float(7.0/8)\nOut[2]: 0.875\n\nIn [3]: np.float96(7.0/8)\nOut[3]: 0.875\n\nWhat Numpy are you using? I'm using Python 2.6.2 and Numpy 1.3.0 and I'm on 64 bit Vista.\nI tried this same thing on another computer that is running 32 bit XP with Python 2.5.2 and Numpy 1.2.1 and to my surprise I get:\nIn [2]: np.float96(7.0/8)\nOut[2]: -2.6815615859885194e+154\n\nAfter some investigation, installing Python 2.6.3 and Numpy 1.3.0 on 32 bit XP, I've found:\nIn [2]: np.float96(7.0/8)\nOut[2]: 0.875\n\nSo it must be a bug in either the old version of Numpy or a bug in the old version of Python...\n", "The problem is caused by incompatibilities between mingw compiler (the one used for the official numpy binary) and the MS runtime (the one printf is coming from).\nMS compiler consider long double and double to be equivalent types, and so does the MS C runtime (printf included). Mingw, for some reason, define long double as big enough to hold 80 bits extended precision numbers, but of course the MS printf does not know about it, and cannot print long double correctly.\nWe circumvented around some problems by using our own formatting functions, but I think the real fix is to force long double to be a synonym to double when built with mingw. This will be done for numpy 1.5.0, I think.\n", "There were a few fixes for long double formatting issues on Windows in 1.3.0; at least\nhttp://projects.scipy.org/numpy/changeset/6219\nhttp://projects.scipy.org/numpy/changeset/6218\nhttp://projects.scipy.org/numpy/changeset/6217\n" ]
[ 2, 2, 1 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0001565731_numpy_python.txt
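As a footnote to the bug chase above: the platform-specific names np.float96 / np.float128 can be sidestepped entirely with np.longdouble, which maps to whatever C long double the local toolchain provides. A minimal sketch, assuming NumPy is installed (the reported precision varies by platform):

```python
import numpy as np

# np.longdouble is the portable alias for extended precision; np.float96 /
# np.float128 only exist on platforms whose long double has that storage width.
x = np.longdouble(7.0) / np.longdouble(8)
print(x)

# finfo reports how much precision the platform actually provides
info = np.finfo(np.longdouble)
print(info.bits)
```

On builds where long double is just double (e.g. MSVC, as described in the mingw answer above), this still works but reports 64 bits.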
Q: recursive function for a django model instance i want to make message view show all other messages that led up to that message. the original message will not have a response_to value and should terminate the recursion. is there a better way to do this? (i'm looking at memory over speed, because a thread shouldn't typically be more than 10 - 20 messages long). def get_thread(msg,msg_set=[]): """ This will get all the messages that led up to any particular message it takes only a message, but if the message isn't the first message in a thread it appends it to a message list to be returned. the last message in the list should be the first message created """ if msg.response_to: return get_thread(msg.response_to, msg_set+[msg]) return msg_set+[msg] # Create your models here. class Message(models.Model): body = models.TextField() sender = models.ForeignKey(User,related_name='sender') recipients = models.ManyToManyField(User,related_name='recipients') timestamp = models.DateTimeField(default=datetime.datetime.now) response_to = models.ForeignKey(Message,related_name='response_to') def thread(self): return get_thread(self) A: Yes. not using recursion. def get_thread(msg): messages = [] # empty message set while msg.response_to: messages.append(msg) msg = msg.response_to messages.append(msg) # will append the original message return messages A: If you want to limit recursion depth, add a decrementing counter: class Message(Model): def get_thread(self, max_length = 10): if self.response_to: thread = response_to.get_thread(max_length-1) else: thread = [] thread.append(self) return thread Recursion is usually slower than a loop, and usually consumes more memory (as you need to do funny things with stacks to implement it), it's not a huge deal if you are only going 1000 deep (or so).
recursive function for a django model instance
I want to make the message view show all other messages that led up to that message. The original message will not have a response_to value and should terminate the recursion. Is there a better way to do this? (I'm looking at memory over speed, because a thread shouldn't typically be more than 10 - 20 messages long). def get_thread(msg,msg_set=[]): """ This will get all the messages that led up to any particular message it takes only a message, but if the message isn't the first message in a thread it appends it to a message list to be returned. the last message in the list should be the first message created """ if msg.response_to: return get_thread(msg.response_to, msg_set+[msg]) return msg_set+[msg] # Create your models here. class Message(models.Model): body = models.TextField() sender = models.ForeignKey(User,related_name='sender') recipients = models.ManyToManyField(User,related_name='recipients') timestamp = models.DateTimeField(default=datetime.datetime.now) response_to = models.ForeignKey(Message,related_name='response_to') def thread(self): return get_thread(self)
[ "Yes. not using recursion.\ndef get_thread(msg):\n messages = [] # empty message set\n\n while msg.response_to: \n messages.append(msg)\n msg = msg.response_to\n\n messages.append(msg) # will append the original message\n\n return messages\n\n", "If you want to limit recursion depth, add a decrementing counter:\nclass Message(Model):\n\n def get_thread(self, max_length = 10):\n if self.response_to:\n thread = response_to.get_thread(max_length-1)\n else:\n thread = []\n thread.append(self)\n return thread\n\nRecursion is usually slower than a loop, and usually consumes more memory (as you need to do funny things with stacks to implement it), it's not a huge deal if you are only going 1000 deep (or so).\n" ]
[ 4, 0 ]
[]
[]
[ "django", "python", "recursion" ]
stackoverflow_0001797586_django_python_recursion.txt
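The iterative answer above does not actually need Django to be exercised; a throwaway stand-in class with the same response_to attribute is enough to check the ordering (the original message comes last):

```python
class Msg:
    """Minimal stand-in for the Django Message model above."""
    def __init__(self, body, response_to=None):
        self.body = body
        self.response_to = response_to

def get_thread(msg):
    # walk the reply chain iteratively instead of recursing
    messages = []
    while msg.response_to:
        messages.append(msg)
        msg = msg.response_to
    messages.append(msg)  # the original message ends the list
    return messages

original = Msg("original")
reply = Msg("first reply", response_to=original)
latest = Msg("second reply", response_to=reply)

thread = get_thread(latest)
print([m.body for m in thread])  # ['second reply', 'first reply', 'original']
```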
Q: Python and BeautifulSoup, not finding 'a' Here's a piece of HTML code (from delicious): <h4> <a rel="nofollow" class="taggedlink " href="http://imfy.us/" >Generate Secure Links with Anonymous Referers &amp; Anti-Bot Protection</a> <span class="saverem"> <em class="bookmark-actions"> <strong><a class="inlinesave action" href="/save?url=http%3A%2F%2Fimfy.us%2F&amp;title=Generate%20Secure%20Links%20with%20Anonymous%20Referers%20%26%20Anti-Bot%20Protection&amp;jump=%2Fdux&amp;key=fFS4QzJW2lBf4gAtcrbuekRQfTY-&amp;original_user=dux&amp;copyuser=dux&amp;copytags=web+apps+url+security+generator+shortener+anonymous+links">SAVE</a></strong> </em> </span> </h4> I'm trying to find all the links where class="inlinesave action". Here's the code: sock = urllib2.urlopen('http://delicious.com/theuser') html = sock.read() soup = BeautifulSoup(html) tags = soup.findAll('a', attrs={'class':'inlinesave action'}) print len(tags) But it doesn't find anything! Any thoughts? Thanks A: If you want to look for an anchor with exactly those two classes you'd, have to use a regexp, I think: tags = soup.findAll('a', attrs={'class': re.compile(r'\binlinesave\b.*\baction\b')}) Keep in mind that this regexp won't work if the ordering of the class names is reversed (class="action inlinesave"). The following statement should work for all cases (even though it looks ugly imo.): soup.findAll('a', attrs={'class': re.compile(r'\baction\b.*\binlinesave\b|\binlinesave\b.*\baction\b') }) A: Python string methods html=open("file").read() for item in html.split("<strong>"): if "class" in item and "inlinesave action" in item: url_with_junk = item.split('href="')[1] m = url_with_junk.index('">') print url_with_junk[:m] A: May be that issue is fixed in verion 3.1.0, I could parse yours, >>> html="""<h4> ... <a rel="nofollow" class="taggedlink " href="http://imfy.us/" >Generate Secure Links with Anony ... <span class="saverem"> ... <em class="bookmark-actions"> ... 
<strong><a class="inlinesave action" href="/save?url=http%3A%2F%2Fimfy.us%2F&amp;title=Gen ... </em> ... </span> ... </h4>""" >>> >>> from BeautifulSoup import BeautifulSoup >>> soup = BeautifulSoup(html) >>> tags = soup.findAll('a', attrs={'class':'inlinesave action'}) >>> print len(tags) 1 >>> tags [<a class="inlinesave action" href="/save?url=http%3A%2F%2Fimfy.us%2F&amp;title=Generate%20Secure% >>> I have tried with BeautifulSoup 2.1.1 also, its does not work at all. A: You might make some forward progress using pyparsing: from pyparsing import makeHTMLTags, withAttribute htmlsrc="""<h4>... etc.""" atag = makeHTMLTags("a")[0] atag.setParseAction(withAttribute(("class","inlinesave action"))) for result in atag.searchString(htmlsrc): print result.href Gives (long result output snipped at '...'): /save?url=http%3A%2F%2Fimfy.us%2F&amp;title=Genera...+anonymous+links
Python and BeautifulSoup, not finding 'a'
Here's a piece of HTML code (from delicious): <h4> <a rel="nofollow" class="taggedlink " href="http://imfy.us/" >Generate Secure Links with Anonymous Referers &amp; Anti-Bot Protection</a> <span class="saverem"> <em class="bookmark-actions"> <strong><a class="inlinesave action" href="/save?url=http%3A%2F%2Fimfy.us%2F&amp;title=Generate%20Secure%20Links%20with%20Anonymous%20Referers%20%26%20Anti-Bot%20Protection&amp;jump=%2Fdux&amp;key=fFS4QzJW2lBf4gAtcrbuekRQfTY-&amp;original_user=dux&amp;copyuser=dux&amp;copytags=web+apps+url+security+generator+shortener+anonymous+links">SAVE</a></strong> </em> </span> </h4> I'm trying to find all the links where class="inlinesave action". Here's the code: sock = urllib2.urlopen('http://delicious.com/theuser') html = sock.read() soup = BeautifulSoup(html) tags = soup.findAll('a', attrs={'class':'inlinesave action'}) print len(tags) But it doesn't find anything! Any thoughts? Thanks
[ "If you want to look for an anchor with exactly those two classes you'd, have to use a regexp, I think:\ntags = soup.findAll('a', attrs={'class': re.compile(r'\\binlinesave\\b.*\\baction\\b')})\n\nKeep in mind that this regexp won't work if the ordering of the class names is reversed (class=\"action inlinesave\"). \nThe following statement should work for all cases (even though it looks ugly imo.):\nsoup.findAll('a', \n attrs={'class': \n re.compile(r'\\baction\\b.*\\binlinesave\\b|\\binlinesave\\b.*\\baction\\b')\n })\n\n", "Python string methods\nhtml=open(\"file\").read()\nfor item in html.split(\"<strong>\"):\n if \"class\" in item and \"inlinesave action\" in item:\n url_with_junk = item.split('href=\"')[1]\n m = url_with_junk.index('\">') \n print url_with_junk[:m]\n\n", "May be that issue is fixed in verion 3.1.0, I could parse yours,\n>>> html=\"\"\"<h4>\n... <a rel=\"nofollow\" class=\"taggedlink \" href=\"http://imfy.us/\" >Generate Secure Links with Anony\n... <span class=\"saverem\">\n... <em class=\"bookmark-actions\">\n... <strong><a class=\"inlinesave action\" href=\"/save?url=http%3A%2F%2Fimfy.us%2F&amp;title=Gen\n... </em>\n... </span>\n... </h4>\"\"\"\n>>>\n>>> from BeautifulSoup import BeautifulSoup\n>>> soup = BeautifulSoup(html)\n>>> tags = soup.findAll('a', attrs={'class':'inlinesave action'})\n>>> print len(tags)\n1\n>>> tags\n[<a class=\"inlinesave action\" href=\"/save?url=http%3A%2F%2Fimfy.us%2F&amp;title=Generate%20Secure%\n>>>\n\nI have tried with BeautifulSoup 2.1.1 also, its does not work at all.\n", "You might make some forward progress using pyparsing:\nfrom pyparsing import makeHTMLTags, withAttribute\n\nhtmlsrc=\"\"\"<h4>... etc.\"\"\"\n\natag = makeHTMLTags(\"a\")[0]\natag.setParseAction(withAttribute((\"class\",\"inlinesave action\")))\n\nfor result in atag.searchString(htmlsrc):\n print result.href\n\nGives (long result output snipped at '...'):\n/save?url=http%3A%2F%2Fimfy.us%2F&amp;title=Genera...+anonymous+links\n\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "beautifulsoup", "html", "python" ]
stackoverflow_0001796725_beautifulsoup_html_python.txt
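The order-independent regex from the first answer can be checked in isolation against bare class strings, without BeautifulSoup in the loop (note that BeautifulSoup 4 later made the class attribute multi-valued, so searching for a single class name like 'inlinesave' matches on its own there):

```python
import re

# matches 'inlinesave' and 'action' as whole words, in either order
pattern = re.compile(r'\baction\b.*\binlinesave\b|\binlinesave\b.*\baction\b')

print(bool(pattern.search('inlinesave action')))  # True
print(bool(pattern.search('action inlinesave')))  # True
print(bool(pattern.search('inlinesaveaction')))   # False (word boundaries)
```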
Q: basic python module design question I have: lib/ lib/__init__.py lib/game.py In __init__.py I'd like to define a variable that can be accessed by any class inside lib, like so: BASE = 'http://www.whatever.com' And then inside game.py, access it inside the Game class: class Game: def __init__(self, game_id): self.game_id = game_id url = '%syear_%s/month_%s/day_%s/%s/' % (lib.BASE, year, month, day, game_id) Yeah so clearly 'lib.BASE' isn't right - what's the convention here? Is there a tidier, more pythonic way to handle what I'd call package-global variables? A: See http://docs.python.org/tutorial/modules.html#intra-package-references So you could have a lib/settings.py file which contains the line BASE = 'http://www.whatever.com' and then say from settings import * in game.py you should then be able to write url = '%syear_%s/month_%s/day_%s/%s/' % (BASE, year, month, day, game_id)
basic python module design question
I have: lib/ lib/__init__.py lib/game.py In __init__.py I'd like to define a variable that can be accessed by any class inside lib, like so: BASE = 'http://www.whatever.com' And then inside game.py, access it inside the Game class: class Game: def __init__(self, game_id): self.game_id = game_id url = '%syear_%s/month_%s/day_%s/%s/' % (lib.BASE, year, month, day, game_id) Yeah so clearly 'lib.BASE' isn't right - what's the convention here? Is there a tidier, more pythonic way to handle what I'd call package-global variables?
[ "See http://docs.python.org/tutorial/modules.html#intra-package-references\nSo you could have a lib/settings.py file which contains the line\nBASE = 'http://www.whatever.com'\n\nand then say\nfrom settings import *\n\nin game.py you should then be able to write\nurl = '%syear_%s/month_%s/day_%s/%s/' % (BASE, year, month, day, game_id)\n\n" ]
[ 4 ]
[]
[]
[ "python" ]
stackoverflow_0001798656_python.txt
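The answer's approach generalizes: any name defined in lib/__init__.py (or a lib/settings.py) can be imported by modules inside the package. A runnable sketch that builds the question's layout in a temporary directory (the url format is shortened here, and the file-writing is only scaffolding for the demo):

```python
import os
import sys
import tempfile
import textwrap

# recreate the lib/ package from the question in a throwaway directory
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, "lib"))

with open(os.path.join(root, "lib", "__init__.py"), "w") as f:
    f.write("BASE = 'http://www.whatever.com/'\n")

with open(os.path.join(root, "lib", "game.py"), "w") as f:
    f.write(textwrap.dedent("""\
        from lib import BASE  # the package-level constant from __init__.py

        class Game:
            def __init__(self, game_id):
                self.game_id = game_id
                self.url = '%sgame_%s/' % (BASE, game_id)
        """))

sys.path.insert(0, root)
from lib.game import Game

print(Game(7).url)  # http://www.whatever.com/game_7/
```

One caveat for later readers: the answer's from settings import * is a Python 2 implicit relative import; on Python 3 it would need to be from lib.settings import * or the relative from .settings import *.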
Q: Overcoming Python's limitations regarding instance methods It seems that Python has some limitations regarding instance methods. Instance methods can't be copied. Instance methods can't be pickled. This is problematic for me, because I work on a very object-oriented project in which I reference instance methods, and there's use of both deepcopying and pickling. The pickling thing is done mostly by the multiprocessing mechanism. What would be a good way to solve this? I did some ugly workaround to the copying issue, but I'm looking for a nicer solution to both problems. Does anyone have any suggestions? Update: My use case: I have a tiny event system. Each event has an .action attribute that points to a function it's supposed to trigger, and sometimes that function is an instance method of some object. A: You might be able to do this using copy_reg.pickle. In Python 2.6: import copy_reg import types def reduce_method(m): return (getattr, (m.__self__, m.__func__.__name__)) copy_reg.pickle(types.MethodType, reduce_method) This does not store the code of the method, just its name; but that will work correctly in the common case. This makes both pickling and copying work! A: REST - Representation State Transfer. Just send state, not methods. To transfer an object X from A to B, we do this. A encode the state of X in some handy, easy-to-parse notation. JSON is popular. A sends the JSON text to B. B decodes the state of X form JSON notation, reconstructing X. B must have the class definitions for X's class for this to work. B must have all functions and other class definitions on which X's class depends. In short, both A and B have all the definitions. Only a representation of the object's state gets moved around. See any article on REST. http://en.wikipedia.org/wiki/Representational_State_Transfer http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm
Overcoming Python's limitations regarding instance methods
It seems that Python has some limitations regarding instance methods. Instance methods can't be copied. Instance methods can't be pickled. This is problematic for me, because I work on a very object-oriented project in which I reference instance methods, and there's use of both deepcopying and pickling. The pickling thing is done mostly by the multiprocessing mechanism. What would be a good way to solve this? I did some ugly workaround to the copying issue, but I'm looking for a nicer solution to both problems. Does anyone have any suggestions? Update: My use case: I have a tiny event system. Each event has an .action attribute that points to a function it's supposed to trigger, and sometimes that function is an instance method of some object.
[ "You might be able to do this using copy_reg.pickle. In Python 2.6:\nimport copy_reg\nimport types\n\ndef reduce_method(m):\n return (getattr, (m.__self__, m.__func__.__name__))\n\ncopy_reg.pickle(types.MethodType, reduce_method)\n\nThis does not store the code of the method, just its name; but that will work correctly in the common case.\nThis makes both pickling and copying work!\n", "REST - Representation State Transfer. Just send state, not methods.\nTo transfer an object X from A to B, we do this.\n\nA encode the state of X in some\nhandy, easy-to-parse notation. JSON\nis popular.\nA sends the JSON text to B.\nB decodes the state of X form JSON\nnotation, reconstructing X.\n\nB must have the class definitions for X's class for this to work. B must have all functions and other class definitions on which X's class depends. In short, both A \nand B have all the definitions. Only a representation of the object's state gets moved \naround.\nSee any article on REST.\nhttp://en.wikipedia.org/wiki/Representational_State_Transfer\nhttp://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm\n" ]
[ 15, 3 ]
[ "pickle the instance and then access the method after unpickling it. Pickling a method of an instance doesn't make sense because it relies on the instance. If it doesn't, then write it as an independent function. \nimport pickle\n\nclass A:\n def f(self):\n print 'hi'\n\nx = A()\nf = open('tmp', 'w')\nr = pickle.dump(x, f)\nf.close()\nf = open('tmp', 'r')\npickled_x = pickle.load(f)\npickled_x.f()\n\n" ]
[ -3 ]
[ "copy", "instance_method", "oop", "python" ]
stackoverflow_0001798450_copy_instance_method_oop_python.txt
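Worth noting for later readers: the limitation is essentially a Python 2 one. On Python 3.5+ both pickle and copy handle bound methods out of the box, using the same instance-plus-method-name reduction as the copy_reg recipe above. A quick check:

```python
import copy
import pickle

class Game:
    def __init__(self, game_id):
        self.game_id = game_id

    def describe(self):
        return "game %s" % self.game_id

bound = Game(7).describe

# round-trip through pickle: the instance and the method *name* are stored,
# so the class must still be importable on the receiving side
restored = pickle.loads(pickle.dumps(bound))
print(restored())  # game 7

# deepcopy likewise copies __self__ and rebinds the method to the copy
copied = copy.deepcopy(bound)
print(copied())  # game 7
```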
Q: Download file using partial download (HTTP) Is there a way to download huge and still growing file over HTTP using the partial-download feature? It seems that this code downloads file from scratch every time it executed: import urllib urllib.urlretrieve ("http://www.example.com/huge-growing-file", "huge-growing-file") I'd like: To fetch just the newly-written data Download from scratch only if the source file becomes smaller (for example it has been rotated). A: It is possible to do partial download using the range header, the following will request a selected range of bytes: req = urllib2.Request('http://www.python.org/') req.headers['Range'] = 'bytes=%s-%s' % (start, end) f = urllib2.urlopen(req) For example: >>> req = urllib2.Request('http://www.python.org/') >>> req.headers['Range'] = 'bytes=%s-%s' % (100, 150) >>> f = urllib2.urlopen(req) >>> f.read() 'l1-transitional.dtd">\n\n\n<html xmlns="http://www.w3.' Using this header you can resume partial downloads. In your case all you have to do is to keep track of already downloaded size and request a new range. Keep in mind that the server need to accept this header for this to work. A: This is quite easy to do using TCP sockets and raw HTTP. The relevant request header is "Range". An example request might look like: mysock = connect(("www.example.com", 80)) mysock.write( "GET /huge-growing-file HTTP/1.1\r\n"+\ "Host: www.example.com\r\n"+\ "Range: bytes=XXXX-\r\n"+\ "Connection: close\r\n\r\n") Where XXXX represents the number of bytes you've already retrieved. Then you can read the response headers and any content from the server. If the server returns a header like: Content-Length: 0 You know you've got the entire file. If you want to be particularly nice as an HTTP client you can look into "Connection: keep-alive". Perhaps there is a python library that does everything I have described (perhaps even urllib2 does it!) but I'm not familiar with one.
Download file using partial download (HTTP)
Is there a way to download a huge and still-growing file over HTTP using the partial-download feature? It seems that this code downloads the file from scratch every time it is executed: import urllib urllib.urlretrieve ("http://www.example.com/huge-growing-file", "huge-growing-file") I'd like: To fetch just the newly-written data Download from scratch only if the source file becomes smaller (for example it has been rotated).
[ "It is possible to do partial download using the range header, the following will request a selected range of bytes:\nreq = urllib2.Request('http://www.python.org/')\nreq.headers['Range'] = 'bytes=%s-%s' % (start, end)\nf = urllib2.urlopen(req)\n\nFor example:\n>>> req = urllib2.Request('http://www.python.org/')\n>>> req.headers['Range'] = 'bytes=%s-%s' % (100, 150)\n>>> f = urllib2.urlopen(req)\n>>> f.read()\n'l1-transitional.dtd\">\\n\\n\\n<html xmlns=\"http://www.w3.'\n\nUsing this header you can resume partial downloads. In your case all you have to do is to keep track of already downloaded size and request a new range.\nKeep in mind that the server need to accept this header for this to work.\n", "This is quite easy to do using TCP sockets and raw HTTP. The relevant request header is \"Range\".\nAn example request might look like:\nmysock = connect((\"www.example.com\", 80))\nmysock.write(\n \"GET /huge-growing-file HTTP/1.1\\r\\n\"+\\\n \"Host: www.example.com\\r\\n\"+\\\n \"Range: bytes=XXXX-\\r\\n\"+\\\n \"Connection: close\\r\\n\\r\\n\")\n\nWhere XXXX represents the number of bytes you've already retrieved. Then you can read the response headers and any content from the server. If the server returns a header like:\nContent-Length: 0\n\nYou know you've got the entire file.\nIf you want to be particularly nice as an HTTP client you can look into \"Connection: keep-alive\". Perhaps there is a python library that does everything I have described (perhaps even urllib2 does it!) but I'm not familiar with one.\n" ]
[ 43, 2 ]
[ "If I understand your question correctly, the file is not changing during download, but is updated regularly. If that is the question, rsync is the answer.\nIf the file is being updated continually including during download, you'll need to modify rsync or a bittorrent program. They split files into separate chunks and download or update the chunks independently. When you get to the end of the file from the first iteration, repeat to get the appended chunk; continue as necessary. With less efficiency, one could just repeatedly rsync.\n" ]
[ -1 ]
[ "http", "partial", "python" ]
stackoverflow_0001798879_http_partial_python.txt
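On Python 3 the urllib2 calls above become urllib.request. The sketch below only builds the ranged request and inspects the header, with no network I/O; real resuming logic would track how many bytes are already on disk and re-request from that offset:

```python
import urllib.request

def range_request(url, start, end=""):
    # an open-ended 'bytes=N-' asks the server for everything from offset N on
    req = urllib.request.Request(url)
    req.add_header("Range", "bytes=%s-%s" % (start, end))
    return req

req = range_request("http://www.example.com/huge-growing-file", 1024)
print(req.get_header("Range"))  # bytes=1024-
```

A server that honors the header answers 206 Partial Content; a plain 200 means it ignored the range and sent the whole file, which the caller should detect before appending to the local copy.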
Q: Python, thread and gobject I am writing a program by a framework using pygtk. The main program doing the following things: Create a watchdog thread to monitor some resource Create a client to receive data from socket call gobject.Mainloop() but it seems after my program enter the Mainloop, the watchdog thread also won't run. My workaround is to use gobject.timeout_add to run the monitor thing. But why does creating another thread not work? Here is my code: import gobject import time from threading import Thread class MonitorThread(Thread): def __init__(self): Thread.__init__(self) def run(self): print "Watchdog running..." time.sleep(10) def main(): mainloop = gobject.MainLoop(is_running=True) def quit(): mainloop.quit() def sigterm_cb(): gobject.idle_add(quit) t = MonitorThread() t.start() print "Enter mainloop..." while mainloop.is_running(): try: mainloop.run() except KeyboardInterrupt: quit() if __name__ == '__main__': main() The program output only "Watchdog running...Enter mainloop..", then nothing. Seems thread never run after entering mainloop. A: Can you post some code? It could be that you have problems with the Global Interpreter Lock. Your problem solved by someone else :). I could copy-paste the article here, but in short gtk's c-threads clash with Python threads. You need to disable c-threads by calling gobject.threads_init() and all should be fine. A: You have failed to initialise the threading-based code-paths in gtk. You must remember two things when using threads with PyGTK: GTK Threads must be initialised with gtk.gdk.threads_init: From http://unpythonic.blogspot.com/2007/08/using-threads-in-pygtk.html, copyright entirely retained by author. This copyright notice must not be removed. You can think glib/gobject instead of pygtk, it's the same thing.
Python, thread and gobject
I am writing a program with a framework using pygtk. The main program does the following: Create a watchdog thread to monitor some resource Create a client to receive data from a socket Call gobject.MainLoop() But it seems that after my program enters the MainLoop, the watchdog thread won't run. My workaround is to use gobject.timeout_add to run the monitoring. But why does creating another thread not work? Here is my code: import gobject import time from threading import Thread class MonitorThread(Thread): def __init__(self): Thread.__init__(self) def run(self): print "Watchdog running..." time.sleep(10) def main(): mainloop = gobject.MainLoop(is_running=True) def quit(): mainloop.quit() def sigterm_cb(): gobject.idle_add(quit) t = MonitorThread() t.start() print "Enter mainloop..." while mainloop.is_running(): try: mainloop.run() except KeyboardInterrupt: quit() if __name__ == '__main__': main() The program outputs only "Watchdog running...Enter mainloop..", then nothing. It seems the thread never runs after entering the main loop.
[ "Can you post some code? It could be that you have problems with the Global Interpreter Lock.\nYour problem solved by someone else :). I could copy-paste the article here, but in short gtk's c-threads clash with Python threads. You need to disable c-threads by calling gobject.threads_init() and all should be fine.\n", "You have failed to initialise the threading-based code-paths in gtk.\n\nYou must remember two things when\n using threads with PyGTK:\n\nGTK Threads must be initialised with gtk.gdk.threads_init:\n\n\nFrom http://unpythonic.blogspot.com/2007/08/using-threads-in-pygtk.html, copyright entirely retained by author. This copyright notice must not be removed.\nYou can think glib/gobject instead of pygtk, it's the same thing.\n" ]
[ 9, 2 ]
[]
[]
[ "multithreading", "pygtk", "python" ]
stackoverflow_0001796588_multithreading_pygtk_python.txt
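Beyond calling gobject.threads_init() before starting threads, the usual design is to make the main loop the only place that touches shared state and have the watchdog thread merely hand results over (with PyGTK the hand-off is gobject.idle_add). The GTK-free sketch below uses a plain queue as a stand-in so it can run anywhere:

```python
import queue
import threading

updates = queue.Queue()

def watchdog():
    # stand-in for the resource monitor: report three readings, then a sentinel
    for i in range(3):
        updates.put("check %d" % i)
    updates.put(None)

threading.Thread(target=watchdog).start()

# "main loop": consume updates; with PyGTK each item would instead be
# pushed into the real loop via gobject.idle_add(callback, item)
seen = []
while True:
    item = updates.get()
    if item is None:
        break
    seen.append(item)

print(seen)  # ['check 0', 'check 1', 'check 2']
```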
Q: Launching default application for given type of file, OS X I'm writing a python script that generates html file. Every time I run this script I'd like at the end to open default system browser for this file. It's all in OS X environment. What python code can launch Safari/Firefox/whatever is system default html viewer and open given file? subprocess.call doesn't seem to do the trick. A: What python code can launch Safari/Firefox/whatever is system default html viewer and open given file? There is a webbrowser module in python, try this: import webbrowser webbrowser.open('file://%s' % path) This will open a new tab in the default browser. There are methods to open a new tab, new window and other options. A: Do you know about the open command in Mac OS X? I think you can solve your problem by calling it from Python. man open for details: The open command opens a file (or a directory or URL), just as if you had double-clicked the file's icon. If no application name is specified, the default application as determined via LaunchServices is used to open the specified files. A: import ic ic.launchurl('file:///somefile.html')
Launching default application for given type of file, OS X
I'm writing a Python script that generates an HTML file. Every time I run this script, I'd like it to open the default system browser for this file at the end. It's all in an OS X environment. What Python code can launch Safari/Firefox/whatever the system default HTML viewer is and open the given file? subprocess.call doesn't seem to do the trick.
[ "\nWhat python code can launch\n Safari/Firefox/whatever is system\n default html viewer and open given\n file?\n\nThere is a webbrowser module in python, try this:\nimport webbrowser\nwebbrowser.open('file://%s' % path)\n\nThis will open a new tab in the default browser.\nThere are methods to open a new tab, new window and other options.\n", "Do you know about the open command in Mac OS X? I think you can solve your problem by calling it from Python.\nman open for details:\nThe open command opens a file (or a directory or URL), just as if you had double-clicked the\nfile's icon. If no application name is specified, the default application as determined via\nLaunchServices is used to open the specified files.\n", "\nimport ic\nic.launchurl('file:///somefile.html')\n\n" ]
[ 3, 1, 0 ]
[]
[]
[ "python", "subprocess" ]
stackoverflow_0001798351_python_subprocess.txt
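The webbrowser answer wants a file:// URL rather than a bare path; rather than hand-building it with the '%s' format shown above, pathlib can construct and percent-encode it. A sketch (the path is made up, and no browser is launched here):

```python
import pathlib

generated = pathlib.PurePosixPath("/tmp/report output.html")

# as_uri() requires an absolute path and percent-encodes it for us
url = generated.as_uri()
print(url)  # file:///tmp/report%20output.html

# then: webbrowser.open(url)
```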
Q: Web service: PHP or Ruby on Rails or Python? I am a .Net / SQL Server developer via my daytime job, and on the side I do some objective C development for the iPhone. I would like to develop a web service and since dreamhost supports mySql, python, ruby on rails and PHP5, I would like to create it using one of those languages. If you had no experience in either python, Ruby on Rails or PHP, which would you go with and why? The service basically just takes a request and talks to a MySql database. Note: Was planning on using the SOAP protocol.. though I am open to suggestions since I have a clean slate with these languages. A: Ruby-on-rails, Python and PHP would all be excellent choices for developing a web service in. All the languages are capable (with of course Ruby being the language that Ruby on Rails is written in), have strong frameworks if that is your fancy (Django being a good python example, and something like Drupal or CakePHP being good PHP examples) and can play nicely with MySql. I'd say that it would depend mostly on your past experience and what you'd be the most comfortable with. Assuming that you're developing C# on .NET and have experience with Objective-C PHP may be a good choice because it is most certainly from the C family of languages. So the syntax might be more familiar and a bit easier to deal with. I'm a PHP developer so I'll give you that slant and let more knowledgeable developers with the others give theirs as well. PHP is tightly integrated with Apache, which can make some of the more mundane tasks that you'd have to handle with the others a bit more trivial (though when working with a framework those are usually removed). The PHP documentation is second to none and is a great resource for getting up and going easily. It has decent speed and there are good caching mechanisms out there to get more performance out of it. I know that getting up and running with PHP on Dreamhost is trivial. 
I haven't done it in the other instances although it wouldn't surprise me if those were just as easy as well. I'd suggest digging a bit more into the documentation and frameworks for each language to find out what suits you best. A: The short answer is, I'd go with PHP. I have some experience in all two of your three choices: PHP, Ruby with Ruby on Rails. If I had no experience however and I was looking to set out and create a web service that largely just interacts with a database and I wanted it done this weekend, I'd choose PHP. If I had no experience with any of the above languages and I wanted to project done in a couple of weeks, I'd choose rails. I personally have much less experience with with Python and Django so I can't really comment. Ruby with Ruby on Rails: I've been working with Ruby and ruby on rails for several years now. I previously had experience in Java (which is roughly analogous to your experience in .Net). I found the transition to rails to be a little bit bumpy. I wanted to jump right in and start understanding how rails works and how to build a web application but with no understanding of ruby this was difficult. There are a lot of example out there that will help you build an application quickly but often times the quickness comes at the expense of understanding. To build solid rails web application you need a good understanding of ruby and of the rails frameworks. Rails is fantastic, but for building something you understand and getting it up and running quickly it may not be your best choice. Also, rails hosting has come a long way (you can tell because we're starting to see many hosts offer it) but there are still some bumps. PHP: PHP is fantastic for getting something up and running quickly. You can upload files and immediately see if your result if working. If you keep your database setup clean (and it sounds like you will, because you work with databases all day) the PHP shouldn't be too bad. 
I would look into an Object Relational Mapper to help keep your PHP even cleaner, I've heard good thinks about Doctorine. Python: I would imagine that you'd probably use Django with Python. Because of this you're probably going to come up against the same stumbling blocks that you would with ruby + ruby on rails. If you'd like to start to learn Ruby on Rails, I'd recommend checking out this thread on stackoverflow. Finally, if you'd like to work with a PHP framework, there's a great thread on that here on stackoverflow. A: I have developed in Python and PHP and my personal preference would be Python. Django is a great, easy to understand, light-weight framework for Python. Django Site If you went the PHP route, I would recommend Kohana. Kohana Site A: The first programming I ever did was with PHP, and it's definitely very easy to get going with PHP on Dreamhost (I use Dreamhost for my PHP-based blog as well as Ruby on Rails project hosting). Ruby on Rails is pretty easy to get going on Dreamhost as well, now that they've started using Passenger. I learned Ruby and Ruby on Rails several years after I became comfortable in PHP and I prefer it to PHP because it feels much cleaner and I love the Model View Controller pattern for separation of code and content. I tried to learn Django after that but found myself frustrated because the meaning of "view" was different in Django than in Rails/MVC, so I didn't get very far. If you are doing quick-and-dirty, you might go with PHP. You could look into various frameworks for PHP, such as CakePHP or Symfony, for cleaner, more organized development. If you're willing to spend more time learning (first for the language Ruby, then for the framework Ruby on Rails), you could go with Ruby on Rails. I really enjoy Rails development, but there was a learning curve since I learned both Ruby and Rails at the same time. There's a lot of information out there about deploying Rails apps on Dreamhost. 
A: This is an extremely subjective question, and even if you gave us the specifics of your web service, we could argue about the best choice all day. I'm a PHP developer, so I could whip off a basic web service with no problems. There are lots of simple PHP frameworks available that would handle that very nicely. That being said, Python and Django give you some great out-of-the-box functionality, and it's on my list of things to learn. You could achieve something pretty fast with that.
Web service: PHP or Ruby on Rails or Python?
I am a .Net / SQL Server developer via my daytime job, and on the side I do some objective C development for the iPhone. I would like to develop a web service and since dreamhost supports mySql, python, ruby on rails and PHP5, I would like to create it using one of those languages. If you had no experience in either python, Ruby on Rails or PHP, which would you go with and why? The service basically just takes a request and talks to a MySql database. Note: Was planning on using the SOAP protocol.. though I am open to suggestions since I have a clean slate with these languages.
[ "Ruby-on-rails, Python and PHP would all be excellent choices for developing a web service in. All the languages are capable (with of course Ruby being the language that Ruby on Rails is written in), have strong frameworks if that is your fancy (Django being a good python example, and something like Drupal or CakePHP being good PHP examples) and can play nicely with MySql.\nI'd say that it would depend mostly on your past experience and what you'd be the most comfortable with. Assuming that you're developing C# on .NET and have experience with Objective-C PHP may be a good choice because it is most certainly from the C family of languages. So the syntax might be more familiar and a bit easier to deal with.\nI'm a PHP developer so I'll give you that slant and let more knowledgeable developers with the others give theirs as well. PHP is tightly integrated with Apache, which can make some of the more mundane tasks that you'd have to handle with the others a bit more trivial (though when working with a framework those are usually removed). The PHP documentation is second to none and is a great resource for getting up and going easily. It has decent speed and there are good caching mechanisms out there to get more performance out of it. I know that getting up and running with PHP on Dreamhost is trivial. I haven't done it in the other instances although it wouldn't surprise me if those were just as easy as well.\nI'd suggest digging a bit more into the documentation and frameworks for each language to find out what suits you best. \n", "The short answer is, I'd go with PHP.\nI have some experience in all two of your three choices: PHP, Ruby with Ruby on Rails. If I had no experience however and I was looking to set out and create a web service that largely just interacts with a database and I wanted it done this weekend, I'd choose PHP. If I had no experience with any of the above languages and I wanted to project done in a couple of weeks, I'd choose rails. 
I personally have much less experience with with Python and Django so I can't really comment.\nRuby with Ruby on Rails: I've been working with Ruby and ruby on rails for several years now. I previously had experience in Java (which is roughly analogous to your experience in .Net). I found the transition to rails to be a little bit bumpy. I wanted to jump right in and start understanding how rails works and how to build a web application but with no understanding of ruby this was difficult. There are a lot of example out there that will help you build an application quickly but often times the quickness comes at the expense of understanding. To build solid rails web application you need a good understanding of ruby and of the rails frameworks. Rails is fantastic, but for building something you understand and getting it up and running quickly it may not be your best choice. Also, rails hosting has come a long way (you can tell because we're starting to see many hosts offer it) but there are still some bumps.\nPHP: PHP is fantastic for getting something up and running quickly. You can upload files and immediately see if your result if working. If you keep your database setup clean (and it sounds like you will, because you work with databases all day) the PHP shouldn't be too bad. I would look into an Object Relational Mapper to help keep your PHP even cleaner, I've heard good thinks about Doctorine.\nPython: I would imagine that you'd probably use Django with Python. Because of this you're probably going to come up against the same stumbling blocks that you would with ruby + ruby on rails.\nIf you'd like to start to learn Ruby on Rails, I'd recommend checking out this thread on stackoverflow.\nFinally, if you'd like to work with a PHP framework, there's a great thread on that here on stackoverflow.\n", "I have developed in Python and PHP and my personal preference would be Python. \nDjango is a great, easy to understand, light-weight framework for Python. 
Django Site\nIf you went the PHP route, I would recommend Kohana. Kohana Site\n", "The first programming I ever did was with PHP, and it's definitely very easy to get going with PHP on Dreamhost (I use Dreamhost for my PHP-based blog as well as Ruby on Rails project hosting). Ruby on Rails is pretty easy to get going on Dreamhost as well, now that they've started using Passenger. I learned Ruby and Ruby on Rails several years after I became comfortable in PHP and I prefer it to PHP because it feels much cleaner and I love the Model View Controller pattern for separation of code and content. I tried to learn Django after that but found myself frustrated because the meaning of \"view\" was different in Django than in Rails/MVC, so I didn't get very far.\nIf you are doing quick-and-dirty, you might go with PHP. You could look into various frameworks for PHP, such as CakePHP or Symfony, for cleaner, more organized development. If you're willing to spend more time learning (first for the language Ruby, then for the framework Ruby on Rails), you could go with Ruby on Rails. I really enjoy Rails development, but there was a learning curve since I learned both Ruby and Rails at the same time. There's a lot of information out there about deploying Rails apps on Dreamhost.\n", "This is an extremely subjective question, and even if you gave us the specifics of your web service, we can argue about the best choice all day.\nI'm a PHP developer, so I could whip off a basic web service with no problems. There's lots of simple PHP frameworks available that would handle that very nicely.\nThat being said, Python and Django give you some great out-of-the-box functionality, and it's on my list of things to learn. You could achieve something pretty fast with that.\n" ]
[ 11, 6, 2, 1, 0 ]
[]
[]
[ "php", "python", "ruby_on_rails" ]
stackoverflow_0001183420_php_python_ruby_on_rails.txt
Q: suggestions for a daemon that accepts zip files for processing I'm looking to write a daemon that: reads a message from a queue (SQS, RabbitMQ, whatever ...) containing a path to a zip file updates a record in the database saying something like "this job is processing" reads the aforementioned archive's contents and inserts a row into a database with information culled from file metadata for each file found duplicates each file to S3 deletes the zip file marks the job as "complete" reads the next message in the queue, repeat This should be running as a service, and initiated by a message queued when someone uploads a file via the web frontend. The uploader doesn't need to immediately see the results, but the upload should be processed in the background fairly expediently. I'm fluent with Python, so the very first thing that comes to mind is writing a simple server with Twisted to handle each request and carry out the process mentioned above. But I've never written anything like this that would run in a multi-user context. It's not going to service hundreds of uploads per minute or hour, but it'd be nice if it could handle several at a time, reasonably. I also am not terribly familiar with writing multi-threaded applications and dealing with issues like blocking. How have people solved this in the past? What are some other approaches I could take? Thanks in advance for any help and discussion! A: I've used Beanstalkd as a queueing daemon to very good effect (some near-time processing and image resizing - over 2 million so far in the last few weeks). Throw a message into the queue with the zip filename (maybe from a specific directory) [I serialise a command and parameters in JSON], and when you reserve the message in your worker-client, no one else can get it, unless you allow it to time out (when it goes back to the queue to be picked up). The rest is the unzipping and uploading to S3, for which there are other libraries. 
If you want to handle several zip files at once, run as many worker processes as you want. A: I would avoid doing anything multi-threaded and instead use the queue and the database to synchronize as many worker processes as you care to start up. For this application I think Twisted or any framework for creating server applications is going to be overkill. Keep it simple. The Python script starts up, checks the queue, does some work, checks the queue again. If you want a proper background daemon you might want to just make sure you detach from the terminal as described here: How do you create a daemon in Python? Add some logging, and maybe a try/except block to email out failures to you. A: I opted to use a combination of Celery (http://ask.github.com/celery/introduction.html), RabbitMQ, and a simple Django view to handle uploads. The workflow looks like this: the Django view accepts and stores the upload; a Celery Task is dispatched to process the upload. All work is done inside the Task.
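The "keep it simple" worker loop recommended in these answers can be sketched in a few lines of Python. Everything queue-, database-, and S3-shaped below is a hypothetical stand-in passed in as a callable (a real deployment would use Beanstalkd/SQS/RabbitMQ and a real database); only the archive handling uses the actual stdlib zipfile module:

```python
import os
import time
import zipfile

def process_zip(path, db_rows, upload):
    # Record one row of metadata per file in the archive, push each file out
    # through the (hypothetical) upload callable, then delete the archive.
    with zipfile.ZipFile(path) as zf:
        for info in zf.infolist():
            db_rows.append({"name": info.filename, "size": info.file_size})
            upload(info.filename, zf.read(info.filename))
    os.remove(path)

def worker(poll_queue, db_rows, upload, poll_interval=1.0, max_idle=1):
    # Poll, work, poll again; give up after max_idle consecutive empty polls.
    # In a long-running daemon you would loop forever instead of giving up.
    idle = 0
    while idle < max_idle:
        path = poll_queue()  # returns a zip path, or None when the queue is empty
        if path is None:
            idle += 1
            time.sleep(poll_interval)
            continue
        idle = 0
        process_zip(path, db_rows, upload)
```

Running several of these processes side by side gives the "as many worker processes as you want" model, with the real queue providing the reserve/time-out semantics described in the Beanstalkd answer.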
suggestions for a daemon that accepts zip files for processing
im looking to write a daemon that: reads a message from a queue (sqs, rabbit-mq, whatever ...) containing a path to a zip file updates a record in the database saying something like "this job is processing" reads the aforementioned archive's contents and inserts a row into a database w/ information culled from file meta data for each file found duplicates each file to s3 deletes the zip file marks the job as "complete" read next message in queue, repeat this should be running as a service, and initiated by a message queued when someone uploads a file via the web frontend. the uploader doesn't need to immediately see the results, but the upload be processed in the background fairly expediently. im fluent with python, so the very first thing that comes to mind is writing a simple server with twisted to handle each request and carry out the process mentioned above. but, ive never written anything like this that would run in a multi-user context. its not going to service hundreds of uploads per minute or hour, but it'd be nice if it could handle several at a time, reasonable. i also am not terribly familiar with writing multi-threaded applications and dealing with issues like blocking. how have people solved this in the past? what are some other approaches i could take? thanks in advance for any help and discussion!
[ "I've used Beanstalkd as a queueing daemon to very good effect (some near-time processing and image resizing - over 2 million so far in the last few weeks). Throw a message into the queue with the zip filename (maybe from a specific directory) [I serialise a command and parameters in JSON], and when you reserve the message in your worker-client, no one else can get it, unless you allow it to time out (when it goes back to the queue to be picked up).\nThe rest is the unzipping and uploading to S3, for which there are other libraries.\nIf you want to handle several zip files at once, run as many worker processes as you want.\n", "I would avoid doing anything multi-threaded and instead use the queue and the database to synchronize as many worker processes as you care to start up.\nFor this application I think twisted or any framework for creating server applications is going to be overkill. \nKeep it simple. Python script starts up, checks the queue, does some work, checks the queue again. If you want a proper background daemon you might want to just make sure you detach from the terminal as described here: How do you create a daemon in Python?\nAdd some logging, maybe a try/except block to email out failures to you.\n", "i opted to use a combination of celery (http://ask.github.com/celery/introduction.html), rabbitmq, and a simple django view to handle uploads. the workflow looks like this:\n\ndjango view accepts, stores upload\na celery Task is dispatched to process the upload. all work is done inside the Task.\n\n" ]
[ 1, 1, 1 ]
[]
[]
[ "daemon", "django", "python", "zip" ]
stackoverflow_0000758466_daemon_django_python_zip.txt
Q: Python: print doesn't work, script hangs endlessly Using Python 2.6, I wrote a script in Windows XP. The script does the following: Input: Domain name (ie: amazon.com) The script queries DNS via the dnspython module and returns any A record IP Addresses. The output is in a special format needed for a specific application which utilizes this data. This works fine in Windows, but when I've placed this on my Linux server, I'm getting some unusual and inconsistent results. When run for the first time, it completes as expected. If I run it again immediately, the script will just hang and do nothing, no output, and the script won't end either. If I use CTRL-C to exit the process, it prints! (almost as if it had been buffered, but not written to the terminal) I've tried various techniques to fix this issue, like forcing sys.stdout.flush() after print (though print should automatically flush anyway) and have had no luck. If I wait some time (a few minutes), and run the script again, it will work again (once) and then subsequent attempts will continue to fail. I'm not sure what is going on... has anyone else experienced anything like this? Python 2.6 on both Windows and Linux (Ubuntu). 
Here is my script:

from dns.resolver import Resolver
from dns.exception import DNSException
from cStringIO import StringIO
import sys

def maltego_transform(entities, messages = ''):
    print '''<MaltegoMessage>
<MaltegoTransformResponseMessage>
<Entities>
{0}
</Entities>
<UIMessages>
{1}
</UIMessages>
</MaltegoTransformResponseMessage>
</MaltegoMessage>'''.format(entities, messages)

def domain_to_ip(domain):
    resolver = Resolver()
    results = []
    for type in ['A', 'AAAA']:
        try:
            query = resolver.query(domain, type)
        except DNSException:
            query = []
        results += query
    entities = StringIO()
    for answer in results:
        entities.write('''<Entity Type="IPAddress"><Value>{0}</Value></Entity>'''.format(answer))
    maltego_transform(entities.getvalue())

def domain_to_mxdomain(domain):
    resolver = Resolver()
    try:
        query = resolver.query(domain, 'MX')
    except DNSException:
        query = []
    entities = StringIO()
    for answer in query:
        entities.write('''<Entity Type="Domain"><Value>{0}</Value>
<AdditionalFields><Field Name="DomainType" DisplayName="Domain Type">Mail Exchange</Field></AdditionalFields>
</Entity>'''.format(answer.exchange))
    maltego_transform(entities.getvalue())

def main():
    options = {'domain_to_ip' : domain_to_ip, 'domain_to_mxdomain' : domain_to_mxdomain}
    if len(sys.argv) > 2:
        func = options.get(sys.argv[1], None)
        if func:
            func(sys.argv[2])

if __name__ == '__main__':
    main()

Use: python myscript.py domain_to_ip amazon.com (2 parameters for this script: the first maps to the function to run, the second specifies the domain). A: Apparently dnspython wants 16 bytes of high-quality random numbers at startup. Getting them (from /dev/random) can block. If you hit Ctrl+C, it actually catches the KeyboardInterrupt exception and falls back on less-secure random numbers (taken from the current system time). Then your program finishes running. The code in question is here: http://www.dnspython.com/docs/1.7.1/html/dns.entropy-pysrc.html I guess I would consider this a bug in dnspython. 
It should find a way not to block there, and fall back on /dev/urandom. In any case it shouldn't silence a KeyboardInterrupt. A: Have you tried doing

entities = StringIO()
for answer in results:
    entities.write('''<Entity Type="IPAddress"><Value>{0}</Value></Entity>'''.format(answer))
entities.flush()
maltego_transform(entities.getvalue())
entities.close()
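For what it's worth, the fallback suggested in the accepted answer already exists in the standard library: os.urandom reads from /dev/urandom on Unix, so it returns immediately even when the blocking /dev/random pool is drained:

```python
import os

# 16 bytes of randomness (the amount dnspython wants at startup) taken
# from /dev/urandom, which never blocks the way /dev/random can
seed = os.urandom(16)
```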
Python: print doesn't work, script hangs endlessly
Using Python 2.6, I wrote a script in Windows XP. The script does the following: Input: Domain name (ie: amazon.com) The script queries DNS via the dnspython module and returns any A record IP Addresses. The output is in a special format needed for a specific application which utilizes this data. This works fine in Windows, but when I've placed this on my Linux server, I'm getting some unusual and inconsistent results. When run for the first time, it completes as expected. If I run it again immediately, the script will just hang and do nothing, no output, and the script won't end either. If I use CTRL-C to exit the process, it prints! (almost as it had been buffered, but no written to the terminal) I've tried various techniques to fix this issue, like forcing sys.stdout.flush() after print (though, print should automatically flush anyways) and have had no luck. If I wait some time (a few minutes), and run the script again, it will work again (once) and then subsequent attempts will continue to fail. I'm not sure what is going on... has anyone else experienced anything like this? Python 2.6 on both Windows and Linux (Ubuntu). 
Here is my script: from dns.resolver import Resolver from dns.exception import DNSException from cStringIO import StringIO import sys def maltego_transform(entities, messages = ''): print '''<MaltegoMessage> <MaltegoTransformResponseMessage> <Entities> {0} </Entities> <UIMessages> {1} </UIMessages> </MaltegoTransformResponseMessage> </MaltegoMessage>'''.format(entities, messages) def domain_to_ip(domain): resolver = Resolver() results = [] for type in ['A', 'AAAA']: try: query = resolver.query(domain, type) except DNSException: query = [] results += query entities = StringIO() for answer in results: entities.write('''<Entity Type="IPAddress"><Value>{0}</Value></Entity>'''.format(answer)) maltego_transform(entities.getvalue()) def domain_to_mxdomain(domain): resolver = Resolver() try: query = resolver.query(domain, 'MX') except DNSException: query = [] entities = StringIO() for answer in query: entities.write('''<Entity Type="Domain"><Value>{0}</Value> <AdditionalFields><Field Name="DomainType" DisplayName="Domain Type">Mail Exchange</Field></AdditionalFields> </Entity>'''.format(answer.exchange)) maltego_transform(entities.getvalue()) def main(): options = {'domain_to_ip' : domain_to_ip, 'domain_to_mxdomain' : domain_to_mxdomain} if len(sys.argv) > 2: func = options.get(sys.argv[1], None) if func: func(sys.argv[2]) if __name__ == '__main__': main() Use: python myscript.py domain_to_ip amazon.com 2 parameters for this script, the first maps to the function to run, the second specifies the domain.
[ "Apparently dnspython wants 16 bytes of high-quality random numbers at startup. Getting them (from /dev/random) can block.\nIf you hit Ctrl+C, it actually catches the KeyboardInterupt exception and falls back on less-secure random numbers (taken from the current system time). Then your program finishes running.\nThe code in question is here: http://www.dnspython.com/docs/1.7.1/html/dns.entropy-pysrc.html\nI guess I would consider this a bug in dnspython. It should find a way not to block there, and fall back on /dev/urandom. In any case it shouldn't silence a KeyboardInterrupt.\n", "Have you tried doing\nentities = StringIO()\nfor answer in results:\n entities.write('''<Entity Type=\"IPAddress\"><Value>{0}</Value></Entity>'''.format(answer))\nentities.flush()\nmaltego_transform(entities.getvalue())\nentities.close()\n\n" ]
[ 5, 1 ]
[]
[]
[ "printing", "python" ]
stackoverflow_0001799462_printing_python.txt
Q: Trouble Upgrading Python / Django on CentOS As you can see by reading my other thread today here, I'm having some troubles upgrading Python. At the moment I have Python 2.4 with Django installed on a CentOS machine. However I've recently deployed an application that requires 2.5 which has resulted in me installing that and getting into a whole load of mess. My original assumption was that I could direct Django to a different Python version, however as S.Lott informed me, I had it backwards... you attach Django to Python, not the other way round. Which means I currently have: Python 2.4 with Django and Python 2.5. I've tried a few things so far to no avail. The first idea being an easy_install which would put Django onto the Python 2.5 (so I'd have 2 versions of Python with separate Djangos). Therefore I went into 2.5's directory and did that, which then allowed me to find out that it had just reinstalled it on 2.4, not 2.5. Therefore the first question is: How do I direct easy_install to Python 2.5, not 2.4? Is there no way to just hit 'upgrade' and for a full update to occur? I know this may be asking for much, however it just seems like so much hassle and I'm surprised I can't find anyone else complaining. Any help would be greatly appreciated. A: I don't know anything about CentOS, but if you have multiple Python versions installed and you want to install packages using easy_install, you just need to call it with the corresponding Python interpreter. This should install the package into the site-packages directory of Python 2.5: # /path/to/python-2.5 easy_install Django A: assuming your python2.5 interpreter lives in /usr/bin/python2.5, you can install setuptools for python2.5 as such: curl -O http://peak.telecommunity.com/dist/ez_setup.py sudo /usr/bin/python2.5 ez_setup.py Among other things, this installs an "easy_install-2.5" script in your $PATH (check the output of the above command). 
Now you have two easy_install scripts: one for Python 2.4 ("easy_install") and one for Python 2.5 ("easy_install-2.5"). To install Django for your Python 2.5, use sudo easy_install-2.5 django That's it! A: easy_install is a shell script, the first line of which tells it where Python is installed (I am on OSX so can't say exactly what your directories will be). So you could copy easy_install and change the first line to point to 2.5 (or do as Heas suggests). Alternatively, when you installed Python 2.5 there might be a file easy_install-2.5 with the correct Python (again these were installed for me under OSX so might be a special version). A: You don't need to 'install' Django at all, it just needs to live on the Pythonpath somewhere. Assuming it's currently in the Python 2.4 site-packages directory, you can just move it to the 2.5 one: sudo mv /usr/lib/python2.4/site-packages/django /usr/lib/python2.5/site-packages/django or whatever the correct path is for CentOS. However, as I noted on your other question, this won't necessarily help - S.Lott was unfortunately misleading in his answer. To serve Django via mod_python or mod_wsgi with a new Python version, you'll need to recompile those extensions or download package versions precompiled with Python 2.5.
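The "first line of the script" point above is easy to check for yourself. This throwaway helper (purely illustrative, not part of setuptools) reports which interpreter a script's shebang names; pointing it at /usr/bin/easy_install versus easy_install-2.5 shows which Python each one installs into:

```python
def script_interpreter(path):
    # Return the interpreter named in the script's shebang line,
    # or None if the file doesn't start with "#!".
    with open(path) as f:
        first_line = f.readline().strip()
    if first_line.startswith("#!"):
        return first_line[2:].strip()
    return None
```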
Trouble Upgrading Python / Django on CentOS
As you can see by reading my other thread today here, I'm having some troubles upgrading Python. At the moment I have Python 2.4 with Django installed on a CentOS machine. However I've recently deployed an application that requires 2.5 which has resulted in me installing that and getting into a whole load of mess. My original assumption was that I could direct Django to a different Python version, however as S.Lott informed me, I had it backwards... you attach Django to Python, not the other way round. Which means I currently have: Python 2.4 with Django and Python 2.5. I've tried a few things so far to no avail. The first idea being an easy_install which would put Django onto the Python 2.5 (So I'd have 2 versions of python with seperate Djangos). Therefore I went into 2.5's directory and did that, which then allowed me to find out that it had just reinstalls it on 2.4, not 2.5. Therefore first question is How do I direct easy_install to Python 2.5, not 2.4? Is there no way to just hit 'upgrade' and for a full update to occur? I know this may be asking for much, however it just seems like so much hassle and I'm surprised I can't find anyone else complaining. Any help would be greatly appreciated.
[ "I don't know anything about CentOS, but if you have multiple Python version installed and you wan't to install packages using easy_install, you just need to call it with the corresponding Python interpreter. This should install the packing into the site-package directory of Python 2.5:\n# /path/to/python-2.5 easy_install Django\n\n", "assuming your python2.5 interpreter lives in /usr/bin/python2.5, you can install setuptools for python2.5 as such:\ncurl -O http://peak.telecommunity.com/dist/ez_setup.py\nsudo /usr/bin/python2.5 ez_setup.py\n\nAmong other things, this installs an \"easy_install-2.5\" script in your $PATH (check the output of the above command).\nNow you have two easy_install scripts: one for python 2.4 (\"easy_install\") and one for python 2.5 (\"easy_install-2.5\").\nTo install Django for your python2.5, use\nsudo easy_install-2.5 django\n\nThat's it!\n", "easy_install is a shell script the first line of which tells it where python is installed (I am on OSX so can't say exactly what your directories will be )\nSo you could copy easy_install and change the first line to point to 2.5. (or do as Heas suggests)\nAlthernatively when you installed python 2.5 there might be a file easy_install-2.5 with the correct python (again these were installed for me under OSX so might be a special version)\n", "You don't need to 'install' Django at all, it just needs to live on the Pythonpath somewhere. Assuming it's currently in the Python2.4 site-packages directory, you can just move it to the 2.5 one:\nsudo mv /usr/lib/python2.4/site-packages/django /usr/lib/python2.5/site-packages/django\n\nor whatever the correct path is for CentOS.\nHowever, as I noted on your other question, this won't necessarily help - S.Lott was unfortunately misleading in his answer. To serve Django via modpython or modwsgi with a new Python version, you'll need to recompile those extensions or download packages versions precompiled with Python 2.5.\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "centos", "django", "python" ]
stackoverflow_0001797017_centos_django_python.txt
Q: Regular Expression to match a string only when certain characters don't exist So, here's my question: I have a crawler that goes and downloads web pages and strips those of URLs (for future crawling). My crawler operates from a whitelist of URLs which are specified in regular expressions, so they're along the lines of: (http://www.example.com/subdirectory/)(.*?) ...which would allow URLs that followed the pattern to be crawled in the future. The problem I'm having is that I'd like to exclude certain characters in URLs, so that (for example) addresses such as: (http://www.example.com/subdirectory/)(somepage?param=1&param=5#print) ...in the case above, as an example, I'd like to be able to exclude URLs that feature ?, #, and = (to avoid crawling those pages). I've tried quite a few different approaches, but I can't seem to get it right: (http://www.example.com/)([^=\?#](.*?)) etc. Any help would be really appreciated! EDIT: sorry, should've mentioned this is written in Python, and I'm normally fairly proficient at regex (although this has me stumped) EDIT 2: VoDurden's answer (the accepted one below) almost yields the correct result, all it needs is the $ character at the end of the expression and it works perfectly - example: (http://www.example.com/)([^=\?#]*)$ A: (http://www.example.com/)([^=?#]*?) Should do it; this will allow any URL that does not contain the characters you don't want. It might however be a little bit hard to extend this approach. A better option is to have the system work two-tiered, i.e. one set of matching regexes and one set of blocking regexes. Then only URLs which pass both of these will be allowed. I think this solution will be a bit more transparent and flexible. A: You will need to crawl the pages up to ?param=1&param=5 because normally param=1 and param=2 could give you completely different web pages. Pick any WordPress website to confirm that. 
Try it like this one; it will try to match just before the # char: (http://www.example.com/)([^#]*?) A: This expression should be what you're looking for: (http://www.example.com/subdirectory/)([^=?#]*)$ [^=\?#] will match anything except for the characters you specified. For example: http://www.example.com/subdirectory/ Match http://www.example.com/subdirectory/index.php Match http://www.example.com/subdirectory/somepage?param=1&param=5#print No Match http://www.example.com/subdirectory/index.php?param=1 No Match A: I'm not sure what you want. If you want to match anything that doesn't contain any ?, #, and = then the regex is ([^=?#]*) A: As an alternative there's always the urlparse module which is designed for parsing URLs.

from urlparse import urlparse

urls = [
    'http://www.example.com/subdirectory/',
    'http://www.example.com/subdirectory/index.php',
    'http://www.example.com/subdirectory/somepage?param=1&param=5#print',
    'http://www.example.com/subdirectory/index.php?param=1',
]

for url in urls:
    # in python 2.5+ you can use urlparse(url).query instead
    if not urlparse(url)[4]:
        print url

Provides the following:
http://www.example.com/subdirectory/
http://www.example.com/subdirectory/index.php
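The accepted pattern is easy to sanity-check with Python's re module against the URLs from the question; re.match anchors at the start of the string, and the trailing $ is what rejects anything containing a forbidden character:

```python
import re

# Pattern from the accepted answer: the path part may not contain =, ? or #
pattern = re.compile(r"(http://www\.example\.com/subdirectory/)([^=?#]*)$")

urls = [
    "http://www.example.com/subdirectory/",                               # match
    "http://www.example.com/subdirectory/index.php",                      # match
    "http://www.example.com/subdirectory/somepage?param=1&param=5#print", # no match
    "http://www.example.com/subdirectory/index.php?param=1",              # no match
]

allowed = [url for url in urls if pattern.match(url)]
```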
Regular Expression to match a string only when certain characters don't exist
So, here's my question: I have a crawler that goes and downloads web pages and strips those of URLs (for future crawling). My crawler operates from a whitelist of URLs which are specified in regular expressions, so they're along the lines of: (http://www.example.com/subdirectory/)(.*?) ...which would allow URLs that followed the pattern to be crawled in the future. The problem I'm having is that I'd like to exclude certain characters in URLs, so that (for example) addresses such as: (http://www.example.com/subdirectory/)(somepage?param=1&param=5#print) ...in the case above, as an example, I'd like to be able to exclude URLs that feature ?, #, and = (to avoid crawling those pages). I've tried quite a few different approaches, but I can't seem to get it right: (http://www.example.com/)([^=\?#](.*?)) etc. Any help would be really appreciated! EDIT: sorry, should've mentioned this is written in Python, and I'm normally fairly proficient at regex (although this has me stumped) EDIT 2: VoDurden's answer (the accepted one below) almost yields the correct result, all it needs is the $ character at the end of the expression and it works perfectly - example: (http://www.example.com/)([^=\?#]*)$
[ "(http://www.example.com/)([^=?#]*?)\n\nShould do it, this will allow any URL that does not contain the characters you don't want. \nIt might however be a little bit hard to extend this approach. A better option is to have the system work two-tiered, i.e. one set of matching regex, and one set of blocking regex. Then only URL:s which pass both of these will be allowed. I think this solution will be a bit more transparent and flexible.\n", "You will need to crawl the pages upto ?param=1&param=5 \nbecause normally param=1 and param=2 could give you completely different web page. \npick up one the wordpress website to confirm that.\nTry like this one, It will try to match just before # char\n(http://www.example.com/)([^#]*?)\n\n", "This expression should be what you're looking for:\n(http://www.example.com/subdirectory/)([^=?#]*)$\n\n[^=\\?#] Will match anything except for the characters you specified.\nFor Example:\n\nhttp://www.example.com/subdirectory/ Match\nhttp://www.example.com/subdirectory/index.php Match\nhttp://www.example.com/subdirectory/somepage?param=1&param=5#print No Match\nhttp://www.example.com/subdirectory/index.php?param=1 No Match\n\n", "I'm not sure of what you want. If you wan't to match anything that doesn't containst any ?, #, and = then the regex is\n([^=?#]*)\n\n", "As an alternative there's always the urlparse module which is designed for parsing urls.\nfrom urlparse import urlparse\n\nurls= [\n 'http://www.example.com/subdirectory/',\n 'http://www.example.com/subdirectory/index.php',\n 'http://www.example.com/subdirectory/somepage?param=1&param=5#print',\n 'http://www.example.com/subdirectory/index.php?param=1',\n]\n\nfor url in urls:\n # in python 2.5+ you can use urlparse(url).query instead\n if not urlparse(url)[4]:\n print url\n\nProvides the following:\nhttp://www.example.com/subdirectory/\nhttp://www.example.com/subdirectory/index.php\n\n" ]
[ 1, 0, 0, 0, 0 ]
[]
[]
[ "python", "regex", "url" ]
stackoverflow_0001796053_python_regex_url.txt
Q: Are there any good summarizers for a web-page? Suppose I give you a URL...can you analyze the words and spit out the "keywords" of that page? (besides using meta-tags) Are there good open-source summarizers out there? (preferably Python) A: A simple text summarizer: http://pythonwise.blogspot.com/2008/01/simple-text-summarizer.html Algorithm: 1. For each word, calculate it's frequency in the document 2. For each sentence in the document score(sentence) = sum([freq(word) for word in sentence]) 3. Print X top sentences such that their size < MAX_SUMMARY_SIZE A: Frequency counts will get you some of the way but Natural Language Processing will provide better results as it uses linguistic techniques to provide more accuracy. Topia.termextract uses a Parts-Of-Speech (POS) tagging algorithm and is available from PyPi http://pypi.python.org/pypi/topia.termextract/
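The three-step algorithm in the first answer can be sketched in a few lines. This is a toy version: sentence splitting is naive, and the cap is a sentence count rather than the MAX_SUMMARY_SIZE byte limit the original describes.

```python
import re
from collections import Counter

def summarize(text, max_sentences=1):
    """Toy frequency-based summarizer following the algorithm above."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    # Step 1: word frequencies over the whole document
    freq = Counter(w.lower() for w in re.findall(r"\w+", text))
    # Step 2: score(sentence) = sum of word frequencies in it
    def score(s):
        return sum(freq[w.lower()] for w in re.findall(r"\w+", s))
    # Step 3: keep the top-scoring sentences
    return sorted(sentences, key=score, reverse=True)[:max_sentences]

text = "Python is great. Python is dynamic. Cats sleep."
print(summarize(text))
```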
Are there any good summarizers for a web-page?
Suppose I give you a URL...can you analyze the words and spit out the "keywords" of that page? (besides using meta-tags) Are there good open-source summarizers out there? (preferably Python)
[ "A simple text summarizer: http://pythonwise.blogspot.com/2008/01/simple-text-summarizer.html\nAlgorithm:\n1. For each word, calculate it's frequency in the document\n2. For each sentence in the document \n score(sentence) = sum([freq(word) for word in sentence])\n3. Print X top sentences such that their size < MAX_SUMMARY_SIZE\n\n", "Frequency counts will get you some of the way but Natural Language Processing will provide better results as it uses linguistic techniques to provide more accuracy.\nTopia.termextract uses a Parts-Of-Speech (POS) tagging algorithm and is available from PyPi http://pypi.python.org/pypi/topia.termextract/\n" ]
[ 2, 1 ]
[]
[]
[ "python", "text" ]
stackoverflow_0001795520_python_text.txt
Q: Selenium - Pop Up window I am testing the Router UI using selenium. I am using cisco routers. I am pinging a website and the router opens a pop up window showing the Ping statistics. The selenium ide is recording the popup window as " Ping table " but when i am running it the ide shows an error. I want to verify and validate the data in the pop up window . i tried " select window " , get win ids " , win names , nothing seems to be working. I am using python in selenium . code below sel.open("/Diagnostics.asp") sel.click("ping_button") sel.wait_for_page_to_load("30000") sel.click("ping_button") sel.wait_for_page_to_load("30000") ------- it fails for all the steps below sel.wait_for_pop_up("PingTable", "30000") ------ pop up window -----> ping table ------------ sel.select_window("name=PingTable") self.failUnless(sel.is_text_present("5 Packets transmitted, 5 Packets received, 0% Packet loss")) nothing seems to work ...... A: I'd need to reproduce this locally to be able to answer definitively. The only thing that comes to mind right now is that you say the IDE identifies it as "Ping table" but in your python you refer to it as "PingTable". It might be a typo on your behalf, but maybe not.
Selenium - Pop Up window
I am testing the Router UI using selenium. I am using cisco routers. I am pinging a website and the router opens a pop up window showing the Ping statistics. The selenium ide is recording the popup window as " Ping table " but when i am running it the ide shows an error. I want to verify and validate the data in the pop up window . i tried " select window " , get win ids " , win names , nothing seems to be working. I am using python in selenium . code below sel.open("/Diagnostics.asp") sel.click("ping_button") sel.wait_for_page_to_load("30000") sel.click("ping_button") sel.wait_for_page_to_load("30000") ------- it fails for all the steps below sel.wait_for_pop_up("PingTable", "30000") ------ pop up window -----> ping table ------------ sel.select_window("name=PingTable") self.failUnless(sel.is_text_present("5 Packets transmitted, 5 Packets received, 0% Packet loss")) nothing seems to work ......
[ "I'd need to reproduce this locally to be able to answer definitively. The only thing that comes to mind right now is that you say the IDE identifies it as \"Ping table\" but in your python you refer to it as \"PingTable\". It might be a typo on your behalf, but maybe not.\n" ]
[ 0 ]
[]
[]
[ "popup", "python", "selenium", "window" ]
stackoverflow_0001800291_popup_python_selenium_window.txt
Q: Genshi table loop What is wrong with this Genshi template: <html xmlns:py="http://genshi.edgewall.org/"> <head> <title py:content="title"></title> </head> <body> <left> <table py: for="i in range(1, len(ctabl))"> <li py: for="e in ctabl[i]"> ${e} </li> </table> </body> </html> I get this error: genshi.template.base.TemplateSyntaxError: not well-formed (invalid token): line 7, column 14 (templates/index2.html, line 7) Seems that there is something wrong with the table loop... I don't know. A: I've never used Genshi, but their list of allowed processing directives do not have any spaces between py, the :, and the for. Try removing that space. And anyway, Line 7, Column 14 is on the colon or the space, depending on whether you count from 0 or 1, right?
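As the answer says, the directive must be spelled py:for with no space. A corrected template (which also closes the stray <left> and puts table rows, not <li>, inside the table — my guesses at the intent) at least parses as XML, and "not well-formed" errors like the one above can be reproduced with the stdlib parser. This sketch only validates the markup, not Genshi's processing:

```python
import xml.etree.ElementTree as ET

# A corrected version of the template from the question (hypothetical markup:
# the row/cell layout is assumed, since the original is ambiguous).
template = """\
<html xmlns:py="http://genshi.edgewall.org/">
  <head>
    <title py:content="title"></title>
  </head>
  <body>
    <table>
      <tr py:for="i in range(1, len(ctabl))">
        <td py:for="e in ctabl[i]">${e}</td>
      </tr>
    </table>
  </body>
</html>
"""

# Genshi templates must parse as XML before py: directives are applied,
# so well-formedness can be checked with any XML parser.
ET.fromstring(template)  # raises ParseError if malformed
print("template is well-formed XML")
```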
Genshi table loop
What is wrong with this Genshi template: <html xmlns:py="http://genshi.edgewall.org/"> <head> <title py:content="title"></title> </head> <body> <left> <table py: for="i in range(1, len(ctabl))"> <li py: for="e in ctabl[i]"> ${e} </li> </table> </body> </html> I get this error: genshi.template.base.TemplateSyntaxError: not well-formed (invalid token): line 7, column 14 (templates/index2.html, line 7) Seems that there is something wrong with the table loop... I don't know.
[ "I've never used Genshi, but their list of allowed processing directives do not have any spaces between py, the :, and the for. Try removing that space. And anyway, Line 7, Column 14 is on the colon or the space, depending on whether you count from 0 or 1, right?\n" ]
[ 2 ]
[]
[]
[ "genshi", "python" ]
stackoverflow_0001800508_genshi_python.txt
Q: Local timezone problem in GAE working with Google Data API I am working in a small app in the Google App Engine (Python), which uses the Google Data API in order to create a new calendar in a Google account and populate it with some events. The events that I am using are parsed from another place, and they have the date in the Europe/Stockholm timezone, (I think it is CEST or CET). Therefore, first I create a Python struct_time with (for example) this: start = strptime("2009-11-16 10:15", "%Y-%m-%d %H:%M") Python's doc says that there I could use the flag %Z in order to specify the timezone, and in fact it works in my Python interpreter... but it totally fails in the Google App Engine server (both localhost and deploy). It is something not new, as I have seen here. Since I am using the Google Data API, and it needs the times in UTC/GMT format, I need a way to convert from the Europe/Stockholm local time to the UTC/GMT, without use the %Z flag in strptime ... (and something more smart than just subtract one hour to each date...) Thank you in advance one more time :) A: I posted something related to this subject on my blog last year. Basically, it converts all times to UTC when storing them in the datastore, and attaches the UTC timezone to them when read out. You can feel free to modify it to then convert the values to whatever local timezone you want. This code sample may be out of date — I haven't used appengine in a long time. But I hope it will help you in some way. import pytz class TzDateTimeProperty(db.DateTimeProperty): def get_value_for_datastore(self, model_instance): if model_instance.posted_at.tzinfo is None: model_instance.posted_at = model_instance.posted_at.replace(tzinfo=pytz.utc) else: model_instance.posted_at = model_instance.posted_at.astimezone(pytz.utc) return super(TzDateTimeProperty, self).get_value_for_datastore(model_instance) def make_value_from_datastore(self, value): value = super(TzDateTimeProperty, self).make_value_from_datastore(value) if value.tzinfo is None: value = value.replace(tzinfo=pytz.utc) else: value = value.astimezone(pytz.utc) return value I also recommend the excellent pytz library; it provides timezone objects for just about every useful timezone. (I'd link to it, but spam prevention is stopping me. Just Google for it.) A: after parsing your object into datetime, you have to add timezone. for that you need to create class derived from the datetime.tzinfo import datetime as dt class SomeZone(dt.tzinfo): # a helper class to quickly create simple timezones - give gmt_offset def __init__(self, gmt_offset): dt.tzinfo.__init__(self) self.gmt_offset = gmt_offset def utcoffset(self, dtime): return dt.timedelta(hours=self.gmt_offset) def dst(self, dtime): return dt.timedelta(0) def tzname(self, dtime): return None start = strptime("2009-11-16 10:15", "%Y-%m-%d %H:%M") start.replace(tzinfo=SomeZone(your_offset_here)) et voila, now it is a datetime with timezone. from here google will take over, as the datetime fields are zone aware and on storage will store it in utc. just remember about daylight savings and everything. if you would like to scavenge around, i'm using the class described up there here
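The fixed-offset idea in the second answer can be written with the stdlib's datetime.timezone instead of a hand-rolled tzinfo subclass. A minimal sketch, with the caveat the answer itself raises: this hard-codes CET = UTC+1 and ignores DST, which happens to be correct for a November date in Stockholm but not for summer dates.

```python
from datetime import datetime, timezone, timedelta

cet = timezone(timedelta(hours=1))  # Europe/Stockholm in winter, no DST handling

local = datetime.strptime("2009-11-16 10:15", "%Y-%m-%d %H:%M")
aware = local.replace(tzinfo=cet)      # attach the zone, as in the answer
utc = aware.astimezone(timezone.utc)   # the UTC value the Google Data API wants

print(utc.isoformat())  # 2009-11-16T09:15:00+00:00
```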
Local timezone problem in GAE working with Google Data API
I am working in a small app in the Google App Engine (Python), which uses the Google Data API in order to create a new calendar in a Google account and populate it with some events. The events that I am using are parsed from another place, and they have the date in the Europe/Stockholm timezone, (I think it is CEST or CET). Therefore, first I create a Python struct_time with (for example) this: start = strptime("2009-11-16 10:15", "%Y-%m-%d %H:%M") Python's doc says that there I could use the flag %Z in order to specify the timezone, and in fact it works in my Python interpreter... but it totally fails in the Google App Engine server (both localhost and deploy). It is something not new, as I have seen here. Since I am using the Google Data API, and it needs the times in UTC/GMT format, I need a way to convert from the Europe/Stockholm local time to the UTC/GMT, without use the %Z flag in strptime ... (and something more smart than just subtract one hour to each date...) Thank you in advance one more time :)
[ "I posted something related to this subject on my blog last year. Basically, it converts all times to UTC when storing them in the datastore, and attaches the UTC timezone to them when read out. You can feel free to modify it to then convert the values to whatever local timezone you want.\nThis code sample may be out of date — I haven't used appengine in a long time. But I hope it will help you in some way.\nimport pytz\n\nclass TzDateTimeProperty(db.DateTimeProperty):\n def get_value_for_datastore(self, model_instance):\n if model_instance.posted_at.tzinfo is None:\n model_instance.posted_at = model_instance.posted_at.replace(tzinfo=pytz.utc)\n else:\n model_instance.posted_at = model_instance.posted_at.astimezone(pytz.utc)\n return super(TzDateTimeProperty, self).get_value_for_datastore(model_instance)\n def make_value_from_datastore(self, value):\n value = super(TzDateTimeProperty, self).make_value_from_datastore(value)\n if value.tzinfo is None:\n value = value.replace(tzinfo=pytz.utc)\n else:\n value = value.astimezone(pytz.utc)\n return value\n\nI also recommend the excellent pytz library; it provides timezone objects for just about every useful timezone. (I'd link to it, but spam prevention is stopping me. Just Google for it.)\n", "after parsing your object into datetime, you have to add timezone. for that you need to create class derived from the datetime.tzinfo\n\nimport datetime as dt\nclass SomeZone(dt.tzinfo):\n # a helper class to quickly create simple timezones - give gmt_offset\n def __init__(self, gmt_offset):\n dt.tzinfo.__init__(self)\n self.gmt_offset = gmt_offset \n def utcoffset(self, dtime): return dt.timedelta(hours=self.gmt_offset)\n def dst(self, dtime): return dt.timedelta(0)\n def tzname(self, dtime): return None\n\nstart = strptime(\"2009-11-16 10:15\", \"%Y-%m-%d %H:%M\")\nstart.replace(tzinfo=SomeZone(your_offset_here))\n\net voila, now it is a datetime with timezone. \nfrom here google will take over, as the datetime fields are zone aware and on storage will store it in utc.\njust remember about daylight savings and everything.\nif you would like to scavenge around, i'm using the class described up there here\n" ]
[ 1, 0 ]
[]
[]
[ "gdata_api", "google_app_engine", "python" ]
stackoverflow_0001728415_gdata_api_google_app_engine_python.txt
Q: Most pythonic way of ignoring output I've got a class which uses the context management protocol to have a silent stderr stream for a while (mainly used for py2exe deployments, where the app writing anything to stderr causes ugly dialogs when the app is closed, and I'm doing something that I know will have some stderr output) import sys import os from contextlib import contextmanager @contextmanager def noStderr(): stderr = sys.stderr sys.stderr = open(os.devnull, "w") yield sys.stderr = stderr My question is what would be more pythonic, the reasonably clean solution of opening the system's bit bucket and writing to that, or skipping allocation of the fd and write operations, and creating a new class ala: class nullWriter(object): def write(self, string): pass and then replacing the above code with from contextlib import contextmanager @contextmanager def noStderr(): stderr = sys.stderr sys.stderr = nullWriter() yield sys.stderr = stderr A: I think the latter solution is the more elegant. You avoid going to the system environment, potentially wasting an fd. Why go out the operating system when it's not needed? A: I feel that the nullWriter class would be more "Pythonic" because it uses the Python interfaces already in place (that you can assign sys.stderr to anything that has a write method), rather than having to go out to the system environment and write to the "bit bucket" as you put it :) A: It's a decision between using things that already exist (os.devnull) but are a bit "messier" (you need to open() it etc'), and creating your own solution, which might be simpler, but it's a new class that you're creating. Though both are totally fine, I would have gone with the nullWriter, as it's cleaner and depends on pure python knowledge and doesn't mess with os things. A: What's wrong with that? import sys sys.stderr = open('/dev/null', 'w') A: Thanks all for the replies. I think I will go with the nullWriter approach. I'm aware that both options work, but more interested to see what seems cleaner (esp as the overhead of opening the file is negligable).
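For the record, the nullWriter variant can be made exception-safe with try/finally — a detail the snippets in the question skip, since they restore sys.stderr only on a clean exit. A sketch (the names mirror the question's noStderr/nullWriter, in PEP 8 spelling):

```python
import sys
from contextlib import contextmanager

class NullWriter(object):
    """File-like object that silently discards everything written to it."""
    def write(self, text):
        pass
    def flush(self):
        pass

@contextmanager
def no_stderr():
    saved = sys.stderr
    sys.stderr = NullWriter()
    try:
        yield
    finally:
        sys.stderr = saved  # restored even if the body raises
```

Unlike the versions above, an exception inside the with block still puts the real stderr back.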
Most pythonic way of ignoring output
I've got a class which uses the context management protocol to have a silent stderr stream for a while (mainly used for py2exe deployments, where the app writing anything to stderr causes ugly dialogs when the app is closed, and I'm doing something that I know will have some stderr output) import sys import os from contextlib import contextmanager @contextmanager def noStderr(): stderr = sys.stderr sys.stderr = open(os.devnull, "w") yield sys.stderr = stderr My question is what would be more pythonic, the reasonably clean solution of opening the system's bit bucket and writing to that, or skipping allocation of the fd and write operations, and creating a new class ala: class nullWriter(object): def write(self, string): pass and then replacing the above code with from contextlib import contextmanager @contextmanager def noStderr(): stderr = sys.stderr sys.stderr = nullWriter() yield sys.stderr = stderr
[ "I think the latter solution is the more elegant. You avoid going to the system environment, potentially wasting an fd. Why go out the operating system when it's not needed?\n", "I feel that the nullWriter class would be more \"Pythonic\" because it uses the Python interfaces already in place (that you can assign sys.stderr to anything that has a write method), rather than having to go out to the system environment and write to the \"bit bucket\" as you put it :)\n", "It's a decision between using things that already exist (os.devnull) but are a bit \"messier\" (you need to open() it etc'), and creating your own solution, which might be simpler, but it's a new class that you're creating.\nThough both are totally fine, I would have gone with the nullWriter, as it's cleaner and depends on pure python knowledge and doesn't mess with os things.\n", "What's wrong with that?\nimport sys\n\nsys.stderr = open('/dev/null', 'w')\n\n", "Thanks all for the replies.\nI think I will go with the nullWriter approach. I'm aware that both options work, but more interested to see what seems cleaner (esp as the overhead of opening the file is negligable).\n" ]
[ 4, 3, 2, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001800396_python.txt
Q: Munging non-printable characters to dots using string.translate() So I've done this before and it's a surprising ugly bit of code for such a seemingly simple task. The goal is to translate any non-printable character into a . (dot). For my purposes "printable" does exclude the last few characters from string.printable (new-lines, tabs, and so on). This is for printing things like the old MS-DOS debug "hex dump" format ... or anything similar to that (where additional whitespace will mangle the intended dump layout). I know I can use string.translate() and, to use that, I need a translation table. So I use string.maketrans() for that. Here's the best I could come up with: filter = string.maketrans( string.translate(string.maketrans('',''), string.maketrans('',''),string.printable[:-5]), '.'*len(string.translate(string.maketrans('',''), string.maketrans('',''),string.printable[:-5]))) ... which is an unreadable mess (though it does work). From there you can call use something like: for each_line in sometext: print string.translate(each_line, filter) ... and be happy. (So long as you don't look under the hood). Now it is more readable if I break that horrid expression into separate statements: ascii = string.maketrans('','') # The whole ASCII character set nonprintable = string.translate(ascii, ascii, string.printable[:-5]) # Optional delchars argument filter = string.maketrans(nonprintable, '.' * len(nonprintable)) And it's tempting to do that just for legibility. However, I keep thinking there has to be a more elegant way to express this! A: Here's another approach using a list comprehension: filter = ''.join([['.', chr(x)][chr(x) in string.printable[:-5]] for x in xrange(256)]) A: Broadest use of "ascii" here, but you get the idea >>> import string >>> ascii="".join(map(chr,range(256))) >>> filter="".join(('.',x)[x in string.printable[:-5]] for x in ascii) >>> ascii.translate(filter) '................................ !"#$%&\'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~.................................................................................................................................' If I were golfing, probably use something like this: filter='.'*32+"".join(map(chr,range(32,127)))+'.'*129 A: for actual code-golf, I imagine you'd avoid string.maketrans entirely s=set(string.printable[:-5]) newstring = ''.join(x for x in oldstring if x in s else '.') or newstring=re.sub('[^'+string.printable[:-5]+']','',oldstring) A: I don't find this solution ugly. It is certainly more efficient than any regex based solution. Here is a tiny bit shorter solution. But only works in python2.6: nonprintable = string.maketrans('','').translate(None, string.printable[:-5]) filter = string.maketrans(nonprintable, '.' * len(nonprintable))
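On Python 3 the whole dance collapses, because str.maketrans accepts a dict. A sketch of the same dot-filter over the Latin-1 range (string.printable[:-5] still ends at the space character, as in the 2.x code above; characters above U+00FF pass through unchanged here):

```python
import string

printable = set(string.printable[:-5])  # printable chars minus tab/newline etc.
# Map every non-printable code point 0-255 to '.'; maketrans takes the dict directly.
table = str.maketrans({c: "." for c in map(chr, range(256)) if c not in printable})

print("Hello\x00World\t!".translate(table))  # Hello.World.!
```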
Munging non-printable characters to dots using string.translate()
So I've done this before and it's a surprising ugly bit of code for such a seemingly simple task. The goal is to translate any non-printable character into a . (dot). For my purposes "printable" does exclude the last few characters from string.printable (new-lines, tabs, and so on). This is for printing things like the old MS-DOS debug "hex dump" format ... or anything similar to that (where additional whitespace will mangle the intended dump layout). I know I can use string.translate() and, to use that, I need a translation table. So I use string.maketrans() for that. Here's the best I could come up with: filter = string.maketrans( string.translate(string.maketrans('',''), string.maketrans('',''),string.printable[:-5]), '.'*len(string.translate(string.maketrans('',''), string.maketrans('',''),string.printable[:-5]))) ... which is an unreadable mess (though it does work). From there you can call use something like: for each_line in sometext: print string.translate(each_line, filter) ... and be happy. (So long as you don't look under the hood). Now it is more readable if I break that horrid expression into separate statements: ascii = string.maketrans('','') # The whole ASCII character set nonprintable = string.translate(ascii, ascii, string.printable[:-5]) # Optional delchars argument filter = string.maketrans(nonprintable, '.' * len(nonprintable)) And it's tempting to do that just for legibility. However, I keep thinking there has to be a more elegant way to express this!
[ "Here's another approach using a list comprehension:\nfilter = ''.join([['.', chr(x)][chr(x) in string.printable[:-5]] for x in xrange(256)])\n\n", "Broadest use of \"ascii\" here, but you get the idea\n>>> import string\n>>> ascii=\"\".join(map(chr,range(256)))\n>>> filter=\"\".join(('.',x)[x in string.printable[:-5]] for x in ascii)\n>>> ascii.translate(filter)\n'................................ !\"#$%&\\'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~.................................................................................................................................'\n\nIf I were golfing, probably use something like this:\nfilter='.'*32+\"\".join(map(chr,range(32,127)))+'.'*129\n\n", "for actual code-golf, I imagine you'd avoid string.maketrans entirely\ns=set(string.printable[:-5])\nnewstring = ''.join(x for x in oldstring if x in s else '.')\n\nor\nnewstring=re.sub('[^'+string.printable[:-5]+']','',oldstring)\n\n", "I don't find this solution ugly. It is certainly more efficient than any regex based solution. Here is a tiny bit shorter solution. But only works in python2.6:\nnonprintable = string.maketrans('','').translate(None, string.printable[:-5])\nfilter = string.maketrans(nonprintable, '.' * len(nonprintable))\n\n" ]
[ 5, 4, 1, 1 ]
[]
[]
[ "code_golf", "python" ]
stackoverflow_0001800790_code_golf_python.txt
Q: Windows with Plesk Panel installs ActiveState Python 2.5.0 - any thoughts? I expect to run Pylons on a Windows Server 2003 and IIS 6 on a Virtual Private Server (VPS). Most work with the VPS is done through the Plesk 8.6 panel. The Plesk panel has a lot of maintenance advantages for us. However, this Plesk configuration installs ActiveState Python 2.5.0. The Parallels Plesk documents for 8.6 and version 9 insist that only this configuration should be installed. I'm not eager to settle for the baseline 2.5.0. but don't see any safe upgrade path. How has ActiveState Python 2.5.0 been for other users? Can you replace the Parallels\Plesk\Additional\Python with another distribution? I don't want to break Plesk, please. Previously, I followed these instructions, Serving a Pylons app with IIS - Pylons Cookbook Using the default web site IP address, I had Python 2.6.3 installing the ISAPI-WSGI dll in IIS so that I successfully ran Pylons in a virutalenv through IIS using the IP address, no domain name. I would be so happy if I could run this successful configuration for my domains while I must use Plesk. Tips and solutions appreciated. A: The default Python install location is something like c:\python26. I think it's likely you could install the latest python there, without it conflicting with the ActiveState Python. (You may have to deal with path issues or conflicts over which copy 'owns' python source files in Explorer, though.)
Windows with Plesk Panel installs ActiveState Python 2.5.0 - any thoughts?
I expect to run Pylons on a Windows Server 2003 and IIS 6 on a Virtual Private Server (VPS). Most work with the VPS is done through the Plesk 8.6 panel. The Plesk panel has a lot of maintenance advantages for us. However, this Plesk configuration installs ActiveState Python 2.5.0. The Parallels Plesk documents for 8.6 and version 9 insist that only this configuration should be installed. I'm not eager to settle for the baseline 2.5.0. but don't see any safe upgrade path. How has ActiveState Python 2.5.0 been for other users? Can you replace the Parallels\Plesk\Additional\Python with another distribution? I don't want to break Plesk, please. Previously, I followed these instructions, Serving a Pylons app with IIS - Pylons Cookbook Using the default web site IP address, I had Python 2.6.3 installing the ISAPI-WSGI dll in IIS so that I successfully ran Pylons in a virutalenv through IIS using the IP address, no domain name. I would be so happy if I could run this successful configuration for my domains while I must use Plesk. Tips and solutions appreciated.
[ "The default Python install location is something like c:\\python26. I think it's likely you could install the latest python there, without it conflicting with the ActiveState Python. (You may have to deal with path issues or conflicts over which copy 'owns' python source files in Explorer, though.)\n" ]
[ 0 ]
[]
[]
[ "activestate", "plesk", "pylons", "python", "virtualenv" ]
stackoverflow_0001744243_activestate_plesk_pylons_python_virtualenv.txt
Q: How does wsgi handle multiple request headers with the same name? In WSGI headers are represented in the environ as 'HTTP_XXX' values. For example the value Cookie: header is stored at the HTTP_COOKIE key of the environ. How are multiple request headers with the same header name represented? A: Multiple cookies are combined into a single header, separated by semicolons. Multiple headers are allowed by the HTTP spec, but only for certain kinds of headers, and it is always permissible to combine those headers into one (though using commas, not semicolons) A: I thought the answer to this one would be trivial, but after digging a bit I'm not so sure. Here's what I've found so far: The WSGI PEP-333 (http://www.python.org/dev/peps/pep-0333/) suggests that the environment variables should contain whatever the CGI specification says. The CGI specification (getting harder to find, a lot of broken links, best I could find at draft-coar-cgi-v11-03) talks about metadata and says (section 6.1.5) ". If multiple header fields with the same field-name are received then the server MUST rewrite them as though they had been received as a single header field having the same semantics before being represented in a metavariable" Which suggests to me that if you have multiple header lines with the same key, you must join them up somehow into one line. HTTP_COOKIE, as an example, supports this by concatenating all the key=value pairs into one line with semicolons between them.
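The folding rule the second answer quotes from the CGI spec can be sketched as a tiny helper (a hypothetical function — real WSGI servers do this merging before your app ever sees the environ):

```python
def headers_to_environ(header_pairs):
    """Fold repeated request headers into single HTTP_* metavariables."""
    environ = {}
    for name, value in header_pairs:
        key = "HTTP_" + name.upper().replace("-", "_")
        if key in environ:
            # Per the CGI spec: repeated fields are rewritten as if received
            # as a single field, i.e. joined (with commas for HTTP headers).
            environ[key] = environ[key] + "," + value
        else:
            environ[key] = value
    return environ

env = headers_to_environ([
    ("Accept", "text/html"),
    ("Accept", "application/xml"),
    ("Cookie", "a=1; b=2"),  # cookies typically arrive pre-joined with semicolons
])
print(env["HTTP_ACCEPT"])  # text/html,application/xml
```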
How does wsgi handle multiple request headers with the same name?
In WSGI headers are represented in the environ as 'HTTP_XXX' values. For example the value Cookie: header is stored at the HTTP_COOKIE key of the environ. How are multiple request headers with the same header name represented?
[ "Multiple cookies are combined into a single header, separated by semicolons.\nMultiple headers are allowed by the HTTP spec, but only for certain kinds of headers, and it is always permissible to combine those headers into one (though using commas, not semicolons)\n", "I thought the answer to this one would be trivial, but after digging a bit I'm not so sure.\nHere's what I've found so far:\nThe WSGI PEP-333 (http://www.python.org/dev/peps/pep-0333/) suggests that the environment variables should contain whatever the CGI specification says.\nThe CGI specification (getting harder to find, a lot of broken links, best I could find at draft-coar-cgi-v11-03) talks about metadata and says (section 6.1.5) \n\n\". If multiple header fields with the\n same field-name are received then the\n server MUST rewrite them as though\n they had been received as a single\n header field having the same semantics\n before being represented in a\n metavariable\"\n\nWhich suggests to me that if you have multiple header lines with the same key, you must join them up somehow into one line.\nHTTP_COOKIE, as an example, supports this by concatenating all the key=value pairs into one line with semicolons between them.\n" ]
[ 8, 3 ]
[]
[]
[ "http", "python", "wsgi" ]
stackoverflow_0001801124_http_python_wsgi.txt
Q: Is smtplib pure python or implemented in C? Is smtplib pure python or implemented in C? A: In [32]: import smtplib In [33]: smtplib Out[33]: <module 'smtplib' from '/usr/lib/python2.6/smtplib.pyc'> Therefore, smtplib is written in python. A: smtplib itself is implemented in python but socket is based on C, so its means both. A: Basically pure Python (as the underlying implementation if you go down far enough is C). You can find the source under the Lib\ directory in your Python root.
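The module-object check in the first answer generalizes: on a standard CPython install, a pure-Python stdlib module resolves to a .py source file, while a C extension resolves to a shared library. A sketch that only checks the suffix (frozen or exotic builds may differ):

```python
import importlib.util

spec = importlib.util.find_spec("smtplib")
print(spec.origin)  # e.g. /usr/lib/python3.x/smtplib.py

# smtplib itself ships as Python source in the stdlib
is_pure_python = spec.origin.endswith(".py")
print(is_pure_python)
```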
Is smtplib pure python or implemented in C?
Is smtplib pure python or implemented in C?
[ "In [32]: import smtplib\n\nIn [33]: smtplib\nOut[33]: <module 'smtplib' from '/usr/lib/python2.6/smtplib.pyc'>\n\nTherefore, smtplib is written in python.\n", "smtplib itself is implemented in python but socket is based on C, so its means both.\n", "Basically pure Python (as the underlying implementation if you go down far enough is C). You can find the source under the Lib\\ directory in your Python root.\n" ]
[ 8, 4, 2 ]
[]
[]
[ "c", "python", "smtplib" ]
stackoverflow_0001801271_c_python_smtplib.txt
Q: Is os.popen really deprecated in Python 2.6? The on-line documentation states that os.popen is now deprecated. All other deprecated functions duly raise a DeprecationWarning. For instance: >>> import os >>> [c.close() for c in os.popen2('ps h -eo pid:1,command')] __main__:1: DeprecationWarning: os.popen2 is deprecated. Use the subprocess module. [None, None] The function os.popen, on the other hand, completes silently: >>>len(list(os.popen('ps h -eo pid:1,command'))) 202 Without raising a warning. Of the three possible scenarios It is expected behaviour that documentation and standard library have different ideas of what is deprecated; There is an error in the documentation and os.popen is not really deprecated; There is an error in the standard library and os.popen should raise a warning; which one is the correct one? For background information, here's the Python I'm using: >>> import sys >>> print sys.version 2.6.2 (r262:71600, May 12 2009, 10:57:01) [GCC 4.2.4 (Ubuntu 4.2.4-1ubuntu3)] The argument to os.popen is taken from a reply of mine here on Stack Overflow. Addendum: Thanks to cobbal below, it turns out that os.popen is not deprecated in Python 3.1, after all. A: Here is the PEP. Deprecated modules and functions in the standard library: - buildtools - cfmfile - commands.getstatus() - macostools.touched() - md5 - MimeWriter - mimify - popen2, os.popen[234]() - posixfile - sets - sha A: one thing that I can think of is that os.popen exists in python3, while os.popen2 doesn't. So one is "more deprecated" than the other, and scheduled for sooner removal from the language. A: In the meanwhile I have opened a ticket on the Python issue tracker. I'll keep this question open until the ticket is closed. A: commands.getstatusoutput still uses it according to the 2.6.4 documentation.
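Whatever the deprecation status, the warning's advice still applies: the os.popen('ps ...') call above maps onto the subprocess module. A sketch using the Python interpreter itself as the child process so it stays portable (the real command would be the ps invocation from the question):

```python
import subprocess
import sys

# subprocess.check_output is the usual replacement for a read-only os.popen()
out = subprocess.check_output(
    [sys.executable, "-c", "print('pid listing would go here')"],
    text=True,
)
print(out.strip())
```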
Is os.popen really deprecated in Python 2.6?
The on-line documentation states that os.popen is now deprecated. All other deprecated functions duly raise a DeprecationWarning. For instance: >>> import os >>> [c.close() for c in os.popen2('ps h -eo pid:1,command')] __main__:1: DeprecationWarning: os.popen2 is deprecated. Use the subprocess module. [None, None] The function os.popen, on the other hand, completes silently: >>>len(list(os.popen('ps h -eo pid:1,command'))) 202 Without raising a warning. Of the three possible scenarios It is expected behaviour that documentation and standard library have different ideas of what is deprecated; There is an error in the documentation and os.popen is not really deprecated; There is an error in the standard library and os.popen should raise a warning; which one is the correct one? For background information, here's the Python I'm using: >>> import sys >>> print sys.version 2.6.2 (r262:71600, May 12 2009, 10:57:01) [GCC 4.2.4 (Ubuntu 4.2.4-1ubuntu3)] The argument to os.popen is taken from a reply of mine here on Stack Overflow. Addendum: Thanks to cobbal below, it turns out that os.popen is not deprecated in Python 3.1, after all.
[ "Here is the PEP.\n\nDeprecated modules and functions in the standard library:\n\n - buildtools\n - cfmfile\n - commands.getstatus()\n - macostools.touched()\n - md5\n - MimeWriter\n - mimify\n - popen2, os.popen[234]()\n - posixfile\n - sets\n - sha\n\n\n", "one thing that I can think of is that os.popen exists in python3, while os.popen2 doesn't. So one is \"more deprecated\" than the other, and scheduled for sooner removal from the language.\n", "In the meanwhile I have opened a ticket on the Python issue tracker. I'll keep this question open until the ticket is closed.\n", "commands.getstatusoutput still uses it according to the 2.6.4 documentation.\n" ]
[ 5, 4, 3, 0 ]
[]
[]
[ "deprecated", "python", "std" ]
stackoverflow_0001098257_deprecated_python_std.txt
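The warning text in the entry above points at the `subprocess` module as the replacement for the `os.popen` family. A minimal Python 3 sketch of that swap (the question used `ps h -eo pid:1,command`; `echo` is substituted here purely to keep the sketch portable):

```python
import subprocess

def popen_lines(args):
    """Roughly what iterating over os.popen(cmd) yielded: the command's
    stdout, one string per line (subprocess-based version)."""
    out = subprocess.check_output(args)
    return out.decode().splitlines()

# e.g. popen_lines(["ps", "h", "-eo", "pid:1,command"]) on a Linux box;
# a trivial command keeps the example self-contained:
lines = popen_lines(["echo", "hello world"])
```

Unlike `os.popen`, `check_output` raises `CalledProcessError` on a nonzero exit status, so failures are not silently swallowed.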
Q: Anyone know where I can download a zipped Python distribution? Yeah, kind of random, but I was wondering if anyone could link me to a .zip file containing a Python distribution. I know I could download the installer, so please don't suggest that. :P. A: I didn't exactly understand what you want. Is Portable Python enough for you? If it isn't, check Python's official download website where you have a lot of options - including compressed source tarballs. You can download the tarballs, extract and create a zip file. A: Can you use the official Source Distribution of Python? It is not zipped, but you can unpack the whole thing in one line of Python. import tarfile; tarfile.open('Python-3.1.1.tar.bz2').extractall()
Anyone know where I can download a zipped Python distribution?
Yeah, kind of random, but I was wondering if anyone could link me to a .zip file containing a Python distribution. I know I could download the installer, so please don't suggest that. :P.
[ "I didn't exactly understand what you want. Is Portable Python enough for you? If it isn't, check Python's official download website where you have a lot of options - including compressed source tarballs. You can download the tarballs, extract and create a zip file. \n", "Can you use the official Source Distribution of Python? It is not zipped, but you can unpack the whole thing in one line of Python.\nimport tarfile; tarfile.open('Python-3.1.1.tar.bz2').extractall()\n\n" ]
[ 4, 2 ]
[]
[]
[ "download", "python" ]
stackoverflow_0001801286_download_python.txt
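The `tarfile` one-liner from the second answer, shown end to end. This self-contained sketch builds a tiny `.tar.bz2` first (the `demo` file names are invented for illustration; in practice the archive would be the downloaded `Python-x.y.z.tar.bz2`) and then unpacks it the same way:

```python
import os
import tarfile
import tempfile

workdir = tempfile.mkdtemp()

# Create a tiny archive standing in for the Python source tarball.
member_path = os.path.join(workdir, "demo", "README.txt")
os.makedirs(os.path.dirname(member_path))
with open(member_path, "w") as f:
    f.write("hello\n")
archive = os.path.join(workdir, "demo.tar.bz2")
with tarfile.open(archive, "w:bz2") as tar:
    tar.add(member_path, arcname="demo/README.txt")

# The unpacking step from the answer (tarfile auto-detects the compression):
extract_dir = os.path.join(workdir, "unpacked")
tarfile.open(archive).extractall(extract_dir)
extracted = os.path.exists(os.path.join(extract_dir, "demo", "README.txt"))
```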
Q: Django Html email adds extra characters to the email body I'm using Django to send an e-mail which has a text part, and an HTML part. Here's the code: subject = request.session.get('email_subject', None) from_email = request.session.get('user_email', None) to = request.session.get('user_email', None) bcc = [email.strip() for email in request.session.get('email_recipients', None).split(settings.EMAIL_DELIMITER)] text_content = render_to_response(email_text_template, { 'body': request.session.get('email_body', None), 'link': "http://%(site_url)s/ecard/?%(encoded_greeting)s" % { 'site_url': settings.SITE_URL, 'encoded_greeting': urlencode({'g': quote_plus(request.session.get('card_greeting'))}), }, }, context_instance=RequestContext(request)) html_content = render_to_response(email_html_template, { 'body': request.session.get('email_body', None), 'link': "http://%(site_url)s/ecard/?%(encoded_greeting)s" % { 'site_url': settings.SITE_URL, 'encoded_greeting': urlencode({'g': request.session.get('card_greeting')}), }, 'site_url': settings.SITE_URL, }, context_instance=RequestContext(request)) email = EmailMultiAlternatives(subject, text_content, from_email, [to], bcc) email.attach_alternative(html_content, "text/html") sent = email.send() When the user receives the email, it has this text in it: "Content-Type: text/html; charset=utf-8". Is there a good way to get rid of this? A: You are generating html_content and text_content with render_to_response, which returns an HttpResponse object. However you want html_content and text_content to be strings, so use render_to_string instead. You can import render_to_string with the following line: from django.template.loader import render_to_string A: Before you go with Alasdair's suggestion, fire up the shell and take a look at the output from render_to_string and render_to_response. The shell just might help you figure out a problem like this in the future. 
The "Content-Type: text/html; charset=utf-8" line that you mentioned is the header for the response generated by HttpResponse. It is the only item in the header for a simple HttpResponse object like the one in your example. $ ./manage.py shell Python 2.6.3 (r263:75183, Oct 14 2009, 15:40:24) Type "help", "copyright", "credits" or "license" for more information. >>> from django.shortcuts import render_to_response >>> from django.template.loader import render_to_string >>> template = 'your_template.html' >>> print( "\n".join(render_to_string(template).split('\n')[:3]) ) template-line-1 template-line-2 template-line-3 >>> print( "\n".join(str(render_to_response(template)).split('\n')[:3]) ) Content-Type: text/html; charset=utf-8 template-line-1 >>>
Django Html email adds extra characters to the email body
I'm using Django to send an e-mail which has a text part, and an HTML part. Here's the code: subject = request.session.get('email_subject', None) from_email = request.session.get('user_email', None) to = request.session.get('user_email', None) bcc = [email.strip() for email in request.session.get('email_recipients', None).split(settings.EMAIL_DELIMITER)] text_content = render_to_response(email_text_template, { 'body': request.session.get('email_body', None), 'link': "http://%(site_url)s/ecard/?%(encoded_greeting)s" % { 'site_url': settings.SITE_URL, 'encoded_greeting': urlencode({'g': quote_plus(request.session.get('card_greeting'))}), }, }, context_instance=RequestContext(request)) html_content = render_to_response(email_html_template, { 'body': request.session.get('email_body', None), 'link': "http://%(site_url)s/ecard/?%(encoded_greeting)s" % { 'site_url': settings.SITE_URL, 'encoded_greeting': urlencode({'g': request.session.get('card_greeting')}), }, 'site_url': settings.SITE_URL, }, context_instance=RequestContext(request)) email = EmailMultiAlternatives(subject, text_content, from_email, [to], bcc) email.attach_alternative(html_content, "text/html") sent = email.send() When the user receives the email, it has this text in it: "Content-Type: text/html; charset=utf-8". Is there a good way to get rid of this?
[ "You are generating html_content and text_content with render_to_response, which returns an HttpResponse object. \nHowever you want html_content and text_content to be strings, so use render_to_string instead.\nYou can import render_to_string with the following line:\nfrom django.template.loader import render_to_string\n\n", "Before you go with Alasdair's suggestion, fire up the shell and take a look at the output from render_to_string and render_to_response. The shell just might help you figure out a problem like this in the future.\nThe \"Content-Type: text/html; charset=utf-8\" line that you mentioned is the header for the response generated by HttpResponse. It is the only item in the header for a simple HttpResponse object like the one in your example.\n$ ./manage.py shell\nPython 2.6.3 (r263:75183, Oct 14 2009, 15:40:24) \nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> from django.shortcuts import render_to_response\n>>> from django.template.loader import render_to_string\n>>> template = 'your_template.html'\n>>> print( \"\\n\".join(render_to_string(template).split('\\n')[:3]) )\ntemplate-line-1\ntemplate-line-2\ntemplate-line-3\n>>> print( \"\\n\".join(str(render_to_response(template)).split('\\n')[:3]) )\nContent-Type: text/html; charset=utf-8\n\ntemplate-line-1\n>>>\n\n" ]
[ 5, 2 ]
[]
[]
[ "django", "email", "python" ]
stackoverflow_0001801008_django_email_python.txt
Q: The Pythonic way of organizing modules and packages I come from a background where I normally create one file per class. I organize common classes under directories as well. This practice is intuitive to me and it has been proven to be effective in C++, PHP, JavaScript, etc. I am having trouble bringing this metaphor into Python: files are not just files anymore, but they are formal modules. It doesn't seem right to just have one class in a module --- most classes are useless by themselves. If I have an automobile.py and an Automobile class, it seems silly to always reference it as automobile.Automobile as well. But, at the same time, it doesn't seem right to throw a ton of code into one file and call it a day. Obviously, a very complex application should have more than 5 files. What is the correct---or pythonic---way? (Or if there is no correct way, what is your preferred way and why?) How much code should I be throwing in a Python module? A: Think in terms of a "logical unit of packaging" -- which may be a single class, but more often will be a set of classes that closely cooperate. Classes (or module-level functions -- don't "do Java in Python" by always using static methods when module-level functions are also available as a choice!-) can be grouped based on this criterion. Basically, if most users of A also need B and vice versa, A and B should probably be in the same module; but if many users will only need one of them and not the other, then they should probably be in distinct modules (perhaps in the same package, i.e., directory with an __init__.py file in it). The standard Python library, while far from perfect, tends to reflect (mostly) reasonably good practices -- so you can mostly learn from it by example. E.g., the threading module of course defines a Thread class... 
but it also holds the synchronization-primitive classes such as locks, events, conditions, and semaphores, and an exception-class that can be raised by threading operations (and a few more things). It's at the upper bound of reasonable size (800 lines including whitespace and docstrings), and some crucial thread-related functionality such as Queue has been placed in a separate module, nevertheless it's a good example of what maximum amount of functionality it still makes sense to pack into a single module. A: If you want to stick to your one-class-per-file system (which is logical, don't get me wrong), you might do something like this to avoid having to refer to automobile.Automobile: from automobile import Automobile car = Automobile() However, as mentioned by cobbal, more than one class per file is pretty common in Python. Either way, as long as you pick a sensible system and use it consistently, I don't think any Python users are going to get mad at you :). A: If you are coming from a c++ point of view, you could view python modules akin to a .so or .dll. Yeah they look like source files, because python is scripted, but they are actually loadable libraries of specific functionality. Another metaphor that may help is you might look python modules as namespaces. A: In a mid-sized project, I found myself with several sets of closely related classes. Several of those sets are now grouped into files; for example, the low-level network classes are all in a single network module. However, a few of the largest classes have been split out into their own file. Perhaps the best way to start down that path from a one-class-per-file history is to take the classes that you would normally place in the same directory, and instead keep them in the same file. If that file starts looking too large, split it. A: As a vague guideline: more than 1 class per file is the norm for python also, see How many Python classes should I put in one file?
The Pythonic way of organizing modules and packages
I come from a background where I normally create one file per class. I organize common classes under directories as well. This practice is intuitive to me and it has been proven to be effective in C++, PHP, JavaScript, etc. I am having trouble bringing this metaphor into Python: files are not just files anymore, but they are formal modules. It doesn't seem right to just have one class in a module --- most classes are useless by themselves. If I have an automobile.py and an Automobile class, it seems silly to always reference it as automobile.Automobile as well. But, at the same time, it doesn't seem right to throw a ton of code into one file and call it a day. Obviously, a very complex application should have more than 5 files. What is the correct---or pythonic---way? (Or if there is no correct way, what is your preferred way and why?) How much code should I be throwing in a Python module?
[ "Think in terms of a \"logical unit of packaging\" -- which may be a single class, but more often will be a set of classes that closely cooperate. Classes (or module-level functions -- don't \"do Java in Python\" by always using static methods when module-level functions are also available as a choice!-) can be grouped based on this criterion. Basically, if most users of A also need B and vice versa, A and B should probably be in the same module; but if many users will only need one of them and not the other, then they should probably be in distinct modules (perhaps in the same package, i.e., directory with an __init__.py file in it).\nThe standard Python library, while far from perfect, tends to reflect (mostly) reasonably good practices -- so you can mostly learn from it by example. E.g., the threading module of course defines a Thread class... but it also holds the synchronization-primitive classes such as locks, events, conditions, and semaphores, and an exception-class that can be raised by threading operations (and a few more things). It's at the upper bound of reasonable size (800 lines including whitespace and docstrings), and some crucial thread-related functionality such as Queue has been placed in a separate module, nevertheless it's a good example of what maximum amount of functionality it still makes sense to pack into a single module.\n", "If you want to stick to your one-class-per-file system (which is logical, don't get me wrong), you might do something like this to avoid having to refer to automobile.Automobile:\nfrom automobile import Automobile\ncar = Automobile()\n\nHowever, as mentioned by cobbal, more than one class per file is pretty common in Python. Either way, as long as you pick a sensible system and use it consistently, I don't think any Python users are going to get mad at you :).\n", "If you are coming from a c++ point of view, you could view python modules akin to a .so or .dll. 
Yeah they look like source files, because python is scripted, but they are actually loadable libraries of specific functionality. \nAnother metaphor that may help is you might look python modules as namespaces.\n", "In a mid-sized project, I found myself with several sets of closely related classes. Several of those sets are now grouped into files; for example, the low-level network classes are all in a single network module. However, a few of the largest classes have been split out into their own file.\nPerhaps the best way to start down that path from a one-class-per-file history is to take the classes that you would normally place in the same directory, and instead keep them in the same file. If that file starts looking too large, split it.\n", "As a vague guideline: more than 1 class per file is the norm for python\nalso, see How many Python classes should I put in one file?\n" ]
[ 39, 11, 7, 4, 3 ]
[]
[]
[ "module", "package", "project_organization", "python" ]
stackoverflow_0001801878_module_package_project_organization_python.txt
Q: Using a hash function to give a memorable personality to objects (Note: The project is in Python.) I'm running a simulation in which I have many objects that I want to show on the screen and manipulate with. There needs to be a way to identify each object, because they'll be moving from place to place abruptly and I want to be able to track which object moved where. What I've been thinking is, to every object I'll generate a "personality". A couple of colors, and an english name, and I'll put that as the object representation on the GUI. I figured that a hash function will be used to make these colors and names, but I've never worked with hash functions. How can I do what I want to do? A: use a uuid (module uuid in python >= 2.5). This uuid, in version 4, is by definition random on all fields (except one) >>> uuid.uuid4() UUID('9d477dc7-a986-4e3d-aa4f-6e57f690be78') You can decompose the fields properly to create a color or a name (by mapping a bucket of names to a specific field). Of course you are limiting your hash (the real identity is always the uuid) but for visual purposes it's greatly sufficient. For example, you could use the first three octets to generate the color #9d477d, and the remaining octet c7 to pick one name out of a set of 256. If you end up with too ugly colors, you can work in HSV instead, and clamp saturation and value to given levels. again, this restricts your hash even more (but the color space is already pretty limited after all).
Using a hash function to give a memorable personality to objects
(Note: The project is in Python.) I'm running a simulation in which I have many objects that I want to show on the screen and manipulate with. There needs to be a way to identify each object, because they'll be moving from place to place abruptly and I want to be able to track which object moved where. What I've been thinking is, to every object I'll generate a "personality". A couple of colors, and an english name, and I'll put that as the object representation on the GUI. I figured that a hash function will be used to make these colors and names, but I've never worked with hash functions. How can I do what I want to do?
[ "use a uuid (module uuid in python >= 2.5).\nThis uuid, in version 4, is by definition random on all fields (except one)\n>>> uuid.uuid4()\nUUID('9d477dc7-a986-4e3d-aa4f-6e57f690be78')\n\nYou can decompose the fields properly to create a color or a name (by mapping a bucket of names to a specific field). Of course you are limiting your hash (the real identity is always the uuid) but for visual purposes it's greatly sufficient.\nFor example, you could use the first three octets to generate the color #9d477d, and the remaining octet c7 to pick one name out of a set of 256.\nIf you end up with too ugly colors, you can work in HSV instead, and clamp saturation and value to given levels. again, this restricts your hash even more (but the color space is already pretty limited after all).\n" ]
[ 2 ]
[]
[]
[ "hash", "identification", "python" ]
stackoverflow_0001802094_hash_identification_python.txt
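The field-to-personality mapping the answer describes can be sketched directly from a UUID's bytes: first three octets become an `#rrggbb` color, the next octet indexes a name table. The four-entry `NAMES` list here is an invented stand-in for the 256-name set the answer suggests:

```python
import uuid

NAMES = ["Alice", "Bob", "Carol", "Dave"]  # stand-in; the answer suggests 256 names

def personality(u):
    """Derive a display color and name from a UUID, as the answer sketches:
    first three octets -> #rrggbb, one more octet -> index into a name list."""
    octets = u.bytes
    color = "#%02x%02x%02x" % (octets[0], octets[1], octets[2])
    name = NAMES[octets[3] % len(NAMES)]
    return color, name

# In real use this would be uuid.uuid4(); the fixed UUID from the answer
# keeps the example deterministic.
color, name = personality(uuid.UUID("9d477dc7-a986-4e3d-aa4f-6e57f690be78"))
```

For that UUID the sketch reproduces the `#9d477d` color mentioned in the answer; the real identity of each object should still be the full UUID, since the displayed color/name pair is a lossy projection.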
Q: Python MySQLdb: Update if exists, else insert I am looking for a simple way to query an update or insert based on if the row exists in the first place. I am trying to use Python's MySQLdb right now. This is how I execute my query: self.cursor.execute("""UPDATE `inventory` SET `quantity` = `quantity`+{1} WHERE `item_number` = {0} """.format(item_number,quantity)); I have seen four ways to accomplish this: DUPLICATE KEY. Unfortunately the primary key is already taken up as a unique ID so I can't use this. REPLACE. Same as above, I believe it relies on a primary key to work properly. mysql_affected_rows(). Usually you can use this after updating the row to see if anything was affected. I don't believe MySQLdb in Python supports this feature. Of course the last ditch effort: Make a SELECT query, fetchall, then update or insert based on the result. Basically I am just trying to keep the queries to a minimum, so 2 queries instead of 1 is less than ideal right now. Basically I am wondering if I missed any other way to accomplish this before going with option 4. Thanks for your time. A: Mysql DOES allow you to have unique indexes, and INSERT ... ON DUPLICATE UPDATE will do the update if any unique index has a duplicate, not just the PK. However, I'd probably still go for the "two queries" approach. You are doing this in a transaction, right? Do the update Check the rows affected, if it's 0 then do the insert OR Attempt the insert If it failed because of a unique index violation, do the update (NB: You'll want to check the error code to make sure it didn't fail for some OTHER reason) The former is good if the row will usually exist already, but can cause a race (or deadlock) condition if you do it outside a transaction or have your isolation mode is not high enough. 
Creating a unique index on item_number in your inventory table sounds like a good idea to me, because I imagine (without knowing the details of your schema) that one item should only have a single stock level (assuming your system doesn't allow multiple stock locations etc).
Python MySQLdb: Update if exists, else insert
I am looking for a simple way to query an update or insert based on if the row exists in the first place. I am trying to use Python's MySQLdb right now. This is how I execute my query: self.cursor.execute("""UPDATE `inventory` SET `quantity` = `quantity`+{1} WHERE `item_number` = {0} """.format(item_number,quantity)); I have seen four ways to accomplish this: DUPLICATE KEY. Unfortunately the primary key is already taken up as a unique ID so I can't use this. REPLACE. Same as above, I believe it relies on a primary key to work properly. mysql_affected_rows(). Usually you can use this after updating the row to see if anything was affected. I don't believe MySQLdb in Python supports this feature. Of course the last ditch effort: Make a SELECT query, fetchall, then update or insert based on the result. Basically I am just trying to keep the queries to a minimum, so 2 queries instead of 1 is less than ideal right now. Basically I am wondering if I missed any other way to accomplish this before going with option 4. Thanks for your time.
[ "Mysql DOES allow you to have unique indexes, and INSERT ... ON DUPLICATE UPDATE will do the update if any unique index has a duplicate, not just the PK.\nHowever, I'd probably still go for the \"two queries\" approach. You are doing this in a transaction, right?\n\nDo the update\nCheck the rows affected, if it's 0 then do the insert\n\nOR\n\nAttempt the insert\nIf it failed because of a unique index violation, do the update (NB: You'll want to check the error code to make sure it didn't fail for some OTHER reason)\n\nThe former is good if the row will usually exist already, but can cause a race (or deadlock) condition if you do it outside a transaction or have your isolation mode is not high enough.\nCreating a unique index on item_number in your inventory table sounds like a good idea to me, because I imagine (without knowing the details of your schema) that one item should only have a single stock level (assuming your system doesn't allow multiple stock locations etc).\n" ]
[ 2 ]
[]
[]
[ "python", "sql", "sql_insert", "sql_update" ]
stackoverflow_0001802172_python_sql_sql_insert_sql_update.txt
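The "do the update, check the rows affected, else insert" option from the answer is available through the DB-API `cursor.rowcount` attribute (which MySQLdb does in fact expose, contrary to point 3 of the question). Sketched here against the stdlib `sqlite3` module so it runs anywhere — note sqlite3 uses `?` placeholders where MySQLdb uses `%s`, and parameterized queries also avoid the SQL-injection risk of the `.format()` call in the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (item_number INTEGER UNIQUE, quantity INTEGER)")

def add_stock(conn, item_number, quantity):
    cur = conn.cursor()
    cur.execute(
        "UPDATE inventory SET quantity = quantity + ? WHERE item_number = ?",
        (quantity, item_number),
    )
    if cur.rowcount == 0:  # nothing was updated -> the row did not exist yet
        cur.execute(
            "INSERT INTO inventory (item_number, quantity) VALUES (?, ?)",
            (item_number, quantity),
        )
    conn.commit()

add_stock(conn, 42, 5)   # first call inserts
add_stock(conn, 42, 3)   # second call updates
quantity = conn.execute(
    "SELECT quantity FROM inventory WHERE item_number = 42").fetchone()[0]
```

As the answer notes, this should run inside a transaction (or use the insert-first variant) to avoid a race between the update and the insert.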
Q: Large Sqlite database search How is it possible to implement an efficient large Sqlite db search (more than 90000 entries)? I'm using Python and SQLObject ORM: import re ... def search1(): cr = re.compile(ur'foo') for item in Item.select(): if cr.search(item.name) or cr.search(item.skim): print item.name This function runs in more than 30 seconds. How should I make it run faster? UPD: The test: for item in Item.select(): pass ... takes almost the same time as my initial function (0:00:33.093141 to 0:00:33.322414). So the regexps eat no time. A Sqlite3 shell query: select '' from item where name like '%foo%'; runs in about a second. So the main time consumption happens due to the inefficient ORM's data retrieval from db. I guess SQLObject grabs entire rows here, while Sqlite touches only necessary fields. A: The best way would be to rework your logic to do the selection in the database instead of in your python program. Instead of doing Item.select(), you should rework it to do Item.select("""name LIKE .... If you do this, and make sure you have the name and skim columns indexed, it will return very quickly. 90000 entries is not a large database. A: 30 seconds to fetch 90,000 rows might not be all that bad. Have you benchmarked the time required to do the following? for item in Item.select(): pass Just to see if the time is DB time, network time or application time? If your SQLite DB is physically very large, you could be looking at -- simply -- a lot of physical I/O to read all that database stuff in. A: If you really need to use a regular expression, there's not really anything you can do to speed that up tremendously. The best thing would be to write an sqlite function that performs the comparison for you in the db engine, instead of Python. You could also switch to a db server like postgresql that has support for SIMILAR. 
http://www.postgresql.org/docs/8.3/static/functions-matching.html A: Given your example and expanding on Reed's answer your code could look a bit like the following: import re import sqlalchemy.sql.expression as expr ... def search1(): searchStr = ur'foo' whereClause = expr.or_(itemsTable.c.nameColumn.contains(searchStr), itemsTable.c.skimColumn.contains(searchStr)) for item in Items.select().where(whereClause): print item.name which translates to SELECT * FROM items WHERE name LIKE '%foo%' or skim LIKE '%foo%' This will have the database do all the filtering work for you instead of fetching all 90000 records and doing possibly two regex operations on each record. You can find some info here on the .contains() method here. As well as the SQLAlchemy SQL Expression Language Tutorial here. Of course the example above assumes variable names for your itemsTable and the column it has (nameColumn and skimColumn). A: I would definitely take Reed's suggestion to pass the filter to the SQL (forget the index part though). I do not think that selecting only specified fields or all fields makes a difference (unless you do have a lot of large fields). I would bet that SQLObject creates/instantiates 80K objects and puts them into a Session/UnitOfWork for tracking. This could definitely take some time. Also if you do not need objects in your session, there must be a way to select just the fields you need using custom-query creation so that no Item objects are created, but only tuples. A: Initially doing regex via Python was considered for y_serial, but that was dropped in favor of SQLite's GLOB (which is far faster). GLOB is similar to LIKE except that its syntax is more conventional: * instead of %, ? instead of _ . See the Endnotes at http://yserial.sourceforge.net/ for more details.
Large Sqlite database search
How is it possible to implement an efficient large Sqlite db search (more than 90000 entries)? I'm using Python and SQLObject ORM: import re ... def search1(): cr = re.compile(ur'foo') for item in Item.select(): if cr.search(item.name) or cr.search(item.skim): print item.name This function runs in more than 30 seconds. How should I make it run faster? UPD: The test: for item in Item.select(): pass ... takes almost the same time as my initial function (0:00:33.093141 to 0:00:33.322414). So the regexps eat no time. A Sqlite3 shell query: select '' from item where name like '%foo%'; runs in about a second. So the main time consumption happens due to the inefficient ORM's data retrieval from db. I guess SQLObject grabs entire rows here, while Sqlite touches only necessary fields.
[ "The best way would be to rework your logic to do the selection in the database instead of in your python program.\nInstead of doing Item.select(), you should rework it to do Item.select(\"\"\"name LIKE ....\nIf you do this, and make sure you have the name and skim columns indexed, it will return very quickly. 90000 entries is not a large database.\n", "30 seconds to fetch 90,000 rows might not be all that bad.\nHave you benchmarked the time required to do the following?\n for item in Item.select():\n pass\n\nJust to see if the time is DB time, network time or application time?\nIf your SQLite DB is physically very large, you could be looking at -- simply -- a lot of physical I/O to read all that database stuff in. \n", "If you really need to use a regular expression, there's not really anything you can do to speed that up tremendously. \nThe best thing would be to write an sqlite function that performs the comparison for you in the db engine, instead of Python.\nYou could also switch to a db server like postgresql that has support for SIMILAR. 
\nhttp://www.postgresql.org/docs/8.3/static/functions-matching.html\n", "Given your example and expanding on Reed's answer your code could look a bit like the following:\nimport re\nimport sqlalchemy.sql.expression as expr\n\n...\n\ndef search1():\n searchStr = ur'foo'\n whereClause = expr.or_(itemsTable.c.nameColumn.contains(searchStr), itemsTable.c.skimColumn.contains(searchStr))\n for item in Items.select().where(whereClause):\n print item.name\n\nwhich translates to\nSELECT * FROM items WHERE name LIKE '%foo%' or skim LIKE '%foo%'\n\nThis will have the database do all the filtering work for you instead of fetching all 90000 records and doing possibly two regex operations on each record.\nYou can find some info here on the .contains() method here.\nAs well as the SQLAlchemy SQL Expression Language Tutorial here.\nOf course the example above assumes variable names for your itemsTable and the column it has (nameColumn and skimColumn).\n", "I would definitely take a suggestion of Reed to pass the filter to the SQL (forget the index part though).\nI do not think that selecting only specified fields or all fields make a difference (unless you do have a lot of large fields). I would bet that SQLObject creates/instanciates 80K objects and puts them into a Session/UnitOfWork for tracking. This could definitely take some time.\nAlso if you do not need objects in your session, there must be a way to select just what the fields you need using custom-query creation so that no Item objects are created, but only tuples.\n", "Initially doing regex via Python was considered for y_serial, but that \nwas dropped in favor of SQLite's GLOB (which is far faster).\nGLOB is similar to LIKE except that it's syntax is more \nconventional: * instead of %, ? instead of _ .\nSee the Endnotes at http://yserial.sourceforge.net/ for more details. \n" ]
[ 3, 2, 0, 0, 0, 0 ]
[]
[]
[ "database", "performance", "python", "search", "sql" ]
stackoverflow_0001002953_database_performance_python_search_sql.txt
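The central point of the accepted answer — push the filter into SQL rather than regex-scanning every row in Python — can be shown with the stdlib `sqlite3` module. The table and rows below are invented for illustration, mirroring the `item` table's `name` and `skim` columns from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (name TEXT, skim TEXT)")
conn.executemany("INSERT INTO item VALUES (?, ?)", [
    ("foobar", ""),
    ("plain", "has foo inside"),
    ("nothing", "here"),
])

# Let SQLite do the matching; only matching rows cross the DB boundary,
# instead of all 90000 being materialized as ORM objects first.
matches = [row[0] for row in conn.execute(
    "SELECT name FROM item WHERE name LIKE '%foo%' OR skim LIKE '%foo%'")]
```

The same WHERE clause can be handed to SQLObject via its query-building API, which was the fix the question's UPD section pointed toward.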
Q: django embedding user id into URL template best practice I'm building a navigation menu in my django app, and one of the options is "My Account". There are different roles I have for users, but in order for them all to view their profile, I use a generic URL such as http://mysite/user/<user_id>/profile. What's a Django best practice for building this url using templates? Is it simply something like: <a href="/user/{{ user.id }}/profile">My Account</a> Or is it: <a href="{{ url something something }}">My Account</a> Not entirely sure what the appropriate syntax for using the url template tag is. Here's what my URLconf looks like: (r'^user/(?P<user_id>\d+)/profile/$', user_profile) What's my best bet? A: Look into named URLs, you can find the official django documentation here. Basically you can name your URLs in your URL conf as such: url(r'^user/(?P<user_id>\d+)/profile/$', 'yourapp.views.view', name='user_url') And then in any template, you can do this: <a href="{% url user_url user.id %}"> However, this will make your URL structure pretty ugly, and there are better ways of doing this. For example you could just go to /profile/ and the userid is retrieved from the current session (each request has a 'user' attribute, use it). So for example, in your view you can do this: def myview(request): user = request.user And subsequently you can use that information to do what you want. Much nicer than using ids in the URL and you don't have to worry about any other security issues that might involve. A: Easiest way - define a get_absolute_url method in your model and use that, in conjunction with the permalink decorator if you like.
django embedding user id into URL template best practice
I'm building a navigation menu in my django app, and one of the options is "My Account". There are different roles I have for users, but in order for them all to view their profile, I use a generic URL such as http://mysite/user//profile. What's a Django best practice for building this url using templates? Is it simply something like: <a href="/user/{{ user.id }}/profile">My Account</a> Or is it: <a href="{{ url something something }}">My Account</a> Not entirely sure what the appropriate syntax for using the url template tag is. Here's what my URLconf looks like: (r'^user/(?P<user_id>\d+)/profile/$', user_profile) What's my best bet?
[ "Look into named URLs, you can find the official django documentation here.\nBasically you can name your URLs in your URL conf as such:\nurl(r'^user/(?P<user_id>\\d+)/profile/$', 'yourapp.views.view', name='user_url')\n\nAnd then in any template, you can do this:\n<a href=\"{% url user_url user.id %}\">\n\nHowever, this will make your URL structure pretty ugly, and there are better ways of doing this. For example you could just go to /profile/ and the userid is retrieved from the current session (each request has a 'user' attribute, use it). So for example, in your view you can do this:\ndef myview(request):\n user = request.user\n\nAnd subsequently you can use that information to do what you want. Much nicer than using ids in the URL and you don't have to worry about any other security issues that might involve.\n", "Easiest way - define a get_absolute_url method in your model and use that, in conjunction with the permalink decorator if you like.\n" ]
[ 8, 2 ]
[ "Your first try matches the url listed in your URLconf. I'd also use that aproach.\n" ]
[ -2 ]
[ "django", "python" ]
stackoverflow_0001801350_django_python.txt
Q: What the mechanism use to integrate python with other languages (.Net, Java ....) Somebody told me that Python code can be embedded into C# code. What is the mechanism to do that? Please explain it for me. Thanks a lot A: There are several approaches to this, depending on which languages you want to interoperate with. .Net/CLR Languages - Iron Python provides an implementation of Python running on the CLR. Allows you to use other CLR assemblies and embed a python scripting engine in your code Java/JVM Based Languages - Jython provides an implementation on the JVM and allows you to use Java classes and call into jython as a scripting language using JSR 223 - Scripting for the Java Platform C/C++/Perl/etc, etc The Simplified Wrapper and Interface Generator allows you to interop between C based languages and others, including .Net and Java. It's very good for C++, C and COM - other languages are little trickier - but worth checking out if you need to use CPython with .Net or Java A: Use IronPython for integration with .net. Likewise, Jython integrates with Java. A: And Jython for integration with Java.
What the mechanism use to integrate python with other languages (.Net, Java ....)
Somebody told me that Python code can be embedded into C# code. What is the mechanism to do that? Please explain it for me. Thanks a lot
[ "There are several approaches to this, depending on which languages you want to interoperate with.\n\n.Net/CLR Languages - Iron Python provides an implementation of Python running on the CLR. Allows you to use other CLR assemblies and embed a python scripting engine in your code \nJava/JVM Based Languages - Jython provides an implementation on the JVM and allows you to use Java classes and call to call into jython as a scripting language using JSR 223 - Scripting for the Java Platform\nC/C++/Perl/etc, etc The Simplified Wrapper and Interface Generator allows you to interop between C based languages and others, including .Net and Java. It's very good for C++, C and COM - other languages are little trickier - but worth checking out if you need to use CPython with .Net or Java\n\n", "Use IronPython for integration with .net. Likewise, Jython integrates with Java.\n", "And Jython for integration with Java.\n" ]
[ 6, 5, 2 ]
[]
[]
[ ".net", "embed", "java", "python" ]
stackoverflow_0001802256_.net_embed_java_python.txt
Q: What is paste script? I'm trying to understand what paste script and paster are. The website is far from clear. I used paster to generate pre-made layouts for projects, but I don't get the big picture. As far as I understand, and from the wikipedia entry, it says it's a framework for web frameworks, but that seems reductive. paster create seems to be able to create pre-made layouts for setuptools/distutils enabled packages. What is the problem (or set of problems) it's trying to solve? A: Paste got several components: Paste Core: various modules to aid in creating wsgi web apps or frameworks (module index). Includes stuff like request and response objects. From the web site: "The future of these pieces is to split them into independent packages, and refactor the internal Paste dependencies to rely instead on WebOb". If you're considering using components from paste core, I suggest you look at the spin-offs instead, like WebOb. Paste Deploy: a system for loading and configuring WSGI applications and servers (module index). Basically some stuff to read a config file and create a WSGI app as specified in the file. Paste Script: A framework for defining commands. It comes with a few commands out of the box, like paster serve (loads and serves a WSGI application defined in a Paste Deploy config file) and paster create (creates directory layout for packages etc). The best intro to paste script I found is http://pythonpaste.org/script/developer.html Here's the source for the paster serve command: serve.py. And paster create: create_distro.py. A: PasteScript (and its companion PasteDeploy) are tools for running Python code using 'entry points'. Basically, a python library can specify in metadata that it knows how to create a certain kind of Python project, or perform certain operations on those projects. paster is a commandline tool that looks up the appropriate code for the operation you requested. 
It's a very general kind of problem; if you're familiar with Ruby at all, the equivalent might be 'rake'. In particular, PasteDeploy is a configuration format to serve Python webapps using paster. Both PasteScript and PasteDeploy are important for the Pylons web framework.
What is paste script?
I'm trying to understand what paste script and paster are. The website is far from clear. I used paster to generate pre-made layouts for projects, but I don't get the big picture. As far as I understand, and from the wikipedia entry, it says it's a framework for web frameworks, but that seems reductive. paster create seems to be able to create pre-made layouts for setuptools/distutils enabled packages. What is the problem (or set of problems) it's trying to solve?
[ "Paste got several components:\n\nPaste Core: various modules to aid in creating wsgi web apps or frameworks (module index). Includes stuff like request and response objects. From the web site: \"The future of these pieces is to split them into independent packages, and refactor the internal Paste dependencies to rely instead on WebOb\". If you're considering using components from paste core, I suggest you look at the spin-offs instead, like WebOb.\nPaste Deploy: a system for loading and configuring WSGI applications and servers (module index). Basically some stuff to read a config file and create a WSGI app as specified in the file.\nPaste Script: A framework for defining commands. It comes with a few commands out of the box, like paster serve (loads and serves a WSGI application defined in a Paste Deploy config file) and paster create (creates directory layout for packages etc). The best intro to paste script I found is http://pythonpaste.org/script/developer.html\n\nHere's the source for the paster serve command: serve.py.\nAnd paster create: create_distro.py.\n", "PasteScript (and its companion PasteDeploy) are tools for running Python code using 'entry points'. Basically, a python library can specify in metadata that it knows how to create a certain kind of Python project, or perform certain operations on those projects. paster is a commandline tool that looks up the appropriate code for the operation you requested. It's a very general kind of problem; if you're familiar with Ruby at all, the equivalent might be 'rake'.\nIn particular, PasteDeploy is a configuration format to serve Python webapps using paster. Both PasteScript and PasteDeploy are important for the Pylons web framework.\n" ]
[ 14, 4 ]
[]
[]
[ "paste", "paster", "python" ]
stackoverflow_0001802282_paste_paster_python.txt
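To make the Paste Deploy piece above concrete: `paster serve` reads an INI-style file, resolves the `egg:` entry points named in it, and serves the resulting WSGI app. The file below is a hypothetical minimal example; the egg names and settings are placeholders, not from any real project.

```ini
# development.ini -- served with: paster serve development.ini
[app:main]
# entry point that knows how to construct the WSGI application
use = egg:MyApp

[server:main]
# Paste's built-in HTTP server
use = egg:Paste#http
host = 127.0.0.1
port = 8080
```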
Q: How to identify whether a variable is a class or an object I am working at a bit lower level writing a small framework for creating test fixtures for my project in Python. In this I want to find out whether a particular variable is an instance of a certain class or a class itself and if it is a class, I want to know if it is a subclass of a certain class defined by my framework. How do I do it? class MyBase(object): pass class A(MyBase): a1 = 'Val1' a2 = 'Val2' class B(MyBase): a1 = 'Val3' a2 = A I want to find out if the properties a1 and a2 are instances of a class/type (a1 is a string type in B) or a class object itself (i.e a2 is A in B). Can you please help me how do I find this out? A: Use the inspect module. The inspect module provides several useful functions to help get information about live objects such as modules, classes, methods, functions, tracebacks, frame objects, and code objects. For example, it can help you examine the contents of a class, retrieve the source code of a method, extract and format the argument list for a function, or get all the information you need to display a detailed traceback. For example, the inspect.isclass() function returns true if the object is a class: >>> import inspect >>> inspect.isclass(inspect) False >>> inspect.isclass(inspect.ArgInfo) True >>> A: Use isinstance and type, types.ClassType (that latter is for old-style classes): >>> isinstance(int, type) True >>> isinstance(1, type) False A: Use the type() function. It will return the type of the object. You can get 'stock' types to match with from the types library. Old-style classes (that don't inherit from anything) have the type types.ClassType. New-style classes like in your example have the type types.TypeType. There are plenty of other useful types in this module for strings and such. Calling type() on an instance of an old-style class returns types.InstanceType. Calling it on an instance of a new-style class returns the class itself. 
A: Use type() >>> class A: pass >>> print type(A) <type 'classobj'> >>> A = "abc" >>> print type(A) <type 'str'>
How to identify whether a variable is a class or an object
I am working at a bit of a lower level, writing a small framework for creating test fixtures for my project in Python. In this I want to find out whether a particular variable is an instance of a certain class or a class itself and if it is a class, I want to know if it is a subclass of a certain class defined by my framework. How do I do it? class MyBase(object): pass class A(MyBase): a1 = 'Val1' a2 = 'Val2' class B(MyBase): a1 = 'Val3' a2 = A I want to find out if the properties a1 and a2 are instances of a class/type (a1 is a string type in B) or a class object itself (i.e. a2 is A in B). Can you please help me find this out?
[ "Use the inspect module.\n\nThe inspect module provides several useful functions to help get information about live objects such as modules, classes, methods, functions, tracebacks, frame objects, and code objects. For example, it can help you examine the contents of a class, retrieve the source code of a method, extract and format the argument list for a function, or get all the information you need to display a detailed traceback.\n\nFor example, the inspect.isclass() function returns true if the object is a class:\n>>> import inspect\n>>> inspect.isclass(inspect)\nFalse\n>>> inspect.isclass(inspect.ArgInfo)\nTrue\n>>> \n\n", "Use isinstance and type, types.ClassType (that latter is for old-style classes):\n>>> isinstance(int, type)\nTrue\n\n>>> isinstance(1, type)\nFalse\n\n", "Use the type() function. It will return the type of the object. You can get 'stock' types to match with from the types library. Old-style classes (that don't inherit from anything) have the type types.ClassType. New-style classes like in your example have the type types.TypeType. There are plenty of other useful types in this module for strings and such.\nCalling type() on an instance of an old-style class returns types.InstanceType. Calling it on an instance of a new-style class returns the class itself.\n", "Use type()\n>>> class A: pass\n>>> print type(A)\n<type 'classobj'>\n>>> A = \"abc\"\n>>> print type(A)\n<type 'str'>\n\n" ]
[ 11, 4, 3, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001802480_python.txt
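Tying the answers above back to the original question: a small sketch (`describe` is an invented helper name, not from any framework) that distinguishes plain values from classes, and further checks whether a class subclasses the framework's base class:

```python
import inspect

class MyBase(object):
    pass

class A(MyBase):
    a1 = 'Val1'
    a2 = 'Val2'

class B(MyBase):
    a1 = 'Val3'
    a2 = A

def describe(value, base=MyBase):
    """Classify an attribute value: plain instance, class,
    or class that subclasses `base`."""
    if inspect.isclass(value):          # True for classes, False for instances
        if issubclass(value, base):
            return 'subclass of %s' % base.__name__
        return 'class'
    return 'instance of %s' % type(value).__name__

print(describe(B.a1))  # → instance of str
print(describe(B.a2))  # → subclass of MyBase
```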
Q: Python nested lists and recursion problem I'm trying to process a first order logic formula represented as nested lists and strings in python so that that its in disjunctive normal form, i.e ['&', ['|', 'a', 'b'], ['|', 'c', 'd']] turns into ['|' ['&', ['&', 'a', 'c'], ['&', 'b', 'c']], ['&', ['&', 'a', 'd'], ['&', 'b', 'd']]] where | is 'or' and & is 'and'. currently im using a recursive implementation which does several passes over a formula until it can't find any nested 'or' symbols inside a list argument for 'ands'. This is my implementation, performDNF(form) is the entry point. Right now it performs a single pass over the formula but then the while loop checking function finds no '|'s inside of '&'s and terminates, help anyone this is driving me mad. def dnfDistributivity(self, form): if isinstance(form, type([])): if len(form) == 3: if form[0] == '&': if form[1][0] == '|': form = ['|', ['&', form[2], form[1][1]], ['&', form[2], form[1][2]]] elif form[2][0] == '|': form = ['|', ['&', form[1], form[2][1]], ['&', form[1], form[2][2]]] form[1] = self.dnfDistributivity(form[1]) form[2] = self.dnfDistributivity(form[2]) elif len(form) == 2: form[1] = self.dnfDistributivity(form[1]) return form def checkDistributivity(self, form, result = 0): if isinstance(form, type([])): if len(form) == 3: if form[0] == '&': print "found &" if isinstance(form[1], type([])): if form[1][0] == '|': return 1 elif isinstance(form[2], type([])): if form[2][0] == '|': return 1 else: result = self.checkDistributivity(form[1], result) print result if result != 1: result = self.checkDistributivity(form[2], result) print result elif len(form) == 2: result = self.checkDistributivity(form[1], result) print result return result def performDNF(self, form): while self.checkDistributivity(form): form = self.dnfDistributivity(self.dnfDistributivity(form)) return form A: Okay, here is an actual solution that seems to work. 
I do not understand your code, and I hadn't heard of DNF, so I started out by studying the problem some more. The Wikipedia page on DNF was very helpful. It included a grammar that describes DNF. Based on that, I wrote a simple set of recursive functions that I believe correctly recognize DNF in the format you require. My code includes some simple test cases. Then I realized that the binary tree nature of your data representation makes it relatively simple to apply DeMorgan's laws to simplify the 'not' case, by writing a function called negate() that recursively negates, and the rest fell into place. I have included test cases. It seems to work. I have no further plans to work on this. If someone finds a bug, please provide a test case and I'll look at it. This code should run on any Python version 2.4 or newer. You could even port it to older Python versions by replacing the frozenset with a simple list of characters. I tested with Python 3.x, and found the exception syntax has changed, so you will need to change the raise lines if you want to run it under Python 3; the important parts all work. In the question, you did not mention what character you use for not; given your use of & for and and | for or, I assumed you likely use ! for not and wrote the code accordingly. This is one of the things that puzzles me about your code: don't you ever expect to find not in your input? I had some fun working on this. It's not as pointless as a sudoku puzzle. import sys ch_and = '&' ch_not = '!' 
ch_or = '|' def echo(*args): # like print() in Python 3 but works in 2.x or in 3 sys.stdout.write(" ".join(str(x) for x in args) + "\n") try: symbols = frozenset([ch_and, ch_not, ch_or]) except NameError: raise Exception, "sorry, your Python is too old for this code" try: __str_type = basestring except NameError: __str_type = str def is_symbol(x): if not isinstance(x, __str_type) or len(x) == 0: return False return x[0] in symbols def is_and(x): if not isinstance(x, __str_type) or len(x) == 0: return False return x[0] == ch_and def is_or(x): if not isinstance(x, __str_type) or len(x) == 0: return False return x[0] == ch_or def is_not(x): if not isinstance(x, __str_type) or len(x) == 0: return False return x[0] == ch_not def is_literal_char(x): if not isinstance(x, __str_type) or len(x) == 0: return False return x[0] not in symbols def is_list(x, n): return isinstance(x, list) and len(x) == n def is_literal(x): """\ True if x is a literal char, or a 'not' followed by a literal char.""" if is_literal_char(x): return True return is_list(x, 2) and is_not(x[0]) and is_literal_char(x[1]) def is_conjunct(x): """\ True if x is a literal, or 'and' followed by two conjuctions.""" if is_literal(x): return True return (is_list(x, 3) and is_and(x[0]) and is_conjunct(x[1]) and is_conjunct(x[2])) def is_disjunct(x): """\ True if x is a conjunction, or 'or' followed by two disjuctions.""" if is_conjunct(x): return True return (is_list(x, 3) and is_or(x[0]) and is_disjunct(x[1]) and is_disjunct(x[2])) def is_dnf(x): return is_disjunct(x) def is_wf(x): """returns True if x is a well-formed list""" if is_literal(x): return True elif not isinstance(x, list): raise TypeError, "only lists allowed" elif len(x) == 2 and is_not(x[0]) and is_wf(x[1]): return True else: return (is_list(x, 3) and (is_and(x[0]) or is_or(x[0])) and is_wf(x[1]) and is_wf(x[2])) def negate(x): # trivial: negate a returns !a if is_literal_char(x): return [ch_not, x] # trivial: negate !a returns a if is_list(x, 2) 
and is_not(x[0]): return x[1] # DeMorgan's law: negate (a && b) returns (!a || !b) if is_list(x, 3) and is_and(x[0]): return [ch_or, negate(x[1]), negate(x[2])] # DeMorgan's law: negate (a || b) returns (!a && !b) if is_list(x, 3) and is_or(x[0]): return [ch_and, negate(x[1]), negate(x[2])] raise ValueError, "negate() only works on well-formed values." def __rewrite(x): # handle all dnf, which includes simple literals. if is_dnf(x): # basis case. no work to do, return unchanged. return x if len(x) == 2 and is_not(x[0]): x1 = x[1] if is_list(x1, 2) and is_not(x1[0]): # double negative! throw away the 'not' 'not' and keep rewriting. return __rewrite(x1[1]) assert is_list(x1, 3) # handle non-inner 'not' return __rewrite(negate(x1)) # handle 'and' with 'or' inside it assert is_list(x, 3) and is_and(x[0]) or is_or(x[0]) if len(x) == 3 and is_and(x[0]): x1, x2 = x[1], x[2] if ((is_list(x1, 3) and is_or(x1[0])) and (is_list(x2, 3) and is_or(x2[0]))): # (a || b) && (c || d) -- (a && c) || (b && c) || (a && d) || (b && d) lst_ac = [ch_and, x1[1], x2[1]] lst_bc = [ch_and, x1[2], x2[1]] lst_ad = [ch_and, x1[1], x2[2]] lst_bd = [ch_and, x1[2], x2[2]] new_x = [ch_or, [ch_or, lst_ac, lst_bc], [ch_or, lst_ad, lst_bd]] return __rewrite(new_x) if (is_list(x2, 3) and is_or(x2[0])): # a && (b || c) -- (a && b) || (a && c) lst_ab = [ch_and, x1, x2[1]] lst_ac = [ch_and, x1, x2[2]] new_x = [ch_or, lst_ab, lst_ac] return __rewrite(new_x) if (is_list(x1, 3) and is_or(x1[0])): # (a || b) && c -- (a && c) || (b && c) lst_ac = [ch_and, x1[1], x2] lst_bc = [ch_and, x1[2], x2] new_x = [ch_or, lst_ac, lst_bc] return __rewrite(new_x) return [x[0], __rewrite(x[1]), __rewrite(x[2])] #return x def rewrite(x): if not is_wf(x): raise ValueError, "can only rewrite well-formed lists" while not is_dnf(x): x = __rewrite(x) return x #### self-test code #### __failed = False __verbose = True def test_not_wf(x): global __failed if is_wf(x): echo("is_wf() returned True for:", x) __failed = True def 
test_dnf(x): global __failed if not is_wf(x): echo("is_wf() returned False for:", x) __failed = True elif not is_dnf(x): echo("is_dnf() returned False for:", x) __failed = True def test_not_dnf(x): global __failed if not is_wf(x): echo("is_wf() returned False for:", x) __failed = True elif is_dnf(x): echo("is_dnf() returned True for:", x) __failed = True else: xr = rewrite(x) if not is_wf(xr): echo("rewrite produced non-well-formed for:", x) echo("result was:", xr) __failed = True elif not is_dnf(xr): echo("rewrite failed for:", x) echo("result was:", xr) __failed = True else: if __verbose: echo("original:", x) echo("rewritten:", xr) echo() def self_test(): a, b, c, d = 'a', 'b', 'c', 'd' test_dnf(a) test_dnf(b) test_dnf(c) test_dnf(d) lstna = [ch_not, a] test_dnf(lstna) lstnb = [ch_not, b] test_dnf(lstnb) lsta = [ch_and, a, b] test_dnf(lsta) lsto = [ch_or, a, b] test_dnf(lsto) test_dnf([ch_and, lsta, lsta]) test_dnf([ch_or, lsta, lsta]) lstnn = [ch_not, [ch_not, a]] test_not_dnf(lstnn) test_not_dnf([ch_and, lstnn, lstnn]) # test 'and'/'or' inside 'not' test_not_dnf([ch_not, lsta]) test_not_dnf([ch_not, lsto]) # test 'or' inside of 'and' # a&(b|c) --> (a&b)|(b&c) test_not_dnf([ch_and, a, [ch_or, b, c]]) # (a|b)&c --> (a&c)|(b&c) test_not_dnf([ch_and, [ch_or, a, b], c]) # (a|b)&(c|d) --> ((a&c)|(b&c))|((a&d)|(b&d)) test_not_dnf([ch_and, [ch_or, a, b], [ch_or, c, d]]) # a&a&a&(b|c) --> a&a&(a&b|b&c) --> a&(a&a&b|a&b&c) --> (a&a&a&b|a&a&b&c) test_not_dnf([ch_and, a, [ch_and, a, [ch_and, a, [ch_or, b, c]]]]) if __failed: echo("one or more tests failed") self_test() Now, I'm sorry to say this, but the more I think about it, the more I think you probably just got me to do your homework for you. So, I just wrote an improved version of this code, but I'm not planning to share it here; I'll leave it as an exercise for you. You should be able to do it easily, after I describe it. It's a horrible hack that I have a while loop repeatedly calling __rewrite(). 
The rewrite() function should be able to rewrite the tree structure with a single call to __rewrite(). With just a few simple changes, you can get rid of the while loop; I did it, and tested it, and it works. You want __rewrite() to walk the tree down and then rewrite stuff on the way back up, and it will work in one pass. You could also modify __rewrite() to return an error if the list isn't well-formed, and get rid of the call to is_wf(); that's also easy. I suspect your teacher would dock you points for the while loop, so you should be motivated to try this. I hope you have fun doing it, and I hope you learned something useful from my code. A: I didn't really try to understand your solution, sorry. I think it is way too difficult to read and probably the approach is too cumbersome. If you have a DNF, all you need to do is find all combinations of atoms, taking one out of each sublist. This pretty much boils down to this problem ... from each OR-subclause you need to take one atom, and combine all those with AND. The OR-combination of all those possible AND-clauses yields your desired result. Right?
Python nested lists and recursion problem
I'm trying to process a first order logic formula represented as nested lists and strings in python so that that its in disjunctive normal form, i.e ['&', ['|', 'a', 'b'], ['|', 'c', 'd']] turns into ['|' ['&', ['&', 'a', 'c'], ['&', 'b', 'c']], ['&', ['&', 'a', 'd'], ['&', 'b', 'd']]] where | is 'or' and & is 'and'. currently im using a recursive implementation which does several passes over a formula until it can't find any nested 'or' symbols inside a list argument for 'ands'. This is my implementation, performDNF(form) is the entry point. Right now it performs a single pass over the formula but then the while loop checking function finds no '|'s inside of '&'s and terminates, help anyone this is driving me mad. def dnfDistributivity(self, form): if isinstance(form, type([])): if len(form) == 3: if form[0] == '&': if form[1][0] == '|': form = ['|', ['&', form[2], form[1][1]], ['&', form[2], form[1][2]]] elif form[2][0] == '|': form = ['|', ['&', form[1], form[2][1]], ['&', form[1], form[2][2]]] form[1] = self.dnfDistributivity(form[1]) form[2] = self.dnfDistributivity(form[2]) elif len(form) == 2: form[1] = self.dnfDistributivity(form[1]) return form def checkDistributivity(self, form, result = 0): if isinstance(form, type([])): if len(form) == 3: if form[0] == '&': print "found &" if isinstance(form[1], type([])): if form[1][0] == '|': return 1 elif isinstance(form[2], type([])): if form[2][0] == '|': return 1 else: result = self.checkDistributivity(form[1], result) print result if result != 1: result = self.checkDistributivity(form[2], result) print result elif len(form) == 2: result = self.checkDistributivity(form[1], result) print result return result def performDNF(self, form): while self.checkDistributivity(form): form = self.dnfDistributivity(self.dnfDistributivity(form)) return form
[ "Okay, here is an actual solution that seems to work.\nI do not understand your code, and I hadn't heard of DNF, so I started out by studying the problem some more.\nThe Wikipedia page on DNF was very helpful. It included a grammar that describes DNF.\nBased on that, I wrote a simple set of recursive functions that I believe correctly recognize DNF in the format you require. My code includes some simple test cases.\nThen I realized that the binary tree nature of your data representation makes it relatively simple to apply DeMorgan's laws to simplify the 'not' case, by writing a function called negate() that recursively negates, and the rest fell into place.\nI have included test cases. It seems to work.\nI have no further plans to work on this. If someone finds a bug, please provide a test case and I'll look at it.\nThis code should run on any Python version 2.4 or newer. You could even port it to older Python versions by replacing the frozenset with a simple list of characters. I tested with Python 3.x, and found the exception syntax has changed, so you will need to change the raise lines if you want to run it under Python 3; the important parts all work.\nIn the question, you did not mention what character you use for not; given your use of & for and and | for or, I assumed you likely use ! for not and wrote the code accordingly. This is one of the things that puzzles me about your code: don't you ever expect to find not in your input?\nI had some fun working on this. 
It's not as pointless as a sudoku puzzle.\nimport sys\n\n\nch_and = '&'\nch_not = '!'\nch_or = '|'\n\ndef echo(*args):\n # like print() in Python 3 but works in 2.x or in 3\n sys.stdout.write(\" \".join(str(x) for x in args) + \"\\n\")\n\ntry:\n symbols = frozenset([ch_and, ch_not, ch_or])\nexcept NameError:\n raise Exception, \"sorry, your Python is too old for this code\"\n\ntry:\n __str_type = basestring\nexcept NameError:\n __str_type = str\n\n\ndef is_symbol(x):\n if not isinstance(x, __str_type) or len(x) == 0:\n return False\n return x[0] in symbols\n\ndef is_and(x):\n if not isinstance(x, __str_type) or len(x) == 0:\n return False\n return x[0] == ch_and\n\ndef is_or(x):\n if not isinstance(x, __str_type) or len(x) == 0:\n return False\n return x[0] == ch_or\n\ndef is_not(x):\n if not isinstance(x, __str_type) or len(x) == 0:\n return False\n return x[0] == ch_not\n\ndef is_literal_char(x):\n if not isinstance(x, __str_type) or len(x) == 0:\n return False\n return x[0] not in symbols\n\ndef is_list(x, n):\n return isinstance(x, list) and len(x) == n\n\n\ndef is_literal(x):\n \"\"\"\\\nTrue if x is a literal char, or a 'not' followed by a literal char.\"\"\"\n if is_literal_char(x):\n return True\n return is_list(x, 2) and is_not(x[0]) and is_literal_char(x[1])\n\n\ndef is_conjunct(x):\n \"\"\"\\\nTrue if x is a literal, or 'and' followed by two conjuctions.\"\"\"\n if is_literal(x):\n return True\n return (is_list(x, 3) and\n is_and(x[0]) and is_conjunct(x[1]) and is_conjunct(x[2]))\n\n\ndef is_disjunct(x):\n \"\"\"\\\nTrue if x is a conjunction, or 'or' followed by two disjuctions.\"\"\"\n if is_conjunct(x):\n return True\n return (is_list(x, 3) and\n is_or(x[0]) and is_disjunct(x[1]) and is_disjunct(x[2]))\n\n\n\ndef is_dnf(x):\n return is_disjunct(x)\n\ndef is_wf(x):\n \"\"\"returns True if x is a well-formed list\"\"\"\n if is_literal(x):\n return True\n elif not isinstance(x, list):\n raise TypeError, \"only lists allowed\"\n elif len(x) == 2 and 
is_not(x[0]) and is_wf(x[1]):\n return True\n else:\n return (is_list(x, 3) and (is_and(x[0]) or is_or(x[0])) and\n is_wf(x[1]) and is_wf(x[2]))\n\ndef negate(x):\n # trivial: negate a returns !a\n if is_literal_char(x):\n return [ch_not, x]\n\n # trivial: negate !a returns a\n if is_list(x, 2) and is_not(x[0]):\n return x[1]\n\n # DeMorgan's law: negate (a && b) returns (!a || !b)\n if is_list(x, 3) and is_and(x[0]):\n return [ch_or, negate(x[1]), negate(x[2])]\n\n # DeMorgan's law: negate (a || b) returns (!a && !b)\n if is_list(x, 3) and is_or(x[0]):\n return [ch_and, negate(x[1]), negate(x[2])]\n\n raise ValueError, \"negate() only works on well-formed values.\"\n\ndef __rewrite(x):\n # handle all dnf, which includes simple literals.\n if is_dnf(x):\n # basis case. no work to do, return unchanged.\n return x\n\n if len(x) == 2 and is_not(x[0]):\n x1 = x[1]\n if is_list(x1, 2) and is_not(x1[0]):\n # double negative! throw away the 'not' 'not' and keep rewriting.\n return __rewrite(x1[1])\n assert is_list(x1, 3)\n # handle non-inner 'not'\n return __rewrite(negate(x1))\n\n # handle 'and' with 'or' inside it\n assert is_list(x, 3) and is_and(x[0]) or is_or(x[0])\n if len(x) == 3 and is_and(x[0]):\n x1, x2 = x[1], x[2]\n if ((is_list(x1, 3) and is_or(x1[0])) and\n (is_list(x2, 3) and is_or(x2[0]))):\n# (a || b) && (c || d) -- (a && c) || (b && c) || (a && d) || (b && d)\n lst_ac = [ch_and, x1[1], x2[1]]\n lst_bc = [ch_and, x1[2], x2[1]]\n lst_ad = [ch_and, x1[1], x2[2]]\n lst_bd = [ch_and, x1[2], x2[2]]\n new_x = [ch_or, [ch_or, lst_ac, lst_bc], [ch_or, lst_ad, lst_bd]]\n return __rewrite(new_x)\n\n if (is_list(x2, 3) and is_or(x2[0])):\n# a && (b || c) -- (a && b) || (a && c)\n lst_ab = [ch_and, x1, x2[1]]\n lst_ac = [ch_and, x1, x2[2]]\n new_x = [ch_or, lst_ab, lst_ac]\n return __rewrite(new_x)\n\n if (is_list(x1, 3) and is_or(x1[0])):\n# (a || b) && c -- (a && c) || (b && c)\n lst_ac = [ch_and, x1[1], x2]\n lst_bc = [ch_and, x1[2], x2]\n new_x = [ch_or, lst_ac, 
lst_bc]\n return __rewrite(new_x)\n\n return [x[0], __rewrite(x[1]), __rewrite(x[2])]\n #return x\n\ndef rewrite(x):\n if not is_wf(x):\n raise ValueError, \"can only rewrite well-formed lists\"\n while not is_dnf(x):\n x = __rewrite(x)\n return x\n\n\n#### self-test code ####\n\n__failed = False\n__verbose = True\ndef test_not_wf(x):\n global __failed\n if is_wf(x):\n echo(\"is_wf() returned True for:\", x)\n __failed = True\n\ndef test_dnf(x):\n global __failed\n if not is_wf(x):\n echo(\"is_wf() returned False for:\", x)\n __failed = True\n elif not is_dnf(x):\n echo(\"is_dnf() returned False for:\", x)\n __failed = True\n\ndef test_not_dnf(x):\n global __failed\n if not is_wf(x):\n echo(\"is_wf() returned False for:\", x)\n __failed = True\n elif is_dnf(x):\n echo(\"is_dnf() returned True for:\", x)\n __failed = True\n else:\n xr = rewrite(x)\n if not is_wf(xr):\n echo(\"rewrite produced non-well-formed for:\", x)\n echo(\"result was:\", xr)\n __failed = True\n elif not is_dnf(xr):\n echo(\"rewrite failed for:\", x)\n echo(\"result was:\", xr)\n __failed = True\n else:\n if __verbose:\n echo(\"original:\", x)\n echo(\"rewritten:\", xr)\n echo()\n\ndef self_test():\n a, b, c, d = 'a', 'b', 'c', 'd'\n test_dnf(a)\n test_dnf(b)\n test_dnf(c)\n test_dnf(d)\n\n lstna = [ch_not, a]\n test_dnf(lstna)\n\n lstnb = [ch_not, b]\n test_dnf(lstnb)\n\n lsta = [ch_and, a, b]\n test_dnf(lsta)\n\n lsto = [ch_or, a, b]\n test_dnf(lsto)\n\n test_dnf([ch_and, lsta, lsta])\n\n test_dnf([ch_or, lsta, lsta])\n\n lstnn = [ch_not, [ch_not, a]]\n test_not_dnf(lstnn)\n\n test_not_dnf([ch_and, lstnn, lstnn])\n\n # test 'and'/'or' inside 'not'\n test_not_dnf([ch_not, lsta])\n test_not_dnf([ch_not, lsto])\n\n # test 'or' inside of 'and'\n # a&(b|c) --> (a&b)|(b&c)\n test_not_dnf([ch_and, a, [ch_or, b, c]])\n # (a|b)&c --> (a&c)|(b&c)\n test_not_dnf([ch_and, [ch_or, a, b], c])\n # (a|b)&(c|d) --> ((a&c)|(b&c))|((a&d)|(b&d))\n test_not_dnf([ch_and, [ch_or, a, b], [ch_or, c, d]])\n\n # 
a&a&a&(b|c) --> a&a&(a&b|b&c) --> a&(a&a&b|a&b&c) --> (a&a&a&b|a&a&b&c)\n test_not_dnf([ch_and, a, [ch_and, a, [ch_and, a, [ch_or, b, c]]]])\n\n if __failed:\n echo(\"one or more tests failed\")\n\nself_test()\n\nNow, I'm sorry to say this, but the more I think about it, the more I think you probably just got me to do your homework for you. So, I just wrote an improved version of this code, but I'm not planning to share it here; I'll leave it as an exercise for you. You should be able to do it easily, after I describe it.\nIt's a horrible hack that I have a while loop repeatedly calling __rewrite(). The rewrite() function should be able to rewrite the tree structure with a single call to __rewrite(). With just a few simple changes, you can get rid of the while loop; I did it, and tested it, and it works. You want __rewrite() to walk the tree down and then rewrite stuff on the way back up, and it will work in one pass. You could also modify __rewrite() to return an error if the list isn't well-formed, and get rid of the call to is_wf(); that's also easy.\nI suspect your teacher would dock you points for the while loop, so you should be motivated to try this. I hope you have fun doing it, and I hope you learned something useful from my code.\n", "I didn't really try to understand your solution, sorry. I think it is way too difficult to read and probably the approach is too cumbersome.\nIf you have a DNF, all you need to do is find all combinations of atoms, taking one out of each sublist. This pretty much boils down to this problem ... from each OR-subclause you need to take one atom, and combine all those with AND.\nThe OR-combination of all those possible AND-clauses yields your desired result. Right?\n" ]
[ 1, 0 ]
[]
[]
[ "logic", "python" ]
stackoverflow_0001787576_logic_python.txt
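A more compact sketch of the rewrite strategy discussed above (push `not` inward with De Morgan's laws and double-negation elimination, then distribute `and` over `or`) can be written over tuples in a single recursive pass. This is an illustration, not the original poster's code; the `to_dnf` name and the tuple encoding of formulas are assumptions of this sketch.

```python
def to_dnf(x):
    """Rewrite a formula into disjunctive normal form.

    A formula is a literal (a string) or a tuple:
    ('not', f), ('and', f, g) or ('or', f, g).
    """
    if isinstance(x, str):          # bare literal: already DNF
        return x
    op = x[0]
    if op == 'not':
        sub = x[1]
        if isinstance(sub, str):    # negated literal: already DNF
            return x
        if sub[0] == 'not':         # double negation: not not f -> f
            return to_dnf(sub[1])
        if sub[0] == 'and':         # De Morgan: not (f and g) -> not f or not g
            return to_dnf(('or', ('not', sub[1]), ('not', sub[2])))
        return to_dnf(('and', ('not', sub[1]), ('not', sub[2])))
    a, b = to_dnf(x[1]), to_dnf(x[2])
    if op == 'or':
        return ('or', a, b)
    # op == 'and': distribute over any 'or' that surfaced in a subterm
    if not isinstance(a, str) and a[0] == 'or':
        return to_dnf(('or', ('and', a[1], b), ('and', a[2], b)))
    if not isinstance(b, str) and b[0] == 'or':
        return to_dnf(('or', ('and', a, b[1]), ('and', a, b[2])))
    return ('and', a, b)
```

Because negations are pushed all the way down before distribution happens, the repeated-rewrite `while` loop criticized in the answer is unnecessary: one call returns a DNF tree.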
Q: How to get the arch string that distutils uses for builds? When I build a c extension using python setup.py build, the result is created under a directory named build/lib.linux-x86_64-2.6/ where the part after lib. changes by the OS, CPU and Python version. Is there a way I can access the appropriate string for my current architecture from python? Hopefully in a way that is guaranteed to match what distutils is creating. A: >>> from distutils import util >>> util.get_platform() 'linux-x86_64' >>> import sys >>> '%s.%s' % sys.version_info[:2] 2.6
How to get the arch string that distutils uses for builds?
When I build a c extension using python setup.py build, the result is created under a directory named build/lib.linux-x86_64-2.6/ where the part after lib. changes by the OS, CPU and Python version. Is there a way I can access the appropriate string for my current architecture from python? Hopefully in a way that is guaranteed to match what distutils is creating.
[ ">>> from distutils import util\n>>> util.get_platform()\n'linux-x86_64'\n\n>>> import sys\n>>> '%s.%s' % sys.version_info[:2]\n2.6\n\n" ]
[ 3 ]
[]
[]
[ "distutils", "python", "python_c_extension" ]
stackoverflow_0001802534_distutils_python_python_c_extension.txt
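The two calls from the answer above can be combined to reconstruct the directory name distutils generates under `build/`. One caveat worth hedging: `distutils` was removed from the standard library in Python 3.12, where `sysconfig.get_platform()` returns the same string, so this sketch falls back to it; the `build_dir_name` helper is my own naming.

```python
import sys

try:
    from distutils.util import get_platform   # historical location used by setup.py
except ImportError:                           # distutils was removed in Python 3.12+
    from sysconfig import get_platform

def build_dir_name():
    """Reconstruct the build/lib.<platform>-<version> directory name."""
    return 'lib.%s-%d.%d' % (get_platform(),
                             sys.version_info[0], sys.version_info[1])

# On the asker's machine this would give 'lib.linux-x86_64-2.6'.
```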
Q: Is there a cross-platform python low-level API to capture or generate keyboard events? I am trying to write a cross-platform python program that would run in the background, monitor all keyboard events and when it sees some specific shortcuts, it generates one or more keyboard events of its own. For example, this could be handy to have Ctrl-@ mapped to "my.email@address", so that every time some program asks me for my email address I just need to type Ctrl-@. I know such programs already exist, and I am reinventing the wheel... but my goal is just to learn more about low-level keyboard APIs. Moreover, the answer to this question might be useful to other programmers, for example if they want to startup an SSH connection which requires a password, without using pexpect. Thanks for your help. Note: there is a similar question but it is limited to the Windows platform, and does not require python. I am looking for a cross-platform python api. There are also other questions related to keyboard events, but apparently they are not interested in system-wide keyboard events, just application-specific keyboard shortcuts. Edit: I should probably add a disclaimer here: I do not want to write a keylogger. If I needed a keylogger, I could download one off the web a anyway. ;-) A: There is no such API. My solution was to write a helper module which would use a different helper depending on the value of os.name. On Windows, use the Win32 extensions. On Linux, things are a bit more complex since real OSes protect their users against keyloggers[*]. So here, you will need a root process which watches one of[] the handles in /dev/input/. Your best bet is probably looking for an entry below /dev/input/by-path/ which contains the strings "kbd" or "keyboard". That should work in most cases. [*]: Jeez, not even my virus/trojan scanner will complain when I start a Python program which hooks into the keyboard events... 
A: As the guy that wrote the original pykeylogger linux port, I can say there isn't really a cross platform one. Essentially I rewrote the pyhook API for keyboard events to capture from the xserver itself, using the record extension. Of course, this assumes the record extension is there, loaded into the x server. From there, it's essentially just detecting if you're on windows, or linux, and then loading the correct module for the OS. Everything else should be identical. Take a look at the pykeylogger source, in pyxhook.py for the class and implementation. Otherwise, just load that module, or pyhook instead, depending on OS. A: I've made a few tests on Ubuntu 9.10. pykeylogger doesn't seem to be working. I've tried to change the /etc/X11/xorg.conf in order to allow the module to be loaded but in that specific version of ubuntu there is no xorg.conf. So, in my opinion pykeylogger is NOT working on ubuntu 9.10 !! A: Cross-platform UI libraries such as Tkinter or wxPython have API for keyboard events. Using these you could map «CTRL» + «@» to an action. A: On linux, you might want to have a look at pykeylogger. For some strange reason, reading from /dev/input/.... doesn't always work when X is running. For example it doesn't work on ubuntu 8.10. Pykeylogger uses xlib, which works exactly when the other way doesn't. I'm still looking into this, so if you find a simpler way of doing this, please tell me. A: Under Linux it's possible to do this quite easily with Xlib. See this page for details: http://www.larsen-b.com/Article/184.html
Is there a cross-platform python low-level API to capture or generate keyboard events?
I am trying to write a cross-platform python program that would run in the background, monitor all keyboard events and when it sees some specific shortcuts, it generates one or more keyboard events of its own. For example, this could be handy to have Ctrl-@ mapped to "my.email@address", so that every time some program asks me for my email address I just need to type Ctrl-@. I know such programs already exist, and I am reinventing the wheel... but my goal is just to learn more about low-level keyboard APIs. Moreover, the answer to this question might be useful to other programmers, for example if they want to startup an SSH connection which requires a password, without using pexpect. Thanks for your help. Note: there is a similar question but it is limited to the Windows platform, and does not require python. I am looking for a cross-platform python api. There are also other questions related to keyboard events, but apparently they are not interested in system-wide keyboard events, just application-specific keyboard shortcuts. Edit: I should probably add a disclaimer here: I do not want to write a keylogger. If I needed a keylogger, I could download one off the web a anyway. ;-)
[ "There is no such API. My solution was to write a helper module which would use a different helper depending on the value of os.name.\nOn Windows, use the Win32 extensions.\nOn Linux, things are a bit more complex since real OSes protect their users against keyloggers[*]. So here, you will need a root process which watches one of[] the handles in /dev/input/. Your best bet is probably looking for an entry below /dev/input/by-path/ which contains the strings \"kbd\" or \"keyboard\". That should work in most cases.\n[*]: Jeez, not even my virus/trojan scanner will complain when I start a Python program which hooks into the keyboard events...\n", "As the guy that wrote the original pykeylogger linux port, I can say there isn't really a cross platform one. Essentially I rewrote the pyhook API for keyboard events to capture from the xserver itself, using the record extension. Of course, this assumes the record extension is there, loaded into the x server.\nFrom there, it's essentially just detecting if you're on windows, or linux, and then loading the correct module for the OS. Everything else should be identical.\nTake a look at the pykeylogger source, in pyxhook.py for the class and implimentation. Otherwise, just load that module, or pyhook instead, depending on OS.\n", "I've made a few tests on Ubuntu 9.10. pykeylogger doesn't seems to be working. I've tryied to change the /etc/X11/xorg.conf in order to allow module to be loaded but in that specific version of ubuntu there is no xorg.conf. So, in my opiniion pykelogger is NOT working on ubuntu 9.10 !!\n", "Cross-platform UI libraries such as Tkinter or wxPython have API for keyboard events. Using these you could map «CTRL» + «@» to an action. \n", "On linux, you might want to have a look at pykeylogger. For some strange reason, reading from /dev/input/.... doesn't always work when X is running. For example it doesn't work on ubuntu 8.10. Pykeylogger uses xlib, which works exactly when the other way doesn't. 
I'm still looking into this, so if you find a simpler way of doing this, please tell me.\n", "Under Linux it's possible to do this quite easily with Xlib. See this page for details:\nhttp://www.larsen-b.com/Article/184.html\n" ]
[ 8, 7, 1, 0, 0, 0 ]
[]
[]
[ "cross_platform", "keyboard_events", "low_level_api", "python" ]
stackoverflow_0000676713_cross_platform_keyboard_events_low_level_api_python.txt
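For the `/dev/input` approach mentioned in the accepted answer: each read from a `/dev/input/event*` device yields one fixed-size `input_event` struct. A minimal sketch of parsing it follows; the format string and `EV_KEY` constant follow the Linux `input.h` header, the `parse_event` helper is my own, and the actual reading loop (which needs root and a Linux box) is shown only as a comment.

```python
import struct

# Layout of a Linux input_event as read from /dev/input/event*:
# two native longs for the timeval, then type, code, value.
EVENT_FORMAT = 'llHHI'
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

EV_KEY = 0x01          # event type for key presses/releases

def parse_event(buf):
    """Unpack one raw input_event into a (type, code, value) triple."""
    _sec, _usec, etype, code, value = struct.unpack(EVENT_FORMAT, buf)
    return etype, code, value

# Reading loop (root-only, Linux-only), sketched:
# with open('/dev/input/event3', 'rb') as dev:
#     while True:
#         etype, code, value = parse_event(dev.read(EVENT_SIZE))
#         if etype == EV_KEY and value == 1:   # value 1 == key down
#             print('key code', code)
```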
Q: Match UserProperty() with StringProperty() I want to match StringProperty() with UserProperty and I can't change property so how can I achieve it? Please help me out. A: You seem to be talking about Google App Engine's model properties. To the best of my knowledge, you can't query a UserProperty with a string, because a User is not a string. Instead, try creating a brand new User object; you just need the email address of the user. Then you can query for users matching that user.
Match UserProperty() with StringProperty()
I want to match StringProperty() with UserProperty and I can't change property so how can I achieve it? Please help me out.
[ "You seem to be talking about Google App Engine's model properties. To the best of my knowledge, you can't query a UserProperty with a string, because a User is not a string. Instead, try creating a brand new User object; you just need the email address of the user. Then you can query for users matching that user.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0001802728_python.txt
Q: Why does SWIG crash Python when linked to gtkglext? Anything I link to gtkglext using SWIG crashes Python on exit. Why does this crash? test.i: %module test %{ void test() { printf("Test.\n"); } %} void test(); Session: $ swig -python test.i $ g++ -I/usr/include/python2.6 -shared -fPIC -o _test.so test_wrap.c -lpython2.6 $ python -c 'import test; test.test()' Test. $ g++ -I/usr/include/python2.6 -shared -fPIC -o _test.so test_wrap.c -lpython2.6 `pkg-config --libs gtkglext-1.0` $ python -c 'import test; test.test()' Test. Segmentation fault Any ideas? Thanks... A: You need to init gtk properly. $ cat test.i %module test %{ void test() { printf("Test.\n"); } %} void test(); $ swig -python test.i ; gcc -I/usr/include/python2.5 -shared -fPIC -o _test.so test_wrap.c -lpython2.5 `pkg-config --libs gtkglext-1.0` $ python -c 'import test; test.test()' Test. Segmentation fault $ python -c 'import gtk; import test; test.test()' Test.
Why does SWIG crash Python when linked to gtkglext?
Anything I link to gtkglext using SWIG crashes Python on exit. Why does this crash? test.i: %module test %{ void test() { printf("Test.\n"); } %} void test(); Session: $ swig -python test.i $ g++ -I/usr/include/python2.6 -shared -fPIC -o _test.so test_wrap.c -lpython2.6 $ python -c 'import test; test.test()' Test. $ g++ -I/usr/include/python2.6 -shared -fPIC -o _test.so test_wrap.c -lpython2.6 `pkg-config --libs gtkglext-1.0` $ python -c 'import test; test.test()' Test. Segmentation fault Any ideas? Thanks...
[ "You need to init gtk properly.\n$ cat test.i \n%module test\n%{\nvoid test() { printf(\"Test.\\n\"); }\n%}\nvoid test();\n$ swig -python test.i ; gcc -I/usr/include/python2.5 -shared -fPIC -o _test.so test_wrap.c -lpython2.5 `pkg-config --libs gtkglext-1.0`\n$ python -c 'import test; test.test()'\nTest.\nSegmentation fault\n$ python -c 'import gtk; import test; test.test()'\nTest.\n\n" ]
[ 1 ]
[]
[]
[ "python", "scripting", "swig" ]
stackoverflow_0001801518_python_scripting_swig.txt
Q: python for in control structure I am a php programmer trying to understand python's for in syntax I get the basic for in for i in range(0,5): in php would be for ($i = 0; $i < 5; $i++){ but what does this do for x, y in z: and what would be the translation to php? This is the full code i am translating to php: def preProcess(self): """ plan for the arrangement of the tile groups """ tier = 0 tileGroupNumber = 0 numberOfTiles = 0 for width, height in self._v_scaleInfo: #cycle through columns, then rows row, column = (0,0) ul_x, ul_y, lr_x, lr_y = (0,0,0,0) #final crop coordinates while not ((lr_x == width) and (lr_y == height)): tileFileName = self.getTileFileName(tier, column, row) tileContainerName = self.getNewTileContainerName(tileGroupNumber=tileGroupNumber) if numberOfTiles ==0: self.createTileContainer(tileContainerName=tileContainerName) elif (numberOfTiles % self.tileSize) == 0: tileGroupNumber += 1 tileContainerName = self.getNewTileContainerName(tileGroupNumber=tileGroupNumber) self.createTileContainer(tileContainerName=tileContainerName) self._v_tileGroupMappings[tileFileName] = tileContainerName numberOfTiles += 1 # for the next tile, set lower right cropping point if (ul_x + self.tileSize) < width: lr_x = ul_x + self.tileSize else: lr_x = width if (ul_y + self.tileSize) < height: lr_y = ul_y + self.tileSize else: lr_y = height # for the next tile, set upper left cropping point if (lr_x == width): ul_x=0 ul_y = lr_y column = 0 row += 1 else: ul_x = lr_x column += 1 tier += 1 A: self._v_scaleInfo: is an array of tuples, presumably, like [(x,y),(x,y),...] so for width, height in self._v_scaleInfo: loops through the array filling width and height with the tuple values. php would go something like: $scaleInfo = array(array(x,y), array(x,y),...); for( $i = 0; $i < count($scaleInfo); $i++ ) { $width = $scaleInfo[$i][0]; $height = $scaleInfo[$i][1]; ... 
} A: In your simple example for x,y in z, z would be a list of coordinate pairs, like [(0,1), (2,5), (4,3)]. With each turn through the for loop, the x variable gets the first coordinate in the pair and y gets the second. A: It's roughly equivalent to (pseudo code): For every item i in z: x = i[0] y = i[1] Loop body happens here It means that every item in z contains 2 elements (for example, every item is a list with 2 items). A: In python, you can have multiple return values. You can also define a tuple like this t = (1,2,3) To get access to the elements in t you can do the following: a, b, c = t Then a has the value 1, etc. If you had an array of 2 element tuples, you could enumerate through them using the following code z = [(1, 2), (3, 4), (5, 6)] for x, y in z: print x, y which produces the following 1 2 3 4 5 6 A: Suppose z is a list of tuples in Python. Z = [(1,2), (1,3), (2,3), (2,4)] it would be something like: $z = array(array(1,2), array(1,3), array(2,3), array(2,4)); using that for x,y in z would result in: z = [(1,2), (1,3), (2,3), (2,4)] for x, y in z: print "%i %i" % (x,y) 1 2 1 3 2 3 2 4 so Translating for x, y in z: into PHP would be something like: for ($i=0; $i < count($z); $i++){ $x = $z[$i][0]; $y = $z[$i][1]; A: Conceptually for x,y in z is actually iterating using an enumerator (language specific implementation of an iterator pattern), for loops are based on index based iteration. for x,y in z would semantically be like for (x=0 ; x<z.length ; x++ ) for (y=1; x<z.length;y++) print z[x],z[y] note this will work for tuples in python. 
A: This construct allows you to iterate over multi-dimensional collections so for a 3x2 list you could have have: z = [[1,2], [3,4], [5,6]] for x, y in z: print x, y This prints: 1 2 3 4 5 6 The same construct could be used on a dictionary which is some sense also a 2-dimensional collection: z = {1:"one", 2:"two", 3:"three"} for x, y in z.items(): for x, y in z.items(): print x, y This prints: 1 one 2 two 3 three In Python this construct is general and work at any dimension, changing our original 3x2 list to a 2x3 list we could do this: z = [[1,2,3], [4,5,6]] for w, x, y in z: print w, x, y This prints: 1 2 3 4 5 6 In PHP I think you have to do this with nest for loops, I do not think there is a construct to do the sort of multiple dimension list deconstruction that is possible in Python.
python for in control structure
I am a php programmer trying to understand python's for in syntax I get the basic for in for i in range(0,5): in php would be for ($i = 0; $i < 5; $i++){ but what does this do for x, y in z: and what would be the translation to php? This is the full code i am translating to php: def preProcess(self): """ plan for the arrangement of the tile groups """ tier = 0 tileGroupNumber = 0 numberOfTiles = 0 for width, height in self._v_scaleInfo: #cycle through columns, then rows row, column = (0,0) ul_x, ul_y, lr_x, lr_y = (0,0,0,0) #final crop coordinates while not ((lr_x == width) and (lr_y == height)): tileFileName = self.getTileFileName(tier, column, row) tileContainerName = self.getNewTileContainerName(tileGroupNumber=tileGroupNumber) if numberOfTiles ==0: self.createTileContainer(tileContainerName=tileContainerName) elif (numberOfTiles % self.tileSize) == 0: tileGroupNumber += 1 tileContainerName = self.getNewTileContainerName(tileGroupNumber=tileGroupNumber) self.createTileContainer(tileContainerName=tileContainerName) self._v_tileGroupMappings[tileFileName] = tileContainerName numberOfTiles += 1 # for the next tile, set lower right cropping point if (ul_x + self.tileSize) < width: lr_x = ul_x + self.tileSize else: lr_x = width if (ul_y + self.tileSize) < height: lr_y = ul_y + self.tileSize else: lr_y = height # for the next tile, set upper left cropping point if (lr_x == width): ul_x=0 ul_y = lr_y column = 0 row += 1 else: ul_x = lr_x column += 1 tier += 1
[ "self._v_scaleInfo: is an array of tuples, presumably, like [(x,y),(x,y),...] so \nfor width, height in self._v_scaleInfo: loops through the array filling width and height with the tuple values.\nphp would go something like:\n$scaleInfo = array(array(x,y), array(x,y),...);\n\nfor( $i = 0; $i < count($scaleInfo); $i++ ) {\n $width = $scaleInfo[$i][0];\n $height = $scaleInfo[$i][1];\n ...\n}\n\n", "In your simple example for x,y in z, z would be a list of coordinate pairs, like [(0,1), (2,5), (4,3)]. With each turn through the for loop, the x variable gets the first coordinate in the pair and y gets the second.\n", "It's roughly equivalent to (pseudo code):\nFor every item i in z:\n x = i[0]\n y = i[1]\n Loop body happens here\n\nIt means that every item in z contains 2 elements (for example, every item is a list with 2 items).\n", "In python, you can have multiple return values. You can also define a tuple like this\nt = (1,2,3)\n\nTo get access to the elements in t you can do the following:\n\na, b, c = t\n\nThen a has the value 1, etc.\nIf you had an array of 2 element tuples, you could enumerate through them using the following code\nz = [(1, 2), (3, 4), (5, 6)]\nfor x, y in z:\n print x, y\n\nwhich produces the following\n1 2\n3 4\n5 6\n\n", "Suppose z is a list of tuples in Python.\nZ = [(1,2), (1,3), (2,3), (2,4)]\n\nit would be something like:\n$z = array(array(1,2), array(1,3), array(2,3), array(2,4));\n\nusing that for x,y in z would result in:\nz = [(1,2), (1,3), (2,3), (2,4)]\nfor x, y in z:\n print \"%i %i\" % (x,y)\n\n\n1 2\n1 3\n2 3\n2 4\n\nso Translating\nfor x, y in z:\n\ninto PHP would be something like:\nfor ($i=0; $i < count($z); $i++){\n $x = $z[$i][0];\n $y = $z[$i][1];\n\n", "Conceptually\nfor x,y in z\n\nis actually iterating using an enumerator (language specific implementation of an iterator pattern), for loops are based on index based iteration.\nfor x,y in z would semantically be like\nfor (x=0 ; x<z.length ; x++ )\n for (y=1; 
x<z.length;y++) print z[x],z[y]\n\nnote this will work for tuples in python.\n", "This construct allows you to iterate over multi-dimensional collections so for a 3x2 list you could have have:\nz = [[1,2], [3,4], [5,6]]\nfor x, y in z:\n print x, y\n\nThis prints:\n1 2\n3 4\n5 6\n\nThe same construct could be used on a dictionary which is some sense also a 2-dimensional collection:\nz = {1:\"one\", 2:\"two\", 3:\"three\"}\nfor x, y in z.items():\n for x, y in z.items():\n print x, y\n\nThis prints:\n1 one\n2 two\n3 three\n\nIn Python this construct is general and work at any dimension, changing our original 3x2 list to a 2x3 list we could do this:\nz = [[1,2,3], [4,5,6]]\nfor w, x, y in z:\n print w, x, y\n\nThis prints:\n1 2 3\n4 5 6\n\nIn PHP I think you have to do this with nest for loops, I do not think there is a construct to do the sort of multiple dimension list deconstruction that is possible in Python.\n" ]
[ 1, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "for_loop", "php", "python" ]
stackoverflow_0001802415_for_loop_php_python.txt
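To make the semantics in the answers above fully explicit: `for x, y in z` is ordinary iteration plus sequence unpacking, which can be spelled out with Python's iterator protocol. A sketch:

```python
z = [(0, 1), (2, 5), (4, 3)]

# What ``for x, y in z`` does under the hood: pull items from an
# iterator one at a time, then unpack each item into the loop names.
unpacked = []
it = iter(z)
while True:
    try:
        item = next(it)
    except StopIteration:
        break
    x, y = item               # fails unless item has exactly two elements
    unpacked.append((x, y))
```

This is why every element of `z` must itself be a two-element sequence, exactly as the answers describe.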
Q: Parsing a hex formated DEC 32 bit single precision floating point value in python I'm having problems parsing a hex formatted DEC 32bit single precision floating point value in python, the value I'm parsing is represented as D44393DB in hex. The original floating point value is ~108, read from a display of the sending unit. The format is specified as: 1bit sign + 8bit exponent + 23bit mantissa. Byte 2 contains the sign bit + the 7 most significant bits of the exponent Byte 1 contains the least significant bit of the exponent + the starting most significant bits of the mantissa. The only thing I have found that differs in the two formats is the bias of the exponent which is 128 in DEC32 and 127 in IEEE-754 (http://www.irig106.org/docs/106-07/appendixO.pdf) Using http://babbage.cs.qc.edu/IEEE-754/32bit.html does not give the expected result. /Kristofer A: Is it possible that the bytes got shuffled somehow? The arrangements of bits that you describe (sign bit in byte 2, LSB of exponent in byte 1) is different from Appendix O that you link to. It looks like byte 1 and 2 were exchanged. I'll assume that byte 3 and 4 were also exchanged, so that the real hex value is 43D4DB93. This translates as 0100 0011 1101 0100 1101 1011 1001 0011 in binary, so the sign bit is 0, indicating a positive number. The exponent is 10000111 (binary) = 135 (decimal), indicating a factor of 2^(135-128) = 128. Finally, the mantissa is 0.1101 0100 1101 1011 1001 0011 (binary), using that Appendix O says that you have to add 0.1 in front, which is approximately 0.8314 in decimal. So your number is 0.8314 * 128 = 106.4 under my assumptions. 
Added: Some Python 2 code might clarify: input = 0xD44393DB; reshuffled = ((input & 0xFF00FF00) >> 8) | ((input & 0x00FF00FF) << 8); signbit = (reshuffled & 0x80000000) >> 31; exponent = ((reshuffled & 0x7F800000) >> 23) - 128; mantissa = float((reshuffled & 0x007FFFFF) | 0x00800000) / 2**24; result = (-1)**signbit * mantissa * 2**exponent; This yields result = 106.42885589599609. Here is an explanation for the line computing the mantissa. Firstly, reshuffled & 0x007FFFFF yield the 23 bits encoding the mantissa: 101 0100 1101 1011 1001 0011. Then ... | 0x00800000 sets the hidden bit, yielding 1101 0100 1101 1011 1001 0011. We now have to compute the fraction 0.1101 0100 1101 1011 1001 0011. By definition, this equals 1*2^(-1) + 1*2^(-2) + 0*2^(-3) + ... + 1*2^(-23) + 1*2^(-24). This can also be written as (1*2^23 + 1*2^22 + 0*2^21 + ... + 1*2^1 + 1*2^0) / 2^24. The expression in brackets is the value of 1101 0100 1101 1011 1001 0011 (binary), so we can find the mantissa by dividing (reshuffled & 0x007FFFFF) | 0x00800000 by 2^24. A: From my copy of "Microcomputers and Memories" (DEC, 1981), you are correct about the difference between the two formats. The DEC mantissa is normalized to 0.5<=f<1 and the IEEE format mantissa is normalized to 1<=f<2, both with the MSB implicit and not stored. Thus the mantissa bit-layouts are the same. Jitse Niesens assumptions look like a plausible explanation since the value of D44393DB would be -0.7639748 X 2^40 (which is -8.3999923E11). A: Found under "Related" on the RHS: these answers from last month One of the references helps understanding the "wired" (weird?) byte2 byte1 notation. A: Is it definitely a DEC32 value? The sign bit seems to be 1, which indicates negative by this format. 
However, you do get a result very close to your 108 value if you ignore this and assume that the exponent bias is 15, retaining the 0.1 factor on the mantissa: def decode(x): exp = (x>>30) & 0xff mantissa = x&((2**24)-1) return 0.1 * mantissa * (2**(exp-15)) >>> decode(0xD44393DB) 108.12409668
Parsing a hex formatted DEC 32 bit single precision floating point value in python
I'm having problems parsing a hex formatted DEC 32bit single precision floating point value in python, the value I'm parsing is represented as D44393DB in hex. The original floating point value is ~108, read from a display of the sending unit. The format is specified as: 1bit sign + 8bit exponent + 23bit mantissa. Byte 2 contains the sign bit + the 7 most significant bits of the exponent Byte 1 contains the least significant bit of the exponent + the starting most significant bits of the mantissa. The only thing I have found that differs in the two formats is the bias of the exponent which is 128 in DEC32 and 127 in IEEE-754 (http://www.irig106.org/docs/106-07/appendixO.pdf) Using http://babbage.cs.qc.edu/IEEE-754/32bit.html does not give the expected result. /Kristofer
[ "Is it possible that the bytes got shuffled somehow? The arrangements of bits that you describe (sign bit in byte 2, LSB of exponent in byte 1) is different from Appendix O that you link to. It looks like byte 1 and 2 were exchanged. \nI'll assume that byte 3 and 4 were also exchanged, so that the real hex value is 43D4DB93. This translates as 0100 0011 1101 0100 1101 1011 1001 0011 in binary, so the sign bit is 0, indicating a positive number. The exponent is 10000111 (binary) = 135 (decimal), indicating a factor of 2^(135-128) = 128. Finally, the mantissa is 0.1101 0100 1101 1011 1001 0011 (binary), using that Appendix O says that you have to add 0.1 in front, which is approximately 0.8314 in decimal. So your number is 0.8314 * 128 = 106.4 under my assumptions.\nAdded: Some Python 2 code might clarify:\ninput = 0xD44393DB;\nreshuffled = ((input & 0xFF00FF00) >> 8) | ((input & 0x00FF00FF) << 8);\nsignbit = (reshuffled & 0x80000000) >> 31;\nexponent = ((reshuffled & 0x7F800000) >> 23) - 128;\nmantissa = float((reshuffled & 0x007FFFFF) | 0x00800000) / 2**24;\nresult = (-1)**signbit * mantissa * 2**exponent;\n\nThis yields result = 106.42885589599609.\nHere is an explanation for the line computing the mantissa. Firstly, reshuffled & 0x007FFFFF yield the 23 bits encoding the mantissa: 101 0100 1101 1011 1001 0011. Then ... | 0x00800000 sets the hidden bit, yielding 1101 0100 1101 1011 1001 0011. We now have to compute the fraction 0.1101 0100 1101 1011 1001 0011. By definition, this equals 1*2^(-1) + 1*2^(-2) + 0*2^(-3) + ... + 1*2^(-23) + 1*2^(-24). This can also be written as (1*2^23 + 1*2^22 + 0*2^21 + ... + 1*2^1 + 1*2^0) / 2^24. The expression in brackets is the value of 1101 0100 1101 1011 1001 0011 (binary), so we can find the mantissa by dividing (reshuffled & 0x007FFFFF) | 0x00800000 by 2^24.\n", "From my copy of \"Microcomputers and Memories\" (DEC, 1981), you are correct about the difference between the two formats. 
The DEC mantissa is normalized to 0.5<=f<1 and the IEEE format mantissa is normalized to 1<=f<2, both with the MSB implicit and not stored. Thus the mantissa bit-layouts are the same. Jitse Niesens assumptions look like a plausible explanation since the value of D44393DB would be -0.7639748 X 2^40 (which is -8.3999923E11).\n", "Found under \"Related\" on the RHS: these answers from last month \nOne of the references helps understanding the \"wired\" (weird?) byte2 byte1 notation.\n", "Is it definitely a DEC32 value? The sign bit seems to be 1, which indicates negative by this format. However, you do get a result very close to your 108 value if you ignore this and assume that the exponent bias is 15, retaining the 0.1 factor on the mantissa:\ndef decode(x):\n exp = (x>>30) & 0xff\n mantissa = x&((2**24)-1)\n return 0.1 * mantissa * (2**(exp-15))\n\n>>> decode(0xD44393DB)\n108.12409668\n\n" ]
[ 4, 1, 1, 0 ]
[]
[]
[ "floating_point", "python" ]
stackoverflow_0001797806_floating_point_python.txt
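The byte-swapping decode from the accepted answer can be collected into one self-contained Python 3 function. The constants (16-bit word swap, bias 128, hidden leading bit giving a 0.1mmm… mantissa) follow that answer; the function name and structure are mine.

```python
def decode_dec32(word):
    """Decode a DEC 32-bit float whose 16-bit words arrived byte-swapped."""
    # Swap the bytes within each 16-bit half: 0xD44393DB -> 0x43D4DB93.
    w = ((word & 0xFF00FF00) >> 8) | ((word & 0x00FF00FF) << 8)
    sign = -1.0 if w & 0x80000000 else 1.0
    exponent = ((w >> 23) & 0xFF) - 128          # DEC bias is 128, not IEEE's 127
    # 23 stored mantissa bits plus the hidden leading bit, read as 0.1mmm...
    mantissa = ((w & 0x007FFFFF) | 0x00800000) / float(2 ** 24)
    return sign * mantissa * 2.0 ** exponent

print(decode_dec32(0xD44393DB))   # -> 106.42885589599609, close to the ~108 on the display
```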
Q: Problem with SPARQLWrapper (Python) I'm making a SPARQL query against the Sesame store in localhost, using SPARQLWrapper: sparql = SPARQLWrapper('http://localhost:8080/openrdf-sesame/repositories/rep/statements') sparql.setQuery(query) sparql.setReturnFormat(JSON) results = sparql.query().convert() However, I'm getting: File "build/bdist.linux-i686/egg/SPARQLWrapper/Wrapper.py", line 339, in query File "build/bdist.linux-i686/egg/SPARQLWrapper/Wrapper.py", line 318, in _query urllib2.HTTPError: HTTP Error 406: Not Acceptable The strange thing is, however, that querying against the DBPedia SPARQL endpoint everything works fine... Any thoughts? Thanks! A: For SPARQLWrapper you don't normally have to add the statements bit in the URI. I.e., this should work: sparql = SPARQLWrapper('http://localhost:8080/openrdf-sesame/repositories/rep') And then just continue with the rest of your code.
Problem with SPARQLWrapper (Python)
I'm making a SPARQL query against the Sesame store in localhost, using SPARQLWrapper: sparql = SPARQLWrapper('http://localhost:8080/openrdf-sesame/repositories/rep/statements') sparql.setQuery(query) sparql.setReturnFormat(JSON) results = sparql.query().convert() However, I'm getting: File "build/bdist.linux-i686/egg/SPARQLWrapper/Wrapper.py", line 339, in query File "build/bdist.linux-i686/egg/SPARQLWrapper/Wrapper.py", line 318, in _query urllib2.HTTPError: HTTP Error 406: Not Acceptable The strange thing is, however, that querying against the DBPedia SPARQL endpoint everything works fine... Any thoughts? Thanks!
[ "For SPARQLWrapper you don't normally have to add the statements bit in the URI. I.e., this should work:\nsparql = SPARQLWrapper('http://localhost:8080/openrdf-sesame/repositories/rep')\n\nAnd then just continue with the rest of your code.\n" ]
[ 3 ]
[ "I've solved the problem by doing the SPARQL wrapping myself...\n" ]
[ -1 ]
[ "python", "rdf", "sparql" ]
stackoverflow_0001684197_python_rdf_sparql.txt
Q: importing cx_Oracle and kinterbasdb returns error Greetings, everybody. I'm trying to import the following libraries in python: cx_Oracle and kinterbasdb. But, when I try, I get a very similar message error. *for cx_Oracle: Traceback (most recent call last): File "", line 1, in ImportError: DLL load failed: Não foi possível encontrar o procedimento especificado. (translation: It was not possible to find the specified procedure) *for kinterbasdb: Traceback (most recent call last): File "C:\", line 1, in File "c:\Python26\Lib\site-packages\kinterbasdb__init__.py", line 119, in import _kinterbasdb as _k ImportError: DLL load failed: Não foi possível encontrar o módulo especificado. (translation: It was not possible to find the specified procedure) I'm using python 2.6.4 in windows XP. cx_Oracle's version is 5.0.2. kinterbasdb's version is 3.3.0. Edit: I've solved it for cx_Oracle, it was a wrong version problem. But I believe I'm using the correct version, and I downloaded it from the Firebird site ( kinterbasdb-3.3.0.win32-setup-py2.6.exe ). Still need assistance with this, please. Can anyone lend me a hand here? Many Thanks Dante
importing cx_Oracle and kinterbasdb returns error
Greetings, everybody. I'm trying to import the following libraries in python: cx_Oracle and kinterbasdb. But, when I try, I get a very similar error message. *for cx_Oracle: Traceback (most recent call last): File "", line 1, in ImportError: DLL load failed: Não foi possível encontrar o procedimento especificado. (translation: It was not possible to find the specified procedure) *for kinterbasdb: Traceback (most recent call last): File "C:\", line 1, in File "c:\Python26\Lib\site-packages\kinterbasdb\__init__.py", line 119, in import _kinterbasdb as _k ImportError: DLL load failed: Não foi possível encontrar o módulo especificado. (translation: It was not possible to find the specified module) I'm using python 2.6.4 in windows XP. cx_Oracle's version is 5.0.2. kinterbasdb's version is 3.3.0. Edit: I've solved it for cx_Oracle, it was a wrong version problem. But I believe I'm using the correct version, and I downloaded it from the Firebird site ( kinterbasdb-3.3.0.win32-setup-py2.6.exe ). Still need assistance with this, please. Can anyone lend me a hand here? Many Thanks Dante
[]
[]
[ "oracle is a complete pain. i don't know the details for windows, but for unix you need ORACLE_HOME and LD_LIBRARY_PATH to both be defined before cx_oracle will work. in windows this would be your environment variables, i guess. so check those.\nalso, check that they are defined in the environment in which the program runs (again, i don't know windows specific details, but in unix it's possible for everything to work when you run it from your account by hand, but still not work when run as a batch job because the environment is different).\n" ]
[ -1 ]
[ "cx_oracle", "kinterbasdb", "python" ]
stackoverflow_0001799475_cx_oracle_kinterbasdb_python.txt
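A tiny diagnostic sketch of the environment check the answer suggests. The variable names are the Unix-style ones mentioned above (on Windows the Oracle client directory would instead need to be on PATH), and the helper name is made up for illustration.

```python
# DLL-load failures for cx_Oracle often come down to the Oracle client
# libraries not being found. This returns the names that are missing
# from a given environment mapping (e.g. os.environ).
def missing_oracle_vars(environ, required=('ORACLE_HOME', 'LD_LIBRARY_PATH')):
    return [name for name in required if name not in environ]

print(missing_oracle_vars({'ORACLE_HOME': '/opt/oracle'}))
```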
Q: Fast Graphics with XServer I am working on an embedded linux platform with limited system resources. I want to do a fullscreen slideshow with simple transitions (like slide in-out, fade in-out). I tried PyGtk+GTK+Cairo but it's very slow: when I animate GTK image controls I get just two or three frames per second. But smplayer is playing video at good speed! I did a little research and came to know about directfb, libggi, svgalib etc. and I don't know which library should be used. Which library is the best for this kind of application? I would prefer to do this without stopping X.
Fast Graphics with XServer
I am working on an embedded linux platform with limited system resources. I want to do a fullscreen slideshow with simple transitions (like slide in-out, fade in-out). I tried PyGtk+GTK+Cairo but it's very slow: when I animate GTK image controls I get just two or three frames per second. But smplayer is playing video at good speed! I did a little research and came to know about directfb, libggi, svgalib etc. and I don't know which library should be used. Which library is the best for this kind of application? I would prefer to do this without stopping X.
[ "I would try this first using just PyCairo, not using GTK controls at all.\nHowever, if that does not give you the speed that you need, then you might want to try PyGame which gives you access to SDL including OpenGL backends. PyGame is very actively developed and used in building applications that include full screen animation so even if you are not writing a game, you will still likely find the best support by using PyGame.\n" ]
[ 2 ]
[]
[]
[ "linux", "python" ]
stackoverflow_0001803458_linux_python.txt
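The PyGame suggestion can be made concrete without any display: a fade transition reduces to an alpha schedule that would be fed to SDL via `Surface.set_alpha()` before blitting the incoming slide. A minimal sketch; the linear ramp and the frame count are illustrative assumptions, not anything from the answer.

```python
# Linear alpha ramp for a fade-in: frame 0 fully transparent,
# the last frame fully opaque (255). In a PyGame slideshow each
# value would be passed to Surface.set_alpha() once per frame.
def fade_in_alphas(n_frames):
    if n_frames < 2:
        return [255] * n_frames
    return [round(255 * i / (n_frames - 1)) for i in range(n_frames)]

print(fade_in_alphas(5))
```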
Q: Multiple CouchDB Document fetch with couchdb-python How to fetch multiple documents from CouchDB, in particular with couchdb-python? A: Easiest way is to pass a include_docs=True arg to Database.view. Each row of the results will include the doc. e.g. >>> db = couchdb.Database('http://localhost:5984/test') >>> rows = db.view('_all_docs', keys=['docid1', 'docid2', 'missing'], include_docs=True) >>> docs = [row.doc for row in rows] >>> docs [<Document 'docid1'@'...' {}>, <Document 'docid2'@'...' {}>, None] Note that a row's doc will be None if the document does not exist. This works with any view - just provide a list of keys suitable to the view. A: This is the right way: import couchdb server = couchdb.Server("http://localhost:5984") db = server["dbname"] results = db.view("_all_docs", keys=["key1", "key2"]) A: import couchdb import simplejson as json resource = couchdb.client.Resource(None, 'http://localhost:5984/dbname/_all_docs') params = {"include_docs":True} content = json.dumps({"keys":[idstring1, idstring2, ...]}) headers = {"Content-Type":"application/json"} resource.post(headers=headers, content=content, **params) resource.post(headers=headers, content=content, **params)[1]['rows']
Multiple CouchDB Document fetch with couchdb-python
How to fetch multiple documents from CouchDB, in particular with couchdb-python?
[ "Easiest way is to pass a include_docs=True arg to Database.view. Each row of the results will include the doc. e.g.\n>>> db = couchdb.Database('http://localhost:5984/test')\n>>> rows = db.view('_all_docs', keys=['docid1', 'docid2', 'missing'], include_docs=True)\n>>> docs = [row.doc for row in rows]\n>>> docs\n[<Document 'docid1'@'...' {}>, <Document 'docid2'@'...' {}>, None]\n\nNote that a row's doc will be None if the document does not exist.\nThis works with any view - just provide a list of keys suitable to the view.\n", "This is the right way:\nimport couchdb\n\nserver = couchdb.Server(\"http://localhost:5984\")\ndb = server[\"dbname\"]\nresults = db.view(\"_all_docs\", keys=[\"key1\", \"key2\"])\n\n", "import couchdb\nimport simplejson as json\n\nresource = couchdb.client.Resource(None, 'http://localhost:5984/dbname/_all_docs')\nparams = {\"include_docs\":True}\ncontent = json.dumps({\"keys\":[idstring1, idstring2, ...]})\nheaders = {\"Content-Type\":\"application/json\"}\nresource.post(headers=headers, content=content, **params)\nresource.post(headers=headers, content=content, **params)[1]['rows']\n\n" ]
[ 22, 4, -7 ]
[]
[]
[ "couchdb", "python" ]
stackoverflow_0001640054_couchdb_python.txt
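The low-level third answer POSTs a JSON body to `_all_docs`; that body can be built and sanity-checked without a running CouchDB server. A minimal sketch — the doc IDs are made up, and `include_docs=True` would still go in the query parameters as shown above.

```python
import json

# CouchDB's _all_docs endpoint accepts a POSTed JSON body of the
# form {"keys": [...]} to fetch several documents in one request.
def bulk_fetch_body(doc_ids):
    return json.dumps({"keys": list(doc_ids)})

body = bulk_fetch_body(["docid1", "docid2"])
print(body)
```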
Q: Python regular expression matching a multiline block of text but not replacing it Ok so i have this piece of code: def findNReplaceRegExp(file_name, regexp, replaceString, verbose=True, confirmationNeeded=True): '''Replaces the oldString with the replaceString in the file given,\ returns the number of replaces ''' # initialize local variables cregexp = re.compile(regexp, re.MULTILINE | re.DOTALL) somethingReplaced = True ocurrences = 0 isAborted = False # open file for read file_in = open(file_name, 'r') file_in_string = file_in.read() file_in.close() while somethingReplaced: somethingReplaced = False # if the regexp is found if cregexp.search(file_in_string): # make the substitution replaced_text = re.sub(regexp, replaceString, file_in_string) if verbose == True: # calculate the segment of text in which the resolution will be done # print the old string and the new string print '- ' + file_in_string print '+ ' + replaced_text if confirmationNeeded: # ask user if this should be done question = raw_input('Accept changes? 
[Yes (Y), No (n), Abort (a)] ') question = string.lower(question) if question == 'a': isAborted = True print "Aborted" break elif question == 'n': pass else: file_in_string = replaced_text somethingReplaced = True ocurrences = ocurrences + 1 else: file_in_string = replaced_text somethingReplaced = True ocurrences = ocurrences + 1 # if some text was replaced, overwrite the original file if ocurrences > 0 and not isAborted: # open the file for overwriting file_out = open(file_name, 'w') file_out.write(file_in_string) file_out.close() if verbose: print "File " + file_name + " written" And this file CMC_SRS T10-24400: DKU Data Supply: SN Time Break-In Area CMC_SRS T10-24401: DKU Data Supply: SN Transponder Enable Area CMC_SRS T10-24402: DKU Data Supply: SN Adjust Master Slave Area CMC_SRS T10-24403: DKU Data Supply: SN ATEC Area CMC_SRS T10-24404: DKU Data Supply: SN PTEC Area CMC_SRS T10-25449: DKU Data Supply: SN Self Init Area CMC_SRS T10-24545: DKU Data Supply: SN Time Area CMC_SRS T10-4017: RFI display update CMC_SRS T10-6711: Radio Interface to PLS Equipment CMC_SRS T10-21077: Safety Requirements: Limit FM Power When I call the procedure with this file and these parameters: regexp=24403.*24404 replace=TESTSTRING I get a match (it matches and asks what to do) but when it's time to replace, nothing happens... What's wrong?
Python regular expression matching a multiline block of text but not replacing it
Ok so i have this piece of code: def findNReplaceRegExp(file_name, regexp, replaceString, verbose=True, confirmationNeeded=True): '''Replaces the oldString with the replaceString in the file given,\ returns the number of replaces ''' # initialize local variables cregexp = re.compile(regexp, re.MULTILINE | re.DOTALL) somethingReplaced = True ocurrences = 0 isAborted = False # open file for read file_in = open(file_name, 'r') file_in_string = file_in.read() file_in.close() while somethingReplaced: somethingReplaced = False # if the regexp is found if cregexp.search(file_in_string): # make the substitution replaced_text = re.sub(regexp, replaceString, file_in_string) if verbose == True: # calculate the segment of text in which the resolution will be done # print the old string and the new string print '- ' + file_in_string print '+ ' + replaced_text if confirmationNeeded: # ask user if this should be done question = raw_input('Accept changes? [Yes (Y), No (n), Abort (a)] ') question = string.lower(question) if question == 'a': isAborted = True print "Aborted" break elif question == 'n': pass else: file_in_string = replaced_text somethingReplaced = True ocurrences = ocurrences + 1 else: file_in_string = replaced_text somethingReplaced = True ocurrences = ocurrences + 1 # if some text was replaced, overwrite the original file if ocurrences > 0 and not isAborted: # open the file for overwritting file_out = open(file_name, 'w') file_out.write(file_in_string) file_out.close() if verbose: print "File " + file_name + " written" And this file CMC_SRS T10-24400: DKU Data Supply: SN Time Break-In Area CMC_SRS T10-24401: DKU Data Supply: SN Transponder Enable Area CMC_SRS T10-24402: DKU Data Supply: SN Adjust Master Slave Area CMC_SRS T10-24403: DKU Data Supply: SN ATEC Area CMC_SRS T10-24404: DKU Data Supply: SN PTEC Area CMC_SRS T10-25449: DKU Data Supply: SN Self Init Area CMC_SRS T10-24545: DKU Data Supply: SN Time Area CMC_SRS T10-4017: RFI display update CMC_SRS T10-6711: 
Radio Interface to PLS Equipment CMC_SRS T10-21077: Safety Requirements: Limit FM Power When I call the procedure with this file and these parameters: regexp=24403.*24404 replace=TESTSTRING I get a match (it matches and asks what to do) but when it's time to replace, nothing happens... What's wrong?
[ "You're finding with cregexp, which has the multiline option set, but then replacing with regexp, which may or may not.\n" ]
[ 1 ]
[]
[]
[ "multiline", "python", "regex", "replace" ]
stackoverflow_0001803713_multiline_python_regex_replace.txt
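The accepted diagnosis is that the search uses the compiled `cregexp` (built with `re.MULTILINE | re.DOTALL`) while the replacement calls `re.sub(regexp, ...)` on the raw pattern without those flags, so nothing matches at substitution time. This can be reproduced in isolation; the sample text below is shortened from the question.

```python
import re

text = ("CMC_SRS T10-24403: DKU Data Supply: SN ATEC Area\n"
        "CMC_SRS T10-24404: DKU Data Supply: SN PTEC Area\n")

pattern = r"24403.*24404"
cregexp = re.compile(pattern, re.MULTILINE | re.DOTALL)

# The compiled pattern matches across the newline thanks to re.DOTALL...
assert cregexp.search(text) is not None
# ...but the bare pattern, as used in re.sub(regexp, ...), does not,
# so the "replacement" silently changes nothing.
assert re.search(pattern, text) is None

# Substituting with the *compiled* object (or re.sub with flags) works:
replaced = cregexp.sub("TESTSTRING", text)
print(replaced)
```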
Q: Error codes returned by urllib/urllib2 and the actual page The normal behavior of urllib/urllib2 is that if an error code is sent in the header of the response (e.g. 404), an exception is raised. How do you look for specific errors (e.g. 40x or 50x) and, based on the different errors, do different things? Also, how do you read the actual data being returned (HTML/JSON etc.)? The data usually has error details, which is different from the HTTP error code.
Error codes returned by urllib/urllib2 and the actual page
The normal behavior of urllib/urllib2 is that if an error code is sent in the header of the response (e.g. 404), an exception is raised. How do you look for specific errors (e.g. 40x or 50x) and, based on the different errors, do different things? Also, how do you read the actual data being returned (HTML/JSON etc.)? The data usually has error details, which is different from the HTTP error code.
[ "urllib2 raises a HTTPError when HTTP errors happen. You can get to the response code using code on the exception object. You can get the response data using read():\n\n>>> req = urllib2.Request('http://www.python.org/fish.html')\n>>> try:\n>>> urllib2.urlopen(req)\n>>> except urllib2.HTTPError, e:\n>>> print e.code\n>>> print e.read()\n>>>\n404\n<actual data response will be here>\n\n", "In urllib2 HTTPError exception is also a valid HTTP response, so you can treat an HTTP error as an exceptional event or valid response. But in urllib you have to subclass URLopener and define http_error_<code> method[s] or redefine http_error_default to handle them all.\n" ]
[ 9, 1 ]
[]
[]
[ "error_handling", "python" ]
stackoverflow_0001803741_error_handling_python.txt
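A hedged sketch of dispatching on the 40x/50x ranges the question asks about. The classification is shown on bare integers so it runs without a network; with urllib2 the code would come from the caught `HTTPError`'s `.code`, and the response body from its `.read()`, as in the first answer.

```python
def classify_status(code):
    # 4xx: the client asked for something wrong; 5xx: the server failed.
    if 400 <= code < 500:
        return 'client_error'
    if 500 <= code < 600:
        return 'server_error'
    return 'other'

for code in (404, 410, 500, 503, 200):
    print(code, classify_status(code))
```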
Q: communicate with a process in utf-8 on a cp1252 console I need to control a program by sending commands in utf-8 encoding to its standard input. For this I run the program using subprocess.Popen(): proc = Popen("myexecutable.exe", shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE) proc.stdin.write(u'ééé'.encode('utf_8')) If I run this from a cygwin utf-8 console, it works. If I run it from a windows console (encoding ='cp1252') this doesn't work. Is there a way to make this work without having to install a cygwin utf-8 console on each computer I want it to run from? (NB: I don't need to output anything to console)
communicate with a process in utf-8 on a cp1252 console
I need to control a program by sending commands in utf-8 encoding to its standard input. For this I run the program using subprocess.Popen(): proc = Popen("myexecutable.exe", shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE) proc.stdin.write(u'ééé'.encode('utf_8')) If I run this from a cygwin utf-8 console, it works. If I run it from a windows console (encoding ='cp1252') this doesn't work. Is there a way to make this work without having to install a cygwin utf-8 console on each computer I want it to run from ? (NB: I don't need to output anything to console)
[ "I wonder if this caveat, from the subprocess documentation, is relevant:\n\nThe only reason you would need to specify shell=True on Windows is where the command you wish to execute is actually built in to the shell, eg dir, copy. You don’t need shell=True to run a batch file, nor to run a console-based executable.\n\n", "Why do you need to force utf-8 pipes? Couldn't you do something like\nimport sys\ncurrent_encoding = sys.stdout.encoding\n...\nproc.stdin.write(u'ééé'.encode(current_encoding))\n\nEDIT: I wrote this answer before you edited your question. I guess this is not what you're looking for, then, is it?\n" ]
[ 0, 0 ]
[]
[]
[ "cp1252", "python", "utf_8", "windows" ]
stackoverflow_0001803675_cp1252_python_utf_8_windows.txt
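Why the console encoding matters can be seen by encoding the same text both ways: the byte sequences differ, so a child process that decodes its stdin as UTF-8 cannot be fed cp1252 bytes. (Python 3 `str` literals are used here for brevity; the question's `u''` literals behave the same way.)

```python
text = 'ééé'

utf8_bytes = text.encode('utf-8')     # two bytes per é
cp1252_bytes = text.encode('cp1252')  # one byte per é

print(utf8_bytes)
print(cp1252_bytes)

# A process decoding its stdin as UTF-8 will not recover 'ééé' from
# the cp1252 bytes, which is why the question encodes to UTF-8
# explicitly instead of relying on the console's own encoding.
```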
Q: Python: Can you make this __eq__ easy to understand? I have another question for you. I have a python class with a list 'metainfo'. This list contains variable names that my class might contain. I wrote a __eq__ method that returns True if the both self and other have the same variables from metainfo and those variables have the same value. Here is my implementation: def __eq__(self, other): for attr in self.metainfo: try: ours = getattr(self, attr) try: theirs = getattr(other, attr) if ours != theirs: return False except AttributeError: return False except AttributeError: try: theirs = getattr(other, attr) return False except AttributeError: pass return True Does anyone have any suggestions as to how I can make this code easier on the eye? Be as ruthless as you please. A: I would add a docstring which explains what it compares, as you did in your question. A: Use getattr's third argument to set distinct default values: def __eq__(self, other): return all(getattr(self, a, Ellipsis) == getattr(other, a, Ellipsis) for a in self.metainfo) As the default value, set something that will never be an actual value, such as Ellipsis†. Thus the values will match only if both objects contain the same value for a certain attribute or if both do not have said attribute. Edit: as Nadia points out, NotImplemented may be a more appropriate constant (unless you're storing the result of rich comparisons...). Edit 2: Indeed, as Lac points out, just using hasattr results in a more readable solution: def __eq__(self, other): return all(hasattr(self, a) == hasattr(other, a) and getattr(self, a) == getattr(other, a) for a in self.metainfo)   †: for extra obscurity you could write ... instead of Ellipsis, thus getattr(self, a, ...) etc. 
No, don't do it :) A: def __eq__(self, other): """Returns True if both instances have the same variables from metainfo and they have the same values.""" for attr in self.metainfo: if attr in self.__dict__: if attr not in other.__dict__: return False if getattr(self, attr) != getattr(other, attr): return False continue else: if attr in other.__dict__: return False return True A: Going with "Flat is better than nested" I would remove the nested try statements. Instead, getattr should return a sentinel that only equals itself. Unlike Stephan202, however, I prefer to keep the for loop. I also would create a sentinel by myself, and not re-use some existing Python object. This guarantees that there are no false positives, even in the most exotic situations. def __eq__(self, other): if set(metainfo) != set(other.metainfo): # if the meta info differs, then assume the items differ. # alternatively, define how differences should be handled # (e.g. by using the intersection or the union of both metainfos) # and use that to iterate over return False sentinel = object() # sentinel == sentinel <=> sentinel is sentinel for attr in self.metainfo: if getattr(self, attr, sentinel) != getattr(other, attr, sentinel): return False return True Also, the method should have a doc-string explaining it's eq behavior; same goes for the class which should have a docstring explaining the use of the metainfo attribute. Finally, a unit-test for this equality-behavior should be present as well. Some interesting test cases would be: Objects that have the same content for all metainfo-attributes, but different content for some other attributes (=> they are equal) If required, checking for commutativity of equals, i.e. 
if a == b: b == a Objects that don't have any of the metainfo-attributes set A: Since it's about to make it easy to understand, not short or very fast : class Test(object): def __init__(self): self.metainfo = ["foo", "bar"] # adding a docstring helps a lot # adding a doctest even more : you have an example and a unit test # at the same time ! (so I know this snippet works :-)) def __eq__(self, other): """ This method check instances equality and returns True if both of the instances have the same attributs with the same values. However, the check is performed only on the attributs whose name are listed in self.metainfo. E.G : >>> t1 = Test() >>> t2 = Test() >>> print t1 == t2 True >>> t1.foo = True >>> print t1 == t2 False >>> t2.foo = True >>> t2.bar = 1 >>> print t1 == t2 False >>> t1.bar = 1 >>> print t1 == t2 True >>> t1.new_value = "test" >>> print t1 == t2 True >>> t1.metainfo.append("new_value") >>> print t1 == t2 False """ # Then, let's keep the code simple. After all, you are just # comparing lists : self_metainfo_val = [getattr(self, info, Ellipsis) for info in self.metainfo] other_metainfo_val = [getattr(other, info, Ellipsis) for info in self.metainfo] return self_metainfo_val == other_metainfo_val A: I would break the logic up into separate chunks that are easier to understand, each one checking a different condition (and each one assuming the previous thing was checked). Easiest just to show the code: # First, check if we have the same list of variables. my_vars = [var for var in self.metainf if hasattr(self, var)] other_vars = [var for var in other.metainf if hasattr(other, var)] if my_vars.sorted() != other_vars.sorted(): return False # Don't even have the same variables. # Now, check each variable: for var in my_vars: if self.var != other.var: return False # We found a variable with a different value. # We're here, which means we haven't found any problems! return True Edit: I misunderstood the question, here is an updated version. 
I still think this is a clear way to write this kind of logic, but it's uglier than I intended and not at all efficient, so in this case I'd probably go with a different solution. A: The try/excepts make your code harder to read. I'd use getattr with a default value that is guaranteed not to otherwise be there. In the code below I just make a temp object. That way if object do not have a given value they'll both return "NOT_PRESENT" and thus count as being equal. def __eq__(self, other): NOT_PRESENT = object() for attr in self.metainfo: ours = getattr(self, attr, NOT_PRESENT) theirs = getattr(other, attr, NOT_PRESENT) if ours != theirs: return False return True A: Here is a variant that is pretty easy to read IMO, without using sentinel objects. It will first compare if both has or hasnt the attribute, then compare the values. It could be done in one line using all() and a generator expression as Stephen did, but I feel this is more readable. def __eq__(self, other): for a in self.metainfo: if hasattr(self, a) != hasattr(other, a): return False if getattr(self, a, None) != getattr(other, a, None): return False return True A: I like Stephan202's answer, but I think that his code doesn't make equality conditions clear enough. Here's my take on it: def __eq__(self, other): wehave = [attr for attr in self.metainfo if hasattr(self, attr)] theyhave = [attr for attr in self.metainfo if hasattr(other, attr)] if wehave != theyhave: return False return all(getattr(self, attr) == getattr(other, attr) for attr in wehave)
Python: Can you make this __eq__ easy to understand?
I have another question for you. I have a Python class with a list 'metainfo'. This list contains variable names that my class might contain. I wrote a __eq__ method that returns True if both self and other have the same variables from metainfo and those variables have the same value. Here is my implementation: def __eq__(self, other): for attr in self.metainfo: try: ours = getattr(self, attr) try: theirs = getattr(other, attr) if ours != theirs: return False except AttributeError: return False except AttributeError: try: theirs = getattr(other, attr) return False except AttributeError: pass return True Does anyone have any suggestions as to how I can make this code easier on the eye? Be as ruthless as you please.
[ "I would add a docstring which explains what it compares, as you did in your question.\n", "Use getattr's third argument to set distinct default values:\ndef __eq__(self, other):\n return all(getattr(self, a, Ellipsis) == getattr(other, a, Ellipsis)\n for a in self.metainfo)\n\nAs the default value, set something that will never be an actual value, such as Ellipsis†. Thus the values will match only if both objects contain the same value for a certain attribute or if both do not have said attribute.\nEdit: as Nadia points out, NotImplemented may be a more appropriate constant (unless you're storing the result of rich comparisons...).\nEdit 2: Indeed, as Lac points out, just using hasattr results in a more readable solution:\ndef __eq__(self, other):\n return all(hasattr(self, a) == hasattr(other, a) and\n getattr(self, a) == getattr(other, a) for a in self.metainfo)\n\n\n  †: for extra obscurity you could write ... instead of Ellipsis, thus getattr(self, a, ...) etc. No, don't do it :)\n", "def __eq__(self, other):\n \"\"\"Returns True if both instances have the same variables from metainfo\n and they have the same values.\"\"\"\n for attr in self.metainfo:\n if attr in self.__dict__:\n if attr not in other.__dict__:\n return False\n if getattr(self, attr) != getattr(other, attr):\n return False\n continue\n else:\n if attr in other.__dict__:\n return False\n return True\n\n", "Going with \"Flat is better than nested\" I would remove the nested try statements. Instead, getattr should return a sentinel that only equals itself. Unlike Stephan202, however, I prefer to keep the for loop. I also would create a sentinel by myself, and not re-use some existing Python object. This guarantees that there are no false positives, even in the most exotic situations.\ndef __eq__(self, other):\n if set(metainfo) != set(other.metainfo):\n # if the meta info differs, then assume the items differ.\n # alternatively, define how differences should be handled\n # (e.g. 
by using the intersection or the union of both metainfos)\n # and use that to iterate over\n return False\n sentinel = object() # sentinel == sentinel <=> sentinel is sentinel\n for attr in self.metainfo:\n if getattr(self, attr, sentinel) != getattr(other, attr, sentinel):\n return False\n return True\n\nAlso, the method should have a doc-string explaining it's eq behavior; same goes for the class which should have a docstring explaining the use of the metainfo attribute.\nFinally, a unit-test for this equality-behavior should be present as well. Some interesting test cases would be:\n\nObjects that have the same content for all metainfo-attributes, but different content for some other attributes (=> they are equal)\nIf required, checking for commutativity of equals, i.e. if a == b: b == a\nObjects that don't have any of the metainfo-attributes set\n\n", "Since it's about to make it easy to understand, not short or very fast :\nclass Test(object):\n\n def __init__(self):\n self.metainfo = [\"foo\", \"bar\"]\n\n # adding a docstring helps a lot\n # adding a doctest even more : you have an example and a unit test\n # at the same time ! (so I know this snippet works :-))\n def __eq__(self, other):\n \"\"\"\n This method check instances equality and returns True if both of\n the instances have the same attributs with the same values.\n However, the check is performed only on the attributs whose name\n are listed in self.metainfo.\n\n E.G :\n\n >>> t1 = Test()\n >>> t2 = Test()\n >>> print t1 == t2\n True\n >>> t1.foo = True\n >>> print t1 == t2\n False\n >>> t2.foo = True\n >>> t2.bar = 1\n >>> print t1 == t2\n False\n >>> t1.bar = 1\n >>> print t1 == t2\n True\n >>> t1.new_value = \"test\"\n >>> print t1 == t2\n True\n >>> t1.metainfo.append(\"new_value\")\n >>> print t1 == t2\n False\n\n \"\"\"\n\n # Then, let's keep the code simple. 
After all, you are just\n # comparing lists :\n\n self_metainfo_val = [getattr(self, info, Ellipsis)\n for info in self.metainfo]\n other_metainfo_val = [getattr(other, info, Ellipsis)\n for info in self.metainfo]\n return self_metainfo_val == other_metainfo_val\n\n", "I would break the logic up into separate chunks that are easier to understand, each one checking a different condition (and each one assuming the previous thing was checked). Easiest just to show the code:\n# First, check if we have the same list of variables.\nmy_vars = [var for var in self.metainf if hasattr(self, var)]\nother_vars = [var for var in other.metainf if hasattr(other, var)]\n\nif my_vars.sorted() != other_vars.sorted():\n return False # Don't even have the same variables.\n\n# Now, check each variable:\nfor var in my_vars:\n if self.var != other.var:\n return False # We found a variable with a different value.\n\n# We're here, which means we haven't found any problems!\nreturn True\n\nEdit: I misunderstood the question, here is an updated version. I still think this is a clear way to write this kind of logic, but it's uglier than I intended and not at all efficient, so in this case I'd probably go with a different solution.\n", "The try/excepts make your code harder to read. I'd use getattr with a default value that is guaranteed not to otherwise be there. In the code below I just make a temp object. That way if object do not have a given value they'll both return \"NOT_PRESENT\" and thus count as being equal.\n\ndef __eq__(self, other):\n NOT_PRESENT = object()\n for attr in self.metainfo:\n ours = getattr(self, attr, NOT_PRESENT) \n theirs = getattr(other, attr, NOT_PRESENT)\n if ours != theirs:\n return False\n return True\n\n", "Here is a variant that is pretty easy to read IMO, without using sentinel objects. 
It will first compare if both has or hasnt the attribute, then compare the values.\nIt could be done in one line using all() and a generator expression as Stephen did, but I feel this is more readable.\ndef __eq__(self, other):\n for a in self.metainfo:\n if hasattr(self, a) != hasattr(other, a):\n return False\n if getattr(self, a, None) != getattr(other, a, None):\n return False\n return True\n\n", "I like Stephan202's answer, but I think that his code doesn't make equality conditions clear enough. Here's my take on it:\ndef __eq__(self, other):\n wehave = [attr for attr in self.metainfo if hasattr(self, attr)]\n theyhave = [attr for attr in self.metainfo if hasattr(other, attr)]\n if wehave != theyhave:\n return False\n return all(getattr(self, attr) == getattr(other, attr) for attr in wehave)\n\n" ]
[ 9, 9, 5, 3, 3, 1, 1, 1, 0 ]
[]
[]
[ "equality", "python" ]
stackoverflow_0001803710_equality_python.txt
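The hasattr/getattr pattern from the last two answers can be exercised end to end with a small made-up class (the class name and the attribute names are illustrative only):

```python
class Record(object):
    """Equality is defined only over the attribute names in `metainfo`."""
    metainfo = ['foo', 'bar']

    def __eq__(self, other):
        # Equal only if, for every tracked name, both sides agree on
        # whether the attribute exists *and* on its value.
        return all(
            hasattr(self, a) == hasattr(other, a)
            and getattr(self, a, None) == getattr(other, a, None)
            for a in self.metainfo
        )

    def __ne__(self, other):  # keep != consistent (needed on Python 2)
        return not self.__eq__(other)

a, b = Record(), Record()
print(a == b)        # neither has foo/bar set

a.foo = 1
print(a == b)        # a has foo, b does not

b.foo = 1
b.extra = 'ignored'  # not in metainfo, so it cannot affect equality
print(a == b)
```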
Q: Process two files at the same time in Python I have information about 12340 cars. This info is stored sequentially in two different files: car_names.txt, which contains one line for the name of each car car_descriptions.txt, which contains the descriptions of each car. So 40 lines for each one, where the 6th line reads @CAR_NAME I would like to do in python: to add for each car in the car_descriptions.txt file the name of each car (which comes from the other file) in the 7th line (it is empty), just after @CAR_NAME I thought about: 1) read 1st file and store car names in a matrix/list 2) start to read 2nd file and each time it finds the string @CAR_NAME, just write the name on the next line But I wonder if there is a faster approach, so the program reads each time one line from each file and makes the modification. Thanks A: First, make a generator that retrieves the car name from a sequence. You could yield every 7th line; I've made mine yield whatever line follows the line that starts with @CAR_NAME: def car_names(seq): yieldnext=False for line in seq: if yieldnext: yield line yieldnext = line.startswith('@CAR_NAME') Now you can use itertools.izip to go through both sequences in parallel: from itertools import izip with open(r'c:\temp\cars.txt') as f1: with open(r'c:\temp\car_names.txt') as f2: for (c1, c2) in izip(f1, car_names(f2)): print c1, c2 A: I'm not sure if I completely understand what you're trying to do, is something like this? 
f1 = open ('car_names.txt') f2 = open ('car_descriptions.txt') for car_name in f1.readlines (): for i in range (6): # echo the first 6 lines print f2.readline () assert f2.readline() == '@CAR_NAME' # skip the 7th, but assert that it is @CAR_NAME print car_name # print the real car name for i in range (33): # print the remaining 33 of the original 40 print f2.readline () A: Reading car_names.txt will save you a piddling amount of memory (really really tiny by today's standards;-) but it absolutely won't be any faster than slurping it down at one gulp (best case it will be exactly the same speed, probably even a little bit slower unless your underlying operating system and storage system do a great job at read-lookahead caching / buffering). So I suggest: import fileinput carnames = open('car_names.txt').readlines() carnamit = iter(carnames) skip = False for line in fileinput.input(['car_descriptions.txt'], True, '.bak'): if not skip: print line, if '@CAR_NAME' in line: print next(carnamit), skip = True else: skip = False So measure the speed of this, and an alternative that does carnamit = open('car_names.txt') at the start instead of reading all lines over a list like my first version -- I bet that the first version (in as much as there's any measurable and repeatable difference) will prove to be faster. BTW, the fileinput module of the standard library is documented here, and it's truly a convenient way to perform "virtual rewriting in-place" of text files (typically keeping the old version as a backup, just in case -- but even if the machine should crash in the middle of the operation the old version of the data will still be there, so in a sense the "rewriting" operates atomically with respect to machine crashes, a nice little touch;-). A: for line1, line2 in zip(file(filename1), file(filename2)): # do your thing or similar A: 12340 is not any data (in sense that there are much bigger data to process on the market). 
Even better approach would use build in sqlite module. If not use some simple format like CSV for example. This is a structure organized. If not use threads, you could process two files simultaneously. A: I think this fits the question: it reads the description file one line at a time when it sees @CAR_NAME, it still emits it, but replaces the next line in the description file with the next line from the names file def merge_car_descriptions(namefile, descrfile): names = open(namefile,'r') descr = open(descrfile,'r') for d in descr: if '@CAR_NAME' in d: yield d + names.readline() descr.next() else: yield d if __name__=='__main__': import sys if len(sys.argv) != 3: sys.exit("Syntax: %s car_names.txt car_descriptions.txt" % sys.argv[0]) for l in merge_car_descriptions(sys.argv[1], sys.argv[2]): print l,
Process two files at the same time in Python
I have information about 12340 cars. This info is stored sequentially in two different files: car_names.txt, which contains one line for the name of each car car_descriptions.txt, which contains the descriptions of each car. So 40 lines for each one, where the 6th line reads @CAR_NAME I would like to do in python: to add for each car in the car_descriptions.txt file the name of each car (which comes from the other file) in the 7th line (it is empty), just after @CAR_NAME I thought about: 1) read 1st file and store car names in a matrix/list 2) start to read 2nd file and each time it finds the string @CAR_NAME, just write the name on the next line But I wonder if there is a faster approach, so the program reads each time one line from each file and makes the modification. Thanks
[ "First, make a generator that retrieves the car name from a sequence. You could yield every 7th line; I've made mine yield whatever line follows the line that starts with @CAR_NAME:\ndef car_names(seq):\n yieldnext=False\n for line in seq:\n if yieldnext: yield line\n yieldnext = line.startswith('@CAR_NAME')\n\nNow you can use itertools.izip to go through both sequences in parallel:\nfrom itertools import izip\nwith open(r'c:\\temp\\cars.txt') as f1:\n with open(r'c:\\temp\\car_names.txt') as f2:\n for (c1, c2) in izip(f1, car_names(f2)):\n print c1, c2\n\n", "I'm not sure if I completely understand what you're trying to do, is something like this?\nf1 = open ('car_names.txt')\nf2 = open ('car_descriptions.txt')\nfor car_name in f1.readlines ():\n for i in range (6): # echo the first 6 lines\n print f2.readline ()\n assert f2.readline() == '@CAR_NAME' # skip the 7th, but assert that it is @CAR_NAME\n print car_name # print the real car name\n for i in range (33): # print the remaining 33 of the original 40\n print f2.readline ()\n\n", "Reading car_names.txt will save you a piddling amount of memory (really really tiny by today's standards;-) but it absolutely won't be any faster than slurping it down at one gulp (best case it will be exactly the same speed, probably even a little bit slower unless your underlying operating system and storage system do a great job at read-lookahead caching / buffering). 
So I suggest:\nimport fileinput\n\ncarnames = open('car_names.txt').readlines()\ncarnamit = iter(carnames)\n\nskip = False\nfor line in fileinput.input(['car_descriptions.txt'], True, '.bak'):\n if not skip:\n print line,\n if '@CAR_NAME' in line:\n print next(carnamit),\n skip = True\n else:\n skip = False\n\nSo measure the speed of this, and an alternative that does\ncarnamit = open('car_names.txt')\n\nat the start instead of reading all lines over a list like my first version -- I bet that the first version (in as much as there's any measurable and repeatable difference) will prove to be faster.\nBTW, the fileinput module of the standard library is documented here, and it's truly a convenient way to perform \"virtual rewriting in-place\" of text files (typically keeping the old version as a backup, just in case -- but even if the machine should crash in the middle of the operation the old version of the data will still be there, so in a sense the \"rewriting\" operates atomically with respect to machine crashes, a nice little touch;-).\n", "for line1, line2 in zip(file(filename1), file(filename2)):\n # do your thing\n\nor similar\n", "12340 is not any data (in sense that there are much bigger data to process on the market).\nEven better approach would use build in sqlite module.\nIf not use some simple format like CSV for example. This is a structure organized.\nIf not use threads, you could process two files simultaneously. 
\n", "I think this fits the question:\n\nit reads the description file one line at a time\nwhen it sees @CAR_NAME, it still emits it, but replaces the next line in the description file with the next line from the names file\n\n\n\n\ndef merge_car_descriptions(namefile, descrfile):\n names = open(namefile,'r')\n descr = open(descrfile,'r')\n for d in descr:\n if '@CAR_NAME' in d:\n yield d + names.readline()\n descr.next()\n else:\n yield d\n\nif __name__=='__main__':\n import sys\n if len(sys.argv) != 3:\n sys.exit(\"Syntax: %s car_names.txt car_descriptions.txt\" % sys.argv[0])\n for l in merge_car_descriptions(sys.argv[1], sys.argv[2]):\n print l,\n\n" ]
[ 9, 8, 4, 1, 0, 0 ]
[]
[]
[ "python", "string" ]
stackoverflow_0001731102_python_string.txt
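The merge described in the thread above can be sketched in modern Python 3 (the answers are Python 2-era; `itertools.izip` is plain `zip` in Python 3, and here the pairing is done with a single generator instead). File names and the in-memory demo data are illustrative:

```python
import io

def merge(descriptions, names):
    """Yield description lines, inserting the next car name
    right after each '@CAR_NAME' marker line."""
    names = iter(names)
    for line in descriptions:
        yield line
        if line.startswith("@CAR_NAME"):
            yield next(names)

# Demo with in-memory files; real code would pass open() file objects.
descr = io.StringIO("line1\n@CAR_NAME\nline3\n@CAR_NAME\nline5\n")
names = io.StringIO("Ford\nOpel\n")
result = "".join(merge(descr, names))
print(result)
```

Note this *inserts* the name; the question's variant (an empty line already follows the marker) would additionally consume one line from `descriptions` after yielding the name, as the last answer's `descr.next()` does.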
Q: PyAudio.open, how to use? I'm trying to make a pyaudio input stream but can't figure out how to make it. What I did is: a = pyaudio.PyAudio() Then tried to call a.open() but I don't know the arguments I should type in. It asks me to check Stream.__init__ for a reference, but I don't know what a PA_MANAGER is and the documentation isn't useful at all about it. A: Perhaps you could start by modifying some of the examples?
PyAudio.open, how to use?
I'm trying to make a pyaudio input stream but can't figure out how to make it. What I did is: a = pyaudio.PyAudio() Then tried to call a.open() but I don't know the arguments I should type in. It asks me to check Stream.__init__ for a reference, but I don't know what a PA_MANAGER is and the documentation isn't useful at all about it.
[ "Perhaps you could start by modifying some of the examples?\n" ]
[ 4 ]
[]
[]
[ "audio", "python" ]
stackoverflow_0001803894_audio_python.txt
Q: Running methods on different cores on python Is there any easy way to make 2 methods, let's say MethodA() and MethodB() run in 2 different cores? I don't mean 2 different threads. I'm running in Windows, but I'd like to know if it is possible to be platform independent. edit: And what about http://docs.python.org/dev/library/multiprocessing.html and parallel python ? A: You have to use separate processes (because of the often-mentioned GIL). The multiprocessing module is here to help. from multiprocessing import Process from somewhere import A, B if __name__ == '__main__': procs = [ Process(target=t) for t in (A,B) ] for p in procs: p.start() for p in procs: p.join() A: Assuming you use CPython (the reference implementation) the answer is NO because of the Global Interpreter Lock. In CPython threads are mainly used when there is much IO to do (one thread waits, another does computation). A: In general, running different threads is the best portable way to run on multiple cores. Of course, in Python, the global interpreter lock makes this a moot point -- only one thread will make progress at a time. A: Because of the global interpreter lock, Python programs only ever run one thread at a time. If you want true multicore Python programming, you could look into Jython (which has access to the JVM's threads), or the brilliant stackless, which has Go-like channels and tasklets.
Running methods on different cores on python
Is there any easy way to make 2 methods, let's say MethodA() and MethodB() run in 2 different cores? I don't mean 2 different threads. I'm running in Windows, but I'd like to know if it is possible to be platform independent. edit: And what about http://docs.python.org/dev/library/multiprocessing.html and parallel python ?
[ "You have to use separate processes (because of the often-mentioned GIL). The multiprocessing module is here to help.\nfrom multiprocessing import Process\nfrom somewhere import A, B \nif __name__ == '__main__':\n procs = [ Process(target=t) for t in (A,B) ]\n\n for p in procs: \n p.start()\n\n for p in procs: \n p.join()\n\n", "Assuming you use CPython (the reference implementation) the answer is NO because of the Global Interpreter Lock. In CPython threads are mainly used when there is much IO to do (one thread waits, another does computation).\n", "In general, running different threads is the best portable way to run on multiple cores. Of course, in Python, the global interpreter lock makes this a moot point -- only one thread will make progress at a time.\n", "Because of the global interpreter lock, Python programs only ever run one thread at a time. If you want true multicore Python programming, you could look into Jython (which has access to the JVM's threads), or the brilliant stackless, which has Go-like channels and tasklets.\n" ]
[ 8, 0, 0, 0 ]
[]
[]
[ "multicore", "python" ]
stackoverflow_0001803955_multicore_python.txt
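As the accepted answer above notes, separate processes are what sidestep the GIL. A minimal sketch using the higher-level `Pool` API (the worker function is illustrative):

```python
from multiprocessing import Pool

def square(n):
    # A pure, top-level function: worker processes must be able to
    # import it when the 'spawn' start method is in use.
    return n * n

if __name__ == "__main__":
    with Pool(processes=2) as pool:
        print(pool.map(square, range(5)))  # [0, 1, 4, 9, 16]
```

`pool.map` preserves input order, so the result reads like a plain `map` even though the work is spread across processes.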
Q: How to set User-Agent in python-twitter? i'm writing a small script to tweet messages from the monitoring systems. The only issue i ran into so far is that i can't set the User-Agent correctly, all tweets show up as "from API" which ain't a huge deal but i wonder what I'm doing wrong. An example to reproduce this behavior: import sys import twitter USERNAME="twitteruser" PASSWORD="twitterpassword" api = twitter.Api(username=USERNAME, password=PASSWORD) api.SetUserAgent("Monitor") api.SetXTwitterHeaders("Monitor", None, "0.1") status = api.PostUpdate("Test") I'm running Python 2.6.4 on Ubuntu 9.10 with python-twitter 0.6 Any ideas ? :-) A: In order to get Twitter to recognize your application you have to use OAuth nowadays and register your application. See this FAQ entry and Twitter's application registration form.
How to set User-Agent in python-twitter?
i'm writing a small script to tweet messages from the monitoring systems. The only issue i ran into so far is that i can't set the User-Agent correctly, all tweets show up as "from API" which ain't a huge deal but i wonder what I'm doing wrong. An example to reproduce this behavior: import sys import twitter USERNAME="twitteruser" PASSWORD="twitterpassword" api = twitter.Api(username=USERNAME, password=PASSWORD) api.SetUserAgent("Monitor") api.SetXTwitterHeaders("Monitor", None, "0.1") status = api.PostUpdate("Test") I'm running Python 2.6.4 on Ubuntu 9.10 with python-twitter 0.6 Any ideas ? :-)
[ "In order to get Twitter to recognize your application you have to use OAuth nowadays and register your application.\nSee this FAQ entry and Twitter's application registration form.\n" ]
[ 2 ]
[]
[]
[ "python", "python_twitter", "twitter", "user_agent" ]
stackoverflow_0001803669_python_python_twitter_twitter_user_agent.txt
Q: Writing a crawler that stays logged in with any server I am writing a crawler. Once the crawler logs into a website I want to make the crawler "stay-always-logged-in". How can I do that? Can a client (like a browser, crawler, etc.) make a server obey this rule? This scenario could occur when the server allows a limited number of logins per day. A: "Logged-in state" is usually represented by cookies. So what your have to do is to store the cookie information sent by that server on login, then send that cookie with each of your subsequent requests (as noted by Aiden Bell in his message, thx). See also this question: How to "keep-alive" with cookielib and httplib in python? A more comprehensive article on how to implement it: http://www.voidspace.org.uk/python/articles/cookielib.shtml The simplest examples are at the bottom of this manual page: https://docs.python.org/library/cookielib.html You can also use a regular browser (like Firefox) to log in manually. Then you'll be able to save the cookie from that browser and use that in your crawler. But such cookies are usually valid only for a limited time, so it is not a long-term fully automated solution. It can be quite handy for downloading contents from a Web site once, however. UPDATE: I've just found another interesting tool in a recent question: http://www.scrapy.org It can also do such cookie based login: http://doc.scrapy.org/topics/request-response.html#topics-request-response-ref-request-userlogin The question I mentioned is here: Scrapy domain_name for spider Hope this helps.
Writing a crawler that stays logged in with any server
I am writing a crawler. Once the crawler logs into a website I want to make the crawler "stay-always-logged-in". How can I do that? Can a client (like a browser, crawler, etc.) make a server obey this rule? This scenario could occur when the server allows a limited number of logins per day.
[ "\"Logged-in state\" is usually represented by cookies. So what your have to do is to store the cookie information sent by that server on login, then send that cookie with each of your subsequent requests (as noted by Aiden Bell in his message, thx).\nSee also this question:\nHow to \"keep-alive\" with cookielib and httplib in python?\nA more comprehensive article on how to implement it:\nhttp://www.voidspace.org.uk/python/articles/cookielib.shtml\nThe simplest examples are at the bottom of this manual page:\nhttps://docs.python.org/library/cookielib.html\nYou can also use a regular browser (like Firefox) to log in manually. Then you'll be able to save the cookie from that browser and use that in your crawler. But such cookies are usually valid only for a limited time, so it is not a long-term fully automated solution. It can be quite handy for downloading contents from a Web site once, however.\nUPDATE:\nI've just found another interesting tool in a recent question:\nhttp://www.scrapy.org\nIt can also do such cookie based login:\nhttp://doc.scrapy.org/topics/request-response.html#topics-request-response-ref-request-userlogin\nThe question I mentioned is here:\nScrapy domain_name for spider\nHope this helps.\n" ]
[ 5 ]
[]
[]
[ "c#", "http", "python", "session", "web_crawler" ]
stackoverflow_0001804258_c#_http_python_session_web_crawler.txt
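The cookie-persistence idea from the accepted answer above, sketched with Python 3 names (`http.cookiejar` is the Python 3 spelling of `cookielib`). The hand-built cookie and file path are illustrative; in a real crawler the jar is filled automatically by an opener wired with `HTTPCookieProcessor`:

```python
import os
import tempfile
from http.cookiejar import Cookie, MozillaCookieJar

def make_cookie(name, value, domain):
    # Hand-built session cookie purely for demonstration.
    return Cookie(
        version=0, name=name, value=value,
        port=None, port_specified=False,
        domain=domain, domain_specified=True, domain_initial_dot=False,
        path="/", path_specified=True,
        secure=False, expires=None, discard=True,
        comment=None, comment_url=None, rest={},
    )

path = os.path.join(tempfile.mkdtemp(), "cookies.txt")
jar = MozillaCookieJar(path)
jar.set_cookie(make_cookie("session", "abc123", "example.com"))
jar.save(ignore_discard=True)  # persist the "logged-in" state to disk

restored = MozillaCookieJar(path)
restored.load(ignore_discard=True)
print([c.name for c in restored])  # ['session']
```

`ignore_discard=True` is needed on both save and load because session cookies (no expiry) are otherwise dropped, which is exactly the "stay logged in between runs" case.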
Q: Defining models in a top level Django directory I've noticed that in order for me to define models, I need to do something like: python manage.py startapp app_name Is there anyway to avoid this convention and be able to create a models.py directly in the top level site that django-admin.py has created for me? Sometimes I'm building a site that can be put together in literally 15 minutes thanks to Django. I don't see the need for complexity of having an additional app built and a modified settings.py just for a form model or something similar. A: Not really. The django admin programs expect app/models.py file names.
Defining models in a top level Django directory
I've noticed that in order for me to define models, I need to do something like: python manage.py startapp app_name Is there anyway to avoid this convention and be able to create a models.py directly in the top level site that django-admin.py has created for me? Sometimes I'm building a site that can be put together in literally 15 minutes thanks to Django. I don't see the need for complexity of having an additional app built and a modified settings.py just for a form model or something similar.
[ "Not really. The django admin programs expect app/models.py file names.\n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001804591_django_python.txt
Q: Appengine reference order I have declared models in AppEngine's models.py: class Post(db.Model): topic = db.ReferenceProperty(Topic, collection_name='posts', verbose_name=_('Topic')) (..) class Topic(db.Model): (..) last_post = db.ReferenceProperty(Post, collection_name='last_topic_post') Problem is ReferenceProperty must have Model class but Topic class is undeclared when declaring Post. The same will happen with Post class after switch. How to solve this? Thanks. A: ReferenceProperty accepts None in place of a model class, which means "no type restriction" on that field. It is not a nice solution, however. See: http://code.google.com/appengine/docs/python/datastore/typesandpropertyclasses.html#ReferenceProperty Having such cyclic references in your model is not a good idea IMHO. You should find your last_post on demand instead of storing a reference to it.
Appengine reference order
I have declared models in AppEngine's models.py: class Post(db.Model): topic = db.ReferenceProperty(Topic, collection_name='posts', verbose_name=_('Topic')) (..) class Topic(db.Model): (..) last_post = db.ReferenceProperty(Post, collection_name='last_topic_post') Problem is ReferenceProperty must have Model class but Topic class is undeclared when declaring Post. The same will happen with Post class after switch. How to solve this? Thanks.
[ "ReferenceProperty accepts None in place of a model class, which means \"no type restriction\" on that field. It is not a nice solution, however.\nSee:\nhttp://code.google.com/appengine/docs/python/datastore/typesandpropertyclasses.html#ReferenceProperty\nHaving such cyclic references in your model is not a good idea IMHO. You should find your last_post on demand instead of storing a reference to it.\n" ]
[ 2 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001804573_django_django_models_python.txt
Q: How to save double to file in python? Let's say I need to save a matrix (each line corresponds to one row) that could be loaded from fortran later. What method should I prefer? Is converting everything to strings the only approach? A: You can save them in binary format as well. Please see the documentation on the struct standard module, it has a pack function for converting Python object into binary data. For example: import struct value = 3.141592654 data = struct.pack('d', value) open('file.ext', 'wb').write(data) You can convert each element of your matrix and write to a file. Fortran should be able to load that binary data. You can speed up the process by converting a row as a whole, like this: row_data = struct.pack('d' * len(matrix_row), *matrix_row) Please note, that 'd' * len(matrix_row) is a constant for your matrix size, so you need to calculate that format string only once. A: I don't know fortran, so it's hard to tell what is easy for you to perform on that side for parsing. It sounds like your options are either saving the doubles in plaintext (meaning, 'converting' them to string), or in binary (using struct and the likes). The decision for which one is better depends. I would go with the plaintext solution, as it means the files will be easily readable, and you won't have to mess with different kinds of details (endianity, default double sizes). But, there are cases where binary is better (for example, if you have a really big list of doubles and space is of importance, or if it is easier for you to parse it and you need the optimization) - but this is likely not your case.
A: You can use JSON import json matrix = [[2.3452452435, 3.34134], [4.5, 7.9]] data = json.dumps(matrix) open('file.ext', 'wb').write(data) File content will look like: [[2.3452452435, 3.3413400000000002], [4.5, 7.9000000000000004]] A: If legibility and ease of access is important (and file size is reasonable), Fortran can easily parse a simple array of numbers, at least if it knows the size of the matrix beforehand (with something like READ(FILE_ID, '2(F)'), I think): 1.234 5.6789e4 3.1415 9.265358978 42 ... Two nested for loops in your Python code can easily write your matrix in this form.
How to save double to file in python?
Let's say I need to save a matrix (each line corresponds to one row) that could be loaded from fortran later. What method should I prefer? Is converting everything to strings the only approach?
[ "You can save them in binary format as well. Please see the documentation on the struct standard module, it has a pack function for converting Python object into binary data.\nFor example:\nimport struct\n\nvalue = 3.141592654\ndata = struct.pack('d', value)\nopen('file.ext', 'wb').write(data)\n\nYou can convert each element of your matrix and write to a file. Fortran should be able to load that binary data. You can speed up the process by converting a row as a whole, like this:\nrow_data = struct.pack('d' * len(matrix_row), *matrix_row)\n\nPlease note, that 'd' * len(matrix_row) is a constant for your matrix size, so you need to calculate that format string only once.\n", "I don't know fortran, so it's hard to tell what is easy for you to perform on that side for parsing.\nIt sounds like your options are either saving the doubles in plaintext (meaning, 'converting' them to string), or in binary (using struct and the likes). The decision for which one is better depends.\nI would go with the plaintext solution, as it means the files will be easily readable, and you won't have to mess with different kinds of details (endianity, default double sizes).\nBut, there are cases where binary is better (for example, if you have a really big list of doubles and space is of importance, or if it is easier for you to parse it and you need the optimization) - but this is likely not your case.\n", "You can use JSON\nimport json\nmatrix = [[2.3452452435, 3.34134], [4.5, 7.9]]\ndata = json.dumps(matrix)\nopen('file.ext', 'wb').write(data)\n\nFile content will look like:\n[[2.3452452435, 3.3413400000000002], [4.5, 7.9000000000000004]]\n\n", "If legibility and ease of access is important (and file size is reasonable), Fortran can easily parse a simple array of numbers, at least if it knows the size of the matrix beforehand (with something like READ(FILE_ID, '2(F)'), I think):\n1.234 5.6789e4\n3.1415 9.265358978\n42 ...\n\nTwo nested for loops in your Python code can easily write 
your matrix in this form.\n" ]
[ 6, 2, 1, 1 ]
[]
[]
[ "double", "file", "numbers", "python" ]
stackoverflow_0001804049_double_file_numbers_python.txt
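The `struct`-based answer above round-trips cleanly; here is a self-contained sketch that packs a small matrix row by row and unpacks it again. The matrix values are illustrative, and the explicit `'<'` byte-order prefix is an added assumption so the file layout is fixed regardless of platform:

```python
import struct

matrix = [[2.3452452435, 3.34134], [4.5, 7.9]]

# Pack each row as consecutive 8-byte doubles ('d'); Fortran can read
# this back as a stream of REAL*8 values, provided both sides agree on
# byte order.  '<' pins little-endian explicitly.
fmt = "<" + "d" * len(matrix[0])
blob = b"".join(struct.pack(fmt, *row) for row in matrix)

# Round-trip check: unpack the same bytes row by row.
size = struct.calcsize(fmt)
rows = [struct.unpack(fmt, blob[i:i + size])
        for i in range(0, len(blob), size)]
print(rows)  # [(2.3452452435, 3.34134), (4.5, 7.9)]
```

Because doubles are stored bit-for-bit, the round trip is exact, unlike text formats where repr/parse rounding can creep in.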
Q: Reading socket buffer using asyncore I'm new to Python (I have been programming in Java for multiple years now though), and I am working on a simple socket-based networking application (just for fun). The idea is that my code connects to a remote TCP end-point and then listens for any data being pushed from the server to the client, and perform some parsing on this. The data being pushed from server -> client is UTF-8 encoded text, and each line is delimited by CRLF (\x0D\x0A). You probably guessed: the idea is that the client connects to the server (until cancelled by the user), and then reads and parses the lines as they come in. I've managed to get this to work, however, I'm not sure that I'm doing this quite the right way. So hence my actual questions (code to follow): Is this the right way to do it in Python (ie. is it really this simple)? Any tips/tricks/useful resources (apart from the reference documentation) regarding buffers/asyncore? Currently, the data is being read and buffered as follows: def handle_read(self): self.ibuffer = b"" while True: self.ibuffer += self.recv(self.buffer_size) if ByteUtils.ends_with_crlf(self.ibuffer): self.logger.debug("Got full line including CRLF") break else: self.logger.debug("Buffer not full yet (%s)", self.ibuffer) self.logger.debug("Filled up the buffer with line") print(str(self.ibuffer, encoding="UTF-8")) The ByteUtils.ends_with_crlf function simply checks the last two bytes of the buffer for \x0D\x0A. The first question is the main one (answer is based on this), but any other ideas/tips are appreciated. Thanks. A: TCP is a stream, and you are not guaranteed that your buffer will not contain the end of one message and the beginning of the next. So, checking for \n\r at the end of the buffer will not work as expected in all situations. You have to check each byte in the stream. And, I would strongly recommend that you use Twisted instead of asyncore. 
Something like this (from memory, might not work out of the box): from twisted.internet import reactor, protocol from twisted.protocols.basic import LineReceiver class MyHandler(LineReceiver): def lineReceived(self, line): print "Got line:", line f = protocol.ClientFactory() f.protocol = MyHandler reactor.connectTCP("127.0.0.1", 4711, f) reactor.run() A: It's even simpler -- look at asynchat and its set_terminator method (and other helpful tidbits in that module). Twisted is orders of magnitude richer and more powerful, but, for sufficiently simple tasks, asyncore and asynchat (which are designed to interoperate smoothly) are indeed very simple to use, as you've started observing.
Reading socket buffer using asyncore
I'm new to Python (I have been programming in Java for multiple years now though), and I am working on a simple socket-based networking application (just for fun). The idea is that my code connects to a remote TCP end-point and then listens for any data being pushed from the server to the client, and perform some parsing on this. The data being pushed from server -> client is UTF-8 encoded text, and each line is delimited by CRLF (\x0D\x0A). You probably guessed: the idea is that the client connects to the server (until cancelled by the user), and then reads and parses the lines as they come in. I've managed to get this to work, however, I'm not sure that I'm doing this quite the right way. So hence my actual questions (code to follow): Is this the right way to do it in Python (ie. is it really this simple)? Any tips/tricks/useful resources (apart from the reference documentation) regarding buffers/asyncore? Currently, the data is being read and buffered as follows: def handle_read(self): self.ibuffer = b"" while True: self.ibuffer += self.recv(self.buffer_size) if ByteUtils.ends_with_crlf(self.ibuffer): self.logger.debug("Got full line including CRLF") break else: self.logger.debug("Buffer not full yet (%s)", self.ibuffer) self.logger.debug("Filled up the buffer with line") print(str(self.ibuffer, encoding="UTF-8")) The ByteUtils.ends_with_crlf function simply checks the last two bytes of the buffer for \x0D\x0A. The first question is the main one (answer is based on this), but any other ideas/tips are appreciated. Thanks.
[ "TCP is a stream, and you are not guaranteed that your buffer will not contain the end of one message and the beginning of the next. \nSo, checking for \\n\\r at the end of the buffer will not work as expected in all situations. You have to check each byte in the stream.\nAnd, I would strongly recommend that you use Twisted instead of asyncore.\nSomething like this (from memory, might not work out of the box):\nfrom twisted.internet import reactor, protocol\nfrom twisted.protocols.basic import LineReceiver\n\n\nclass MyHandler(LineReceiver):\n\n def lineReceived(self, line):\n print \"Got line:\", line\n\n\nf = protocol.ClientFactory()\nf.protocol = MyHandler\nreactor.connectTCP(\"127.0.0.1\", 4711, f)\nreactor.run()\n\n", "It's even simpler -- look at asynchat and its set_terminator method (and other helpful tidbits in that module). Twisted is orders of magnitude richer and more powerful, but, for sufficiently simple tasks, asyncore and asynchat (which are designed to interoperate smoothly) are indeed very simple to use, as you've started observing.\n" ]
[ 6, 6 ]
[]
[]
[ "asyncore", "buffer", "python", "sockets" ]
stackoverflow_0001804980_asyncore_buffer_python_sockets.txt
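The buffering pitfall raised in the accepted answer above — `recv()` chunks need not align with CRLF boundaries — is exactly what `asynchat`'s `set_terminator` handles for you. A transport-agnostic sketch of that splitting logic (class name illustrative):

```python
class LineBuffer:
    """Accumulate raw recv() chunks and emit complete CRLF-terminated
    lines, no matter where the chunk boundaries fall."""

    def __init__(self):
        self._buf = b""

    def feed(self, chunk):
        self._buf += chunk
        lines = []
        while b"\r\n" in self._buf:
            line, self._buf = self._buf.split(b"\r\n", 1)
            lines.append(line.decode("utf-8"))
        return lines

buf = LineBuffer()
print(buf.feed(b"hel"))        # [] -- no terminator seen yet
print(buf.feed(b"lo\r\nwor"))  # ['hello']
print(buf.feed(b"ld\r\n"))     # ['world']
```

Checking only the last two bytes of the buffer, as in the question's code, would miss the case where one chunk contains the end of one line plus the start of the next.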
Q: Lpr -module in Python How can you call lpr in Python? It is not in the sys -module which is surprising. I aim to use the lpr as follows shown by pseudo-code 10*i for i in range(77): lpr --pages(i,i+1) file.pdf A: First of, I don't understand your pseudo code. (What does 10*i for i in range(77): mean in this case?) Generally, you use subprocess.Popen to run external commands. ActiveState recipe 511505 shows an example specifically with lpr. Basically, you can invoke lpr like this: subprocess.Popen(['lpr', 'some_filename']) However: Depending on your version of lpr, there may not be an option to select a subset of all pages, or this functionality may be available only for e.g. dvi files. Edit: Since you seem to want to print selected pages of PDF files, have a look at the PDF toolkit. That software appears to provide splitting functionality. Also, make sure that directly printing PDF files is supported. You may need to convert the input to postscript first (e.g. using pdf2ps). Of course you can automate these tasks using subprocess.Popen as well. A: Just call it from the commandline: import commands for i in range(77): # I'm making no assumptions about lpr command syntax here. cmd = "lpr --pages(%s,%s) file.pdf" % (2*i, 2*i+1) commands.getoutput(cmd) Something like that. A: I haven't tried it, but pycups appears to be python bindings for cups. http://cyberelk.net/tim/software/pycups/
Lpr -module in Python
How can you call lpr in Python? It is not in the sys -module which is surprising. I aim to use the lpr as follows shown by pseudo-code 10*i for i in range(77): lpr --pages(i,i+1) file.pdf
[ "First of, I don't understand your pseudo code. (What does 10*i for i in range(77): mean in this case?)\nGenerally, you use subprocess.Popen to run external commands. ActiveState recipe 511505 shows an example specifically with lpr. Basically, you can invoke lpr like this:\nsubprocess.Popen(['lpr', 'some_filename'])\n\nHowever: Depending on your version of lpr, there may not be an option to select a subset of all pages, or this functionality may be available only for e.g. dvi files.\nEdit: Since you seem to want to print selected pages of PDF files, have a look at the PDF toolkit. That software appears to provide splitting functionality. Also, make sure that directly printing PDF files is supported. You may need to convert the input to postscript first (e.g. using pdf2ps). Of course you can automate these tasks using subprocess.Popen as well.\n", "Just call it from the commandline:\nimport commands\n\nfor i in range(77):\n # I'm making no assumptions about lpr command syntax here.\n cmd = \"lpr --pages(%s,%s) file.pdf\" % (2*i, 2*i+1)\n commands.getoutput(cmd)\n\nSomething like that.\n", "I haven't tried it, but pycups appears to be python bindings for cups.\nhttp://cyberelk.net/tim/software/pycups/\n" ]
[ 5, 2, 0 ]
[]
[]
[ "lpr", "printing", "python" ]
stackoverflow_0001804365_lpr_printing_python.txt
Q: Pythonic way to select first variable that evaluates to True I have some variables and I want to select the first one that evaluates to True, or else return a default value. For instance I have a, b, and c. My existing code: result = a if a else (b if b else (c if c else default)) Another approach I was considering: result = ([v for v in (a, b, c) if v] + [default])[0] But they both feel messy, so is there a more Pythonic way? A: Did you mean returning first value for what bool(value)==True? Then you can just rely on the fact that boolean operators return last evaluated argument: result = a or b or c or default A: If one variable is not "defined", you can't access its name. So any reference to 'a' raises a NameError Exception. In the other hand, if you have something like: a = None b = None c = 3 you can do default = 1 r = a or b or c or default # r value is 3 A: So long as default evaluates to True: result = next((x for x in (a, b, c, d , e, default) if x)) A: You could do something like this (in contrast to the other answers this is a solution where you don't have to define the 'missing' values as being either None or False): b = 6 c = 8 def first_defined(items): for x in items: try: return globals()[x] break except KeyError: continue print first_defined(["a", "b", "c"]) In order to avoid NameErrors when a, b or c isn't defined: give the function a list of strings instead of variable references (you can't pass non-existing references). If you are using variables outside the 'globals()' scope, you could use getattr with its default argument. -- If a, b and c are defined, I'd go for something like this (considering the fact that an empty string, None or False evaluate to a boolean False): a = None b = 6 c = 8 def firstitem(items): for x in items: if x: return x break else: continue print firstitem([a, b, c]) A: Don't know if this works in every case, but this works for this case. 
a = False b = "b" c = False default = "default" print a or b or c or default # b A: How about this ? a=None b=None c=None val= reduce(lambda x,y:x or y,(a,b,c,"default")) print val The above prints "default". If any of the inputs is defined, val would contain the first defined input. A: If by defined you mean ever assigned any value whatsoever to in any scope accessible from here, then trying to access an "undefined" variable will raise a NameError exception (or some subclass thereof, but catching NameError will catch the subclass too). So, the simplest way to perform, literally, the absolutely weird task you ask about, is: for varname in ('a', 'b', 'c'): try: return eval(varname) except NameError: pass return default Any alleged solution lacking a try/except won't work under the above meaning for "defined". Approaches based on exploring specific scopes will either miss other scopes, or be quite complex by trying to replicate the scope-ordering logic that eval does for you so simply. If by "defined" you actually mean "assigned a value that evaluates to true (as opposed to false)", i.e., all values are actually defined (but might happen to be false, and you want the first true value instead), then the already-proposed a or b or c or default becomes the simplest approach. But that's a totally different (and even weirder!) meaning for the word "defined"!-)
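The `or` chain and the `next()` answers above generalize into one small helper. This is a sketch (the name first_true is invented here); unlike `a or b or c or default` it copes with any number of candidates and with a default that is itself falsy:

```python
def first_true(values, default=None):
    """Return the first value in `values` that evaluates to True, else `default`."""
    return next((v for v in values if v), default)

a, b, c = None, "", 42
result = first_true((a, b, c), default="fallback")  # 42
```

With every candidate falsy the default comes back unchanged, even when the default is falsy too, which the bare `or` chain cannot express.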
Pythonic way to select first variable that evaluates to True
I have some variables and I want to select the first one that evaluates to True, or else return a default value. For instance I have a, b, and c. My existing code: result = a if a else (b if b else (c if c else default)) Another approach I was considering: result = ([v for v in (a, b, c) if v] + [default])[0] But they both feel messy, so is there a more Pythonic way?
[ "Did you mean returning first value for what bool(value)==True? Then you can just rely on the fact that boolean operators return last evaluated argument:\nresult = a or b or c or default\n\n", "If one variable is not \"defined\", you can't access its name. So any reference to 'a' raises a NameError Exception.\nIn the other hand, if you have something like:\na = None\nb = None\nc = 3\n\nyou can do\ndefault = 1\nr = a or b or c or default\n# r value is 3\n\n", "So long as default evaluates to True:\nresult = next((x for x in (a, b, c, d , e, default) if x))\n\n", "You could do something like this (in contrast to the other answers this is a solution where you don't have to define the 'missing' values as being either None or False):\nb = 6\nc = 8\n\ndef first_defined(items):\n for x in items:\n try:\n return globals()[x]\n break\n except KeyError:\n continue\n\nprint first_defined([\"a\", \"b\", \"c\"])\n\nIn order to avoid NameErrors when a, b or c isn't defined: give the function a list of strings instead of variable references (you can't pass non-existing references). If you are using variables outside the 'globals()' scope, you could use getattr with its default argument.\n-- \nIf a, b and c are defined, I'd go for something like this (considering the fact that an empty string, None or False evaluate to a boolean False):\na = None\nb = 6\nc = 8\n\ndef firstitem(items):\n for x in items:\n if x:\n return x\n break\n else:\n continue\n\nprint firstitem([a, b, c])\n\n", "Don't know if this works in every case, but this works for this case.\na = False\nb = \"b\"\nc = False\ndefault = \"default\"\nprint a or b or c or default # b\n\n", "How about this ? \na=None \nb=None \nc=None \nval= reduce(lambda x,y:x or y,(a,b,c,\"default\")) \nprint val \n\nThe above prints \"default\". 
If any of the inputs is defined, val would contain the first defined input.\n", "If by defined you mean ever assigned any value whatsoever to in any scope accessible from here, then trying to access an \"undefined\" variable will raise a NameError exception (or some subclass thereof, but catching NameError will catch the subclass too). So, the simplest way to perform, literally, the absolutely weird task you ask about, is:\nfor varname in ('a', 'b', 'c'):\n try: return eval(varname)\n except NameError: pass\nreturn default\n\nAny alleged solution lacking a try/except won't work under the above meaning for \"defined\". Approaches based on exploring specific scopes will either miss other scopes, or be quite complex by trying to replicate the scope-ordering logic that eval does for you so simply.\nIf by \"defined\" you actually mean \"assigned a value that evaluates to true (as opposed to false)\", i.e., all values are actually defined (but might happen to be false, and you want the first true value instead), then the already-proposed a or b or c or default becomes the simplest approach. But that's a totally different (and even weirder!) meaning for the word \"defined\"!-)\n" ]
[ 25, 17, 5, 2, 1, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001803302_python.txt
Q: Python JSON parse_float=decimal.Decimal not working I have a string with a floating point number in it, but I can't get JSON to load it as a decimal. x = u'{"14": [4.5899999999999999, "susan"]}' json.loads(x, parse_float = decimal.Decimal) This returns: {u'14': [Decimal('4.5899999999999999'), u'susan']} Any idea how I can make it into the actual "4.59"? A: You need to define a function that performs whatever rounding you desire, then uses the altered string to build the Decimal. Your current solution does work perfectly well: it just does exactly what you tell it to, i.e., use the entire string, as opposed to what you desire (and have not told either the code, or us;-). E.g.: >>> def doit(s): return decimal.Decimal(str(round(float(s), 2))) ... >>> json.loads(x, parse_float=doit) {u'14': [Decimal('4.59'), u'susan']} >>> A: You can't. That number isn't 4.59, it's 4.589999999999999999, as far as the json parser knows. You'd need to add some more complicated logic that rounds numbers like that, as a wrapper around decimal.Decimal.
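A variant of the accepted answer that stays inside decimal the whole way: Decimal.quantize rounds the constructed value directly, avoiding the str(round(float(s), 2)) round-trip through a float. The two-decimal-places target and the parse_money name are assumptions for illustration:

```python
import json
from decimal import Decimal, ROUND_HALF_UP

def parse_money(s):
    # json.loads hands parse_float the raw number string from the document,
    # so we can build the Decimal directly and round it ourselves.
    return Decimal(s).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

x = u'{"14": [4.5899999999999999, "susan"]}'
data = json.loads(x, parse_float=parse_money)
# data == {u'14': [Decimal('4.59'), u'susan']}
```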
Python JSON parse_float=decimal.Decimal not working
I have a string with a floating point number in it, but I can't get JSON to load it as a decimal. x = u'{"14": [4.5899999999999999, "susan"]}' json.loads(x, parse_float = decimal.Decimal) This returns: {u'14': [Decimal('4.5899999999999999'), u'susan']} Any idea how I can make it into the actual "4.59"?
[ "You need to define a function that performs whatever rounding you desire, then uses the altered string to build the Decimal. Your current solution does work perfectly well: it just does exactly what you tell it to, i.e., use the entire string, as opposed to what you desire (and have not told either the code, or us;-).\nE.g.:\n>>> def doit(s): return decimal.Decimal(str(round(float(s), 2)))\n... \n>>> json.loads(x, parse_float=doit)\n{u'14': [Decimal('4.59'), u'susan']}\n>>> \n\n", "You can't. That number isn't 4.59, it's 4.589999999999999999, as far as the json parser knows. You'd need to add some more complicated logic that rounds numbers like that, as a wrapper around decimal.Decimal.\n" ]
[ 11, 4 ]
[]
[]
[ "json", "python" ]
stackoverflow_0001805072_json_python.txt
Q: Django / Python / PIL / sorl-thumbnail generation in bulk - memory error I'm trying to bulk generate 4 thumbnails for each of around 40k images with sorl-thumbnail for my django app. I iterate through all django objects with an ImageWithThumbnailsFieldFile, and then call its generate_thumbnails() function. This works fine, except that after a few hundred iterations, I run out of memory and my loop crashes with 'memory error'. Since sorl-thumbnail uses PIL to generate thumbs, it seems to be that PIL doesn't return all of the memory it used when generating a thumb. Does anybody know how to avoid this problem, e.g. by forcing PIL to return the memory it no longer needs? my code simply looks like this: all = Picture.objects.all() for i in all: i.image.generate_thumbnails() The function generate-thumbnail starts here, line 129. Thanks in advance for any advice! Martin A: Your problem relates to how Django caches the results of a queryset as you loop through them. Django keeps all the objects in memory so that next time you iterate through the same queryset you don't have to hit the database again to get all the data. What you need to do is use the iterator() method. So: all = Picture.objects.all().iterator() for i in all: i.image.generate_thumbnails()
Django / Python / PIL / sorl-thumbnail generation in bulk - memory error
I'm trying to bulk generate 4 thumbnails for each of around 40k images with sorl-thumbnail for my django app. I iterate through all django objects with an ImageWithThumbnailsFieldFile, and then call its generate_thumbnails() function. This works fine, except that after a few hundred iterations, I run out of memory and my loop crashes with 'memory error'. Since sorl-thumbnail uses PIL to generate thumbs, it seems to be that PIL doesn't return all of the memory it used when generating a thumb. Does anybody know how to avoid this problem, e.g. by forcing PIL to return the memory it no longer needs? my code simply looks like this: all = Picture.objects.all() for i in all: i.image.generate_thumbnails() The function generate-thumbnail starts here, line 129. Thanks in advance for any advice! Martin
[ "Your problem relates to how Django caches the results of a queryset as you loop through them. Django keeps all the objects in memory so that next time you iterate through the same queryset you don't have to hit the database again to get all the data.\nWhat you need to do is use the iterator() method. So:\nall = Picture.objects.all().iterator()\nfor i in all:\n i.image.generate_thumbnails()\n\n" ]
[ 4 ]
[]
[]
[ "django", "memory", "python", "python_imaging_library", "sorl_thumbnail" ]
stackoverflow_0001805256_django_memory_python_python_imaging_library_sorl_thumbnail.txt
Q: How to fix value produced by Random? I've run into an issue in my code; any help would be great. this is the example code. from random import * from numpy import * r=array([uniform(-R,R),uniform(-R,R),uniform(-R,R)]) def Ft(r): for i in range(3): do something here, call r return something however I found that in python shell, every time I run function Ft, it gives me a different result.....seems like within the function, in each iteration of the for loop, calling r once, it gives new random numbers each time... rather than keeping the initial random numbers fixed when I call the function....how can I fix it? how about use b=copy(r) then call b in the Ft function? Thanks A: Do you mean that you want the calls to random.uniform() to return the same sequence of values each time you run the function? If so, you need to call random.seed() to set the start of the sequence to a fixed value. If you don't, the current system time is used to initialise the random number generator, which is intended to cause it to generate a different sequence every time. Something like this should work random.seed(42) # Set the random number generator to a fixed sequence. r = array([uniform(-R,R), uniform(-R,R), uniform(-R,R)]) A: I think you mean 'list' instead of 'array', you're trying to use functions when you really don't need to. If I understand you correctly, you want to edit a list of random floats: import random r=[random.uniform(-R,R) for x in range(3)] def ft(r): for i in range(len(r)): r[i]=???
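Putting the seeding answer above into a runnable sketch: draw r once (optionally behind a fixed seed for reproducibility) and only read it inside Ft, never calling uniform() again in there. The function names mirror the question; the body of Ft is a stand-in:

```python
import random

R = 1.0

def make_r(seed=None):
    """Draw the 3-element random vector once; pass a seed for a repeatable draw."""
    if seed is not None:
        random.seed(seed)
    return [random.uniform(-R, R) for _ in range(3)]

def Ft(r):
    # Only read the list handed in -- calling uniform() in here would
    # produce fresh random numbers on every loop iteration.
    return sum(x * x for x in r)

r1 = make_r(seed=42)
r2 = make_r(seed=42)  # same seed, so the same vector both times
```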
How to fix value produced by Random?
I've run into an issue in my code; any help would be great. this is the example code. from random import * from numpy import * r=array([uniform(-R,R),uniform(-R,R),uniform(-R,R)]) def Ft(r): for i in range(3): do something here, call r return something however I found that in python shell, every time I run function Ft, it gives me a different result.....seems like within the function, in each iteration of the for loop, calling r once, it gives new random numbers each time... rather than keeping the initial random numbers fixed when I call the function....how can I fix it? how about use b=copy(r) then call b in the Ft function? Thanks
[ "Do you mean that you want the calls to randon.uniform() to return the same sequence of values each time you run the function?\nIf so, you need to call random.seed() to set the start of the sequence to a fixed value. If you don't, the current system time is used to initialise the random number generator, which is intended to cause it to generate a different sequence every time.\nSomething like this should work\nrandom.seed(42) # Set the random number generator to a fixed sequence.\nr = array([uniform(-R,R), uniform(-R,R), uniform(-R,R)])\n\n", "I think you mean 'list' instead of 'array', you're trying to use functions when you really don't need to. If I understand you correctly, you want to edit a list of random floats:\n import random\n r=[random.uniform(-R,R) for x in range(3)]\n def ft(r):\n for i in range(len(r)):\n r[i]=???\n\n" ]
[ 15, 1 ]
[]
[]
[ "python", "random" ]
stackoverflow_0001805265_python_random.txt
Q: Scrapy spider index error This is the code for Spyder1 that I've been trying to write within Scrapy framework: from scrapy.contrib.spiders import CrawlSpider, Rule from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor from scrapy.selector import HtmlXPathSelector from scrapy.item import Item from firm.items import FirmItem class Spider1(CrawlSpider): domain_name = 'wc2' start_urls = ['http://www.whitecase.com/Attorneys/List.aspx?LastName=A'] rules = ( Rule(SgmlLinkExtractor(allow=["hxs.select( '//td[@class='altRow'][1]/a/@href').re('/.a\w+')"]), callback='parse'), ) def parse(self, response): hxs = HtmlXPathSelector(response) JD = FirmItem() JD['school'] = hxs.select( '//td[@class="mainColumnTDa"]').re('(?<=(JD,\s))(.*?)(\d+)' ) return JD SPIDER = Spider1() The regex in the rules successfully pulls all the bio urls that I want from the start url: >>> hxs.select( ... '//td[@class="altRow"][1]/a/@href').re('/.a\w+') [u'/cabel', u'/jacevedo', u'/jacuna', u'/aadler', u'/zahmedani', u'/tairisto', u '/zalbert', u'/salberts', u'/aaleksandrova', u'/malhadeff', u'/nalivojvodic', u' /kallchurch', u'/jalleyne', u'/lalonzo', u'/malthoff', u'/valvarez', u'/camon', u'/randerson', u'/eandreeva', u'/pangeli', u'/jangland', u'/mantczak', u'/darany i', u'/carhold', u'/marora', u'/garrington', u'/jartzinger', u'/sasayama', u'/ma sschenfeldt', u'/dattanasio', u'/watterbury', u'/jaudrlicka', u'/caverch', u'/fa yanruoh', u'/razar'] >>> But when I run the code I get [wc2] ERROR: Error processing FirmItem(school=[]) - [Failure instance: Traceback: <type 'exceptions.IndexError'>: list index out of range This is the FirmItem in Items.py from scrapy.item import Item, Field class FirmItem(Item): school = Field() pass Can you help me understand where the index error occurs? It seems to me that it has something to do with SgmLinkExtractor. I've been trying to make this spider work for weeks with Scrapy. 
They have an excellent tutorial but I am new to python and web programming so I don't understand how for instance SgmlLinkExtractor works behind the scene. Would it be easier for me to try to write a spider with the same simple functionality with Python libraries? I would appreciate any comments and help. Thanks A: SgmlLinkExtractor doesn't support selectors in its "allow" argument. So this is wrong: SgmlLinkExtractor(allow=["hxs.select('//td[@class='altRow'] ...')"]) This is right: SgmlLinkExtractor(allow=[r"product\.php"]) A: The parse function is called for each match of your SgmlLinkExtractor. As Pablo mentioned you want to simplify your SgmlLinkExtractor. A: I also tried to put the names scraped from the initial url into a list and then pass each name to parse in the form of absolute url as http://www.whitecase.com/aabbas (for /aabbas). The following code loops over the list, but I don't know how to pass this to parse . Do you think this is a better idea? baseurl = 'http://www.whitecase.com' names = ['aabbas', '/cabel', '/jacevedo', '/jacuna', '/igbadegesin'] def makeurl(baseurl, names): for x in names: url = baseurl + x baseurl = 'http://www.whitecase.com' x = '' return url
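One way to see the fix named in the first answer: the allow argument of SgmlLinkExtractor expects plain regular-expression strings applied to candidate URLs, not an hxs.select(...) expression. Below, the question's own pattern is checked offline against a few sample hrefs (the sample list is illustrative):

```python
import re

# This is what belongs in SgmlLinkExtractor(allow=[...]): a regex string,
# not a selector expression.
pattern = r'/.a\w+'

hrefs = [u'/cabel', u'/jacevedo', u'/aadler', u'/List.aspx?LastName=B']
matched = [h for h in hrefs if re.match(pattern, h)]
# matched == ['/cabel', '/jacevedo', '/aadler']
```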
Scrapy spider index error
This is the code for Spyder1 that I've been trying to write within Scrapy framework: from scrapy.contrib.spiders import CrawlSpider, Rule from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor from scrapy.selector import HtmlXPathSelector from scrapy.item import Item from firm.items import FirmItem class Spider1(CrawlSpider): domain_name = 'wc2' start_urls = ['http://www.whitecase.com/Attorneys/List.aspx?LastName=A'] rules = ( Rule(SgmlLinkExtractor(allow=["hxs.select( '//td[@class='altRow'][1]/a/@href').re('/.a\w+')"]), callback='parse'), ) def parse(self, response): hxs = HtmlXPathSelector(response) JD = FirmItem() JD['school'] = hxs.select( '//td[@class="mainColumnTDa"]').re('(?<=(JD,\s))(.*?)(\d+)' ) return JD SPIDER = Spider1() The regex in the rules successfully pulls all the bio urls that I want from the start url: >>> hxs.select( ... '//td[@class="altRow"][1]/a/@href').re('/.a\w+') [u'/cabel', u'/jacevedo', u'/jacuna', u'/aadler', u'/zahmedani', u'/tairisto', u '/zalbert', u'/salberts', u'/aaleksandrova', u'/malhadeff', u'/nalivojvodic', u' /kallchurch', u'/jalleyne', u'/lalonzo', u'/malthoff', u'/valvarez', u'/camon', u'/randerson', u'/eandreeva', u'/pangeli', u'/jangland', u'/mantczak', u'/darany i', u'/carhold', u'/marora', u'/garrington', u'/jartzinger', u'/sasayama', u'/ma sschenfeldt', u'/dattanasio', u'/watterbury', u'/jaudrlicka', u'/caverch', u'/fa yanruoh', u'/razar'] >>> But when I run the code I get [wc2] ERROR: Error processing FirmItem(school=[]) - [Failure instance: Traceback: <type 'exceptions.IndexError'>: list index out of range This is the FirmItem in Items.py from scrapy.item import Item, Field class FirmItem(Item): school = Field() pass Can you help me understand where the index error occurs? It seems to me that it has something to do with SgmLinkExtractor. I've been trying to make this spider work for weeks with Scrapy. 
They have an excellent tutorial but I am new to python and web programming so I don't understand how for instance SgmlLinkExtractor works behind the scene. Would it be easier for me to try to write a spider with the same simple functionality with Python libraries? I would appreciate any comments and help. Thanks
[ "SgmlLinkExtractor doesn't support selectors in its \"allow\" argument.\nSo this is wrong:\nSgmlLinkExtractor(allow=[\"hxs.select('//td[@class='altRow'] ...')\"])\n\nThis is right:\nSgmlLinkExtractor(allow=[r\"product\\.php\"])\n\n", "The parse function is called for each match of your SgmlLinkExtractor.\nAs Pablo mentioned you want to simplify your SgmlLinkExtractor.\n", "I also tried to put the names scraped from the initial url into a list and then pass each name to parse in the form of absolute url as http://www.whitecase.com/aabbas (for /aabbas). \nThe following code loops over the list, but I don't know how to pass this to parse . Do you think this is a better idea?\nbaseurl = 'http://www.whitecase.com'\nnames = ['aabbas', '/cabel', '/jacevedo', '/jacuna', '/igbadegesin']\n\ndef makeurl(baseurl, names):\n for x in names:\n url = baseurl + x\n baseurl = 'http://www.whitecase.com'\n x = ''\n return url\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python", "scrapy", "web_crawler" ]
stackoverflow_0001805050_python_scrapy_web_crawler.txt
Q: What is the performance cost of named keys or "pre-generated" keys in Google App Engine? If you used named keys in Google App Engine, does this incur any additional cost? Put another way, is it any more expensive to create a new entity with a named key rather than a randomly generated id? In a similar line of reasoning, I note that you can ask Google App Engine to give you a set of keys that will not be used by Google App Engine as auto generated keys? Would generating a large number of these keys result in reduced performance? These questions both bother me for the following reason. Let us say Google App Engine was attempting to persist entity A, and as such it is creating a key for A. It would seem intuitively, that when a new key is randomly generated, Google App Engine would need to first check if the key was already in existence. If the key already existed, then Google App Engine might need to generate another randomly generated new key. It would continue to do this until it succeeded in generating a unique new key. It would then assign this key to entity A. Alright, that is fine and good. My problem with this is it seems to imply that keys cause some sort of application level lock? This would be necessary when Google App Engine is checking if the randomly generated key already exists. This can't be right, as it isn't scalable at all? What is wrong about my reasoning? So, since this was long, I will re-iterate my 3 questions: Does Google App Engine create an application level lock when generating new keys? Do named keys incur any additional cost over automatically generated keys? If so, what cost (constant, linear, exponential,...)? Does asking app engine for keys that app engine promises not to use cause a degradation in key creation performance? If so, what would the cost for this be?
A: There is no intrinsic penalty to using a key name instead of an auto-generated ID, except the overhead of a (potentially) longer key on the entity and any ReferenceProperties that reference it. In certain cases, in fact, using auto-allocated IDs can have a performance penalty: If you insert new entities at a very high rate (several hundred per second), since all the new entities have IDs in the same range, they will all be written to the same Bigtable tablet, and can cause contention and increased timeouts. The vast majority of apps never have to worry about this, though. There's no performance impact to allocating as many IDs as you want - App Engine simply increases the ID counter by the number you request. (This is a simplification, but generally accurate). In answer to your concerns, App Engine doesn't randomly generate keys. It either uses an auto-allocated id, which is allocated using a counter, and thus guaranteed unique, or it uses the key you supplied. So in answer to your last 3 bullet points: No. Only in storage for the (potentially) longer keys No, and the cost is roughly O(1) regardless of how many you ask for.
What is the performance cost of named keys or "pre-generated" keys in Google App Engine?
If you used named keys in Google App Engine, does this incur any additional cost? Put another way, is it any more expensive to create a new entity with a named key rather than a randomly generated id? In a similar line of reasoning, I note that you can ask Google App Engine to give you a set of keys that will not be used by Google App Engine as auto generated keys? Would generating a large number of these keys result in reduced performance? These questions both bother me for the following reason. Let us say Google App Engine was attempting to persist entity A, and as such it is creating a key for A. It would seem intuitively, that when a new key is randomly generated, Google App Engine would need to first check if the key was already in existence. If the key already existed, then Google App Engine might need to generate another randomly generated new key. It would continue to do this until it succeeded in generating a unique new key. It would then assign this key to entity A. Alright, that is fine and good. My problem with this is it seems to imply that keys cause some sort of application level lock? This would be necessary when Google App Engine is checking if the randomly generated key already exists. This can't be right, as it isn't scalable at all? What is wrong about my reasoning? So, since this was long, I will re-iterate my 3 questions: Does Google App Engine create an application level lock when generating new keys? Do named keys incur any additional cost over automatically generated keys? If so, what cost (constant, linear, exponential,...)? Does asking app engine for keys that app engine promises not to use cause a degradation in key creation performance? If so, what would the cost for this be?
[ "There is no intrinsic penalty to using a key name instead of an auto-generated ID, except the overhead of a (potentially) longer key on the entity and any ReferenceProperties that reference it.\nIn certain cases, in fact, using auto-allocated IDs can have a performance penalty: If you insert new entities at a very high rate (several hundred per second), since all the new entities have IDs in the same range, they will all be written to the same Bigtable tablet, and can cause contention and increased timeouts. The vast majority of apps never have to worry about this, though.\nThere's no performance impact to allocating as many IDs as you want - App Engine simply increases the ID counter by the number you request. (This is a simplification, but generally accurate).\nIn answer to your concerns, App Engine doesn't randomly generate keys. It either uses an auto-allocated id, which is allocated using a counter, and thus guaranteed unique, or it uses the key you supplied. So in answer to your last 3 bullet points:\n\nNo.\nOnly in storage for the (potentially) longer keys\nNo, and the cost is roughly O(1) regardless of how many you ask for.\n\n" ]
[ 4 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001805555_google_app_engine_python.txt
Q: How to write a simple spider in Python? I've been trying to write this spider for weeks but without success. What is the best way for me to code this in Python: 1) Initial url: http://www.whitecase.com/Attorneys/List.aspx?LastName=A 2) from initial url pick up these urls with this regex: hxs.select('//td[@class="altRow"][1]/a/@href').re('/.a\w+') [u'/cabel', u'/jacevedo', u'/jacuna', u'/aadler', u'/zahmedani', u'/tairisto', u /zalbert', u'/salberts', u'/aaleksandrova', u'/malhadeff', u'/nalivojvodic', u' .... 3) Go to each of these urls and scrape the school info with this regex hxs.select('//td[@class="mainColumnTDa"]').re('(?<=(JD,\s))(.*?)(\d+)' [u'JD, ', u'University of Florida Levin College of Law, <em>magna cum laude</em> , Order of the Coif, Symposium Editor, Florida Law Review, Awards for highest grades in Comparative Constitutional History, Legal Drafting, Real Property and Sales, ', u'2007'] 4) Write the scraped school info into schools.csv file Can you help me write this spider in Python? I've been trying to write it in Scrapy but without success. See my previous question. Thank you. A: http://www.ibm.com/developerworks/linux/library/l-spider/ IBM article with good description or http://code.activestate.com/recipes/576551/ Python cookbook, better code but less explanation A: Also, I suggest you read: RegEx match open tags except XHTML self-contained tags Before you try to parse HTML with a regular expression. Then think about what happens the first time someone's name forces the page to be unicode instead of latin-1. EDIT: To answer your question about a library to use in Python, I would suggest Beautiful Soup, which is a great HTML parser and supports unicode throughout (and does a really good job with malformed HTML, which you're going to find all over the place).
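A minimal, offline sketch of steps 2 and 3 using only the standard library. The regexes and the sample HTML below are illustrative assumptions translated from the XPath/regex pairs in the question; a real crawler would fetch the pages with urllib2.urlopen and would be better served by an HTML parser such as Beautiful Soup than by raw regexes:

```python
import re

def extract_profile_links(html):
    # Step 2: pull the relative profile URLs out of the listing page.
    return re.findall(r'<td class="altRow"[^>]*>\s*<a href="(/\w+)"', html)

def extract_school(html):
    # Step 3: grab the text following "JD, " plus the trailing year.
    m = re.search(r'JD,\s(.*?)(\d{4})', html, re.S)
    return (m.group(1), m.group(2)) if m else None

listing = ('<td class="altRow"><a href="/cabel">A</a></td>'
           '<td class="altRow"><a href="/aadler">B</a></td>')
profile = '<td class="mainColumnTDa">JD, University of Florida, 2007</td>'

links = extract_profile_links(listing)   # ['/cabel', '/aadler']
school = extract_school(profile)         # ('University of Florida, ', '2007')
```

Step 4 is then just a csv.writer call writing one (url, school, year) row per scraped profile.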
How to write a simple spider in Python?
I've been trying to write this spider for weeks but without success. What is the best way for me to code this in Python: 1) Initial url: http://www.whitecase.com/Attorneys/List.aspx?LastName=A 2) from initial url pick up these urls with this regex: hxs.select('//td[@class="altRow"][1]/a/@href').re('/.a\w+') [u'/cabel', u'/jacevedo', u'/jacuna', u'/aadler', u'/zahmedani', u'/tairisto', u /zalbert', u'/salberts', u'/aaleksandrova', u'/malhadeff', u'/nalivojvodic', u' .... 3) Go to each of these urls and scrape the school info with this regex hxs.select('//td[@class="mainColumnTDa"]').re('(?<=(JD,\s))(.*?)(\d+)' [u'JD, ', u'University of Florida Levin College of Law, <em>magna cum laude</em> , Order of the Coif, Symposium Editor, Florida Law Review, Awards for highest grades in Comparative Constitutional History, Legal Drafting, Real Property and Sales, ', u'2007'] 4) Write the scraped school info into schools.csv file Can you help me write this spider in Python? I've been trying to write it in Scrapy but without success. See my previous question. Thank you.
[ "http://www.ibm.com/developerworks/linux/library/l-spider/ IBM article with good description \nor\nhttp://code.activestate.com/recipes/576551/ Python cookbook, better code but less explanation\n", "Also, I suggest you read:\nRegEx match open tags except XHTML self-contained tags\nBefore you try to parse HTML with a regular expression. Then think about what happens the first time someone's name forces the page to be unicode instead of latin-1.\nEDIT: To answer your question about a library to use in Python, I would suggest Beautiful Soup, which is a great HTML parser and supports unicode throughout (and does a really good job with malformed HTML, which you're going to find all over the place).\n" ]
[ 4, 0 ]
[]
[]
[ "python", "scrapy", "web_crawler" ]
stackoverflow_0001805231_python_scrapy_web_crawler.txt
Q: Django template URL function not matching in app I have a Django project set up with an app called pub. I'm trying to set it up so that I can include urls.py from each app (there will be more as I go) in the top-level urls.py. I've also got a template that uses the 'url' function to resolve a URL on a view, defined in the openidgae module. The problem is that after the httprequest is routed to pub.views.index (like it's supposed to), I try to respond by rendering a template that uses the template 'url' function. The code I'm showing below is also here: http://gist.github.com/243158 Here's my top-level urls.py: from django.conf.urls.defaults import * urlpatterns = patterns('', (r'', include('openidgae.urls')), (r'^pub', include('pub.urls')), ) and pub/urls.py: from django.conf.urls.defaults import * urlpatterns = patterns('', (r'', 'pub.views.index'), (r'^/$', 'pub.views.index'), ) and templates/base.html: <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <title>{% block title %}My amazing site{% endblock %}</title> </head> <body> <div id="header"> {% if lip %} Welcome {{ lip.pretty_openid }} <a href="{% url openidgae.views.LogoutSubmit %}">logout</a> {% else %} <form id="login-form" action="{% url openidgae.views.OpenIDStartSubmit %}?continue={{continueUrl}}" method="post"> <input type="text" name="openid_identifier" id="openid_identifier" /> <input type="submit" value="Verify" /> </form> <!-- BEGIN ID SELECTOR --> <script type="text/javascript" id="__openidselector" src="https://www.idselector.com/selector/46b0e6d0c8ba5c8617f6f5b970865604c9f87da5" charset="utf-8"></script> <!-- END ID SELECTOR --> {% endif %} </div> {% block content %}{% endblock %} </body> </html> and templates/pub/index.html: {% extends "base.html" %} {% block title %}blahblah!{% endblock %} {% block content %} blahblahblah {% endblock %} and finally, 
pub/views.py: from django.shortcuts import render_to_response from django.http import HttpResponse from django import forms import openidgae def index(request): lip = openidgae.get_current_person(request, HttpResponse()) resp = render_to_response('pub/index.html', {'lip': lip}) return resp Now, if I set the second pattern in my top-level urls.py to point directly to 'pub.views.index', all works like it should, but not if I use the include function. Any ideas? I'm sure the problem has something to do with the urlpattern that would map the views I'm trying to resolve to URLs not being available to the template rendering functions when the HttpRequest is handled by the pub app rather than by the top-level, but I don't understand why or how to fix it. A: I don't understand what the problem is that you're facing, but just by looking at your urls.py files, you should probably change the top level urls.py to something like from django.conf.urls.defaults import * urlpatterns = patterns('', (r'', include('openidgae.urls')), (r'^pub/', include('pub.urls')), ) and pub/urls.py to: from django.conf.urls.defaults import * urlpatterns = patterns('', (r'^$', 'pub.views.index'), ) if you then use {% url pub.views.index %} somewhere in your templates, you should get the correct url. A: You post a lot of code, but you haven't been very clear about what the actual problem is. First of all having (r'', 'pub.views.index'), (r'^/$', 'pub.views.index'), can give some problems if you want to find the url. What should the url be for pub.views.index? From what you say, this is actually not a problem, since you don't have a template tag that wants to reverse the mentioned view. You don't actually say what is going wrong. But a way to fix the above problem, if you want to keep the two urls, would be to use named urls.
I find it a bit redundant since you can just redirect example.com/pub to example.com/pub/, but you could transform the above to: url(r'', 'pub.views.index', name='pub_a'), url(r'^/$', 'pub.views.index', name='pub_b'), Doing this you are now able to reverse your url, as you can uniquely identify them doing {% url pub_a %} or {% url pub_b %}. This would also make your templates easier to read, as you can give names that mean something, so it's easier to remember what's going on, while being more concise.
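A sketch of the named-URL fix the answers converge on, in the Django 1.x-era patterns()/defaults syntax the question uses (later Django versions removed this API); the pattern name pub_index and the trailing-slash choice are this sketch's assumptions, not taken from the original project:

```python
# top-level urls.py (hypothetical; old-style Django syntax as in the question)
from django.conf.urls.defaults import *

urlpatterns = patterns('',
    (r'', include('openidgae.urls')),
    (r'^pub/', include('pub.urls')),   # trailing slash, so pub/urls.py matches on the remainder
)

# pub/urls.py -- a single named pattern, so the template tag can reverse it
urlpatterns = patterns('',
    url(r'^$', 'pub.views.index', name='pub_index'),
)
```

In a template this would be referenced as `{% url pub_index %}`, which avoids the ambiguity of having two unnamed patterns pointing at the same view.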
Django template URL function not matching in app
I have a Django project set up with an app called pub. I'm trying to set it up so that I can include urls.py from each app (there will be more as I go) in the top-level urls.py. I've also got a template that uses the 'url' function to resolve a URL on a view, defined in the openidgae module. The problem is that after the httprequest is routed to pub.views.index (like it's supposed to), I try to respond by rendering a template that uses the template 'url' function. The code I'm showing below is also here: http://gist.github.com/243158 Here's my top-level urls.py: from django.conf.urls.defaults import * urlpatterns = patterns('', (r'', include('openidgae.urls')), (r'^pub', include('pub.urls')), ) and pub/urls.py: from django.conf.urls.defaults import * urlpatterns = patterns('', (r'', 'pub.views.index'), (r'^/$', 'pub.views.index'), ) and templates/base.html: <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <title>{% block title %}My amazing site{% endblock %}</title> </head> <body> <div id="header"> {% if lip %} Welcome {{ lip.pretty_openid }} <a href="{% url openidgae.views.LogoutSubmit %}">logout</a> {% else %} <form id="login-form" action="{% url openidgae.views.OpenIDStartSubmit %}?continue={{continueUrl}}" method="post"> <input type="text" name="openid_identifier" id="openid_identifier" /> <input type="submit" value="Verify" /> </form> <!-- BEGIN ID SELECTOR --> <script type="text/javascript" id="__openidselector" src="https://www.idselector.com/selector/46b0e6d0c8ba5c8617f6f5b970865604c9f87da5" charset="utf-8"></script> <!-- END ID SELECTOR --> {% endif %} </div> {% block content %}{% endblock %} </body> </html> and templates/pub/index.html: {% extends "base.html" %} {% block title %}blahblah!{% endblock %} {% block content %} blahblahblah {% endblock %} and finally, pub/views.py: from django.shortcuts import 
render_to_response from django.http import HttpResponse from django import forms import openidgae def index(request): lip = openidgae.get_current_person(request, HttpResponse()) resp = render_to_response('pub/index.html', {'lip': lip}) return resp Now, if i set the second pattern in my top-level urls.py to point directly to 'pub.views.index', all works like it should, but not if I use the include function. Any ideas? I'm sure the problem has something to do with the urlpattern that would map the views I'm trying to resolve to URLs not being available to the template rendering functions when the HttpRequest is handled by the pub app rather than by the top-level, but I don't understand why or how to fix it.
[ "I don't understand what the problem is that you're facing, but just by looking at your urls.py files, you should probably change the top level urls.py to something like\nfrom django.conf.urls.defaults import *\n\nurlpatterns = patterns('',\n (r'', include('openidgae.urls')),\n (r'^pub/', include('pub.urls')), \n)\n\nand pub/urls.py to:\nfrom django.conf.urls.defaults import *\n\nurlpatterns = patterns('',\n (r'^$', 'pub.views.index'),\n)\n\nif you then use {% url pub.views.index %} somewhere in your templates, you should get the correct url.\n", "You post a lot of code, but you haven't been very clear about what the actual problem is. First of all having\n(r'', 'pub.views.index'),\n(r'^/$', 'pub.views.index'),\n\ncan give some problems if you want to find the url. What should the url be for pub.views.index? From what you say, this is actually not be a problem, since you don't have a template tag that want to reverse the mentioned view. You don't actually say what is going wrong. But a way to fix the above problem, if you want to keep the two urls, would be to used named urls. I find it a bit redundant since you can just redirect example.com/pub to example.com/pub/, but you could transform the above to:\nurl(r'', 'pub.views.index' name='pub_a'),\nurl(r'^/$', 'pub.views.index', name='pub_b'),\n\nDoing this you are now able to reverse your url, as you can uniquely identify them doing {% url pub_a %} or {% url pub_b %}. This would also make your templates easier to read, as you can give names that mean something, so it's easier to remember what's going on, while being more concise. \n" ]
[ 1, 0 ]
[]
[]
[ "django", "google_app_engine", "python" ]
stackoverflow_0001801165_django_google_app_engine_python.txt
Q: Where/How should I do validation and transformations on entities in Google App Engine? In Ruby on Rails, each model entity has a "validate_on_something" hook method that will be called before the entity is actually persisted to the database. I would like similar functionality in Google App Engine. I am aware that you can do validation on individual Properties by passing arguments to them in their declarations. However, if I wish to do more validation than that, is there some place within the model class declaration within which I can do that? Also, along the same lines, sometimes an entity needs modification before it is actually persisted to the database. I might need to modify (transform) the entity right before it is actually written to the database. Is there some place in the entity class declaration that would allow me to do so? I am aware that I can put these transformations/validations outside of the class. But this hardly seems like good OO design. It really seems like there should be hook methods that would automatically be called in a model for these sorts of needs. So my question is, what is the most appropriate way to handle the validation and transformation of entities before they are persisted? A: The best answer depends on what sort of transformations you need to do. There are no generalized pre-/post-put methods for models, but there are several other options: As you mentioned, you can pass validation functions to Property class constructors You can use a custom property class that generates values programmatically, such as this one. You can modify entities as they are stored at the lowest level using api call hooks. A: Are you using any kind of web framework on top of the raw app engine api's? Rails is a very high level framework. Have you looked into Django or any of the other web frameworks? You may find those are closer to rails than raw appengine entities. Alternatively, if you want something lower level, have a look at this article on hooks
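The Rails-style hook the question asks for is not part of the GAE db.Model API, but the pattern itself is easy to sketch framework-free. Everything below (class names, the save()/validate()/before_save() wiring, the _persist stand-in) is illustrative invention; in a real app _persist would be the datastore write, or the same logic would hang off the API-call hooks the first answer mentions:

```python
# Framework-agnostic sketch of a pre-persist validate/transform hook.
class ValidationError(Exception):
    pass

class HookedEntity:
    def validate(self):
        """Override: raise ValidationError to block persistence."""

    def before_save(self):
        """Override: last-chance transform before the entity is written."""

    def save(self):
        self.validate()      # Rails-style validation hook
        self.before_save()   # transform just before persisting
        self._persist()      # stand-in for the actual datastore put

    def _persist(self):
        self.saved = True    # pretend write, so the flow is observable


class Money(HookedEntity):
    def __init__(self, cents):
        self.cents = cents
        self.saved = False

    def validate(self):
        if self.cents < 0:
            raise ValidationError("amount must be non-negative")

    def before_save(self):
        self.cents = int(self.cents)  # normalize before writing
```

With this shape, callers never reach the write step on invalid data, and transforms always run exactly once per save.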
Where/How should I do validation and transformations on entities in Google App Engine?
In Ruby on Rails, each model entity has a "validate_on_something" hook method, that will be called before the entity is actually persisted to the database. I would like similar functionality in Google App Engine. I am aware that you can do validation on individual Properties by passing arguments to them in their declarations. However, if I wish to do more validation than that, is there some place within the model class declaration within which I can do that? Also, along the same lines, sometimes a entity needs modification before it is actually persisted to the database. I might need to modify (transform) the entity right before it is actually written to the database. Is there some place in the entity class declaration that would allow me to do so? I am aware that I can put these transformations/validations outside of the class. Bu this hardly seems like good OO design. It really seems like there should be hook methods that would automatically be called in a model for these sort of needs. So my question is, what is the most appropriate way to handle the validation and transformation of entities before they are persisted?
[ "The best answer depends on what sort of transformations you need to do. There's no generalized pre-/post- put methods for models, but there are several other options:\n\nAs you mentioned, you can pass validation functions to Property class constructors\nYou can use a custom property class that generates values programmatically, such as this one.\nYou can modify entities as they are stored at the lowest level using api call hooks.\n\n", "Are you using any kind of web framework on top of the raw app engine api's? Rails is a very high level framework. Have you looked into Django or any of the other web frameworks? You may find those are closer to rails than raw appengine entities.\nAlternatively, if you want something lower level, have a look at this article on \nhooks\n" ]
[ 2, 1 ]
[]
[]
[ "google_app_engine", "python", "transformation", "validation" ]
stackoverflow_0001805830_google_app_engine_python_transformation_validation.txt
Q: How to deal with rounding errors of floating types for financial calculations in Python SQLite? I'm creating a financial app and it seems my floats in sqlite are floating around. Sometimes a 4.0 will be a 4.000009, and a 6.0 will be a 6.00006, things like that. How can I make these more exact and not affect my financial calculations? Values are coming from Python if that matters. Not sure which area the messed up numbers are coming from. A: Please use Decimal http://docs.python.org/library/decimal.html A: Seeing as this is a financial application, if you only have calculations up to 2 or 3 decimal places, you can store all the data internally as integers, and only convert them to float for presentation purposes. E.g. 6.00 -> 600 4.35 -> 435 A: This is a common problem using SQLite as it does not have a Currency type. As S.Mark said you can use the Decimal representation library. However SQLite 3 only supports binary floating point numbers (sqlite type REAL) so you would have to store the Decimal encoded float as either TEXT or a BLOB or convert to REAL(but then you'd be back to a 64bit binary float) So consider the range of numbers that you need to represent and whether you need to be able to perform calculations from within the Database. You may be better off using a different DB which supports NUMERIC types e.g. MYSql, PostgreSQL, Firebird A: Most people would probably use Decimal for this, however if this doesn't map onto a database type you may take a performance hit. If performance is important you might want to consider using Integers to represent an appropriate currency unit - often cents or tenths of cents is ok. There should be business rules about how amounts are to be rounded in various situations and you should have tests covering each scenario. A: Use Decimal to manipulate your figures, then use pickle to save it and load it from SQLite as text, since it doesn't handle numeric types. 
Finally, use unittest and doctest; for financial operations, you want to ensure all the code does what it is supposed to do in any circumstances. You can't fix bugs on the way like with, let's say, a social network... A: You have to use decimal numbers. Decimal numbers can be represented exactly. In decimal floating point, 0.1 + 0.1 + 0.1 - 0.3 is exactly equal to zero. In binary floating point, the result is 5.5511151231257827e-017. So, just try decimal: import decimal
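One way to combine the Decimal and SQLite advice above is sqlite3's adapter/converter hooks: store amounts as TEXT and get exact Decimal values back out. The column type name NUMTEXT is an arbitrary label chosen for this sketch (any declared type name registered with a converter works when detect_types=PARSE_DECLTYPES is set):

```python
import sqlite3
from decimal import Decimal

# Decimal -> TEXT on the way into the database
sqlite3.register_adapter(Decimal, str)
# TEXT -> Decimal on the way out, keyed by the declared column type
sqlite3.register_converter("NUMTEXT", lambda b: Decimal(b.decode()))

def total(amounts):
    """Round-trip a list of Decimal amounts through SQLite and sum exactly."""
    conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
    conn.execute("CREATE TABLE ledger (amount NUMTEXT)")
    conn.executemany("INSERT INTO ledger VALUES (?)", [(a,) for a in amounts])
    rows = conn.execute("SELECT amount FROM ledger").fetchall()
    conn.close()
    return sum(value for (value,) in rows)
```

Unlike REAL columns, this never reintroduces binary floating point, so 0.10 + 0.10 + 0.10 comes back as exactly 0.30.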
How to deal with rounding errors of floating types for financial calculations in Python SQLite?
I'm creating a financial app and it seems my floats in sqlite are floating around. Sometimes a 4.0 will be a 4.000009, and a 6.0 will be a 6.00006, things like that. How can I make these more exact and not affect my financial calculations? Values are coming from Python if that matters. Not sure which area the messed up numbers are coming from.
[ "Please use Decimal \nhttp://docs.python.org/library/decimal.html\n", "Seeing as this is a financial application, if you only have calculations up to 2 or 3 decimal places, you can store all the data internally as integers, and only convert them to float for presentation purposes.\nE.g.\n6.00 -> 600\n4.35 -> 435\n\n", "This is a common problem using SQLite as it does not have a Currency type.\nAs S.Mark said you can use the Decimal representation library. However SQLite 3 only supports binary floating point numbers (sqlite type REAL) so you would have to store the Decimal encoded float as either TEXT or a BLOB or convert to REAL(but then you'd be back to a 64bit binary float)\nSo consider the range of numbers that you need to represent and whether you need to be able to perform calculations from within the Database.\nYou may be better off using a different DB which supports NUMERIC types e.g. MYSql, PostgreSQL, Firebird\n", "Most people would probably use Decimal for this, however if this doesn't map onto a database type you may take a performance hit.\nIf performance is important you might want to consider using Integers to represent an appropriate currency unit - often cents or tenths of cents is ok.\nThere should be business rules about how amounts are to be rounded in various situations and you should have tests covering each scenario.\n", "Use Decimal to manipulate your figures, then use pickle to save it and load it from SQLite as text, since it doesn't handle numeric types.\nFinaly, use unitest and doctest, for financial operations, you want to ensure all the code does what it is suppose to do in any circonstances. You can't fix bugs on the way like with, let's say, a social network...\n", "You have to use decimal numbers.\nDecimal numbers can be represented exactly. \nIn decimal floating point, 0.1 + 0.1 + 0.1 - 0.3 is exactly equal to zero. In binary floating point, the result is 5.5511151231257827e-017.\nSo, just try decimal:\nimport decimal\n\n" ]
[ 9, 4, 3, 1, 1, 0 ]
[]
[]
[ "floating_point", "python", "sqlite" ]
stackoverflow_0001801307_floating_point_python_sqlite.txt
Q: Problems PUTting binary data to Django I am trying to build a RESTful api with Django to share mp3s -- right up front: it's a toy app, never going into production, so it doesn't need to scale or worry (I hope) about copyright devils. My problem now is that I have a Django view that I want to be the endpoint for HTTP PUT requests. The headers of the PUT will contain the metadata, and the body will exclusively be the binary. Here's the actual view that I am (trying) to hit. Please note that logging indicates that control flow never enters the put() method, which I believe is correct, if not especially robust: class UserSong(RESTView): logging.debug('entering UserSong.put') def put(self, request, username=''): if request.META['Content-Type'] != 'octet/stream': raise Http400() title = request.META['X-BD-TITLE'] if 'X-BD-TITLE' in request.META else 'title unknown' artist = request.META['X-BD-ARTIST'] if 'X-BD-ARTIST' in request.META else 'artist unknown' album = request.META['X-BD-ALBUM'] if 'X-BD-ALBUM' in request.META else 'album unknown' song_data = b6decode(request.raw_post_data) song = Song(title=title, artist=artist, playcount=playcount, is_sample=is_sample, song_data=song_data, album=album) song.save() return HttpResponse('OK', 'text/plain' , 201) def __call__(self, request, *args, **kwargs): logging.basicConfig(filename=LOGFILE,level=logging.DEBUG) try: if request.method == 'DELETE': return self.delete(request, *args, **kwargs) elif request.method == 'GET': return self.get(request, *args, **kwargs) elif request.method == 'POST': return self.post(request, *args, **kwargs) elif request.method == 'PUT': return self.put(request, *args, **kwargs) except: raise Http404() In testing this, I was able to get unittests to pass using Django's unittesting framework, but I do not trust that it was accurately mimicking Real Life. So, I cracked open httplib, and constructed a PUT my own self. 
This is that code, which I executed interactively: >>>method = 'PUT' >>>url = 'accounts/test/songs/' >>>f = open('/Users/bendean/Documents/BEARBOT.mp3') >>>data = f.read() >>>body = data >>>headers = {'X-BD-ARTIST' : 'BEARBOT' , 'X-BD-ALBUM':'','X-BD-TITLE':'LightningSPRKS'} >>>headers['CONTENT-TYPE'] = 'octet/stream' >>>import httplib >>>c = httplib.HTTPConnection('localhost:8000') >>>c.request(method, url, body, headers) the response I get is not pretty Traceback (most recent call last): File "<console>", line 1, in <module> File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 880, in request File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 914, in _send_request File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 719, in send File "<string>", line 1, in sendall error: [Errno 54] Connection reset by peer though sometimes I get Traceback (most recent call last): File "<console>", line 1, in <module> File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 880, in request File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 914, in _send_request File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 719, in send File "<string>", line 1, in sendall error: [Errno 32] Broken pipe I'm fairly confident that my URLs are working (the GET handler is doing just fine, thank you). Logging indicates that the request is not actually making it to the handler code. googling around brings me to issue trackers suggesting that the issue is in httplib's handling of an error while uploading a big file (this one is 3.7 mb). So, I am not ashamed to admit that I am out of my depth here-- how can I determine what is causing the error? Am I formatting my request properly (p.s. I also tried b64encoding the body, with the same results)?. 
In a larger sense, is what I'm doing (to test, not in life) reasonable? Does it have anything to do with configurable settings on the dev server? Would these problems go away if I were to try putting this on Apache? Your help is very much appreciated. A: It does appear that the issue is in the dev server's handling of large requests. After deploying to apache with mod_wsgi, this problem goes away. Still lots of open questions for me about RESTful file uploads...
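Independent of the dev-server issue the answer identifies, two details of the interactive session are worth fixing on the client side: the file should be opened in binary mode ('rb'), and the standard MIME type is application/octet-stream, not octet/stream. A sketch of the corrected PUT, using a stdlib request handler as a stand-in for the Django view so the round trip can be checked locally (the URL path and X-BD-* headers follow the question; the handler itself is not the actual app):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

received = {}  # what the stand-in "view" saw, for inspection

class PutHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        length = int(self.headers["Content-Length"])
        received["body"] = self.rfile.read(length)
        received["title"] = self.headers.get("X-BD-TITLE", "title unknown")
        self.send_response(201)
        self.end_headers()
        self.wfile.write(b"OK")

    def log_message(self, *args):  # keep the demo quiet
        pass

def put_song(host, port, data, title):
    """PUT raw bytes with explicit Content-Length and a proper MIME type."""
    conn = http.client.HTTPConnection(host, port)
    headers = {
        "Content-Type": "application/octet-stream",
        "Content-Length": str(len(data)),
        "X-BD-TITLE": title,
    }
    conn.request("PUT", "/accounts/test/songs/", body=data, headers=headers)
    resp = conn.getresponse()
    status = resp.status
    conn.close()
    return status
```

In real use, data would come from open('song.mp3', 'rb').read(); text mode can silently corrupt binary payloads on some platforms.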
Problems PUTting binary data to Django
I am trying to build a RESTful api with Django to share mp3s -- right up front: it's a toy app, never going into production, so it doesn't need to scale or worry (I hope) about copyright devils. My problem now is that I have a Django view that I want to be the endpoint for HTTP PUT requests. The headers of the PUT will contain the metadata, and the body will exclusively be the binary. Here's the actual view that I am (trying) to hit. Please note that logging indicates that control flow never enters the put() method, which I believe is correct, if not especially robust: class UserSong(RESTView): logging.debug('entering UserSong.put') def put(self, request, username=''): if request.META['Content-Type'] != 'octet/stream': raise Http400() title = request.META['X-BD-TITLE'] if 'X-BD-TITLE' in request.META else 'title unknown' artist = request.META['X-BD-ARTIST'] if 'X-BD-ARTIST' in request.META else 'artist unknown' album = request.META['X-BD-ALBUM'] if 'X-BD-ALBUM' in request.META else 'album unknown' song_data = b6decode(request.raw_post_data) song = Song(title=title, artist=artist, playcount=playcount, is_sample=is_sample, song_data=song_data, album=album) song.save() return HttpResponse('OK', 'text/plain' , 201) def __call__(self, request, *args, **kwargs): logging.basicConfig(filename=LOGFILE,level=logging.DEBUG) try: if request.method == 'DELETE': return self.delete(request, *args, **kwargs) elif request.method == 'GET': return self.get(request, *args, **kwargs) elif request.method == 'POST': return self.post(request, *args, **kwargs) elif request.method == 'PUT': return self.put(request, *args, **kwargs) except: raise Http404() In testing this, I was able to get unittests to pass using Django's unittesting framework, but I do not trust that it was accurately mimicking Real Life. So, I cracked open httplib, and constructed a PUT my own self. 
This is that code, which I executed interactively: >>>method = 'PUT' >>>url = 'accounts/test/songs/' >>>f = open('/Users/bendean/Documents/BEARBOT.mp3') >>>data = f.read() >>>body = data >>>headers = {'X-BD-ARTIST' : 'BEARBOT' , 'X-BD-ALBUM':'','X-BD-TITLE':'LightningSPRKS'} >>>headers['CONTENT-TYPE'] = 'octet/stream' >>>import httplib >>>c = httplib.HTTPConnection('localhost:8000') >>>c.request(method, url, body, headers) the response I get is not pretty Traceback (most recent call last): File "<console>", line 1, in <module> File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 880, in request File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 914, in _send_request File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 719, in send File "<string>", line 1, in sendall error: [Errno 54] Connection reset by peer though sometimes I get Traceback (most recent call last): File "<console>", line 1, in <module> File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 880, in request File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 914, in _send_request File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 719, in send File "<string>", line 1, in sendall error: [Errno 32] Broken pipe I'm fairly confident that my URLs are working (the GET handler is doing just fine, thank you). Logging indicates that the request is not actually making it to the handler code. googling around brings me to issue trackers suggesting that the issue is in httplib's handling of an error while uploading a big file (this one is 3.7 mb). So, I am not ashamed to admit that I am out of my depth here-- how can I determine what is causing the error? Am I formatting my request properly (p.s. I also tried b64encoding the body, with the same results)?. 
In a larger sense, is what I'm doing (to test, not in life) reasonable? Does it have anything to do with configurable settings on the dev server? Would these problems go away if I were to try putting this on Apache? Your help is very much appreciated.
[ "It does appear that the issue is in the dev server's handling of large requests. After deploying to apache with mod_wsgi, this problem goes away. Still lots of open questions for me about RESTful file uploads...\n" ]
[ 0 ]
[]
[]
[ "django", "http", "python" ]
stackoverflow_0001793556_django_http_python.txt
Q: Calculate percent at runtime I have this problem where I have to "audit" a percent of my transtactions. If percent is 100 I have to audit them all, if is 0 I have to skip them all and if 50% I have to review the half etc. The problem ( or the opportunity ) is that I have to perform the check at runtime. What I tried was: audit = 100/percent So if percent is 50 audit = 100 / 50 ( which is 2 ) So I have to audit 1 and skip 1 audit 1 and skip 1 .. If is 30 audit = 100 / 30 ( 3.3 ) I audit 2 and skip the third. Question I'm having problems with numbers beyond 50% ( like 75% ) because it gives me 1.333, ... When would be the correct algorithm to know how many to audit as they go?... I also have problems with 0 ( due to division by 0 :P ) but I have fixed that already, and with 100 etc. Any suggestion is greatly appreciated. A: Why not do it randomly. For each transaction, pick a random number between 0 and 100. If that number is less than your "percent", then audit the transaction. If the number is greater than your "percent", then don't. I don't know if this satisfies your requirements, but over an extended period of time, you will have the right percentage audited. If you need an exact "skip 2, audit one, skip 2 audit one" type of algorithm, you'll likely have luck adapting a line-drawing algorithm. A: Try this: 1) Keep your audit percentage as a decimal. 2) For every transaction, associate a random number (between 0 and 1) with it 3) If the random number is less than the percentage, audit the transaction. A: To follow your own algorithm: just keep adding that 1.333333 (or other quotient) to a counter. Have two counters: an integer one and a real one. 
If the truncated part of the real counter = the integer counter, the audit is carried out, otherwise it isn't, like this: Integer counter Real counter 1 1.333333: audit transaction 2 2.666666: audit transaction 3 3.999999: audit transaction 4 truncated(5.333333) = 5 > 4 => do NOT audit transaction 5 5.333333: audit transaction Only increment the real counter when its truncated version = the integer counter. Always increment the integer counter. In code: var p, pc: double; c: integer; begin p := 100 / Percentage; pc := p; for c := 1 to NrOfTransactions do begin if trunc(pc) = c then begin pc := pc + p; Do audit on transaction c end end; end; A: if percent >= random.randint(1,100): print("audit") else: print("skip") A: If you need to audit these transactions in real time (as they are received) perhaps you could use a random number generator to check if you need to audit the transaction. So if for example you want to audit 50% of transactions, for every transaction received you would generate a random number between 0 and 1, and if the number was greater than 0.5, audit that transaction. While for low numbers this would not work, for large numbers of transactions this would give you very close to the required percentage. This is better than your initial suggestion because it does not allow a method to 'game' the audit process - if you are auditing every second transaction this allows bad transactions to slip through. Another possibility is to keep a running total of the total transactions and as this changes the total number of transactions that need to be audited (according to your percentage) you can pipe transactions into the auditing process. This however still opens the slight possibility of someone detecting the pattern and circumventing the audit.
// setup int transactionCount = 0; int auditCount = 0; double targetAuditRatio = auditPercent/100.0; // start of processing transactionCount++; double actualAuditRatio = (double) auditCount / transactionCount; // cast, or integer division always yields 0 if (actualAuditRatio < targetAuditRatio) { auditCount++; // do audit } // do processing A: You can constantly "query" each audit using a counter. For example ctr = 0; percent = 50 while(1) { ctr += percent; if (ctr >= 100) { audit; ctr = ctr - 100; } else skip } You can use floats (however this will bring some unpredictability) or multiply 100 percent by sth to get better resolution. There is really no need to use a random number generator. A: Not tested, but in the random module there is a function sample. If transactions were a list of transactions, you would do something like: import random to_be_audited = random.sample(transactions, len(transactions)*percentage/100) This would generate a list to_be_audited which would be a random, non-duplicating sample of the transactions. See documentation on random
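The counter-based answers above all amount to a Bresenham-style error accumulator, which can be written without floats at all; a minimal sketch, assuming percent is an integer from 0 to 100:

```python
def audit_flags(percent, n):
    """Yield True for each transaction that should be audited.

    Accumulate `percent` per transaction; every time the accumulator
    crosses 100, audit and subtract 100. Over any run of 100
    transactions this audits exactly `percent` of them, with the audits
    spread as evenly as integer arithmetic allows.
    """
    acc = 0
    for _ in range(n):
        acc += percent
        if acc >= 100:
            acc -= 100
            yield True
        else:
            yield False
```

Because only integers are involved, the 75% case that produced the awkward 1.333... quotient works exactly: over every 4 transactions, 3 are audited.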
Calculate percent at runtime
I have this problem where I have to "audit" a percent of my transtactions. If percent is 100 I have to audit them all, if is 0 I have to skip them all and if 50% I have to review the half etc. The problem ( or the opportunity ) is that I have to perform the check at runtime. What I tried was: audit = 100/percent So if percent is 50 audit = 100 / 50 ( which is 2 ) So I have to audit 1 and skip 1 audit 1 and skip 1 .. If is 30 audit = 100 / 30 ( 3.3 ) I audit 2 and skip the third. Question I'm having problems with numbers beyond 50% ( like 75% ) because it gives me 1.333, ... When would be the correct algorithm to know how many to audit as they go?... I also have problems with 0 ( due to division by 0 :P ) but I have fixed that already, and with 100 etc. Any suggestion is greatly appreciated.
[ "Why not do it randomly. For each transaction, pick a random number between 0 and 100. If that number is less than your \"percent\", then audit the transaction. If the number is greater than your \"percent\", then don't. I don't know if this satisfies your requirements, but over an extended period of time, you will have the right percentage audited.\nIf you need an exact \"skip 2, audit one, skip 2 audit one\" type of algorithm, you'll likely have luck adapting a line-drawing algorithm. \n", "Try this:\n1) Keep your audit percentage as a decimal.\n2) For every transaction, associate a random number (between 0 and 1) with it\n3) If the random number is less than the percentage, audit the transaction.\n", "To follow your own algorithm: just keep adding that 1.333333 (or other quotient) to a counter.\nHave two counters: an integer one and a real one. If the truncated part of the real counter = the integer counter, the audit is carried out, otherwise it isn't, like this:\nInteger counter Real counter\n\n1 1.333333: audit transaction\n2 2.666666: audit transaction\n3 3.999999: audit transaction\n4 truncated(5.333333) = 5 > 4 => do NOT audit transaction\n5 5.333333: audit transaction\n\nOnly increment the real counter when its truncated version = the integer counter. 
Always increment the integer counter.\nIn code:\nvar p, pc: double;\n c: integer;\nbegin\n p := 100 / Percentage;\n pc := p;\n for c := 1 to NrOfTransactions do begin\n if trunc(pc) = c then begin\n pc := pc + p;\n Do audit on transaction c\n end \n end;\nend;\n\n", " if percent > random.randint(1,100):\n print(\"audit\")\n else:\n print(\"skip\")\n\n", "If you need to audit these transactions in real time (as they are received) perhaps you could use a random number generator to check if you need to audit the transaction.\nSo if for example you want to audit 50% of transactions, for every transaction received you would generate a random number between 0 and 1, and if the number was greater than 0.5, audit that transaction.\nWhile for low numbers this would not work, for large numbers of transactions this would give you very close to the required percentage.\nThis is better than your initial suggestion because if does not allow a method to 'game' the audit process - if you are auditing every second transaction this allows bad transactions to slip through.\nAnother possibility is to keep a running total of the total transactions and as this changes the total number of transactions that need to be audited (according to your percentage) you can pipe transactions into the auditing process. This however still opens the slight possibility of someone detecting the pattern and circumventing the audit.\n", "For a high throughput system the random method is best, but if you don't want randomness, the this algorithm will do the job. Don't forget to test it in a unit test!\n// setup\nint transactionCount = 0;\nint auditCount = 0;\ndouble targetAuditRatio = auditPercent/100.0;\n\n// start of processing\ntransactionCount++;\ndouble actualAuditRatio = auditCount/transactionCount;\n\nif (actualAuditRatio < targetAuditRatio) {\n auditCount++;\n // do audit\n}\n// do processing\n\n", "You can constantly \"query\" each audit using counter. 
For example\nctr = 0;\npercent = 50\nwhile(1) {\n ctr += percent;\n if (ctr >= 100) {\n audit;\n ctr = ctr - 100;\n } else\n skip\n}\n\nYou can use floats (however this will bring some unpredictability) or multiply 100 percent by sth to get better resolution.\nThere is really no need to use random number generator.\n", "Not tested, but in the random module there is a function sample. If transactions was a list of transactions, you would do something like:\nimport random\n\nto_be_audited = random.sample(transactions,len(transactions*100/percentage))\n\nThis would generate a list to_be_audited which would be a random, non-duplicating sample of the transactions.\nSee documentation on random\n" ]
[ 17, 3, 3, 2, 1, 1, 0, 0 ]
[]
[]
[ "algorithm", "c#", "java", "language_agnostic", "python" ]
stackoverflow_0001806143_algorithm_c#_java_language_agnostic_python.txt
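The counter-based answers above (the Delphi loop and the `ctr += percent` loop) share one idea: accumulate the percentage per transaction and audit whenever the accumulator crosses 100, so no random numbers are needed. A minimal Python sketch of that idea (the function name is mine, not from any answer):

```python
def audit_flags(n_transactions, percent):
    """Deterministically mark `percent`% of transactions for audit.

    Adds `percent` to an accumulator for each transaction and fires
    an audit each time the accumulator reaches 100, so the audited
    fraction converges exactly over long runs without any randomness.
    """
    counter = 0
    for _ in range(n_transactions):
        counter += percent
        if counter >= 100:
            counter -= 100
            yield True   # audit this transaction
        else:
            yield False  # skip it

# With percent=50 the pattern is: skip, audit, skip, audit, ...
```

Over any window of 100 transactions this audits exactly `percent` of them (for integer percentages); the random-number answers trade that exactness for a pattern that is harder to game.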
Q: Suppose I have 2 vectors. What algorithms can I use to compare them? Company 1 has this vector: ['books','video','photography','food','toothpaste','burgers'] ... ... Company 2 has this vector: ['video','processor','photography','LCD','power supply', 'books'] ... ... Suppose this is a frequency distribution (I could make it a tuple but too much to type). As you can see...these vectors have things that overlap. "video" and "photography" seem to be "similar" between two vectors due to the fact that they are in similar positions. And..."books" is obviously a strong point for company 1. Ordering and positioning does matter, as this is a frequency distribution. What algorithms could you use to play around with this? What algorithms could you use that could provide valuable data for these companies, using these vectors? I am new to text-mining and information-retrieval. Could someone guide me about those topics in relation to this question? A: I would suggest you a book called Programming Collective Intelligence. It's a very nice book on how you can retrieve information from simple data like this one. There are code examples included (in Python :) Edit: Just replying to gbjbaanb: This is Python! a = ['books','video','photography','food','toothpaste','burgers'] b = ['video','processor','photography','LCD','power supply', 'books'] a = set(a) b = set(b) a.intersection(b) set(['photography', 'books', 'video']) b.intersection(a) set(['photography', 'books', 'video']) b.difference(a) set(['LCD', 'power supply', 'processor']) a.difference(b) set(['food', 'toothpaste', 'burgers']) A: Is position is very important, as you emphasize, then the crucial metric will be based on the difference of positions between the same items in the different vectors (you can, for example, sum the absolute values of the differences, or their squares). 
The big issue that needs to be solved is -- how much to weigh an item that's present (say it's the N-th one) in one vector, and completely absent in the other. Is that a relatively minor issue -- as if the missing item was actually present right after the actual ones, for example -- or a really, really big deal? That's impossible to say without more understanding of the actual application area. You can try various ways to deal with this issue and see what results they give on example cases you care about! For example, suppose "a missing item is roughly the same as if it were present, right after the actual ones". Then, you can preprocess each input vector into a dict mapping item to position (crucial optimization if you have to compare many pairs of input vectors!): def makedict(avector): return dict((item, i) for i, item in enumerate(avector)) and then, to compare two such dicts: def comparedicts(d1, d2): allitems = set(d1) | set(d2) distances = [d1.get(x, len(d1)) - d2.get(x, len(d2)) for x in allitems] return sum(d * d for d in distances) (or, abs(d) instead of the squaring in the last statement). To make missing items weigh more (make dicts, i.e. vectors, be considered further away), you could use twice the lengths instead of just the lengths, or some large constant such as 100, in an otherwise similarly structured program. A: Take a look at Hamming Distance A: As mbg mentioned, the hamming distance is a good start. It's basically assigning a bitmask for every possible item whether it is contained in the companies value. Eg. toothpaste is 1 for company A, but 0 for company B. You then count the bits which differ between the companies. The Jaccard coefficient is related to this. Hamming distance will actually not be able to capture similarity between things like "video" and "photography". Obviously, a company that sells one does sell the other also with higher probability than a company that sells toothpaste. 
For this, you can use stuff like LSI (it's also used for dimensionality reduction) or factorial codes (e.g. neural network stuff as Restricted Boltzman Machines, Autoencoders or Predictablity Minimization) to get more compact representations which you can then compare using the euclidean distance. A: pick the rank of each entry (higher rank is better) and make sum of geometric means between matches for two vectors sum(sqrt(vector_multiply(x,y))) //multiply matches Sum of ranks for each value over vector should be same for each vector (preferrebly 1) That way you can make compares between more than 2 vectors. If you apply ikkebr's metod you can find how a is simmilar to b in that case just use sum( b( b.intersection(a) ))
Suppose I have 2 vectors. What algorithms can I use to compare them?
Company 1 has this vector: ['books','video','photography','food','toothpaste','burgers'] ... ... Company 2 has this vector: ['video','processor','photography','LCD','power supply', 'books'] ... ... Suppose this is a frequency distribution (I could make it a tuple but too much to type). As you can see...these vectors have things that overlap. "video" and "photography" seem to be "similar" between two vectors due to the fact that they are in similar positions. And..."books" is obviously a strong point for company 1. Ordering and positioning does matter, as this is a frequency distribution. What algorithms could you use to play around with this? What algorithms could you use that could provide valuable data for these companies, using these vectors? I am new to text-mining and information-retrieval. Could someone guide me about those topics in relation to this question?
[ "I would suggest you a book called Programming Collective Intelligence. It's a very nice book on how you can retrieve information from simple data like this one. There are code examples included (in Python :)\nEdit:\nJust replying to gbjbaanb: This is Python!\na = ['books','video','photography','food','toothpaste','burgers']\nb = ['video','processor','photography','LCD','power supply', 'books']\na = set(a)\nb = set(b)\n\na.intersection(b)\n set(['photography', 'books', 'video'])\n\nb.intersection(a)\n set(['photography', 'books', 'video'])\n\nb.difference(a)\n set(['LCD', 'power supply', 'processor'])\n\na.difference(b)\n set(['food', 'toothpaste', 'burgers'])\n\n\n", "Is position is very important, as you emphasize, then the crucial metric will be based on the difference of positions between the same items in the different vectors (you can, for example, sum the absolute values of the differences, or their squares). The big issue that needs to be solved is -- how much to weigh an item that's present (say it's the N-th one) in one vector, and completely absent in the other. Is that a relatively minor issue -- as if the missing item was actually present right after the actual ones, for example -- or a really, really big deal? That's impossible to say without more understanding of the actual application area. You can try various ways to deal with this issue and see what results they give on example cases you care about!\nFor example, suppose \"a missing item is roughly the same as if it were present, right after the actual ones\". 
Then, you can preprocess each input vector into a dict mapping item to position (crucial optimization if you have to compare many pairs of input vectors!):\ndef makedict(avector):\n return dict((item, i) for i, item in enumerate(avector))\n\nand then, to compare two such dicts:\ndef comparedicts(d1, d2):\n allitems = set(d1) | set(d2) \n distances = [d1.get(x, len(d1)) - d2.get(x, len(d2)) for x in allitems]\n return sum(d * d for d in distances)\n\n(or, abs(d) instead of the squaring in the last statement). To make missing items weigh more (make dicts, i.e. vectors, be considered further away), you could use twice the lengths instead of just the lengths, or some large constant such as 100, in an otherwise similarly structured program.\n", "Take a look at Hamming Distance\n", "As mbg mentioned, the hamming distance is a good start. It's basically assigning a bitmask for every possible item whether it is contained in the companies value. \nEg. toothpaste is 1 for company A, but 0 for company B. You then count the bits which differ between the companies. The Jaccard coefficient is related to this.\nHamming distance will actually not be able to capture similarity between things like \"video\" and \"photography\". Obviously, a company that sells one does sell the other also with higher probability than a company that sells toothpaste.\nFor this, you can use stuff like LSI (it's also used for dimensionality reduction) or factorial codes (e.g. 
neural network stuff as Restricted Boltzman Machines, Autoencoders or Predictablity Minimization) to get more compact representations which you can then compare using the euclidean distance.\n", "pick the rank of each entry (higher rank is better) and make sum of geometric means between matches\nfor two vectors\nsum(sqrt(vector_multiply(x,y))) //multiply matches\n\nSum of ranks for each value over vector should be same for each vector (preferrebly 1)\nThat way you can make compares between more than 2 vectors.\nIf you apply ikkebr's metod you can find how a is simmilar to b\nin that case just use\nsum( b( b.intersection(a) ))\n\n" ]
[ 3, 3, 2, 0, 0 ]
[ "You could use the set_intersection algorithm. The 2 vectors must be sorted first (use sort call), then pass in 4 iterators and you'll get a collection back with the common elements inserted into it. There are a few others that operate similarly. \n" ]
[ -1 ]
[ "list", "python", "text", "vector" ]
stackoverflow_0001805987_list_python_text_vector.txt
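The Jaccard coefficient mentioned alongside Hamming distance in the answers above has a one-line set formulation. A small sketch (note it ignores ordering, which the question says matters, so it only measures raw overlap between the two companies' item lists):

```python
def jaccard(a, b):
    """Jaccard similarity of two item collections: |A ∩ B| / |A ∪ B|, in [0, 1]."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

company1 = ['books', 'video', 'photography', 'food', 'toothpaste', 'burgers']
company2 = ['video', 'processor', 'photography', 'LCD', 'power supply', 'books']
# Three shared items out of nine distinct ones -> 1/3
```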
Q: Grouping by Nested Object Keys in MongoDB Is it possible to group results by a key found in an array of objects in a list? For example, lets say I have a table of survey responses (survey_responses), and each entry represents a single response. One or more of the questions in the survey is a multiple choice, so the answers stored could resemble: survey_responses.insert({ 'name': "Joe Surveylover", 'ip': "127.0.0.1", 'favorite_songs_of_2009': [ {'rank': 1, 'points': 5, 'title': "Atlas Sound: Quick Canals"}, {'rank': 2, 'points': 4, 'title': "Here We Go Magic: Fangela"}, {'rank': 3, 'points': 3, 'title': "Girls: Hellhole Ratrace"}, {'rank': 4, 'points': 2, 'title': "Fever Ray: If I Had A Heart"}, {'rank': 5, 'points': 1, 'title': "Bear in Heaven: Lovesick Teenagers"}], 'favorite_albums_of_2009': [ # and so on ]}) how can I group by the title of favorite_songs_in_2009 to get the total number of points for each song in the array? A: It seems that your only option is to do this in your own Python code: song_points = {} for response in survey_responses.find(): for song in response['favorite_songs_of_2009']: title = song['title'] song_points[title] = song_points.get(title, 0) + song['points'] You'll get your results in the song_points variable.
Grouping by Nested Object Keys in MongoDB
Is it possible to group results by a key found in an array of objects in a list? For example, lets say I have a table of survey responses (survey_responses), and each entry represents a single response. One or more of the questions in the survey is a multiple choice, so the answers stored could resemble: survey_responses.insert({ 'name': "Joe Surveylover", 'ip': "127.0.0.1", 'favorite_songs_of_2009': [ {'rank': 1, 'points': 5, 'title': "Atlas Sound: Quick Canals"}, {'rank': 2, 'points': 4, 'title': "Here We Go Magic: Fangela"}, {'rank': 3, 'points': 3, 'title': "Girls: Hellhole Ratrace"}, {'rank': 4, 'points': 2, 'title': "Fever Ray: If I Had A Heart"}, {'rank': 5, 'points': 1, 'title': "Bear in Heaven: Lovesick Teenagers"}], 'favorite_albums_of_2009': [ # and so on ]}) how can I group by the title of favorite_songs_in_2009 to get the total number of points for each song in the array?
[ "It seems that your only option is to do this in your own Python code:\nsong_points = {}\nfor response in survey_responses.find():\n for song in response['favorite_songs_of_2009']:\n title = song['title']\n song_points[title] = song_points.get(title, 0) + song['points']\n\nYou'll get your results in the song_points variable.\n" ]
[ 0 ]
[]
[]
[ "mongodb", "python" ]
stackoverflow_0001806705_mongodb_python.txt
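The client-side loop in the answer can be tidied up with `collections.defaultdict`. This sketch runs on plain dicts rather than a live collection; with a real connection you would iterate `survey_responses.find()` instead, and newer MongoDB versions can also push this server-side with an `$unwind`/`$group` aggregation pipeline:

```python
from collections import defaultdict

def total_points_by_title(responses, field='favorite_songs_of_2009'):
    """Sum the 'points' per song 'title' across all survey responses."""
    song_points = defaultdict(int)
    for response in responses:
        for song in response.get(field, []):
            song_points[song['title']] += song['points']
    return dict(song_points)

sample = [
    {'favorite_songs_of_2009': [
        {'rank': 1, 'points': 5, 'title': 'Atlas Sound: Quick Canals'},
        {'rank': 2, 'points': 4, 'title': 'Here We Go Magic: Fangela'},
    ]},
    {'favorite_songs_of_2009': [
        {'rank': 1, 'points': 5, 'title': 'Here We Go Magic: Fangela'},
    ]},
]
# 'Here We Go Magic: Fangela' totals 4 + 5 = 9 points
```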
Q: "Permission Denied" in Django template using Djapian I've followed the Djapian tutorial and setup everything "by the book" so that the indexshell commandline supplied by Djapian shows successful queries. However, when integrating the sample search from the Djapian tutorial I get this nonsense error: TemplateSyntaxError at /search/ Caught an exception while rendering: (13, 'Permission denied') It points to this line: {% if results %} Changing or omitting the line will yield the next (same) error at whichever line that references a field from "results". The stacktrace shows this exception: OSError(13, 'Permission denied') in: /usr/local/lib/python2.6/dist-packages/django/template/debug.py in render_node django-debug-toolbar shows for results: <djapian.resultset.ResultSet object at 0x7f7142affcd0> Is this an issue with Djapian? In any case, why would it yield a "Permission denied" error? A: Please figure out what is the exact file path involved in this error. I guess it involves a write operation to some template cache, but you should make sure. Then you just need to check the UNIX permissions on the file accessed or on the directory for that file in the case of a newly created file. Another possibility is to run your application via strace (it is a command line tool, see man strace) and try to search for such an error (13) in its output. It'll show you the exact path involved in the problem.
"Permission Denied" in Django template using Djapian
I've followed the Djapian tutorial and set up everything "by the book" so that the indexshell command line supplied by Djapian shows successful queries. However, when integrating the sample search from the Djapian tutorial I get this nonsense error: TemplateSyntaxError at /search/ Caught an exception while rendering: (13, 'Permission denied') It points to this line: {% if results %} Changing or omitting the line will yield the next (same) error at whichever line references a field from "results". The stacktrace shows this exception: OSError(13, 'Permission denied') in: /usr/local/lib/python2.6/dist-packages/django/template/debug.py in render_node django-debug-toolbar shows for results: <djapian.resultset.ResultSet object at 0x7f7142affcd0> Is this an issue with Djapian? In any case, why would it yield a "Permission denied" error?
[ "Please figure out what is the exact file path involved in this error. I guess it involves a write operation to some template cache, but you should make sure.\nThen you just need to check the UNIX permissions on the file accessed or on the directory for that file in the case of a newly created file.\nAnother possibility is to run your application via strace (it is a command line tool, see man strace) and try to search for such an error (13) in its output. It'll show you the exact path involved in the problem.\n" ]
[ 2 ]
[]
[]
[ "django", "django_templates", "python", "search", "xapian" ]
stackoverflow_0001806449_django_django_templates_python_search_xapian.txt
Q: Openlayers + Mapnik + Tilecache configuration problem I am trying to setup Mapnik + tilecache but can't see any tiles in the browser when I set bbox parameters in both Tilecache.cfg and Openlayers but when I don't specify the bbox everything works fine and I can see actual map tiles. I was wondering if anyone can point out the problem in the code. I think I have tried everything ( in my limited capability) and not really understanding why would it not work. By the way all map layers ( for mapnik styling) are sourced from a PostGIS database and have different projections and transformed on the fly by Mapnik. OpenLayers code: var map, layer; function init(){ var map, layer; var options = { numZoomLevels:20, maxResolution: 360/512, projection: "EPSG:4326", maxExtent: new OpenLayers.Bounds(-2.0,50.0,2.0,54.0) //not working when uncommented }; map = new OpenLayers.Map( 'map', options); layer = new OpenLayers.Layer.WMS( "Map24","tilecache.py?", { layers:'mapnik24', format: 'image/png', srs: 'EPSG:4326' } ); map.addLayer(layer); map.addControl( new OpenLayers.Control.PanZoomBar()); map.addControl( new OpenLayers.Control.MousePosition()); map.addControl( new OpenLayers.Control.LayerSwitcher()); map.addControl( new OpenLayers.Control.Permalink("permalink")); if (!map.getCenter()) map.zoomToMaxExtent(); } Tilecache.cfg: [mapnik24] type=Mapnik mapfile=/somedit/map24.xml bbox=-2.0,50.0,2.0,54.0 levels=20 srs=EPSG:4326 projection=+proj=latlong +datum=WGS84 -- Thanks, A A: The OpenLayers.Bounds constructor parameters are in the order left, bottom, right top. Taking the bounds that you're using change your JavaScript to be: var options = { numZoomLevels:20, maxResolution: 360/512, projection: "EPSG:4326", maxExtent: new OpenLayers.Bounds(50.0,-2.0,54.0,2.0) //not working when uncommented }; Have you tried plugging in the parameters for tilecache.py directly to see if a tile is generated? 
A: Looking at your code I think you are asking for the region bounded by 50 and 54 degrees east, and 2 degrees north and south. Is this correct? If it is, then I think your bounds are the wrong way around. -2 degrees (south) should be at the bottom, and 2 degrees (north) should be at the top. So the bbox should be 2.0,50.0,-2.0,54.0. Also, looking at that region in OpenStreetMap it looks like there's not much there, is that really what you intend?
Openlayers + Mapnik + Tilecache configuration problem
I am trying to setup Mapnik + tilecache but can't see any tiles in the browser when I set bbox parameters in both Tilecache.cfg and Openlayers but when I don't specify the bbox everything works fine and I can see actual map tiles. I was wondering if anyone can point out the problem in the code. I think I have tried everything ( in my limited capability) and not really understanding why would it not work. By the way all map layers ( for mapnik styling) are sourced from a PostGIS database and have different projections and transformed on the fly by Mapnik. OpenLayers code: var map, layer; function init(){ var map, layer; var options = { numZoomLevels:20, maxResolution: 360/512, projection: "EPSG:4326", maxExtent: new OpenLayers.Bounds(-2.0,50.0,2.0,54.0) //not working when uncommented }; map = new OpenLayers.Map( 'map', options); layer = new OpenLayers.Layer.WMS( "Map24","tilecache.py?", { layers:'mapnik24', format: 'image/png', srs: 'EPSG:4326' } ); map.addLayer(layer); map.addControl( new OpenLayers.Control.PanZoomBar()); map.addControl( new OpenLayers.Control.MousePosition()); map.addControl( new OpenLayers.Control.LayerSwitcher()); map.addControl( new OpenLayers.Control.Permalink("permalink")); if (!map.getCenter()) map.zoomToMaxExtent(); } Tilecache.cfg: [mapnik24] type=Mapnik mapfile=/somedit/map24.xml bbox=-2.0,50.0,2.0,54.0 levels=20 srs=EPSG:4326 projection=+proj=latlong +datum=WGS84 -- Thanks, A
[ "The OpenLayers.Bounds constructor parameters are in the order left, bottom, right top. Taking the bounds that you're using change your JavaScript to be:\n var options = {\n numZoomLevels:20,\n maxResolution: 360/512,\n projection: \"EPSG:4326\",\n maxExtent: new OpenLayers.Bounds(50.0,-2.0,54.0,2.0)\n //not working when uncommented\n };\n\nHave you tried plugging in the parameters for tilecache.py directly to see if a tile is generated?\n", "Looking at your code I think you are asking for the region bounded by 50 and 54 degrees east, and 2 degrees north and south. Is this correct?\nIf it is, then I think your bounds are the wrong way around. -2 degrees (south) should be at the bottom, and 2 degrees (north) should be at the top. So the bbox should be 2.0,50.0,-2.0,54.0.\nAlso, looking at that region in OpenStreetMap it looks like there's not much there, is that really what you intend?\n" ]
[ 3, 1 ]
[]
[]
[ "maps", "openlayers", "proj4js", "python", "tilecache" ]
stackoverflow_0001783081_maps_openlayers_proj4js_python_tilecache.txt
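Whichever values are right for the data, `OpenLayers.Bounds` takes its arguments as (left, bottom, right, top), and with EPSG:4326 left/right are longitudes while bottom/top are latitudes. A tiny sanity check for an extent in that order (my own helper, not part of OpenLayers or TileCache):

```python
def bbox_is_sane(left, bottom, right, top):
    """Check a (left, bottom, right, top) extent: edges must be ordered
    and, for EPSG:4326, stay inside the valid lon/lat ranges."""
    return (left < right and bottom < top
            and -180 <= left <= 180 and -180 <= right <= 180
            and -90 <= bottom <= 90 and -90 <= top <= 90)

# The extent from the question: lon -2..2, lat 50..54
# bbox_is_sane(-2.0, 50.0, 2.0, 54.0) -> True
```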
Q: Terminate long running python threads What is the recommended way to terminate unexpectedly long running threads in python ? I can't use SIGALRM, since Some care must be taken if both signals and threads are used in the same program. The fundamental thing to remember in using signals and threads simultaneously is: always perform signal() operations in the main thread of execution. Any thread can perform an alarm(), getsignal(), pause(), setitimer() or getitimer(); only the main thread can set a new signal handler, and the main thread will be the only one to receive signals (this is enforced by the Python signal module, even if the underlying thread implementation supports sending signals to individual threads). This means that signals can’t be used as a means of inter-thread communication.Use locks instead. Update: each thread in my case blocks -- it is downloading a web page using urllib2 module and sometimes operation takes too many time on an extremely slow sites. That's why I want to terminate such slow threads A: Since abruptly killing a thread that's in a blocking call is not feasible, a better approach, when possible, is to avoid using threads in favor of other multi-tasking mechanisms that don't suffer from such issues. For the OP's specific case (the threads' job is to download web pages, and some threads block forever due to misbehaving sites), the ideal solution is twisted -- as it generally is for networking tasks. In other cases, multiprocessing might be better. More generally, when threads give unsolvable issues, I recommend switching to other multitasking mechanisms rather than trying heroic measures in the attempt to make threads perform tasks for which, at least in CPython, they're unsuitable. A: As Alex Martelli suggested, you could use the multiprocessing module. It is very similar to the Threading module so that should get you off to a start easily. 
Your code could be like this for example: import multiprocessing def get_page(*args, **kwargs): # your web page downloading code goes here def start_get_page(timeout, *args, **kwargs): p = multiprocessing.Process(target=get_page, args=args, kwargs=kwargs) p.start() p.join(timeout) if p.is_alive(): # stop the downloading 'thread' p.terminate() # and then do any post-error processing here if __name__ == "__main__": start_get_page(timeout, *args, **kwargs) Of course you need to somehow get the return values of your page downloading code. For that you could use multiprocessing.Pipe or multiprocessing.Queue (or other ways available with multiprocessing). There's more information, as well as samples you could check here. Lastly, the multiprocessing module is included in python 2.6. It is also available for python 2.5 and 2.4 at pypi (you can use easy_install multiprocessing) or just visit pypi and download and install the packages manually. Note: I realize this has been posted awhile ago. I was having a similar problem to this and stumbled here and saw Alex Martelli's suggestion. Had it implemented for my problem and decided to share it. (I'd like to thank Alex for pointing me in the right direction.) A: If you are trying to kill a thread whose code you do not have control over, it depends if the thread is in a blocking call or not. In my experience if the thread is properly blocking, there is no recommended and portable way of doing this. I've run up against this when trying to work with code in the standard library (multiprocessing.manager I'm looking at you) with loops coded with no exit condition: nice! There are some interuptable thread implementations out there (see here for an example), but then, if you have the control of the threaded code yourself, you should be able to write them in a manner where you can interupt them with a condition variable of some sort. A: Use synchronization objects and ask the thread to terminate. 
Basically, write co-operative handling of this. If you start yanking out the thread beneath the python interpreter, all sorts of odd things can occur, and it's not just in Python either, most runtimes have this problem. For instance, let's say you kill a thread after it has opened a file, there's no way that file will be closed until the application terminates.
Terminate long running python threads
What is the recommended way to terminate unexpectedly long running threads in python? I can't use SIGALRM, since Some care must be taken if both signals and threads are used in the same program. The fundamental thing to remember in using signals and threads simultaneously is: always perform signal() operations in the main thread of execution. Any thread can perform an alarm(), getsignal(), pause(), setitimer() or getitimer(); only the main thread can set a new signal handler, and the main thread will be the only one to receive signals (this is enforced by the Python signal module, even if the underlying thread implementation supports sending signals to individual threads). This means that signals can’t be used as a means of inter-thread communication. Use locks instead. Update: each thread in my case blocks -- it is downloading a web page using the urllib2 module and sometimes the operation takes too much time on extremely slow sites. That's why I want to terminate such slow threads
[ "Since abruptly killing a thread that's in a blocking call is not feasible, a better approach, when possible, is to avoid using threads in favor of other multi-tasking mechanisms that don't suffer from such issues.\nFor the OP's specific case (the threads' job is to download web pages, and some threads block forever due to misbehaving sites), the ideal solution is twisted -- as it generally is for networking tasks. In other cases, multiprocessing might be better.\nMore generally, when threads give unsolvable issues, I recommend switching to other multitasking mechanisms rather than trying heroic measures in the attempt to make threads perform tasks for which, at least in CPython, they're unsuitable.\n", "As Alex Martelli suggested, you could use the multiprocessing module. It is very similar to the Threading module so that should get you off to a start easily. Your code could be like this for example:\nimport multiprocessing\n\ndef get_page(*args, **kwargs):\n # your web page downloading code goes here\n\ndef start_get_page(timeout, *args, **kwargs):\n p = multiprocessing.Process(target=get_page, args=args, kwargs=kwargs)\n p.start()\n p.join(timeout)\n if p.is_alive():\n # stop the downloading 'thread'\n p.terminate()\n # and then do any post-error processing here\n\nif __name__ == \"__main__\":\n start_get_page(timeout, *args, **kwargs)\n\nOf course you need to somehow get the return values of your page downloading code. For that you could use multiprocessing.Pipe or multiprocessing.Queue (or other ways available with multiprocessing). There's more information, as well as samples you could check here.\nLastly, the multiprocessing module is included in python 2.6. It is also available for python 2.5 and 2.4 at pypi (you can use easy_install multiprocessing) or just visit pypi and download and install the packages manually.\nNote: I realize this has been posted awhile ago. 
I was having a similar problem to this and stumbled here and saw Alex Martelli's suggestion. Had it implemented for my problem and decided to share it. (I'd like to thank Alex for pointing me in the right direction.)\n", "If you are trying to kill a thread whose code you do not have control over, it depends if the thread is in a blocking call or not. In my experience if the thread is properly blocking, there is no recommended and portable way of doing this.\nI've run up against this when trying to work with code in the standard library (multiprocessing.manager I'm looking at you) with loops coded with no exit condition: nice!\nThere are some interuptable thread implementations out there (see here for an example), but then, if you have the control of the threaded code yourself, you should be able to write them in a manner where you can interupt them with a condition variable of some sort.\n", "Use synchronization objects and ask the thread to terminate. Basically, write co-operative handling of this.\nIf you start yanking out the thread beneath the python interpreter, all sorts of odd things can occur, and it's not just in Python either, most runtimes have this problem.\nFor instance, let's say you kill a thread after it has opened a file, there's no way that file will be closed until the application terminates.\n" ]
[ 6, 5, 1, 1 ]
[]
[]
[ "multithreading", "python" ]
stackoverflow_0001226091_multithreading_python.txt
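The "ask the thread to terminate" advice from the last answer can be sketched with a `threading.Event` acting as the stop flag. Note the caveat the other answers raise: this only works while the thread can actually check the flag between units of work, which is exactly what a thread blocked inside a urllib2 call cannot do -- hence the multiprocessing/twisted suggestions for the download case.

```python
import threading

def worker(stop_event, results):
    """Long-running job that checks a stop flag between units of work."""
    done = 0
    while not stop_event.is_set() and done < 1_000_000:
        done += 1  # one unit of work
    results.append(done)

stop = threading.Event()
results = []
t = threading.Thread(target=worker, args=(stop, results))
t.start()
stop.set()          # politely ask the thread to finish
t.join(timeout=5)   # and wait for it to exit
```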
Q: Python: urllib2 multipart/form-data and proxies The Objective: A script which cycles through a list of proxies and sends a post request, containing a file to a PHP page on my server, which then calculates delivery time. It's a pretty useless script, but I am using it to teach myself about urllib2. The Problem: So far I have got multipart/form-data sending correctly using Poster, but I can't get it to send through a proxy, let alone a cycling list of proxies. I have tried using an OpenerDirector with urllib2.ProxyHandler, but I believe Poster defines it's own opener to perform it's magic. Below is the code to send a multipart/form-data request with poster. import urllib2 from poster.encode import multipart_encode from poster.streaminghttp import register_openers fields = {"type": "image", "fileup": open("/home/chaz/pictures/test.jpg", "rb") } register_openers() #I believe this is the key datagen, headers = multipart_encode(fields) request = urllib2.Request("http://foo.net", datagen, headers) lastResponse = urllib2.urlopen(request).read() Any help would be much appreciated as I am stumped. A: you could add proxy installer like this, before requesting the page. from urllib2 import ProxyHandler,build_opener,install_opener PROXY="http://USERNAME:PASSWD@ADDRESS:PORT" opener = build_opener(ProxyHandler({"http" : PROXY})) install_opener(opener)
Python: urllib2 multipart/form-data and proxies
The Objective: A script which cycles through a list of proxies and sends a post request, containing a file to a PHP page on my server, which then calculates delivery time. It's a pretty useless script, but I am using it to teach myself about urllib2. The Problem: So far I have got multipart/form-data sending correctly using Poster, but I can't get it to send through a proxy, let alone a cycling list of proxies. I have tried using an OpenerDirector with urllib2.ProxyHandler, but I believe Poster defines its own opener to perform its magic. Below is the code to send a multipart/form-data request with poster. import urllib2 from poster.encode import multipart_encode from poster.streaminghttp import register_openers fields = {"type": "image", "fileup": open("/home/chaz/pictures/test.jpg", "rb") } register_openers() #I believe this is the key datagen, headers = multipart_encode(fields) request = urllib2.Request("http://foo.net", datagen, headers) lastResponse = urllib2.urlopen(request).read() Any help would be much appreciated as I am stumped.
[ "you could add proxy installer like this, before requesting the page. \nfrom urllib2 import ProxyHandler,build_opener,install_opener\n\nPROXY=\"http://USERNAME:PASSWD@ADDRESS:PORT\"\n\nopener = build_opener(ProxyHandler({\"http\" : PROXY}))\n\ninstall_opener(opener)\n\n" ]
[ 5 ]
[]
[]
[ "multipartform_data", "poster", "proxy", "python", "urllib2" ]
stackoverflow_0001806729_multipartform_data_poster_proxy_python_urllib2.txt
Q: Read from source.sql, write to destination.sql with a Python script? I have a file source.sql INSERT INTO `Tbl_ABC` VALUES (1, 0, 'MMB', '2 MB INTERNATIONAL', NULL, NULL, 0) INSERT INTO `Tbl_ABC` VALUES (2, 12, '3D STRUCTURES', '3D STRUCTURES', NULL, NULL, 0) INSERT INTO `Tbl_ABC` VALUES (2, 0, '2 STRUCTURES', '2D STRUCTURES', NULL, NULL, 0) INSERT INTO `Tbl_ABC` VALUES (2, 111, '2D STRUCTURES', '3D STRUCTURES', NULL, NULL, 1) I am going to write a new file called destination.sql. It will contain the following: the new file will ignore `INSERT INTO `Tbl_ABC` VALUES (1, don't write if !=0, 'MMB', '2 MB INTERNATIONAL', NULL, NULL, don't write if !=0) My SQL may be longer than this, but the positions stay the same in general. In this case the first 0 is at position[1] and the second 0 at position[6] (counting from 0). The results should look like this: INSERT INTO `Tbl_ABC` VALUES (1, 0, 'MMB', '2 MB INTERNATIONAL', NULL, NULL, 0) INSERT INTO `Tbl_ABC` VALUES (2, 0, '2 STRUCTURES', '2D STRUCTURES', NULL, NULL, 0) Could anybody here help me find a way to filter the source.sql file and write the new file destination.sql? Thanks. A: You can read line by line, check whether it ends with 0), and match the other value with a regex. import re dest=open("destination.sql","w+") for line in open("source.sql","r"): if line.strip().endswith("0)") and re.search("\(\d+, 0,",line): dest.write(line) A: Something like this: import re RX = re.compile(r'^.*?\(\d+,\s0,.*\s0\)\s*$') outfile = open('destination.sql', 'w') for ln in open('source.sql', 'r').xreadlines(): if RX.match(ln): outfile.write(ln) A: My SQL may be longer than this, but the positions stay the same in general.
In this case the first 0 is at position[1] and the second 0 at position[6] (counting from 0). To S.Mark, fviktor: I have tried it like this import re RX = re.compile(r'^.*?\(\d+,\s0,.*\s0\)\s*$') outfile = open('destination.sql', 'w') for ln in open('source.sql', 'r').xreadlines(): replace1 = ln.replace("INSERT INTO `Tbl_ABC` VALUES (", "") replace2 = replace1.replace(")", "") list_replace = replace2.split(',') #print list_replace #print '%s ,%s' % (list_replace[1], list_replace[6]) if list_replace[6]==0 and list_replace[1] == 0: #start write line to destination.sql!!!!!!!! NEED HELP #if RX.match(ln): outfile.write(ln) (2, x1, '2D STRUCTURES', '2D STRUCTURES', NULL, NULL, x6, 15, 2, '', NULL, NULL, NULL, NULL, '2D STRUCTURES', 'MAILLOT 12/08/05', -1, 'tata 20/05/02', 0, NULL, 0, NULL, NULL) Of course I don't need to write to destination.sql if position[1] != 0 or position[6] != 0 (in this case x1 and x6). Thanks for your help.
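A runnable sketch of the positional check the follow-up attempt is reaching for (written in Python 3 for illustration). Note that the split fields are strings, so they must be compared against "0" rather than the integer 0 and stripped of whitespace, which is why the `list_replace[6]==0` comparison above never matches:

```python
import re

def keep_line(line):
    """Return True when fields 1 and 6 (0-based) of the VALUES tuple are 0."""
    m = re.search(r"VALUES\s*\((.*)\)\s*$", line)
    if not m:
        return False
    # Sketch only: a plain comma split breaks if a quoted string field
    # itself contains a comma. Fields are stripped and compared as
    # strings ("0"), never as integers.
    fields = [f.strip() for f in m.group(1).split(",")]
    return len(fields) > 6 and fields[1] == "0" and fields[6] == "0"

lines = [
    "INSERT INTO `Tbl_ABC` VALUES (1, 0, 'MMB', '2 MB INTERNATIONAL', NULL, NULL, 0)",
    "INSERT INTO `Tbl_ABC` VALUES (2, 12, '3D STRUCTURES', '3D STRUCTURES', NULL, NULL, 0)",
    "INSERT INTO `Tbl_ABC` VALUES (2, 0, '2 STRUCTURES', '2D STRUCTURES', NULL, NULL, 0)",
    "INSERT INTO `Tbl_ABC` VALUES (2, 111, '2D STRUCTURES', '3D STRUCTURES', NULL, NULL, 1)",
]
kept = [ln for ln in lines if keep_line(ln)]
print(len(kept))  # 2
```

Reading from source.sql and writing to destination.sql then just wraps keep_line around the file loop shown in the answers above.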
Read from source.sql write to destiantion.sql with python script?
I Have a file source.sql INSERT INTO `Tbl_ABC` VALUES (1, 0, 'MMB', '2 MB INTERNATIONAL', NULL, NULL, 0) INSERT INTO `Tbl_ABC` VALUES (2, 12, '3D STRUCTURES', '3D STRUCTURES', NULL, NULL, 0) INSERT INTO `Tbl_ABC` VALUES (2, 0, '2 STRUCTURES', '2D STRUCTURES', NULL, NULL, 0) INSERT INTO `Tbl_ABC` VALUES (2, 111, '2D STRUCTURES', '3D STRUCTURES', NULL, NULL, 1) I am going to write a new file called destination.sql.It will contains: The new file will ignore `INSERT INTO `Tbl_ABC` VALUES (1, dont wirte if !=0, 'MMB', '2 MB INTERNATIONAL', NULL, NULL, don't write if !=0) My sql may loger than this.but positon is keep in general. In this case the first number 0 at position[1] and the second 0 at the position[6] (start count from 0) That the results should be like this . INSERT INTO `Tbl_ABC` VALUES (1, 0, 'MMB', '2 MB INTERNATIONAL', NULL, NULL, 0) INSERT INTO `Tbl_ABC` VALUES (2, 0, '2 STRUCTURES', '2D STRUCTURES', NULL, NULL, 0) Anybody Here Could help me to find the way to format the source.sql file and write new file. destination.sql Thanks ..
[ "You can read line by line and check if it is ends with 0) and match with regex for the other one.\nimport re\ndest=open(\"destination.sql\",\"w+\")\nfor line in open(\"source.sql\",\"r\"):\n if line.strip().endswith(\"0)\") and re.search(\"\\(\\d+, 0,\",line):\n dest.write(line)\n\n", "Something like this:\nimport re\n\nRX = re.compile(r'^.*?\\(\\d+,\\s0,.*\\s0\\)\\s*$')\n\noutfile = open('destination.sql', 'w')\nfor ln in open('source.sql', 'r').xreadlines():\n if RX.match(ln):\n outfile.write(ln)\n\n", "\nMy sql may loger than this.but positon is keep in general. In this case the first number 0 at position[1] and the second 0 at the position[6] (start count from 0)\n\nto S.Mark, fviktor\nI have tried like this \nimport re\nRX = re.compile(r'^.*?\\(\\d+,\\s0,.*\\s0\\)\\s*$')\n\noutfile = open('destination.sql', 'w')\nfor ln in open('source.sql', 'r').xreadlines():\n replace1 = ln.replace(\"INSERT INTO `Tbl_ABC` VALUES (\", \"\")\n replace2 = replace1.replace(\")\", \"\")\n list_replace = replace2.split(',')\n #print list_replace\n #print '%s ,%s' % (list_replace[1], list_replace[6])\n if list_replace[6]==0 and list_replace[1] == 0:\n #start write line to destination.sql!!!!!!!! NEED HELP\n #if RX.match(ln):\n outfile.write(ln)\n\n(2, x1, '2D STRUCTURES', '2D STRUCTURES', NULL, NULL, x6, 15, 2, '', NULL, NULL, NULL, NULL, '2D STRUCTURES', 'MAILLOT 12/08/05', -1, 'tata 20/05/02', 0, NULL, 0, NULL, NULL)\nOf cause I don't need to write to destination.sql if the position[1] != 0 and position[6] != 0 in this case are x1 and x6. thanks for your\n" ]
[ 1, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001806761_python.txt
Q: CDN options for image resizing Background: I'm working on an application on Google App Engine. It's been going really well until I hit one of their limitations in file size -- 1MB. One of the components of my application resizes images, which have been uploaded by users. The files are directly uploaded to S3 (http://developer.amazonwebservices.com/connect/entry.jspa?externalID=1434) via POST. I was planning on using a CDN to deliver the resized images. Question: I was wondering if there was a CDN that provided an API for resizing images through an HTTP call. I found out that SimpleCDN once provided the service, but has since removed it. I would like to tell the CDN to resize the image I am requesting from the URL. For example, original URL: http://cdn.example.com/images/large_picture.jpg resized image to 125x100: http://cdn.example.com/images/large_picture.jpg/125/100 Does anyone know of a CDN that provides functionality like this? Or have a suggestion to get around the 1MB limit on Google App Engine (not a hack, but an alternative method of code)? A: Looks like I have found a service that provides what I am indeed looking for. Nirvanix provides an image resize API and even has a nice library for Google App Engine to use with their API. Just thought I would share my findings. A: SteadyOffload does it as well.
CDN options for image resizing
Background: I working on an application on Google App Engine. Its been going really well until I hit one of their limitations in file size -- 1MB. One of the components of my application resizes images, which have been uploaded by users. The files are directly uploaded to S3 (http://developer.amazonwebservices.com/connect/entry.jspa?externalID=1434) via POST. I was planning on using a CDN to delivered the resized images. Question: I was wondering if there was CDN that provided an API for resizing images through HTTP call. I found out that SimpleCDN once provided the service, but has sense removed it. I would like to tell the CDN to resize the image I am requesting from the URL. For example, original URL: http://cdn.example.com/images/large_picture.jpg resized image to 125x100: http://cdn.example.com/images/large_picture.jpg/125/100 Does anyone know of CDN that provides a functionality like this? Or have a suggestion to get around the 1MB limit on Google App Engine (not a hack, but alternative method of code).
[ "Looks like I have found a service that provides what I am indeed looking for. Nirvanix provides an image resize API and even has a nice library for Google App Engine to use with their API. Just thought I would share my findings.\n", "SteadyOffload does it as well.\n" ]
[ 3, 3 ]
[]
[]
[ "cdn", "google_app_engine", "image_manipulation", "python", "rest" ]
stackoverflow_0000493981_cdn_google_app_engine_image_manipulation_python_rest.txt
Q: Initialize a list of objects in Python I'm looking to initialize an array/list of objects that are not empty -- the class constructor generates data. In C++ and Java I would do something like this: Object lst = new Object[100]; I've dug around, but is there a Pythonic way to get this done? This doesn't work like I thought it would (I get 100 references to the same object): lst = [Object()]*100 But this seems to work in the way I want: lst = [Object() for i in range(100)] List comprehension seems (intellectually) like "a lot" of work for something that's so simple in Java. A: There isn't a way to implicitly call an Object() constructor for each element of an array like there is in C++ (recall that in Java, each element of a new array is initialised to null for reference types). I would say that your list comprehension method is the most Pythonic: lst = [Object() for i in range(100)] If you don't want to step on the lexical variable i, then a convention in Python is to use _ for a dummy variable whose value doesn't matter: lst = [Object() for _ in range(100)] For an equivalent of the similar construct in Java, you can of course use *: lst = [None] * 100 A: You should note that Python's equivalent for Java code (creating array of 100 null references to Object): Object arr = new Object[100]; or C++ code: Object **arr = new Object*[100]; is: arr = [None]*100 not: arr = [Object() for _ in range(100)] The second would be the same as Java's: Object arr = new Object[100]; for (int i = 0; i < arr.length; i++) { arr[i] = new Object(); } In fact Python's capabilities to initialize complex data structures are far better than Java's. Note: C++ code: Object *arr = new Object[100]; would have to do as much work as Python's list comprehension: allocate continuous memory for 100 Objects call Object::Object() for each of these Objects And the result would be a completely different data structure. 
A: I think the list comprehension is the simplest way, but, if you don't like it, it's obviously not the only way to obtain what you desire -- calling a given callable 100 times with no arguments to form the 100 items of a new list. For example, itertools can obviously do it: >>> import itertools as it >>> lst = list(it.starmap(Object, it.repeat((), 100))) or, if you're really a traditionalist, map and apply: >>> lst = map(apply, 100*[Object], 100*[()]) Note that this is essentially the same (tiny, both conceptually and actually;-) amount of work it would take if, instead of needing to be called without arguments, Object needed to be called with one argument -- or, say, if Object was in fact a function rather than a type. From your surprise that it might take "as much as a list comprehension" to perform this task, you appear to think that every language should special-case the need to perform "calls to a type, without arguments" over other kinds of calls to other callables, but I fail to see what's so crucial and special about this very specific case, to warrant treating it differently from all others; and, as a consequence, I'm pretty happy, personally, that Python doesn't single this one case out for peculiar and weird treatment, but handles it just as regularly and easily as any other similar use case!-) A: lst = [Object() for i in range(100)] Since an array is its own first-class object in Python I think this is the only way to get what you're looking for. * does something crazy.
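The aliasing difference discussed above can be demonstrated with any class holding mutable state (nothing framework-specific is needed):

```python
class Obj:
    def __init__(self):
        self.items = []

# [Obj()] * 3 calls the constructor once and repeats three references:
aliased = [Obj()] * 3
aliased[0].items.append("x")
all_share_state = all(o.items == ["x"] for o in aliased)

# The comprehension calls the constructor once per element:
distinct = [Obj() for _ in range(3)]
distinct[0].items.append("x")
only_first_changed = [o.items for o in distinct] == [["x"], [], []]

print(all_share_state, only_first_changed)  # True True
```

With immutable elements like [None] * 100 the repetition trick is harmless, which is why the first answer still recommends it for that case.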
Initialize a list of objects in Python
I'm a looking to initialize an array/list of objects that are not empty -- the class constructor generates data. In C++ and Java I would do something like this: Object lst = new Object[100]; I've dug around, but is there a Pythonic way to get this done? This doesn't work like I thought it would (I get 100 references to the same object): lst = [Object()]*100 But this seems to work in the way I want: lst = [Object() for i in range(100)] List comprehension seems (intellectually) like "a lot" of work for something that's so simple in Java.
[ "There isn't a way to implicitly call an Object() constructor for each element of an array like there is in C++ (recall that in Java, each element of a new array is initialised to null for reference types).\nI would say that your list comprehension method is the most Pythonic:\nlst = [Object() for i in range(100)]\n\nIf you don't want to step on the lexical variable i, then a convention in Python is to use _ for a dummy variable whose value doesn't matter:\nlst = [Object() for _ in range(100)]\n\nFor an equivalent of the similar construct in Java, you can of course use *:\nlst = [None] * 100\n\n", "You should note that Python's equvalent for Java code\n(creating array of 100 null references to Object):\nObject arr = new Object[100];\n\nor C++ code:\nObject **arr = new Object*[100];\n\nis:\narr = [None]*100\n\nnot:\narr = [Object() for _ in range(100)]\n\nThe second would be the same as Java's:\nObject arr = new Object[100];\nfor (int i = 0; i < arr.lenght; i++) {\n arr[i] = new Object();\n}\n\nIn fact Python's capabilities to initialize complex data structures are far better then Java's.\n\nNote:\nC++ code:\nObject *arr = new Object[100];\n\nwould have to do as much work as Python's list comprehension:\n\nallocate continuous memory for 100 Objects\ncall Object::Object() for each of this Objects\n\nAnd the result would be a completely different data structure.\n", "I think the list comprehension is the simplest way, but, if you don't like it, it's obviously not the only way to obtain what you desire -- calling a given callable 100 times with no arguments to form the 100 items of a new list. 
For example, itertools can obviously do it:\n>>> import itertools as it\n>>> lst = list(it.starmap(Object, it.repeat((), 100)))\n\nor, if you're really a traditionalist, map and apply:\n>>> lst = map(apply, 100*[Object], 100*[()])\n\nNote that this is essentially the same (tiny, both conceptually and actually;-) amount of work it would take if, instead of needing to be called without arguments, Object needed to be called with one argument -- or, say, if Object was in fact a function rather than a type.\nFrom your surprise that it might take \"as much as a list comprehension\" to perform this task, you appear to think that every language should special-case the need to perform \"calls to a type, without arguments\" over other kinds of calls to over callables, but I fail to see what's so crucial and special about this very specific case, to warrant treating it differently from all others; and, as a consequence, I'm pretty happy, personally, that Python doesn't single this one case out for peculiar and weird treatment, but handles just as regularly and easily as any other similar use case!-)\n", "lst = [Object() for i in range(100)]\n\nSince an array is it's own first class object in python I think this is the only way to get what you're looking for. * does something crazy.\n" ]
[ 45, 16, 2, 0 ]
[]
[]
[ "arrays", "initialization", "list", "python" ]
stackoverflow_0001807026_arrays_initialization_list_python.txt
Q: Scrapy BaseSpider: How does it work? This is the BaseSpider example from the Scrapy tutorial: from scrapy.spider import BaseSpider from scrapy.selector import HtmlXPathSelector from dmoz.items import DmozItem class DmozSpider(BaseSpider): domain_name = "dmoz.org" start_urls = [ "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/", "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/" ] def parse(self, response): hxs = HtmlXPathSelector(response) sites = hxs.select('//ul[2]/li') items = [] for site in sites: item = DmozItem() item['title'] = site.select('a/text()').extract() item['link'] = site.select('a/@href').extract() item['desc'] = site.select('text()').extract() items.append(item) return items SPIDER = DmozSpider() I copied it with changes for my project: from scrapy.contrib.spiders import CrawlSpider, Rule from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor from scrapy.selector import HtmlXPathSelector from scrapy.item import Item from firm.items import FirmItem class Spider1(CrawlSpider): domain_name = 'wc2' start_urls = ['http://www.whitecase.com/Attorneys/List.aspx?LastName=A'] def parse(self, response): hxs = HtmlXPathSelector(response) sites = hxs.select('//td[@class="altRow"][1]/a/@href').re('/.a\w+') items = [] for site in sites: item = FirmItem item['school'] = hxs.select('//td[@class="mainColumnTDa"]').re('(JD)(.*?)(\d+)') items.append(item) return items SPIDER = Spider1() and I get the error [wc2] ERROR: Spider exception caught while processing <http://www.whitecase.com/Attorneys/List.aspx?LastName=A> (referer: <None>): [Failure instance: Traceback: <type 'exceptions.TypeError'>: 'ItemMeta' object does not support item assignment I would greatly appreciate it if experts here take a look at the code and give me a clue about where I am going wrong. Thank you A: Probably you meant item = FirmItem() instead of item = FirmItem?
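The error message can be reproduced without Scrapy at all; it comes from assigning items on the class object itself instead of an instance. A plain dict subclass stands in for scrapy's Item here:

```python
class FirmItem(dict):
    """Stand-in for a scrapy Item: instances support item assignment."""

failed = False
item_class = FirmItem             # missing parentheses: this names the class
try:
    item_class["school"] = "JD"   # classes don't support item assignment
except TypeError:
    failed = True

item = FirmItem()                 # calling the class creates an instance
item["school"] = "JD"
print(failed, item["school"])  # True JD
```

In Scrapy the class's metaclass is ItemMeta, which is why the traceback says 'ItemMeta' object does not support item assignment.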
Scrapy BaseSpider: How does it work?
This is the BaseSpider example from the Scrapy tutorial: from scrapy.spider import BaseSpider from scrapy.selector import HtmlXPathSelector from dmoz.items import DmozItem class DmozSpider(BaseSpider): domain_name = "dmoz.org" start_urls = [ "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/", "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/" ] def parse(self, response): hxs = HtmlXPathSelector(response) sites = hxs.select('//ul[2]/li') items = [] for site in sites: item = DmozItem() item['title'] = site.select('a/text()').extract() item['link'] = site.select('a/@href').extract() item['desc'] = site.select('text()').extract() items.append(item) return items SPIDER = DmozSpider() I copied it with changes for my project: from scrapy.contrib.spiders import CrawlSpider, Rule from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor from scrapy.selector import HtmlXPathSelector from scrapy.item import Item from firm.items import FirmItem class Spider1(CrawlSpider): domain_name = 'wc2' start_urls = ['http://www.whitecase.com/Attorneys/List.aspx?LastName=A'] def parse(self, response): hxs = HtmlXPathSelector(response) sites = hxs.select('//td[@class="altRow"][1]/a/@href').re('/.a\w+') items = [] for site in sites: item = FirmItem item['school'] = hxs.select('//td[@class="mainColumnTDa"]').re('(JD)(.*?)(\d+)') items.append(item) return items SPIDER = Spider1() and I get the error [wc2] ERROR: Spider exception caught while processing <http://www.whitecase.com/Attorneys/List.aspx?LastName=A> (referer: <None>): [Failure instance: Traceback: <type 'exceptions.TypeError'>: 'ItemMeta' object does not support item assignment I would greatly appreciate it if experts here take a look at the code and give me a clue about where I am going wrong. Thank you
[ "Probably you meant item = FirmItem() instead of item = FirmItem?\n" ]
[ 17 ]
[]
[]
[ "python", "scrapy", "web_crawler" ]
stackoverflow_0001806235_python_scrapy_web_crawler.txt
Q: App Engine model filtering with Django hi i am using django app engine patch i have set up a simple model as follows class Intake(db.Model): intake=db.StringProperty(multiline=False, required=True) #@permerlink def get_absolute_url(self): return "/timekeeper/%s/" % self.intake class Meta: db_table = "Intake" verbose_name_plural = "Intakes" ordering = ['intake'] i am using the following views to check if some thing exist in data base and add to database from ragendja.template import render_to_response from django.http import HttpResponse, Http404 from google.appengine.ext import db from timekeeper.forms import * from timekeeper.models import * def checkintake(request, key): intake = Intake.all().filter('intake=',key).count() if intake<1: return HttpResponse('ok') else: return HttpResponse('Exist in database') def addintake(request,key): if Intake.all().filter('intake=',key).count()>1: return HttpResponse('Item already Exist in Database') else: data = Intake(intake=cleaned_data[key]) data.put() return HttpResponse('Ok') i can add to database with no problem (when i do a Intake.all().count() it increases) but when i check if the key exist in the database by filtering i am getting a count of zero any one have any idea why i am not able to filter by keys ? A: You need to insert a space between the field name and the operator in your filter arguments - eg, use .filter('intake =') instead of .filter('intake='). With an equality filter, you can also leave it out entirely, as in .filter('intake'). Without the space, the equals sign is taken to be part of the field name.
App Engine model filtering with Django
hi i am using django app engine patch i have set up a simple model as follows class Intake(db.Model): intake=db.StringProperty(multiline=False, required=True) #@permerlink def get_absolute_url(self): return "/timekeeper/%s/" % self.intake class Meta: db_table = "Intake" verbose_name_plural = "Intakes" ordering = ['intake'] i am using the following views to check if some thing exist in data base and add to database from ragendja.template import render_to_response from django.http import HttpResponse, Http404 from google.appengine.ext import db from timekeeper.forms import * from timekeeper.models import * def checkintake(request, key): intake = Intake.all().filter('intake=',key).count() if intake<1: return HttpResponse('ok') else: return HttpResponse('Exist in database') def addintake(request,key): if Intake.all().filter('intake=',key).count()>1: return HttpResponse('Item already Exist in Database') else: data = Intake(intake=cleaned_data[key]) data.put() return HttpResponse('Ok') i can add to database with no problem (when i do a Intake.all().count() it increases) but when i check if the key exist in the database by filtering i am getting a count of zero any one have any idea why i am not able to filter by keys ?
[ "You need to insert a space between the field name and the operator in your filter arguments - eg, use .filter('intake =') instead of .filter('intake='). With an equality filter, you can also leave it out entirely, as in .filter('intake'). Without the space, the equals sign is taken to be part of the field name.\n" ]
[ 2 ]
[]
[]
[ "django", "django_models", "google_app_engine", "python" ]
stackoverflow_0001807545_django_django_models_google_app_engine_python.txt
Q: Why do I need to save this model before adding it to another one? In django, I'm trying to do something like this: # if form is valid ... article = form.save(commit=False) article.author = req.user product_name = form.cleaned_data['product_name'] try: article.product = Component.objects.get(name=product_name) except: article.product = Component(name=product_name) article.save() # do some more form processing ... But then it tells me: null value in column "product_id" violates not-null constraint But I don't understand why this is a problem. When article.save() is called, it should be able to create the product then (and generate an id). I can get around this problem by using this code in the except block: product = Component(name=product_name) product.save() article.product = product But the reason this concerns me is because if article.save() fails, it will already have created a new component/product. I want them to succeed or fail together. Is there a nice way to get around this? A: The way the Django ManyToManyField works is that it creates an extra table. So say you have two models, ModelA and ModelB. If you did... ModelA.model_b = models.ManyToManyField(ModelB) What Django actually does behind the scenes is it creates a table... app_modela_modelb with three columns: id, model_a_id, model_b_id. Hold that thought in your mind. Regarding the saving of ModelB, Django does not assign it an ID until it's saved. You could technically manually assign it an ID and avoid this problem. It seems you're letting django handle that which is perfectly acceptable. Django has a problem then doing the M2M. Why? If ModelB doesn't have an id yet, what goes in the model_b_id column on the M2M table? The error for null product_id is more than likely a null constraint error on the M2M field, not the ModelB record id. If you would like them to "succeed together" or "fail together" perhaps it's time to look into transactions. 
You could, for example, wrap the whole thing in a transaction and roll back in the case of a partial failure. I haven't done a whole lot of work personally in this area so hopefully someone else will be of assistance on that topic. A: You could get around this by using: target_product, created_flag = Component.objects.get_or_create(name=product_name) article.product = target_product as I'm pretty sure get_or_create() will set the id of an object, if it has to create one. Alternatively, if you don't mind empty FK relations on the Article table, you could add null=True to the definition. A: There's little value in including a code snippet on transactions, as you should read the Django documentation to gain a good understanding.
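The (object, created) contract of get_or_create mentioned in the second answer can be sketched with a dict-backed store (plain Python, no Django; all names here are illustrative):

```python
store = {}
_next_id = [1]   # mutable counter so the function can bump it

def get_or_create(name):
    """Dict-backed sketch of Django's get_or_create contract:
    look up by name; on a miss, create, assign an id, and store."""
    if name in store:
        return store[name], False
    obj = {"id": _next_id[0], "name": name}
    _next_id[0] += 1
    store[name] = obj
    return obj, True

a, created_a = get_or_create("widget")
b, created_b = get_or_create("widget")
print(created_a, created_b, a is b)  # True False True
```

Because the created object already has an id when it is returned, assigning it to article.product no longer violates the not-null constraint.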
Why do I need to save this model before adding it to another one?
In django, I'm trying to do something like this: # if form is valid ... article = form.save(commit=False) article.author = req.user product_name = form.cleaned_data['product_name'] try: article.product = Component.objects.get(name=product_name) except: article.product = Component(name=product_name) article.save() # do some more form processing ... But then it tells me: null value in column "product_id" violates not-null constraint But I don't understand why this is a problem. When article.save() is called, it should be able the create the product then (and generate an id). I can get around this problem by using this code in the except block: product = Component(name=product_name) product.save() article.product = product But the reason this concerns me is because if article.save() fails, it will already have created a new component/product. I want them to succeed or fail together. Is there a nice way to get around this?
[ "The way the Django ManyToManyField works is that it creates an extra table. So say you have two models, ModelA and ModelB. If you did...\nModelA.model_b = models.ManyToManyField(ModelB)\n\nWhat Django actually does behind the scenes is it creates a table... app_modela_modelb with three columns: id, model_a_id, model_b_id.\nHold that thought in your mind. Regarding the saving of ModelB, Django does not assign it an ID until it's saved. You could technically manually assign it an ID and avoid this problem. It seems you're letting django handle that which is perfectly acceptable.\nDjango has a problem then doing the M2M. Why? If ModelB doesn't have an id yet, what goes in the model_b_id column on the M2M table? The error for null product_id is more than likely a null constraint error on the M2M field, not the ModelB record id.\nIf you would like them to \"succeed together\" or \"fail together\" perhaps it's time to look into transactions. You, for example, wrap the whole thing in a transaction, and do a rollback in the case of a partial failure. I haven't done a whole lot of work personally in this area so hopefully someone else will be of assistance on that topic.\n", "You could get around this by using :\ntarget_product, created_flag = Component.objects.get_or_create(name=product_name)\narticle.product = target_product\n\nas I'm pretty sure get_or_create() will set the id of an object, if it has to create one.\nAlternatively, if you don't mind empty FK relations on the Article table, you could add null=True to the definition.\n", "There's little value in including a code snippet on transactions, as you should read the Django documentation to gain a good understanding.\n" ]
[ 4, 1, 1 ]
[]
[]
[ "django", "models", "python" ]
stackoverflow_0001806937_django_models_python.txt
Q: Split UTF-8 encoded string got from unichr I have a set of Unicode code points; I need to convert them to UTF-8 and split the result into hex values. E.g. Unicode 0x80 should be converted to UTF-8 and printed as (0xc2,0x80). I tried the following str(unichr(0x80).encode('utf-8')).split(r'\x')[0] But it does not get split into ['c2','80']; it gives me ['\xc2\x80']. I need this for code generation. A: You want something like this? It could be done with a list comprehension. >>> ["%x"%ord(x) for x in unichr(0x80).encode('utf-8')] ['c2', '80'] A: To generate a list of the hexadecimal values of the characters in your UTF8-encoded string, use the following: >>> [hex(ord(x)) for x in unichr(0x80).encode('utf-8')] ['0xc2', '0x80'] A: You try to split with \x, but \x doesn't exist in the string. \xc2\x80 are just the escape codes (like \n for newline) on your screen. I think what you want is this: print hex(ord(unichr(0x80).encode('utf-8')[0]))
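In Python 3 the same task is even simpler: chr() replaces unichr(), and encoding yields a bytes object whose elements are already integers, so no ord() call is needed:

```python
def utf8_hex(codepoint):
    """Return the UTF-8 encoding of a code point as a list of hex strings."""
    return [hex(b) for b in chr(codepoint).encode("utf-8")]

print(utf8_hex(0x80))    # ['0xc2', '0x80']
print(utf8_hex(0x20AC))  # ['0xe2', '0x82', '0xac']
```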
Split UTF-8 encoded string got from unichr
I have a set of unicode numbers , I need to convert them to UTF-8 and print the result in to split them in to hex values. eg: Unicode 0x80 should be converted to UTF-8 and printed as (0xc2,0x80) I tried following str(unichr(0x80).encode('utf-8')).split(r'\x')[0] But it does get split in to ['c2','80']. But it gives me ['\xc2\x80']. I need this for code generation.
[ "You want like this? could be done with list comprehensions.\n>>> [\"%x\"%ord(x) for x in unichr(0x80).encode('utf-8')]\n['c2', '80']\n\n", "To generate a list of the hexadecimal values of the characters in your UTF8-encoded string, use the following:\n>>> [hex(ord(x)) for x in unichr(0x80).encode('utf-8')]\n['0xc2', '0x80']\n\n", "You try to split with \\x, but \\x doesn't exist in the string. \\xc2\\x80 are just the escape codes (like \\n for newline) on your screen, I think what you want is this:\nprint hex(ord(unichr(0x80).encode('utf-8')[0]))\n\n" ]
[ 2, 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001808223_python.txt
Q: how pylons decorator works from decorator import decorator from pylons.decorators.util import get_pylons def allowed_roles(roles): def wrapper(func, *args, **kwargs): session = get_pylons(args).session # edit pylons session here. return func(*args, **kwargs) return decorator(wrapper) Can anyone explain how it works? A: Like any other decorator works - A decorator is a function which receives a function as an argument, and returns another function. The returned function will "take the place" of the original function. Since the desired effect with a decorator is usually to be able to run some code before and after the original function (the one being decorated) runs, decorators create a new function which takes any number of anonymous and named parameters (the * prefixing "args" and the ** prefixing "kwargs" are responsible for storing the parameters in a list and a dictionary, respectively). Inside this new function, you have a place to write your verification code - and then it calls the original function - which in this context is called "func", and returns its original value. The "decorator.decorator" call is not strictly needed: it just modifies some attributes of the wrapper function so that it more closely resembles the original function (like the 'func_name' attribute) - but the code should work without it. After defining a decorator, you have to apply it to a function or method you wish to decorate: just put an @allowed_roles in a line prefixing the function definition you want to decorate.
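The same shape can be written with only the standard library; functools.wraps plays the metadata-copying role the decorator package plays in the snippet. The session lookup is stubbed out here (instead of get_pylons(args).session) so the control flow is runnable:

```python
import functools

def allowed_roles(roles):
    """Decorator factory: the outer call captures `roles`, `decorate`
    receives the decorated callable, and `wrapper` runs the check
    before delegating to it."""
    def decorate(func):
        @functools.wraps(func)   # copies __name__ etc. onto the wrapper
        def wrapper(*args, **kwargs):
            session = {"role": "admin"}   # stub for get_pylons(args).session
            if session["role"] not in roles:
                raise PermissionError("role not allowed")
            return func(*args, **kwargs)
        return wrapper
    return decorate

@allowed_roles(["admin"])
def index():
    return "ok"

print(index(), index.__name__)  # ok index
```

Applying @allowed_roles(["admin"]) first calls allowed_roles, and the decorator it returns then replaces index with wrapper.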
how pylons decorator works
from decorator import decorator from pylons.decorators.util import get_pylons def allowed_roles(roles): def wrapper(func, *args, **kwargs): session = get_pylons(args).session # edit pylons session here. return func(*args, **kwargs) return decorator(wrapper) Can anyone explain how it works?
[ "Like any other decorator works - \nA decorator is a function which receives a function as an argument, and returns another function. \nThe returned function will \"take the place\" from the original function.\nSince the desired effect with a decoratos is usually to be able to run some code before and after the original function (the one being decorated) runs, decorators create a new function which takes any number of anonymous and named parameters (the * prefixing \"args\" and the ** prefixing \"kwargs\" are responsible to store the parameters in a list and a dictionary, respectively) \nInside this new function, you have a place to write your verification code - and then it calls the original function - which in this context is called \"func\", and returns its original value.\nthe \"decorator.decorator\" call is not strictly needed: it jsut modifies some ttrbitues of the wrapper function so that it appears more closely to be the original funciton (like the 'func_name' attribute) - but the code should work without it.\nAfter definning a decorator, you have to apply it to a function or method you wish to decorate: just put an @allowed_roles in a line prefixing the function definition you want to decorate. \n" ]
[ 2 ]
[]
[]
[ "pylons", "python" ]
stackoverflow_0001807760_pylons_python.txt
Q: python and Oracle

I would like to be able to connect to Oracle 10.1.0.2.0 (which is installed on a different machine) via Python. My machine is running Ubuntu 9.04 Jaunty with Python 2.6 installed. I have downloaded and unpacked instantclient-basic-linux32-10.1.0.5-20060511.zip, and set LD_LIBRARY_PATH and ORACLE_HOME to point to the directory where I unpacked it. Then I downloaded cx_Oracle-5.0.2-10g-py26-1.i386.rpm and installed it:

$ sudo alien -i cx_Oracle-5.0.2-10g-py26-1.i386.rpm

When I run

$ python -c 'import cx_Oracle'

I get:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: /usr/lib/python2.6/cx_Oracle.so: undefined symbol: OCIClientVersion

Help would be much appreciated.

A: I believe OCIClientVersion requires Oracle 10g release 2, but you're using release 1. It looks like the cx_Oracle binary you downloaded has been compiled with -DORACLE_10GR2, which makes it include the OCIClientVersion call. Since this is a compile-time-only option there should really be separate downloads for 10g and 10gR2, but it would seem there aren't:

This module has been built with Oracle 9.2.0, 10.2.0, 11.1.0 on Linux

So you may have to download the cx_Oracle sources and build them yourself. (Consequently you'll need the Python and Oracle client headers.) Alternatively you could try the cx_Oracle build for Oracle 9i instead. This sounds a bit dodgy but is apparently supposed to work.

A: Thanks to bobince for his answer; I will just try to summarize possible solutions to make them more readable for others. Both LD_LIBRARY_PATH and ORACLE_HOME need to point to the directory where Oracle InstantClient is unpacked.
You can use cx_Oracle 4.4.1 for Oracle 9i together with Oracle InstantClient 10.1. Install cx_Oracle 4.4.1:

sudo alien -i cx_Oracle-4.4.1-9i-py26-1.i386.rpm

cx_Oracle.so is placed in /usr/local/lib/python2.6/site-packages, so the following symbolic link needs to be created:

sudo ln -s /usr/local/lib/python2.6/site-packages/cx_Oracle.so /usr/lib/python2.6

Also, since you are using cx_Oracle for Oracle 9i, you need to create a symbolic link in the InstantClient directory:

sudo ln -s libclntsh.so.10.1 libclntsh.so.9.0

Alternatively, you can use cx_Oracle 5.0.2 for Oracle 10g with Oracle InstantClient 10.2. The installation procedure is similar:

sudo alien -i cx_Oracle-5.0.2-10g-py26-1.i386.rpm
sudo ln -s /usr/lib/python2.6/site-packages/cx_Oracle.so /usr/lib/python2.6

Building cx_Oracle 10g sources to work with Oracle InstantClient 10.1 is not an option, since cx_Oracle 10g uses code specific to Oracle 10g Release 2. Note: it is hard to predict whether these solutions work without any flaws (further testing is needed).
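Since both failure modes above hinge on LD_LIBRARY_PATH and ORACLE_HOME being visible to the Python process, a small pre-flight check can save a round of debugging. This is a hypothetical helper (the variable names come from the question; it does not attempt the import itself):

```python
import os

def missing_oracle_env(required=("ORACLE_HOME", "LD_LIBRARY_PATH")):
    """Return the names of required Oracle client environment
    variables that are unset or empty in the current process."""
    return [name for name in required if not os.environ.get(name)]

missing = missing_oracle_env()
if missing:
    print("Set these before importing cx_Oracle:", ", ".join(missing))
else:
    print("Environment looks OK; now try: python -c 'import cx_Oracle'")
```

Note that exporting the variables in a shell after Python has started does not help; they must be set in the environment the interpreter inherits.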
python and Oracle
I would like to be able to connect to Oracle 10.1.0.2.0 (which is installed on a different machine) via Python. My machine is running Ubuntu 9.04 Jaunty with Python 2.6 installed. I have downloaded and unpacked instantclient-basic-linux32-10.1.0.5-20060511.zip, and set LD_LIBRARY_PATH and ORACLE_HOME to point to the directory where I unpacked it. Then I downloaded cx_Oracle-5.0.2-10g-py26-1.i386.rpm and installed it: $sudo alien -i cx_Oracle-5.0.2-10g-py26-1.i386.rpm When I run $python -c 'import cx_Oracle' I get: Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: /usr/lib/python2.6/cx_Oracle.so: undefined symbol: OCIClientVersion Help would be much appreciated.
[ "I believe OCIClientVersion requires Oracle 10g release 2, but you're using release 1.\nIt looks like the cx_Oracle binary you downloaded has been compiled with -DORACLE_10GR2, which makes it include the OCIClientVersion call. Since this is a compile-time-only option there should really be separate downloads for 10g and 10gR2, but it would seem there aren't:\nThis module has been built with Oracle 9.2.0, 10.2.0, 11.1.0 on Linux\n\nSo you may have to download the cx_Oracle sources and build them yourself. (Consequently you'll need the Python and Oracle client headers.)\nAlternatively you could try the cx_Oracle build for Oracle 9i instead. This sounds a bit dodgy but is apparently supposed to work.\n", "Thanks to bobince for his answer; I will just try to summarize possible solutions to make them more readable for others.\nBoth LD_LIBRARY_PATH and ORACLE_HOME need to point to the directory where Oracle InstantClient is unpacked.\n\nYou can use cx_Oracle 4.4.1 for Oracle 9i together with Oracle InstantClient 10.1\n\nAfter installing cx_Oracle 4.4.1\nsudo alien -i cx_Oracle-4.4.1-9i-py26-1.i386.rpm\n\ncx_Oracle.so is placed in /usr/local/lib/python2.6/site-packages, so the following symbolic link needs to be created\nsudo ln -s /usr/local/lib/python2.6/site-packages/cx_Oracle.so /usr/lib/python2.6\n\nAlso, since you are using cx_Oracle for Oracle 9i, you need to create a symbolic link in the InstantClient directory\nsudo ln -s libclntsh.so.10.1 libclntsh.so.9.0\n\n\nAlternatively, you can use cx_Oracle 5.0.2 for Oracle 10g with Oracle InstantClient 10.2\n\nThe installation procedure is similar.\nsudo alien -i cx_Oracle-5.0.2-10g-py26-1.i386.rpm\nsudo ln -s /usr/lib/python2.6/site-packages/cx_Oracle.so /usr/lib/python2.6\n\n\nBuilding cx_Oracle 10g sources to work with Oracle InstantClient 10.1 is not an option, since cx_Oracle 10g uses code specific to Oracle 10g Release 2.\n\nNote: It is hard to predict whether these solutions work without any flaws (further testing is needed).\n" ]
[ 4, 2 ]
[]
[]
[ "django", "oracle", "python" ]
stackoverflow_0001796198_django_oracle_python.txt