Dataset columns:
    content             string (85 to 101k characters)
    title               string (0 to 150 characters)
    question            string (15 to 48k characters)
    answers             list
    answers_scores      list
    non_answers         list
    non_answers_scores  list
    tags                list
    name                string (35 to 137 characters)
Q: On App Engine, what does optimization for reads mean? In the documentation for Google App Engine, it says that when designing data models for the datastore, you should "optimize for reads, not writes". What exactly does this mean? What is more 'expensive', CPU intensive or time consuming? A: It means that "reads" are cheaper than "writes". "Writes" takes more time and more resources. For more information check the presentation "Building Scalable Web Applications with Google App Engine" by Brett Slatkin from Google I/0 2008 (slides 7-8) A: "Optimize for reads, not writes" means that you should expect to see far more reads than writes, and so you should strive to make it as easy as possible to read your data, even if that might slow down the writes a little. Easy for the computer, that is, meaning that for example if you want to show names in all lowercase, you should lowercase them when they're written to the database rather than lowercasing them everytime you read them from the database. That's just an example but hopefully it makes things clear. A: agreed with @redtuna (expecting more reads than writes) and @Ilian Iliev (reads cheaper than writes & writes take more resources). another way you can optimize for reads is by using the Memcache service. since reads (usually) happen more often than writes, caching that data means that you don't even have to take a hit of a datastore access. also, items that stay active (see fetches/hits) stay in the cache longer as it employs an LRU strategy.
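As a concrete illustration of the two ideas in the answers (normalize data at write time, and serve repeated reads from memcache), here is a minimal sketch. The Person model and the cache key scheme are made up for the example; the memcache calls follow the standard google.appengine.api.memcache interface.

    from google.appengine.ext import db
    from google.appengine.api import memcache

    class Person(db.Model):
        # Store the value the way you want to read it back later.
        name = db.StringProperty()

    def save_person(raw_name):
        # Pay the normalization cost once, on the (rarer) write path.
        person = Person(name=raw_name.strip().lower())
        person.put()
        return person

    def get_person(key):
        # Serve repeated reads from memcache so most requests never
        # hit the datastore at all.
        cache_key = 'person:%s' % key
        person = memcache.get(cache_key)
        if person is None:
            person = Person.get(key)
            memcache.set(cache_key, person, time=600)  # cache for 10 minutes
        return person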
On App Engine, what does optimization for reads mean?
In the documentation for Google App Engine, it says that when designing data models for the datastore, you should "optimize for reads, not writes". What exactly does this mean? What is more 'expensive', CPU intensive or time consuming?
[ "It means that \"reads\" are cheaper than \"writes\". \"Writes\" takes more time and more resources. For more information check the presentation \"Building Scalable Web Applications with Google App Engine\" by Brett Slatkin from Google I/0 2008 (slides 7-8)\n", "\"Optimize for reads, not writes\" means that you should expect to see far more reads than writes, and so you should strive to make it as easy as possible to read your data, even if that might slow down the writes a little. Easy for the computer, that is, meaning that for example if you want to show names in all lowercase, you should lowercase them when they're written to the database rather than lowercasing them everytime you read them from the database. That's just an example but hopefully it makes things clear.\n", "agreed with @redtuna (expecting more reads than writes) and @Ilian Iliev (reads cheaper than writes & writes take more resources). another way you can optimize for reads is by using the Memcache service. since reads (usually) happen more often than writes, caching that data means that you don't even have to take a hit of a datastore access. also, items that stay active (see fetches/hits) stay in the cache longer as it employs an LRU strategy.\n" ]
[ 2, 0, 0 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0002694542_google_app_engine_python.txt
Q: `strip`ing the results of a split in python i'm trying to do something pretty simple: line = "name : bob" k, v = line.lower().split(':') k = k.strip() v = v.strip() is there a way to combine this into one line somehow? i found myself writing this over and over again when making parsers, and sometimes this involves way more than just two variables. i know i can use regexp, but this is simple enough to not really have to require it... A: k, v = [x.strip() for x in line.lower().split(':')] A: import 're' k,v = re.split(r'\s*:\s*', line) line = ':'.join((k,v)) A: >>> map(str.strip,line.lower().split(":")) ['name', 'bob']
`strip`ing the results of a split in python
i'm trying to do something pretty simple: line = "name : bob" k, v = line.lower().split(':') k = k.strip() v = v.strip() is there a way to combine this into one line somehow? i found myself writing this over and over again when making parsers, and sometimes this involves way more than just two variables. i know i can use regexp, but this is simple enough to not really have to require it...
[ "k, v = [x.strip() for x in line.lower().split(':')]\n\n", "import 're'\nk,v = re.split(r'\\s*:\\s*', line)\nline = ':'.join((k,v))\n\n", ">>> map(str.strip,line.lower().split(\":\"))\n['name', 'bob']\n\n" ]
[ 7, 1, 1 ]
[ "\":\".join([k, v])\n\n" ]
[ -1 ]
[ "parsing", "python" ]
stackoverflow_0002695464_parsing_python.txt
Q: Convert a list of strings [ '3', '1', '2' ] to a list of sorted integers [1, 2, 3] I have a list of integers in string representation, similar to the following: L1 = ['11', '10', '13', '12', '15', '14', '1', '3', '2', '5', '4', '7', '6', '9', '8'] I need to make it a list of integers like: L2 = [11, 10, 13, 12, 15, 14, 1, 3, 2, 5, 4, 7, 6, 9, 8] Finally I will sort it like below: L3 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] # by L2.sort() Please let me know what is the best way to get from L1 to L3? A: You could do it in one step like this: L3 = sorted(map(int, L1)) In more detail, here are the steps: >>> L1 = ['11', '10', '13', '12', '15', '14', '1', '3', '2', '5', '4', '7', '6', '9', '8'] >>> L1 ['11', '10', '13', '12', '15', '14', '1', '3', '2', '5', '4', '7', '6', '9', '8'] >>> map(int, L1) [11, 10, 13, 12, 15, 14, 1, 3, 2, 5, 4, 7, 6, 9, 8] >>> sorted(_) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] >>> A: >>> L1 = ['11', '10', '13', '12', '15', '14', '1', '3', '2', '5', '4', '7', '6', '9', '8'] >>> L1 = [int(x) for x in L1] >>> L1 [11, 10, 13, 12, 15, 14, 1, 3, 2, 5, 4, 7, 6, 9, 8] >>> L1.sort() >>> L1 [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] >>> L3 = L1 A: L3 = sorted(int(x) for x in L1)
Convert a list of strings [ '3', '1', '2' ] to a list of sorted integers [1, 2, 3]
I have a list of integers in string representation, similar to the following: L1 = ['11', '10', '13', '12', '15', '14', '1', '3', '2', '5', '4', '7', '6', '9', '8'] I need to make it a list of integers like: L2 = [11, 10, 13, 12, 15, 14, 1, 3, 2, 5, 4, 7, 6, 9, 8] Finally I will sort it like below: L3 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] # by L2.sort() Please let me know what is the best way to get from L1 to L3?
[ "You could do it in one step like this:\nL3 = sorted(map(int, L1))\n\nIn more detail, here are the steps:\n>>> L1 = ['11', '10', '13', '12', '15', '14', '1', '3', '2', '5', '4', '7', '6', '9', '8']\n>>> L1\n['11', '10', '13', '12', '15', '14', '1', '3', '2', '5', '4', '7', '6', '9', '8']\n>>> map(int, L1)\n[11, 10, 13, 12, 15, 14, 1, 3, 2, 5, 4, 7, 6, 9, 8]\n>>> sorted(_)\n[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]\n>>>\n\n", ">>> L1 = ['11', '10', '13', '12', '15', '14', '1', '3', '2', '5', '4', '7', '6', '9', '8'] \n>>> L1 = [int(x) for x in L1]\n>>> L1\n[11, 10, 13, 12, 15, 14, 1, 3, 2, 5, 4, 7, 6, 9, 8]\n>>> L1.sort()\n>>> L1\n[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]\n>>> L3 = L1\n\n", "L3 = sorted(int(x) for x in L1)\n\n" ]
[ 19, 5, 4 ]
[]
[]
[ "integer", "list", "python", "sorting", "string" ]
stackoverflow_0002695472_integer_list_python_sorting_string.txt
Q: Can you access registers from python functions in vim It seems vims python sripting is designed to edit buffer and files rather than work nicely with vims registers. You can use some of the vim packages commands to get access to the registers but its not pretty. My solution for creating a vim function using python that uses a register is something like this. function printUnnamedRegister() python <<EOF print vim.eval('@@') EOF endfunction Setting registers may also be possible using something like function setUnnamedRegsiter() python <<EOF s = "Some \"crazy\" string\nwith interesting characters" vim.command('let @@="%s"' % myescapefn(s) ) EOF endfunction However this feels a bit cumbersome and I'm not sure exactly what myescapefn should be. So I've never been able to get the setting version to work properly. So if there's a way to do something more like function printUnnamedRegister() python <<EOF print vim.getRegister('@') EOF endfunction function setUnnamedRegsiter() python <<EOF s = "Some \"crazy\" string\nwith interesting characters" vim.setRegister('@',s) EOF endfunction Or even a nice version of myescapefn I could use then that would be very handy. UPDATE: Based on the solution by ZyX I'm using this piece of python def setRegister(reg, value): vim.command( "let @%s='%s'" % (reg, value.replace("'","''") ) ) A: If you use single quotes everything you need is to replace every occurence of single quote with two single quotes. Something like that: python import vim, re python def senclose(str): return "'"+re.sub(re.compile("'"), "''", str)+"'" python vim.command("let @r="+senclose("string with single 'quotes'")) Update: this method relies heavily on an (undocumented) feature of the difference between let abc='string with newline' and execute "let abc='string\nwith newline'" : while the first fails the second succeeds (and it is not the single example of differences between newline handling in :execute and plain files). On the other hand, eval() is somewhat more expected to handle this since string("string\nwith newline") returns exactly the same thing senclose does, so I write this things now only using vim.eval: python senclose = lambda str: "'"+str.replace("'", "''")+"'" python vim.eval("setreg('@r', {0})".format(senclose("string with single 'quotes'")))
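Following the quote-doubling approach in the answer, a matching pair of helpers close to the vim.getRegister / vim.setRegister interface the question asks for could look roughly like this. It is meant to live inside a python << EOF block or a plugin file, and it leans on Vim's built-in getreg()/setreg() functions, so treat it as a sketch rather than part of Vim's Python API.

    import vim

    def get_register(reg):
        # getreg() returns the contents of register `reg` as a string.
        return vim.eval("getreg('%s')" % reg)

    def set_register(reg, value):
        # Escape single quotes by doubling them, then hand the literal
        # string to setreg() through vim.eval(), as in the answer, so
        # embedded newlines survive.
        quoted = "'" + value.replace("'", "''") + "'"
        vim.eval("setreg('%s', %s)" % (reg, quoted))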
Can you access registers from python functions in vim
It seems vims python sripting is designed to edit buffer and files rather than work nicely with vims registers. You can use some of the vim packages commands to get access to the registers but its not pretty. My solution for creating a vim function using python that uses a register is something like this. function printUnnamedRegister() python <<EOF print vim.eval('@@') EOF endfunction Setting registers may also be possible using something like function setUnnamedRegsiter() python <<EOF s = "Some \"crazy\" string\nwith interesting characters" vim.command('let @@="%s"' % myescapefn(s) ) EOF endfunction However this feels a bit cumbersome and I'm not sure exactly what myescapefn should be. So I've never been able to get the setting version to work properly. So if there's a way to do something more like function printUnnamedRegister() python <<EOF print vim.getRegister('@') EOF endfunction function setUnnamedRegsiter() python <<EOF s = "Some \"crazy\" string\nwith interesting characters" vim.setRegister('@',s) EOF endfunction Or even a nice version of myescapefn I could use then that would be very handy. UPDATE: Based on the solution by ZyX I'm using this piece of python def setRegister(reg, value): vim.command( "let @%s='%s'" % (reg, value.replace("'","''") ) )
[ "If you use single quotes everything you need is to replace every occurence of single quote with two single quotes.\nSomething like that:\npython import vim, re\npython def senclose(str): return \"'\"+re.sub(re.compile(\"'\"), \"''\", str)+\"'\"\npython vim.command(\"let @r=\"+senclose(\"string with single 'quotes'\"))\n\n\nUpdate: this method relies heavily on an (undocumented) feature of the difference between\nlet abc='string\nwith newline'\n\nand\nexecute \"let abc='string\\nwith newline'\"\n\n: while the first fails the second succeeds (and it is not the single example of differences between newline handling in :execute and plain files). On the other hand, eval() is somewhat more expected to handle this since string(\"string\\nwith newline\") returns exactly the same thing senclose does, so I write this things now only using vim.eval:\npython senclose = lambda str: \"'\"+str.replace(\"'\", \"''\")+\"'\"\npython vim.eval(\"setreg('@r', {0})\".format(senclose(\"string with single 'quotes'\")))\n\n" ]
[ 6 ]
[]
[]
[ "delimiter", "python", "vim" ]
stackoverflow_0002695443_delimiter_python_vim.txt
Q: "from _json import..." - python I am inspecting the JSON module of python 3.1, and am currently in /Lib/json/scanner.py. At the top of the file is the following line: from _json import make_scanner as c_make_scanner There are five .py files in the module's directory: __init__ (two leading and trailing underscores, it's formatting as bold), decoder, encoder, scanner and tool. There is no file called "json". My question is: when doing the import, where exactly is "make_scanner" coming from? Yes, I am very new to Python! A: It's coming from a C-compiled _json.pyd (or _json.so, etc, etc, depending on the platform) that lives elsewhere on the sys.path. You can always find out where that is in your specific Python installation by importing the module yourself and looking at its __file__, e.g.: >>> import _json >>> _json.__file__ '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/_json.so' As you see, in my installation of Python 2.6, _json comes from the lib-dynload subdirectory of lib/python2.6, and the extension used on this platform is .so. A: It may be coming from a file, or it may be built-in. On Windows, it appears to be built-in. Python 3.1.2 (r312:79149, Mar 21 2010, 00:41:52) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import _json >>> _json.__file__ Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'module' object has no attribute '__file__' and there is no _json.pyd or _json.dll in the offing. If you want to see the source, having a binary file on your machine or not is irrelevant -- you'll need the SVN browser.
"from _json import..." - python
I am inspecting the JSON module of python 3.1, and am currently in /Lib/json/scanner.py. At the top of the file is the following line: from _json import make_scanner as c_make_scanner There are five .py files in the module's directory: __init__ (two leading and trailing underscores, it's formatting as bold), decoder, encoder, scanner and tool. There is no file called "json". My question is: when doing the import, where exactly is "make_scanner" coming from? Yes, I am very new to Python!
[ "It's coming from a C-compiled _json.pyd (or _json.so, etc, etc, depending on the platform) that lives elsewhere on the sys.path. You can always find out where that is in your specific Python installation by importing the module yourself and looking at its __file__, e.g.:\n>>> import _json\n>>> _json.__file__\n'/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/_json.so'\n\nAs you see, in my installation of Python 2.6, _json comes from the lib-dynload subdirectory of lib/python2.6, and the extension used on this platform is .so.\n", "It may be coming from a file, or it may be built-in. On Windows, it appears to be built-in.\nPython 3.1.2 (r312:79149, Mar 21 2010, 00:41:52) [MSC v.1500 32 bit (Intel)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import _json\n>>> _json.__file__\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: 'module' object has no attribute '__file__'\n\nand there is no _json.pyd or _json.dll in the offing.\nIf you want to see the source, having a binary file on your machine or not is irrelevant -- you'll need the SVN browser. \n" ]
[ 6, 1 ]
[]
[]
[ "import", "json", "module", "python" ]
stackoverflow_0002696125_import_json_module_python.txt
Q: How to insert and call by row and column into sqlite3 python Lets say i have a simple array of x rows and y columns with corresponding values, What is the best method to do 3 things? How to insert, update a value at a specific row column? How to select a value for each row and column, import sqlite3 con = sqlite3.connect('simple.db') c = con.cursor() c.execute('''create table simple (links text)''') con.commit() dic = {'x1':{'y1':1.0,'y2':0.0},'x2':{'y1':0.0,'y2':2.0,'y3':1.5},'x3':{'y2':2.0,'y3':1.5}} ucols = {} ## my current thoughts are collect all row values and all column values from dic and populate table row and columns accordingly how to call by row and column i havn't figured out yet ##populate rows in first column for row in dic: print row c.execute("""insert into simple ('links') values ('%s')"""%row) con.commit() ##unique columns for row in dic: print row for col in dic[row]: print col ucols[col]=dic[row][col] ##populate columns for col in ucols: print col c.execute("alter table simple add column '%s' 'float'" % col) con.commit() #functions needed ##insert values into sql by row x and column y?how to do this e.g. x1 and y2 should put in 0.0 ##I tried as follows didn't work for row in dic: for col in dic[row]: val =dic[row][col] c.execute("""update simple SET '%s' = '%f' WHERE 'links'='%s'"""%(col,val,row)) con.commit() ##update value at a specific row x and column y? ## select a value at a specific row x and column y? A: So you have a dictionary of dictionaries, that you want to convert into a SQL table. Steps I'd take Find the columns you'll need. Create the table schema. Loop through each row. Compile the set of values for each column. Insert it. So: import sqlite3 con = sqlite3.connect('simple.db') c = con.cursor() dic = { 'x1':{'y1':1.0,'y2':0.0}, 'x2':{'y1':0.0,'y2':2.0,'y3':1.5}, 'x3':{'y2':2.0,'y3':1.5} } # 1. Find the unique column names. columns = set() for cols in dic.values(): for key in cols: columns.add(key) # 2. Create the schema. col_defs = [ # Start with the column for our key name '"row_name" VARCHAR(2) NOT NULL PRIMARY KEY' ] for column in columns: col_defs.append('"%s" REAL NULL' % column) schema = "CREATE TABLE simple (%s);" % ",".join(col_defs) c.execute(schema) # 3. Loop through each row for row_name, cols in dic.items(): # Compile the data we have for this row. col_names = cols.keys() col_values = [str(val) for val in cols.values()] # Insert it. sql = 'INSERT INTO simple ("row_name", "%s") VALUES ("%s", "%s");' % ( '","'.join(col_names), row_name, '","'.join(col_values) ) c.execute(sql) Then your other questions are pretty simple: ## update value at a specific row x and column y? def set_cell(connection, x_name, y_name, value): sql = 'UPDATE simple SET %s="%s" WHERE row_name="%s"' % ( y_name, value, x_name ) connection.execute(sql) ## select a value at a specific row x and column y? def get_cell(connection, x_name, y_name): sql = 'SELECT %s FROM simple WHERE row_name="%s"' % ( y_name, x_name ) # Return the first row of results (there should be only one) # and the first column from that row return list(connection.execute(sql))[0][0]
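One caveat worth adding to the answer: sqlite3 supports ? parameter substitution for values, which sidesteps the quoting problems and SQL injection risk of building statements with % formatting. Column names cannot be parameterized, so they still have to be validated against a known list. A sketch of the same two helpers using placeholders:

    def set_cell(connection, x_name, y_name, value):
        # y_name should be checked against the known column names first;
        # only the values go in as ? parameters.
        sql = 'UPDATE simple SET "%s" = ? WHERE row_name = ?' % y_name
        connection.execute(sql, (value, x_name))
        connection.commit()

    def get_cell(connection, x_name, y_name):
        sql = 'SELECT "%s" FROM simple WHERE row_name = ?' % y_name
        row = connection.execute(sql, (x_name,)).fetchone()
        return None if row is None else row[0]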
How to insert and call by row and column into sqlite3 python
Lets say i have a simple array of x rows and y columns with corresponding values, What is the best method to do 3 things? How to insert, update a value at a specific row column? How to select a value for each row and column, import sqlite3 con = sqlite3.connect('simple.db') c = con.cursor() c.execute('''create table simple (links text)''') con.commit() dic = {'x1':{'y1':1.0,'y2':0.0},'x2':{'y1':0.0,'y2':2.0,'y3':1.5},'x3':{'y2':2.0,'y3':1.5}} ucols = {} ## my current thoughts are collect all row values and all column values from dic and populate table row and columns accordingly how to call by row and column i havn't figured out yet ##populate rows in first column for row in dic: print row c.execute("""insert into simple ('links') values ('%s')"""%row) con.commit() ##unique columns for row in dic: print row for col in dic[row]: print col ucols[col]=dic[row][col] ##populate columns for col in ucols: print col c.execute("alter table simple add column '%s' 'float'" % col) con.commit() #functions needed ##insert values into sql by row x and column y?how to do this e.g. x1 and y2 should put in 0.0 ##I tried as follows didn't work for row in dic: for col in dic[row]: val =dic[row][col] c.execute("""update simple SET '%s' = '%f' WHERE 'links'='%s'"""%(col,val,row)) con.commit() ##update value at a specific row x and column y? ## select a value at a specific row x and column y?
[ "So you have a dictionary of dictionaries, that you want to convert into a SQL table.\nSteps I'd take\n\nFind the columns you'll need.\nCreate the table schema.\nLoop through each row.\n\n\nCompile the set of values for each column.\nInsert it.\n\n\nSo:\nimport sqlite3\ncon = sqlite3.connect('simple.db')\nc = con.cursor()\n\ndic = {\n 'x1':{'y1':1.0,'y2':0.0},\n 'x2':{'y1':0.0,'y2':2.0,'y3':1.5},\n 'x3':{'y2':2.0,'y3':1.5}\n }\n\n# 1. Find the unique column names.\ncolumns = set()\nfor cols in dic.values():\n for key in cols:\n columns.add(key)\n\n# 2. Create the schema.\ncol_defs = [\n # Start with the column for our key name\n '\"row_name\" VARCHAR(2) NOT NULL PRIMARY KEY'\n ]\nfor column in columns:\n col_defs.append('\"%s\" REAL NULL' % column)\nschema = \"CREATE TABLE simple (%s);\" % \",\".join(col_defs)\nc.execute(schema)\n\n# 3. Loop through each row\nfor row_name, cols in dic.items():\n\n # Compile the data we have for this row.\n col_names = cols.keys()\n col_values = [str(val) for val in cols.values()]\n\n # Insert it.\n sql = 'INSERT INTO simple (\"row_name\", \"%s\") VALUES (\"%s\", \"%s\");' % (\n '\",\"'.join(col_names),\n row_name,\n '\",\"'.join(col_values)\n )\n c.execute(sql)\n\nThen your other questions are pretty simple:\n## update value at a specific row x and column y?\ndef set_cell(connection, x_name, y_name, value):\n sql = 'UPDATE simple SET %s=\"%s\" WHERE row_name=\"%s\"' % (\n y_name, value, x_name\n )\n connection.execute(sql)\n\n## select a value at a specific row x and column y?\ndef get_cell(connection, x_name, y_name):\n sql = 'SELECT %s FROM simple WHERE row_name=\"%s\"' % (\n y_name, x_name\n )\n # Return the first row of results (there should be only one)\n # and the first column from that row\n return list(connection.execute(sql))[0][0]\n\n" ]
[ 2 ]
[]
[]
[ "python", "sqlite" ]
stackoverflow_0002694442_python_sqlite.txt
Q: problem plotting on logscale in matplotlib in python I am trying to plot the following numbers on a log scale as a scatter plot in matplotlib. Both the quantities on the x and y axes have very different scales, and one of the variables has a huge dynamic range (nearly 0 to 12 million roughly) while the other is between nearly 0 and 2. I think it might be good to plot both on a log scale. I tried the following, for a subset of the values of the two variables: fig = plt.figure(figsize=(8, 8)) ax = fig.add_subplot(1, 1, 1) ax.set_yscale('log') ax.set_xscale('log') plt.scatter([1.341, 0.1034, 0.6076, 1.4278, 0.0374], [0.37, 0.12, 0.22, 0.4, 0.08]) The x-axes appear log scaled but the points do not appear -- only two points appear. Any idea how to fix this? Also, how can I make this log scale appear on a square axes, so that the correlation between the two variables can be interpreted from the scatter plot? thanks. A: I don't know why you only get those two points. For this case, you can manually adjust the limits to make sure all your points fit. I ran: import matplotlib.pyplot as plt fig = plt.figure(figsize=(8, 8)) # You were missing the = ax = fig.add_subplot(1, 1, 1) ax.set_yscale('log') ax.set_xscale('log') plt.scatter([1.341, 0.1034, 0.6076, 1.4278, 0.0374], [0.37, 0.12, 0.22, 0.4, 0.08]) plt.xlim(0.01, 10) # Fix the x limits to fit all the points plt.show() I'm not sure I understand understand what "Also, how can I make this log scale appear on a square axes, so that the correlation between the two variables can be interpreted from the scatter plot?" means. Perhaps someone else will understand, or maybe you can clarify? A: You can also just do, plt.loglog([1.341, 0.1034, 0.6076, 1.4278, 0.0374], [0.37, 0.12, 0.22, 0.4, 0.08], 'o') This produces the plot you want with properly scaled axes, though it doesn't have all the flexibility of a true scatter plot.
problem plotting on logscale in matplotlib in python
I am trying to plot the following numbers on a log scale as a scatter plot in matplotlib. Both the quantities on the x and y axes have very different scales, and one of the variables has a huge dynamic range (nearly 0 to 12 million roughly) while the other is between nearly 0 and 2. I think it might be good to plot both on a log scale. I tried the following, for a subset of the values of the two variables: fig = plt.figure(figsize=(8, 8)) ax = fig.add_subplot(1, 1, 1) ax.set_yscale('log') ax.set_xscale('log') plt.scatter([1.341, 0.1034, 0.6076, 1.4278, 0.0374], [0.37, 0.12, 0.22, 0.4, 0.08]) The x-axes appear log scaled but the points do not appear -- only two points appear. Any idea how to fix this? Also, how can I make this log scale appear on a square axes, so that the correlation between the two variables can be interpreted from the scatter plot? thanks.
[ "I don't know why you only get those two points. For this case, you can manually adjust the limits to make sure all your points fit. I ran:\nimport matplotlib.pyplot as plt\n\nfig = plt.figure(figsize=(8, 8)) # You were missing the =\nax = fig.add_subplot(1, 1, 1)\nax.set_yscale('log')\nax.set_xscale('log')\nplt.scatter([1.341, 0.1034, 0.6076, 1.4278, 0.0374],\n [0.37, 0.12, 0.22, 0.4, 0.08])\nplt.xlim(0.01, 10) # Fix the x limits to fit all the points\nplt.show()\n\nI'm not sure I understand understand what \"Also, how can I make this log scale appear on a square axes, so that the correlation between the two variables can be interpreted from the scatter plot?\" means. Perhaps someone else will understand, or maybe you can clarify? \n", "You can also just do,\nplt.loglog([1.341, 0.1034, 0.6076, 1.4278, 0.0374], \n [0.37, 0.12, 0.22, 0.4, 0.08], 'o')\n\nThis produces the plot you want with properly scaled axes, though it doesn't have all the flexibility of a true scatter plot.\n" ]
[ 3, 2 ]
[]
[]
[ "graphing", "numpy", "plot", "python", "scipy" ]
stackoverflow_0002695598_graphing_numpy_plot_python_scipy.txt
Q: Getting an entry before and after a given entry in a Django Queryset I am creating a simple blog as part of a website and I am getting stuck on something that I am assuming is simple. If I call any blog post, say by it's title, from a queryset, how can I get the entry before and after the post in it's published order. I can iterate over the whole thing, get the position of the entry I have and use that to call the one before and the one after. But that is a long bit of code for something that I am sure I can do more simply. What I want would be something like this: next_post = Posts.object.filter(title=current_title).order_by("-published")[-1] Of course because of the filter, it is not going to work, but just to give you the idea of what I am looking for. A: You're looking for Posts.get_{next,previous}_by_FOO().
Getting an entry before and after a given entry in a Django Queryset
I am creating a simple blog as part of a website and I am getting stuck on something that I am assuming is simple. If I call any blog post, say by it's title, from a queryset, how can I get the entry before and after the post in it's published order. I can iterate over the whole thing, get the position of the entry I have and use that to call the one before and the one after. But that is a long bit of code for something that I am sure I can do more simply. What I want would be something like this: next_post = Posts.object.filter(title=current_title).order_by("-published")[-1] Of course because of the filter, it is not going to work, but just to give you the idea of what I am looking for.
[ "You're looking for Posts.get_{next,previous}_by_FOO().\n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0002696677_django_python.txt
Q: Python, concise way to test membership in collection using partial match What is the pythonic way to test if there is a tuple starting with another tuple in collection? actually, I am really after the index of match, but I can probably figure out from test example for example: c = ((0,1),(2,3)) # (0,) should match first element, (3,)should match no element I should add my python is 2.4 and/or 2.5 thanks A: Edit: Thanks to the OP for the addition explanation of the problem. S.Mark's nested list comprehensions are pretty wicked; check 'em out. I might opt to use an auxiliary function: def tup_cmp(mytup, mytups): return any(x for x in mytups if mytup == x[:len(mytup)]) >>> c = ((0, 1, 2, 3), (2, 3, 4, 5)) >>> tup_cmp((0,2),c) False >>> tup_cmp((0,1),c) True >>> tup_cmp((0,1,2,3),c) True >>> tup_cmp((0,1,2),c) True >>> tup_cmp((2,3,),c) True >>> tup_cmp((2,4,),c) False Original answer: Does using a list-comprehension work for you?: c = ((0,1),(2,3)) [i for i in c if i[0] == 0] # result: [(0, 1)] [i for i in c if i[0] == 3] # result: [] List comps were introduced in 2.0. A: >>> c = ((0,1),(2,3)) >>> [x for x in c if all(1 if len(set(y)) is 1 else 0 for y in zip((0,),x))] [(0, 1)] >>> [x for x in c if all(1 if len(set(y)) is 1 else 0 for y in zip((0,1),x))] [(0, 1)] >>> [x for x in c if all(1 if len(set(y)) is 1 else 0 for y in zip((2,),x))] [(2, 3)] >>> [x for x in c if all(1 if len(set(y)) is 1 else 0 for y in zip((2,3),x))] [(2, 3)] >>> [x for x in c if all(1 if len(set(y)) is 1 else 0 for y in zip((4,),x))] [] With larger Tuple >>> c=((0,1,2,3),(2,3,4,5)) >>> [x for x in c if all(1 if len(set(y)) is 1 else 0 for y in zip((0,1),x))] [(0, 1, 2, 3)] >>> [x for x in c if all(1 if len(set(y)) is 1 else 0 for y in zip((0,2),x))] [] >>> [x for x in c if all(1 if len(set(y)) is 1 else 0 for y in zip((2,),x))] [(2, 3, 4, 5)] >>> [x for x in c if all(1 if len(set(y)) is 1 else 0 for y in zip((2,3,4),x))] [(2, 3, 4, 5)] >>> [x for x in c if all(1 if len(set(y)) is 1 else 0 for y in zip((4,),x))] [] >>> Edit: more compact one would be >>> [x for x in c if all(len(set(y))==1 for y in zip((0,),x))] [(0, 1, 2, 3)] A: my own solution, combination of two other answers f = lambda c, t: [x for x in c if t == x[:len(t)]]
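Since the question notes that the index of the match is what is really wanted, here is a small variant of the prefix test that returns the position of the first matching tuple (or -1 if nothing matches); it only uses enumerate and slicing, so it works on Python 2.4 and 2.5:

    def index_of_prefix(tuples, prefix):
        # Return the index of the first tuple whose leading elements
        # equal `prefix`, or -1 if no tuple matches.
        for i, t in enumerate(tuples):
            if t[:len(prefix)] == prefix:
                return i
        return -1

    c = ((0, 1), (2, 3))
    print index_of_prefix(c, (0,))    # 0
    print index_of_prefix(c, (2, 3))  # 1
    print index_of_prefix(c, (3,))    # -1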
Python, concise way to test membership in collection using partial match
What is the pythonic way to test if there is a tuple starting with another tuple in collection? actually, I am really after the index of match, but I can probably figure out from test example for example: c = ((0,1),(2,3)) # (0,) should match first element, (3,)should match no element I should add my python is 2.4 and/or 2.5 thanks
[ "Edit:\nThanks to the OP for the addition explanation of the problem.\nS.Mark's nested list comprehensions are pretty wicked; check 'em out.\nI might opt to use an auxiliary function:\ndef tup_cmp(mytup, mytups):\n return any(x for x in mytups if mytup == x[:len(mytup)])\n\n>>> c = ((0, 1, 2, 3), (2, 3, 4, 5))\n>>> tup_cmp((0,2),c)\nFalse\n>>> tup_cmp((0,1),c)\nTrue\n>>> tup_cmp((0,1,2,3),c)\nTrue\n>>> tup_cmp((0,1,2),c)\nTrue\n>>> tup_cmp((2,3,),c)\nTrue\n>>> tup_cmp((2,4,),c)\nFalse\n\n\nOriginal answer:\nDoes using a list-comprehension work for you?:\nc = ((0,1),(2,3))\n\n[i for i in c if i[0] == 0]\n# result: [(0, 1)]\n\n[i for i in c if i[0] == 3]\n# result: []\n\nList comps were introduced in 2.0.\n", ">>> c = ((0,1),(2,3))\n>>> [x for x in c if all(1 if len(set(y)) is 1 else 0 for y in zip((0,),x))]\n[(0, 1)]\n>>> [x for x in c if all(1 if len(set(y)) is 1 else 0 for y in zip((0,1),x))]\n[(0, 1)]\n>>> [x for x in c if all(1 if len(set(y)) is 1 else 0 for y in zip((2,),x))]\n[(2, 3)]\n>>> [x for x in c if all(1 if len(set(y)) is 1 else 0 for y in zip((2,3),x))]\n[(2, 3)]\n>>> [x for x in c if all(1 if len(set(y)) is 1 else 0 for y in zip((4,),x))]\n[]\n\nWith larger Tuple\n>>> c=((0,1,2,3),(2,3,4,5))\n>>> [x for x in c if all(1 if len(set(y)) is 1 else 0 for y in zip((0,1),x))]\n[(0, 1, 2, 3)]\n>>> [x for x in c if all(1 if len(set(y)) is 1 else 0 for y in zip((0,2),x))]\n[]\n>>> [x for x in c if all(1 if len(set(y)) is 1 else 0 for y in zip((2,),x))]\n[(2, 3, 4, 5)]\n>>> [x for x in c if all(1 if len(set(y)) is 1 else 0 for y in zip((2,3,4),x))]\n[(2, 3, 4, 5)]\n>>> [x for x in c if all(1 if len(set(y)) is 1 else 0 for y in zip((4,),x))]\n[]\n>>>\n\nEdit: more compact one would be\n>>> [x for x in c if all(len(set(y))==1 for y in zip((0,),x))]\n[(0, 1, 2, 3)]\n\n", "my own solution, combination of two other answers\nf = lambda c, t: [x for x in c if t == x[:len(t)]]\n\n" ]
[ 3, 2, 1 ]
[]
[]
[ "collections", "membership", "python" ]
stackoverflow_0002696432_collections_membership_python.txt
Q: how to make a thread of never stop, and write something to database every 10 second i using gae and django this is my code: class LogText(db.Model): content = db.StringProperty(multiline=True) class MyThread(threading.Thread): def __init__(self,threadname): threading.Thread.__init__(self, name=threadname) def run(self,request): log=LogText() log.content=request.POST.get('content',None) log.put() def Log(request): thr = MyThread('haha') thr.run(request) return HttpResponse('') A: It's impossible to do in GAE since all requests (including cron job) have 30 seconds deadline.
how to make a thread of never stop, and write something to database every 10 second
i using gae and django this is my code: class LogText(db.Model): content = db.StringProperty(multiline=True) class MyThread(threading.Thread): def __init__(self,threadname): threading.Thread.__init__(self, name=threadname) def run(self,request): log=LogText() log.content=request.POST.get('content',None) log.put() def Log(request): thr = MyThread('haha') thr.run(request) return HttpResponse('')
[ "It's impossible to do in GAE since all requests (including cron job) have 30 seconds deadline.\n" ]
[ 1 ]
[]
[]
[ "django", "google_app_engine", "multithreading", "python" ]
stackoverflow_0002696644_django_google_app_engine_multithreading_python.txt
Q: How to override inner class methods if the inner class is defined as a property of the top class I have a code snippet like this class A(object): class b: def print_hello(self): print "Hello world" b = property(b) And I want to override the inner class b (please dont worry about the lowercase name) behaviour. Say, I want to add a new method or I want to change an existing method, like: class C(A): class b(A.b): def print_hello(self): print "Inner Class: Hello world" b = property(b) Now if I create C's object as c = C(), and call c.b I get TypeError: 'property' object is not callable error. How would I get pass this and call print_hello of the extended inner class? Disclaimer: I dont want to change the code for A class. Update: The base class 'A' tries to simulate a class which I have seen in one of the opensource apis. So thats why I dont want to change the core api, but extend it for my own requirement. A: I'm not really sure why you would define the inner class as a property of the outer class. (I'm no Python expert, so perhaps there's a reason I'm not aware of). It seems to work fine without the properties: class A(object): class b: def print_hello(self): print "A Hello world" class C(A): class b(A.b): def print_hello(self): print "C Hello world" outer = C() inner = outer.b() inner.print_hello() # prints "C Hello world" A: The reason why this fails is that due to the b = property(b) line A.b is not the class you defined, but a property object. I'm actually a bit surprised that it doesn't fail on class creation. It would work if you used A.b.fget as the base class. But the correct answer would be: don't do that. Python doesn't have inner classes in any regular sense, a class defined in class definition scope is no different from a regular class. In fact this code has the exact same end result: class A(object): pass class b(object): def print_hello(self): print("Hello world") A.b = property(b) I'm not sure what exactly it is that you are trying to achieve. If I understand correctly you have a parent class that has a property that returns instances of another class and you want to have a subclass that has a property that returns instances of a subclass of the other class. To subclass the returned class you need to get a reference to that class to use it as the base. Then you can just override the property to return instances of your subclass. If you use a property that looks up the class from the instance you actually don't need to override the property definition: class A(object): class B(object): def print_hello(self): print("Hello from A.B") b = property(lambda self: self.B()) class C(A): class B(A.B): def print_hello(self): print("Hello from C.B") A().b.print_hello() C().b.print_hello() A: Whenever you access an A's 'b' property, a reference to the A instance is passed to the getter - which is then passed as an argument to b's constructor. That breaks your construction right from the start, because b's (default) constructor doesn't take any extra arguments. It looks like your C class doesn't work because it's inner 'b' class inherits from A's 'b' property object. I'm curious as to why you would want a property here - it does nothing but complicating things. In fact, removing it would allow you to replace A's inner 'b' class with another class on a per-instance base, without having to resort to inheritance at all. Can you give us some more information to work with, e.g. the intentions and purpose behind this construction?
How to override inner class methods if the inner class is defined as a property of the top class
I have a code snippet like this class A(object): class b: def print_hello(self): print "Hello world" b = property(b) And I want to override the inner class b (please dont worry about the lowercase name) behaviour. Say, I want to add a new method or I want to change an existing method, like: class C(A): class b(A.b): def print_hello(self): print "Inner Class: Hello world" b = property(b) Now if I create C's object as c = C(), and call c.b I get TypeError: 'property' object is not callable error. How would I get pass this and call print_hello of the extended inner class? Disclaimer: I dont want to change the code for A class. Update: The base class 'A' tries to simulate a class which I have seen in one of the opensource apis. So thats why I dont want to change the core api, but extend it for my own requirement.
[ "I'm not really sure why you would define the inner class as a property of the outer class. (I'm no Python expert, so perhaps there's a reason I'm not aware of).\nIt seems to work fine without the properties:\nclass A(object):\n class b:\n def print_hello(self):\n print \"A Hello world\"\n\nclass C(A):\n class b(A.b):\n def print_hello(self):\n print \"C Hello world\"\n\n\nouter = C()\ninner = outer.b()\ninner.print_hello() # prints \"C Hello world\"\n\n", "The reason why this fails is that due to the b = property(b) line A.b is not the class you defined, but a property object. I'm actually a bit surprised that it doesn't fail on class creation. It would work if you used A.b.fget as the base class. But the correct answer would be: don't do that. Python doesn't have inner classes in any regular sense, a class defined in class definition scope is no different from a regular class. In fact this code has the exact same end result:\nclass A(object):\n pass\n\nclass b(object):\n def print_hello(self):\n print(\"Hello world\")\n\nA.b = property(b)\n\nI'm not sure what exactly it is that you are trying to achieve. If I understand correctly you have a parent class that has a property that returns instances of another class and you want to have a subclass that has a property that returns instances of a subclass of the other class. To subclass the returned class you need to get a reference to that class to use it as the base. Then you can just override the property to return instances of your subclass. If you use a property that looks up the class from the instance you actually don't need to override the property definition:\nclass A(object):\n class B(object):\n def print_hello(self):\n print(\"Hello from A.B\")\n b = property(lambda self: self.B())\n\nclass C(A):\n class B(A.B):\n def print_hello(self):\n print(\"Hello from C.B\")\n\nA().b.print_hello()\nC().b.print_hello()\n\n", "Whenever you access an A's 'b' property, a reference to the A instance is passed to the getter - which is then passed as an argument to b's constructor. That breaks your construction right from the start, because b's (default) constructor doesn't take any extra arguments.\nIt looks like your C class doesn't work because it's inner 'b' class inherits from A's 'b' property object.\nI'm curious as to why you would want a property here - it does nothing but complicating things. In fact, removing it would allow you to replace A's inner 'b' class with another class on a per-instance base, without having to resort to inheritance at all. Can you give us some more information to work with, e.g. the intentions and purpose behind this construction?\n" ]
[ 1, 1, 0 ]
[]
[]
[ "inheritance", "inner_classes", "python" ]
stackoverflow_0002697062_inheritance_inner_classes_python.txt
Q: How to Disassemble an object creation in Python? Having a class like this: class Spam(object): def __init__(self, name=''): self.name = name eggs = Spam('systempuntoout') using dis, is it possible to see how an instance of a class and the respective hex Identity are created? A: Yes, but it isn't obvious from the output, which is at the level of Python bytecode, e.g.: >>> class Foo(object): ... def f(x): return x * x ... >>> dis.dis(Foo) Disassembly of f: 2 0 LOAD_FAST 0 (x) 3 LOAD_FAST 0 (x) 6 BINARY_MULTIPLY 7 RETURN_VALUE It doesn't take much to figure out what Foo.f is doing from the above dump, but it quickly becomes unreadable to most people as the size of the code grows.
How to Disassemble an object creation in Python?
Having a class like this: class Spam(object): def __init__(self, name=''): self.name = name eggs = Spam('systempuntoout') using dis, is it possible to see how an instance of a class and the respective hex Identity are created?
[ "Yes, but it isn't obvious from the output, which is at the level of Python bytecode, e.g.:\n>>> class Foo(object):\n... def f(x): return x * x\n... \n>>> dis.dis(Foo)\nDisassembly of f:\n 2 0 LOAD_FAST 0 (x)\n 3 LOAD_FAST 0 (x)\n 6 BINARY_MULTIPLY \n 7 RETURN_VALUE \n\nIt doesn't take much to figure out what Foo.f is doing from the above dump, but it quickly becomes unreadable to most people as the size of the code grows.\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0002697355_python.txt
Q: how to dispose a incoming email and then send some words back using google-app-engine I read the doc: from google.appengine.api import mail mail.send_mail(sender="support@example.com", to="Albert Johnson <Albert.Johnson@example.com>", subject="Your account has been approved", body=""" Dear Albert: Your example.com account has been approved. You can now visit http://www.example.com/ and sign in using your Google Account to access new features. Please let us know if you have any questions. The example.com Team """) I know how to send an email using GAE, but how to check an incoming email and then do something? Thanks A: Here's the page in the docs that deals with how to receive email.
how to dispose a incoming email and then send some words back using google-app-engine
I read the doc: from google.appengine.api import mail mail.send_mail(sender="support@example.com", to="Albert Johnson <Albert.Johnson@example.com>", subject="Your account has been approved", body=""" Dear Albert: Your example.com account has been approved. You can now visit http://www.example.com/ and sign in using your Google Account to access new features. Please let us know if you have any questions. The example.com Team """) I know how to send an email using GAE, but how to check an incoming email and then do something? Thanks
[ "Here's the page in the docs that deals with how to receive email.\n" ]
[ 1 ]
[]
[]
[ "django", "google_app_engine", "incoming_mail", "python" ]
stackoverflow_0002696955_django_google_app_engine_incoming_mail_python.txt
Q: Urllib and concurrency - Python I'm serving a python script through WSGI. The script accesses a web resource through urllib, computes the resource and then returns a value. Problem is that urllib doesn't seem to handle many concurrent requests to a precise URL. As soon as the requests go up to 30 concurrent request, the requests slow to a crawl! :( Help would be much appreciated! :D A: Yeah, urllib doesn't do much concurrency. Every time you urlopen, it has to set up the connection, send the HTTP request, and get the status code and headers from the response (and possibly handle a redirect from there). So although you get to read the body of the response at your own pace, the majority of the waiting time for the request will have already happened. If you need more concurrency, you'll probably have to pick up some kind of asynchronous network IO tool (eg. Eventlet seems to have a suitable example on its front page), or just launch each urlopen in its own thread.
Urllib and concurrency - Python
I'm serving a python script through WSGI. The script accesses a web resource through urllib, computes the resource and then returns a value. Problem is that urllib doesn't seem to handle many concurrent requests to a precise URL. As soon as the requests go up to 30 concurrent request, the requests slow to a crawl! :( Help would be much appreciated! :D
[ "Yeah, urllib doesn't do much concurrency. Every time you urlopen, it has to set up the connection, send the HTTP request, and get the status code and headers from the response (and possibly handle a redirect from there). So although you get to read the body of the response at your own pace, the majority of the waiting time for the request will have already happened.\nIf you need more concurrency, you'll probably have to pick up some kind of asynchronous network IO tool (eg. Eventlet seems to have a suitable example on its front page), or just launch each urlopen in its own thread.\n" ]
[ 3 ]
[]
[]
[ "concurrency", "http", "python", "urllib", "wsgi" ]
stackoverflow_0002697349_concurrency_http_python_urllib_wsgi.txt
Q: How GAE emulator limits list of available Python modules? I installed Python Mock module using PIP. When I try to import mock running under 'dev_appserver', GAE says that it can't find module 'mock'. import mock works perfectly in Python interpreter. I understand that dev_appserver behaves absolutely correctly because I can't install modules with PIP on GAE servers. My question is how technically dev_appserver filters list of modules that can be loaded? A: The dev_appserver uses import hooks to prevent importing modules that shouldn't be available. The relevant code is here, but be warned - it's easily the most complicated bit of the dev_appserver!
How GAE emulator limits list of available Python modules?
I installed Python Mock module using PIP. When I try to import mock running under 'dev_appserver', GAE says that it can't find module 'mock'. import mock works perfectly in Python interpreter. I understand that dev_appserver behaves absolutely correctly because I can't install modules with PIP on GAE servers. My question is how technically dev_appserver filters list of modules that can be loaded?
[ "The dev_appserver uses import hooks to prevent importing modules that shouldn't be available. The relevant code is here, but be warned - it's easily the most complicated bit of the dev_appserver!\n" ]
[ 2 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0002697457_google_app_engine_python.txt
Q: Python syntax error: can't assign to operator in module but works in interpreter I have a string a and I would like to split it in half depending on its length, so I have a-front = len(a) / 2 + len(a) % 2 this works fine in the interpreter but when i run the module from the command line python gives me a SyntaxError: can't assign to operator. What could be the issue here. A: You might mistype hyphen and underscore, try a_front = len(a) / 2 + len(a) % 2
Python syntax error: can't assign to operator in module but works in interpreter
I have a string a and I would like to split it in half depending on its length, so I have a-front = len(a) / 2 + len(a) % 2 this works fine in the interpreter but when i run the module from the command line python gives me a SyntaxError: can't assign to operator. What could be the issue here.
[ "You might mistype hyphen and underscore, try\na_front = len(a) / 2 + len(a) % 2\n\n" ]
[ 13 ]
[]
[]
[ "python", "syntax" ]
stackoverflow_0002697610_python_syntax.txt
Q: Multiprocessing Bomb I was working the following example from Doug Hellmann tutorial on multiprocessing: import multiprocessing def worker(): """worker function""" print 'Worker' return if __name__ == '__main__': jobs = [] for i in range(5): p = multiprocessing.Process(target=worker) jobs.append(p) p.start() When I tried to run it outside the if statement: import multiprocessing def worker(): """worker function""" print 'Worker' return jobs = [] for i in range(5): p = multiprocessing.Process(target=worker) jobs.append(p) p.start() It started spawning processes non-stop, and the only way to stop it was reboot! Why would that happen? Why it did not generate 5 processes and exit? Why do I need the if statement? A: On Windows there is no fork() routine, so multiprocessing imports the current module to get access to the worker function. Without the if statement the child process starts its own children and so on. A: Note that the documentation mentions that you need the if statement on windows (here). However, the documentation doesn't say that this kills your machine almost instantly, requiring a reboot. So this can be quite confusing, especially if the use of multiprocessing happens in some function deep inside the code. No matter how deeply hidden it is, you still need the if check in the main program file. This pretty much rules out using multiprocessing in any kind of library. multiprocessing in general seems a bit rough. It might have the interface of the thread interface, but there is just no simple way around the GIL. For more complex parallelization problems I would also look at the subprocess module or some other libraries (like mpi4py or Parallel Python). A: I don't know about multiprocessing, but I suspect that it spawns child processes that have a different __name__ global. By removing the test, you are making every child start the spawning process again.
Multiprocessing Bomb
I was working the following example from Doug Hellmann tutorial on multiprocessing: import multiprocessing def worker(): """worker function""" print 'Worker' return if __name__ == '__main__': jobs = [] for i in range(5): p = multiprocessing.Process(target=worker) jobs.append(p) p.start() When I tried to run it outside the if statement: import multiprocessing def worker(): """worker function""" print 'Worker' return jobs = [] for i in range(5): p = multiprocessing.Process(target=worker) jobs.append(p) p.start() It started spawning processes non-stop, and the only way to stop it was reboot! Why would that happen? Why it did not generate 5 processes and exit? Why do I need the if statement?
[ "On Windows there is no fork() routine, so multiprocessing imports the current module to get access to the worker function. Without the if statement the child process starts its own children and so on.\n", "Note that the documentation mentions that you need the if statement on windows (here).\nHowever, the documentation doesn't say that this kills your machine almost instantly, requiring a reboot. So this can be quite confusing, especially if the use of multiprocessing happens in some function deep inside the code. No matter how deeply hidden it is, you still need the if check in the main program file. This pretty much rules out using multiprocessing in any kind of library.\nmultiprocessing in general seems a bit rough. It might have the interface of the thread interface, but there is just no simple way around the GIL.\nFor more complex parallelization problems I would also look at the subprocess module or some other libraries (like mpi4py or Parallel Python).\n", "I don't know about multiprocessing, but I suspect that it spawns child processes that have a different __name__ global. By removing the test, you are making every child start the spawning process again.\n" ]
[ 47, 10, 4 ]
[]
[]
[ "multiprocessing", "python" ]
stackoverflow_0002697640_multiprocessing_python.txt
Q: Socket Lose Connection I know Twisted can do this well but what about just plain socket? How'd you tell if you randomly lost your connection in socket? Like, If my internet was to go out of a second and come back on. A: I'm assuming you're talking about TCP. If your internet connection is out for a second, you might not lose the TCP connection at all, it'll just retransmit and resume operation. There's ofcourse 100's of other reasons you could lose the connection(e.g. a NAT gateway inbetween decided to throw out the connection silently. The other end gets hit by a nuke. Your router burns up. The guy at the other end yanks out his network cable, etc. etc.) Here's what you should do if you need to detect dead peers/closed sockets etc.: Read from the socket or in any other way wait for events of incoming data on it. This allows you to detect when the connection was gracefully closed, or an error occured on it (reading on it returns 0 or -1) - atleast if the other end is still able to send a TCP FIN/RST or ICMP packet to your host. Write to the socket - e.g. send some heartbeats every N seconds. Just reading from the socket won't detect the problem when the other end fails silently. If that PC goes offline, it can obviously not tell you that it did - so you'll have to send it something and see if it responds. If you don't want to write heartbeats every N seconds, you can atleast turn on TCP keepalive - and you'll eventually get notified if the peer is dead. You still have to read from the socket, and the keepalive are usually sent every 2 hours by default. That's still better than keeping dead sockets around for months though. A: If the internet comes and goes momentarily, you might not actually lose the TCP session. If you do, the socket API will throw some kind of exception, usually socket.timeout.
Socket Lose Connection
I know Twisted can do this well but what about just plain socket? How'd you tell if you randomly lost your connection in socket? Like, If my internet was to go out of a second and come back on.
[ "I'm assuming you're talking about TCP.\nIf your internet connection is out for a second, you might not lose the TCP connection at all, it'll just retransmit and resume operation.\nThere's ofcourse 100's of other reasons you could lose the connection(e.g. a NAT gateway inbetween decided to throw out the connection silently. The other end gets hit by a nuke. Your router burns up. The guy at the other end yanks out his network cable, etc. etc.)\nHere's what you should do if you need to detect dead peers/closed sockets etc.:\n\nRead from the socket or in any other way wait for events of incoming data on it. This allows you to detect when the connection was gracefully closed, or an error occured on it (reading on it returns 0 or -1) - atleast if the other end is still able to send a TCP FIN/RST or ICMP packet to your host.\nWrite to the socket - e.g. send some heartbeats every N seconds. Just reading from the socket won't detect the problem when the other end fails silently. If that PC goes offline, it can obviously not tell you that it did - so you'll have to send it something and see if it responds.\nIf you don't want to write heartbeats every N seconds, you can atleast turn on TCP keepalive - and you'll eventually get notified if the peer is dead. You still have to read from the socket, and the keepalive are usually sent every 2 hours by default. That's still better than keeping dead sockets around for months though.\n\n", "If the internet comes and goes momentarily, you might not actually lose the TCP session. If you do, the socket API will throw some kind of exception, usually socket.timeout.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "sockets" ]
stackoverflow_0002697989_python_sockets.txt
Q: How to start a program with Python? How to start a program with Python? I thougt this would be very easy like: open(r"C:\Program Files\Mozilla Firefox\Firefox.exe") But nothing happens. How to do this? Thanks in advance. A: In general you can do that using subprocess.call >>> from subprocess import call >>> call(r"C:\Program Files\Mozilla Firefox\Firefox.exe") But if all you want to do is open a page in a browser you can do: >>> import webbrowser >>> webbrowser.open('http://stackoverflow.com/') True See http://docs.python.org/library/subprocess.html and http://docs.python.org/library/webbrowser.html . A: You are opening the file to read its content, instead try subprocess module http://docs.python.org/library/subprocess.html import subprocess subprocess.Popen([r"C:\Program Files\Mozilla Firefox\Firefox.exe"]) A: try os.system() and read up on alternatives in the subprocess module.
How to start a program with Python?
How to start a program with Python? I thought this would be very easy, like: open(r"C:\Program Files\Mozilla Firefox\Firefox.exe") But nothing happens. How do I do this? Thanks in advance.
[ "In general you can do that using subprocess.call \n>>> from subprocess import call\n>>> call(r\"C:\\Program Files\\Mozilla Firefox\\Firefox.exe\")\n\nBut if all you want to do is open a page in a browser you can do:\n>>> import webbrowser\n>>> webbrowser.open('http://stackoverflow.com/')\nTrue\n\nSee http://docs.python.org/library/subprocess.html and http://docs.python.org/library/webbrowser.html .\n", "You are opening the file to read its content, instead try subprocess module\nhttp://docs.python.org/library/subprocess.html\nimport subprocess\nsubprocess.Popen([r\"C:\\Program Files\\Mozilla Firefox\\Firefox.exe\"])\n\n", "try os.system() and read up on alternatives in the subprocess module.\n" ]
[ 13, 7, 2 ]
[]
[]
[ "load", "python" ]
stackoverflow_0002698331_load_python.txt
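A short sketch contrasting the two subprocess styles mentioned in the answers: call() blocks until the program exits, while Popen() returns immediately. The Firefox path is the one from the question and only works if Firefox is actually installed there:

import subprocess

firefox = r"C:\Program Files\Mozilla Firefox\Firefox.exe"

# Blocking: call() waits for the program to exit and returns its exit code.
ret = subprocess.call([firefox, "http://stackoverflow.com/"])
print "Firefox exited with code", ret

# Non-blocking: Popen() starts the program and hands back a process object.
proc = subprocess.Popen([firefox])
print "started Firefox with pid", proc.pid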
Q: Why does Fabric display the disconnect from server message for almost 2 minutes? Fabric displays Disconnecting from username@server... done. for almost 2 minutes prior to showing a new command prompt whenever I issue a fab command. This problem exists when using Fabric commands issued to both an internal server and a Rackspace cloud server. Below I've included the auth.log from the server, and I didn't see anything in the logs on my MacBook. Any thoughts as to what the problem is? Server's SSH auth.log with LogLevel VERBOSE Apr 21 13:30:52 qsandbox01 sshd[19503]: Accepted password for mrankin from 10.10.100.106 port 52854 ssh2 Apr 21 13:30:52 qsandbox01 sshd[19503]: pam_unix(sshd:session): session opened for user mrankin by (uid=0) Apr 21 13:30:52 qsandbox01 sudo: mrankin : TTY=unknown ; PWD=/home/mrankin ; USER=root ; COMMAND=/bin/bash -l -c apache2ctl graceful Apr 21 13:30:53 qsandbox01 sshd[19503]: pam_unix(sshd:session): session closed for user mrankin Server Configuration OS: Ubuntu 9.10 and Ubuntu 6.10 (tested 4 servers with those OSes) OpenSSH: Ubuntu package version 1.5.1p1-6ubuntu2 Client Configuration OS: Mac OS X 10.6.3 Fabric ver 0.9 Vritualenv ver 1.4.7 pip ver 0.7 Simple fabfile.py Used for Testing The problem persists even when I just run fab -H server_ip host_type with the following fabfile. from fabric.api import run def host_type(): run('uname -s') Thoughts on Cause of the Issue I'm not certain how long this problem has persisted, but below are some things that have changed since I started to notice the slow server disconnect using Fabric. I recreated my virtualenv's using virtualenv 1.4.7, virtualenvwrapper 2.1, and pip 0.7. Not sure if this is related, but it is a thought since I run my fabfiles from within a virtualenv. I enabled OS X's firewall. I disabled OS X's firewall and the problem persisted, so this is not the issue. A: Solution The problem no longer persists after I issued the following command in my virtualenv: pip install -U paramiko This installed paramiko-1.7.6 and pycrypto-2.0.1. Previously, I had paramiko-1.7.4 and pycrypto-2.0.1. Appears that paramiko was the culprit given that the pycrypto version didn't change. At a minimum there appears to be an interaction between paramiko 1.7.4 and Fabric 0.9 that is fixed by upgrading paramiko to 1.7.6. Note: I upgraded to paramiko-1.7.6 in one virtualenv and confirmed that the problem went away. I then activated another virtualenv that still had paramiko-1.7.4 and confirmed that the problem still persisted, which it did. Then I upgraded paramiko from 1.7.4 to 1.7.6 and confirmed that the problem went away in that virtualenv as well. A: Thanks for keeping track of this here. I just want to note for any readers that Paramiko 1.7.4 has been previously known to be stable with Fabric 0.9, but in the last week or two several users have started exhibiting this or similar problems (disconnect timeouts) so I'm guessing some other component (Python upgrade, or remote server package upgrade, or something) is coming into play that is tipping off a bug in 1.7.4. I will be checking out the changelogs for Paramiko 1.7.5/1.7.6 and gathering more info about peoples' platforms/Python versions/etc, to try and see if a pattern emerges. EDIT: Newly created Redmine ticket for this issue is here: http://code.fabfile.org/issues/show/158
Why does Fabric display the disconnect from server message for almost 2 minutes?
Fabric displays Disconnecting from username@server... done. for almost 2 minutes prior to showing a new command prompt whenever I issue a fab command. This problem exists when using Fabric commands issued to both an internal server and a Rackspace cloud server. Below I've included the auth.log from the server, and I didn't see anything in the logs on my MacBook. Any thoughts as to what the problem is? Server's SSH auth.log with LogLevel VERBOSE Apr 21 13:30:52 qsandbox01 sshd[19503]: Accepted password for mrankin from 10.10.100.106 port 52854 ssh2 Apr 21 13:30:52 qsandbox01 sshd[19503]: pam_unix(sshd:session): session opened for user mrankin by (uid=0) Apr 21 13:30:52 qsandbox01 sudo: mrankin : TTY=unknown ; PWD=/home/mrankin ; USER=root ; COMMAND=/bin/bash -l -c apache2ctl graceful Apr 21 13:30:53 qsandbox01 sshd[19503]: pam_unix(sshd:session): session closed for user mrankin Server Configuration OS: Ubuntu 9.10 and Ubuntu 6.10 (tested 4 servers with those OSes) OpenSSH: Ubuntu package version 1.5.1p1-6ubuntu2 Client Configuration OS: Mac OS X 10.6.3 Fabric ver 0.9 Vritualenv ver 1.4.7 pip ver 0.7 Simple fabfile.py Used for Testing The problem persists even when I just run fab -H server_ip host_type with the following fabfile. from fabric.api import run def host_type(): run('uname -s') Thoughts on Cause of the Issue I'm not certain how long this problem has persisted, but below are some things that have changed since I started to notice the slow server disconnect using Fabric. I recreated my virtualenv's using virtualenv 1.4.7, virtualenvwrapper 2.1, and pip 0.7. Not sure if this is related, but it is a thought since I run my fabfiles from within a virtualenv. I enabled OS X's firewall. I disabled OS X's firewall and the problem persisted, so this is not the issue.
[ "Solution\nThe problem no longer persists after I issued the following command in my virtualenv:\npip install -U paramiko\n\nThis installed paramiko-1.7.6 and pycrypto-2.0.1. Previously, I had paramiko-1.7.4 and pycrypto-2.0.1.\nAppears that paramiko was the culprit given that the pycrypto version didn't change. At a minimum there appears to be an interaction between paramiko 1.7.4 and Fabric 0.9 that is fixed by upgrading paramiko to 1.7.6.\nNote: I upgraded to paramiko-1.7.6 in one virtualenv and confirmed that the problem went away. I then activated another virtualenv that still had paramiko-1.7.4 and confirmed that the problem still persisted, which it did. Then I upgraded paramiko from 1.7.4 to 1.7.6 and confirmed that the problem went away in that virtualenv as well.\n", "Thanks for keeping track of this here. I just want to note for any readers that Paramiko 1.7.4 has been previously known to be stable with Fabric 0.9, but in the last week or two several users have started exhibiting this or similar problems (disconnect timeouts) so I'm guessing some other component (Python upgrade, or remote server package upgrade, or something) is coming into play that is tipping off a bug in 1.7.4.\nI will be checking out the changelogs for Paramiko 1.7.5/1.7.6 and gathering more info about peoples' platforms/Python versions/etc, to try and see if a pattern emerges.\nEDIT: Newly created Redmine ticket for this issue is here: http://code.fabfile.org/issues/show/158\n" ]
[ 6, 2 ]
[]
[]
[ "fabric", "paramiko", "python", "ssh", "virtualenv" ]
stackoverflow_0002685788_fabric_paramiko_python_ssh_virtualenv.txt
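To confirm which paramiko release a given virtualenv will actually hand to Fabric, a check like the one below can be run with that virtualenv activated; it assumes both packages expose a __version__ attribute (paramiko does, and if pycrypto does not, just drop those two lines):

# Run once per virtualenv, with that virtualenv activated.
import paramiko
print "paramiko:", paramiko.__version__

import Crypto
print "pycrypto:", Crypto.__version__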
Q: removing pairs of elements from numpy arrays that are NaN (or another value) in Python I have an array with two columns in numpy. For example: a = array([[1, 5, nan, 6], [10, 6, 6, nan]]) a = transpose(a) I want to efficiently iterate through the two columns, a[:, 0] and a[:, 1] and remove any pairs that meet a certain condition, in this case if they are NaN. The obvious way I can think of is: new_a = [] for val1, val2 in a: if val2 == nan or val2 == nan: new_a.append([val1, val2]) But that seems clunky. What's the pythonic numpy way of doing this? thanks. A: If you want to take only the rows that have no NANs, this is the expression you need: >>> import numpy as np >>> a[~np.isnan(a).any(1)] array([[ 1., 10.], [ 5., 6.]]) If you want the rows that do not have a specific number among its elements, e.g. 5: >>> a[~(a == 5).any(1)] array([[ 1., 10.], [ NaN, 6.], [ 6., NaN]]) The latter is clearly equivalent to >>> a[(a != 5).all(1)] array([[ 1., 10.], [ NaN, 6.], [ 6., NaN]]) Explanation: Let's first create your example input >>> import numpy as np >>> a = np.array([[1, 5, np.nan, 6], ... [10, 6, 6, np.nan]]).transpose() >>> a array([[ 1., 10.], [ 5., 6.], [ NaN, 6.], [ 6., NaN]]) This determines which elements are NAN >>> np.isnan(a) array([[False, False], [False, False], [ True, False], [False, True]], dtype=bool) This identifies which rows have any element which are True >>> np.isnan(a).any(1) array([False, False, True, True], dtype=bool) Since we don't want these, we negate the last expression: >>> ~np.isnan(a).any(1) array([ True, True, False, False], dtype=bool) And finally we use the boolean array to select the rows we want: >>> a[~np.isnan(a).any(1)] array([[ 1., 10.], [ 5., 6.]]) A: You could convert the array into a masked array, and use the compress_rows method: import numpy as np a = np.array([[1, 5, np.nan, 6], [10, 6, 6, np.nan]]) a = np.transpose(a) print(a) # [[ 1. 10.] # [ 5. 6.] # [ NaN 6.] # [ 6. NaN]] b=np.ma.compress_rows(np.ma.fix_invalid(a)) print(b) # [[ 1. 10.] # [ 5. 6.]] A: Not to detract from ig0774's answer, which is perfectly valid and Pythonic and is in fact the normal way of doing these things in plain Python, but: numpy supports a boolean indexing system which could also do the job. new_a = a[(a==a).all(1)] I'm not sure offhand which way would be more efficient (or faster to execute). If you wanted to use a different condition to select the rows, this would have to be changed, and precisely how depends on the condition. If it's something that can be evaluated for each array element independently, you could just replace the a==a with the appropriate test, for example to eliminate all rows with numbers larger than 100 you could do new_a = a[(a<=100).all(1)] But if you're trying to do something fancy that involves all the elements in a row (like eliminating all rows that sum to more than 100), it might be more complicated. If that's the case, I can try to edit in a more specific answer if you want to share your exact condition. A: I think list comprehensions should do this. E.g., new_a = [(val1, val2) for (val1, val2) in a if math.isnan(val1) or math.isnan(val2)]
removing pairs of elements from numpy arrays that are NaN (or another value) in Python
I have an array with two columns in numpy. For example: a = array([[1, 5, nan, 6], [10, 6, 6, nan]]) a = transpose(a) I want to efficiently iterate through the two columns, a[:, 0] and a[:, 1] and remove any pairs that meet a certain condition, in this case if they are NaN. The obvious way I can think of is: new_a = [] for val1, val2 in a: if val2 == nan or val2 == nan: new_a.append([val1, val2]) But that seems clunky. What's the pythonic numpy way of doing this? thanks.
[ "If you want to take only the rows that have no NANs, this is the expression you need:\n>>> import numpy as np\n>>> a[~np.isnan(a).any(1)]\narray([[ 1., 10.],\n [ 5., 6.]])\n\nIf you want the rows that do not have a specific number among its elements, e.g. 5:\n>>> a[~(a == 5).any(1)]\narray([[ 1., 10.],\n [ NaN, 6.],\n [ 6., NaN]])\n\nThe latter is clearly equivalent to\n>>> a[(a != 5).all(1)]\narray([[ 1., 10.],\n [ NaN, 6.],\n [ 6., NaN]])\n\nExplanation:\nLet's first create your example input\n>>> import numpy as np\n>>> a = np.array([[1, 5, np.nan, 6],\n... [10, 6, 6, np.nan]]).transpose()\n>>> a\narray([[ 1., 10.],\n [ 5., 6.],\n [ NaN, 6.],\n [ 6., NaN]])\n\nThis determines which elements are NAN\n>>> np.isnan(a)\narray([[False, False],\n [False, False],\n [ True, False],\n [False, True]], dtype=bool)\n\nThis identifies which rows have any element which are True\n>>> np.isnan(a).any(1)\narray([False, False, True, True], dtype=bool)\n\nSince we don't want these, we negate the last expression:\n>>> ~np.isnan(a).any(1)\narray([ True, True, False, False], dtype=bool)\n\nAnd finally we use the boolean array to select the rows we want:\n>>> a[~np.isnan(a).any(1)]\narray([[ 1., 10.],\n [ 5., 6.]])\n\n", "You could convert the array into a masked array, and use the compress_rows method:\nimport numpy as np\na = np.array([[1, 5, np.nan, 6],\n [10, 6, 6, np.nan]])\na = np.transpose(a)\nprint(a)\n# [[ 1. 10.]\n# [ 5. 6.]\n# [ NaN 6.]\n# [ 6. NaN]]\nb=np.ma.compress_rows(np.ma.fix_invalid(a))\nprint(b)\n# [[ 1. 10.]\n# [ 5. 6.]]\n\n", "Not to detract from ig0774's answer, which is perfectly valid and Pythonic and is in fact the normal way of doing these things in plain Python, but: numpy supports a boolean indexing system which could also do the job.\nnew_a = a[(a==a).all(1)]\n\nI'm not sure offhand which way would be more efficient (or faster to execute).\nIf you wanted to use a different condition to select the rows, this would have to be changed, and precisely how depends on the condition. If it's something that can be evaluated for each array element independently, you could just replace the a==a with the appropriate test, for example to eliminate all rows with numbers larger than 100 you could do\nnew_a = a[(a<=100).all(1)]\n\nBut if you're trying to do something fancy that involves all the elements in a row (like eliminating all rows that sum to more than 100), it might be more complicated. If that's the case, I can try to edit in a more specific answer if you want to share your exact condition.\n", "I think list comprehensions should do this. E.g.,\nnew_a = [(val1, val2) for (val1, val2) in a if math.isnan(val1) or math.isnan(val2)]\n\n" ]
[ 31, 3, 3, 2 ]
[]
[]
[ "arrays", "numpy", "python", "scipy" ]
stackoverflow_0002695503_arrays_numpy_python_scipy.txt
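Building on the boolean-mask idiom in the accepted answer, several conditions can be combined into one mask; the -999 sentinel below is an invented example value, not something from the question:

import numpy as np

a = np.array([[1, 5, np.nan, 6],
              [10, 6, 6, np.nan]]).T

# Keep only rows where neither column is NaN and neither column equals -999.
mask = ~np.isnan(a).any(axis=1) & (a != -999).all(axis=1)
clean = a[mask]
print clean   # the two rows containing NaN are gone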
Q: file output in python giving me garbage When I write the following code I get garbage for an output. It is just a simple program to find prime numbers. It works when the first for loops range only goes up to 1000 but once the range becomes large the program fail's to output meaningful data output = open("output.dat", 'w') for i in range(2, 10000): prime = 1 for j in range(2, i-1): if i%j == 0: prime = 0 j = i-1 if prime == 1: output.write(str(i) + " " ) output.close() print "writing finished" A: You're setting a single variable named prime ten thousand times to 1, then 9998 times possibly setting it to 0, and finally (if it's not been set to 0) outputting one incomplete line (no line-end). I suspect that's not what you want to do! Maybe something like...: output = open("output.dat", 'w') for i in range(2, 10000): prime = 1 for j in range(2, i-1): if i%j == 0: prime = 0 break if prime == 1: output.write(str(i) + " " ) output.close() print "writing finished" Note the very different indentation from what you had posted. I also used break to break out of an inner loop, which I think was what you meant where you wrote j = i - 1 (which would in fact have absolutely no effect since j would just be set to its next natural value in the very next leg of that inner loop, which would still run to the end). A: This is a known Notepad bug. Check out http://blogs.msdn.com/oldnewthing/archive/2007/04/17/2158334.aspx The classic way to trigger this bug is to put "Bush hid the facts" in a file, save it, reopen it, and scream about conspiracy theories, but I guess "2 3 5 7 11 13 17" works too, except that you don't get to scream about conspiracy theories. A: With fixed indentation (which I'll have to assume is a bad paste job, otherwise I don't think it would run) your code outputs fine for me : 2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 101 103 107 109 113 127 131 137 139 149 151 157 163 167 173 179 181 191 193 197 199 211 223 227 229 233 239 241 251 257 263 269 271 277 281 283 293 307 311 313 317 331 337 347 349 353 359 367 373 379 383 389 397 401 409 419 421 431 433 439 443 449 457 461 463 467 479 487 491 499 503 509 521 523 541 547 557 563 569 571 577 587 593 599 601 607 613 617 619 631 641 643 647 653 659 661 673 677 683 691 701 709 719 727 733 739 743 751 757 761 769 773 787 797 809 811 821 823 827 829 839 853 857 859 863 877 881 883 887 907 911 919 929 937 941 947 953 967 971 977 983 991 997 1009 1013 1019 1021 1031 1033 1039 1049 1051 1061 1063 1069 1087 1091 1093 1097 1103 1109 1117 1123 1129 1151 1153 1163 1171 1181 1187 1193 1201 1213 1217 1223 1229 1231 1237 1249 1259 1277 1279 1283 1289 1291 1297 1301 1303 1307 1319 1321 1327 1361 1367 1373 1381 1399 1409 1423 1427 1429 1433 1439 1447 1451 1453 1459 1471 1481 1483 1487 1489 1493 1499 1511 1523 1531 1543 1549 1553 1559 1567 1571 1579 1583 1597 1601 1607 1609 1613 1619 1621 1627 1637 1657 1663 1667 1669 1693 1697 1699 1709 1721 1723 1733 1741 1747 1753 1759 1777 1783 1787 1789 1801 1811 1823 1831 1847 1861 1867 1871 1873 1877 1879 1889 1901 1907 1913 1931 1933 1949 1951 1973 1979 1987 1993 1997 1999 2003 2011 2017 2027 2029 2039 2053 2063 2069 2081 2083 2087 2089 2099 2111 2113 2129 2131 2137 2141 2143 2153 2161 2179 2203 2207 2213 2221 2237 2239 2243 2251 2267 2269 2273 2281 2287 2293 2297 2309 2311 2333 2339 2341 2347 2351 2357 2371 2377 2381 2383 2389 2393 2399 2411 2417 2423 2437 2441 2447 2459 2467 2473 2477 2503 2521 2531 2539 2543 2549 2551 2557 2579 2591 2593 2609 2617 2621 2633 2647 2657 2659 2663 2671 2677 
2683 2687 2689 2693 2699 2707 2711 2713 2719 2729 2731 2741 2749 2753 2767 2777 2789 2791 2797 2801 2803 2819 2833 2837 2843 2851 2857 2861 2879 2887 2897 2903 2909 2917 2927 2939 2953 2957 2963 2969 2971 2999 3001 3011 3019 3023 3037 3041 3049 3061 3067 3079 3083 3089 3109 3119 3121 3137 3163 3167 3169 3181 3187 3191 3203 3209 3217 3221 3229 3251 3253 3257 3259 3271 3299 3301 3307 3313 3319 3323 3329 3331 3343 3347 3359 3361 3371 3373 3389 3391 3407 3413 3433 3449 3457 3461 3463 3467 3469 3491 3499 3511 3517 3527 3529 3533 3539 3541 3547 3557 3559 3571 3581 3583 3593 3607 3613 3617 3623 3631 3637 3643 3659 3671 3673 3677 3691 3697 3701 3709 3719 3727 3733 3739 3761 3767 3769 3779 3793 3797 3803 3821 3823 3833 3847 3851 3853 3863 3877 3881 3889 3907 3911 3917 3919 3923 3929 3931 3943 3947 3967 3989 4001 4003 4007 4013 4019 4021 4027 4049 4051 4057 4073 4079 4091 4093 4099 4111 4127 4129 4133 4139 4153 4157 4159 4177 4201 4211 4217 4219 4229 4231 4241 4243 4253 4259 4261 4271 4273 4283 4289 4297 4327 4337 4339 4349 4357 4363 4373 4391 4397 4409 4421 4423 4441 4447 4451 4457 4463 4481 4483 4493 4507 4513 4517 4519 4523 4547 4549 4561 4567 4583 4591 4597 4603 4621 4637 4639 4643 4649 4651 4657 4663 4673 4679 4691 4703 4721 4723 4729 4733 4751 4759 4783 4787 4789 4793 4799 4801 4813 4817 4831 4861 4871 4877 4889 4903 4909 4919 4931 4933 4937 4943 4951 4957 4967 4969 4973 4987 4993 4999 5003 5009 5011 5021 5023 5039 5051 5059 5077 5081 5087 5099 5101 5107 5113 5119 5147 5153 5167 5171 5179 5189 5197 5209 5227 5231 5233 5237 5261 5273 5279 5281 5297 5303 5309 5323 5333 5347 5351 5381 5387 5393 5399 5407 5413 5417 5419 5431 5437 5441 5443 5449 5471 5477 5479 5483 5501 5503 5507 5519 5521 5527 5531 5557 5563 5569 5573 5581 5591 5623 5639 5641 5647 5651 5653 5657 5659 5669 5683 5689 5693 5701 5711 5717 5737 5741 5743 5749 5779 5783 5791 5801 5807 5813 5821 5827 5839 5843 5849 5851 5857 5861 5867 5869 5879 5881 5897 5903 5923 5927 5939 5953 5981 5987 6007 6011 6029 6037 6043 6047 6053 6067 6073 6079 6089 6091 6101 6113 6121 6131 6133 6143 6151 6163 6173 6197 6199 6203 6211 6217 6221 6229 6247 6257 6263 6269 6271 6277 6287 6299 6301 6311 6317 6323 6329 6337 6343 6353 6359 6361 6367 6373 6379 6389 6397 6421 6427 6449 6451 6469 6473 6481 6491 6521 6529 6547 6551 6553 6563 6569 6571 6577 6581 6599 6607 6619 6637 6653 6659 6661 6673 6679 6689 6691 6701 6703 6709 6719 6733 6737 6761 6763 6779 6781 6791 6793 6803 6823 6827 6829 6833 6841 6857 6863 6869 6871 6883 6899 6907 6911 6917 6947 6949 6959 6961 6967 6971 6977 6983 6991 6997 7001 7013 7019 7027 7039 7043 7057 7069 7079 7103 7109 7121 7127 7129 7151 7159 7177 7187 7193 7207 7211 7213 7219 7229 7237 7243 7247 7253 7283 7297 7307 7309 7321 7331 7333 7349 7351 7369 7393 7411 7417 7433 7451 7457 7459 7477 7481 7487 7489 7499 7507 7517 7523 7529 7537 7541 7547 7549 7559 7561 7573 7577 7583 7589 7591 7603 7607 7621 7639 7643 7649 7669 7673 7681 7687 7691 7699 7703 7717 7723 7727 7741 7753 7757 7759 7789 7793 7817 7823 7829 7841 7853 7867 7873 7877 7879 7883 7901 7907 7919 7927 7933 7937 7949 7951 7963 7993 8009 8011 8017 8039 8053 8059 8069 8081 8087 8089 8093 8101 8111 8117 8123 8147 8161 8167 8171 8179 8191 8209 8219 8221 8231 8233 8237 8243 8263 8269 8273 8287 8291 8293 8297 8311 8317 8329 8353 8363 8369 8377 8387 8389 8419 8423 8429 8431 8443 8447 8461 8467 8501 8513 8521 8527 8537 8539 8543 8563 8573 8581 8597 8599 8609 8623 8627 8629 8641 8647 8663 8669 8677 8681 8689 8693 8699 8707 8713 8719 8731 8737 8741 8747 8753 8761 8779 8783 8803 8807 8819 8821 
8831 8837 8839 8849 8861 8863 8867 8887 8893 8923 8929 8933 8941 8951 8963 8969 8971 8999 9001 9007 9011 9013 9029 9041 9043 9049 9059 9067 9091 9103 9109 9127 9133 9137 9151 9157 9161 9173 9181 9187 9199 9203 9209 9221 9227 9239 9241 9257 9277 9281 9283 9293 9311 9319 9323 9337 9341 9343 9349 9371 9377 9391 9397 9403 9413 9419 9421 9431 9433 9437 9439 9461 9463 9467 9473 9479 9491 9497 9511 9521 9533 9539 9547 9551 9587 9601 9613 9619 9623 9629 9631 9643 9649 9661 9677 9679 9689 9697 9719 9721 9733 9739 9743 9749 9767 9769 9781 9787 9791 9803 9811 9817 9829 9833 9839 9851 9857 9859 9871 9883 9887 9901 9907 9923 9929 9931 9941 9949 9967 9973 EDIT the version of indentation I ran: output = open("output.dat", 'w') for i in range(2, 10000): prime = 1 for j in range(2, i-1): if i%j == 0: prime = 0 j = i-1 if prime == 1: output.write(str(i) + " " ) output.close() print "writing finished" A: Your second for should be nested in the first for. Also, this looks like a homework question. It is not clear how your output is garbage - does it not compute what you want? Or is the output scrambled? Post a copy of the output so we can see! A: Don't you want your loops to be nested? output = open("output.dat", 'w') for i in range(2, 10000): prime = 1 for j in range(2, i-1): if i%j == 0: prime = 0 j = i-1 if prime == 1: output.write(str(i) + " " ) output.close() print "writing finished" A: so, you set prime to 1, 9998 times then you use the final value of i (10000?, 10001?) as an end value .... to summarize, you have serious indention problems....
file output in python giving me garbage
When I write the following code I get garbage for an output. It is just a simple program to find prime numbers. It works when the first for loops range only goes up to 1000 but once the range becomes large the program fail's to output meaningful data output = open("output.dat", 'w') for i in range(2, 10000): prime = 1 for j in range(2, i-1): if i%j == 0: prime = 0 j = i-1 if prime == 1: output.write(str(i) + " " ) output.close() print "writing finished"
[ "You're setting a single variable named prime ten thousand times to 1, then 9998 times possibly setting it to 0, and finally (if it's not been set to 0) outputting one incomplete line (no line-end). I suspect that's not what you want to do! Maybe something like...:\noutput = open(\"output.dat\", 'w')\nfor i in range(2, 10000):\n prime = 1\n for j in range(2, i-1): \n if i%j == 0:\n prime = 0\n break\n if prime == 1:\n output.write(str(i) + \" \" )\noutput.close()\nprint \"writing finished\"\n\nNote the very different indentation from what you had posted. I also used break to break out of an inner loop, which I think was what you meant where you wrote j = i - 1 (which would in fact have absolutely no effect since j would just be set to its next natural value in the very next leg of that inner loop, which would still run to the end).\n", "This is a known Notepad bug. Check out\nhttp://blogs.msdn.com/oldnewthing/archive/2007/04/17/2158334.aspx\nThe classic way to trigger this bug is to put \"Bush hid the facts\" in a file, save it, reopen it, and scream about conspiracy theories, but I guess \"2 3 5 7 11 13 17\" works too, except that you don't get to scream about conspiracy theories.\n", "With fixed indentation (which I'll have to assume is a bad paste job, otherwise I don't think it would run) your code outputs fine for me : \n2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 101 103 107 109 113 127 131 137 139 149 151 157 163 167 173 179 181 191 193 197 199 211 223 227 229 233 239 241 251 257 263 269 271 277 281 283 293 307 311 313 317 331 337 347 349 353 359 367 373 379 383 389 397 401 409 419 421 431 433 439 443 449 457 461 463 467 479 487 491 499 503 509 521 523 541 547 557 563 569 571 577 587 593 599 601 607 613 617 619 631 641 643 647 653 659 661 673 677 683 691 701 709 719 727 733 739 743 751 757 761 769 773 787 797 809 811 821 823 827 829 839 853 857 859 863 877 881 883 887 907 911 919 929 937 941 947 953 967 971 977 983 991 997 1009 1013 1019 1021 1031 1033 1039 1049 1051 1061 1063 1069 1087 1091 1093 1097 1103 1109 1117 1123 1129 1151 1153 1163 1171 1181 1187 1193 1201 1213 1217 1223 1229 1231 1237 1249 1259 1277 1279 1283 1289 1291 1297 1301 1303 1307 1319 1321 1327 1361 1367 1373 1381 1399 1409 1423 1427 1429 1433 1439 1447 1451 1453 1459 1471 1481 1483 1487 1489 1493 1499 1511 1523 1531 1543 1549 1553 1559 1567 1571 1579 1583 1597 1601 1607 1609 1613 1619 1621 1627 1637 1657 1663 1667 1669 1693 1697 1699 1709 1721 1723 1733 1741 1747 1753 1759 1777 1783 1787 1789 1801 1811 1823 1831 1847 1861 1867 1871 1873 1877 1879 1889 1901 1907 1913 1931 1933 1949 1951 1973 1979 1987 1993 1997 1999 2003 2011 2017 2027 2029 2039 2053 2063 2069 2081 2083 2087 2089 2099 2111 2113 2129 2131 2137 2141 2143 2153 2161 2179 2203 2207 2213 2221 2237 2239 2243 2251 2267 2269 2273 2281 2287 2293 2297 2309 2311 2333 2339 2341 2347 2351 2357 2371 2377 2381 2383 2389 2393 2399 2411 2417 2423 2437 2441 2447 2459 2467 2473 2477 2503 2521 2531 2539 2543 2549 2551 2557 2579 2591 2593 2609 2617 2621 2633 2647 2657 2659 2663 2671 2677 2683 2687 2689 2693 2699 2707 2711 2713 2719 2729 2731 2741 2749 2753 2767 2777 2789 2791 2797 2801 2803 2819 2833 2837 2843 2851 2857 2861 2879 2887 2897 2903 2909 2917 2927 2939 2953 2957 2963 2969 2971 2999 3001 3011 3019 3023 3037 3041 3049 3061 3067 3079 3083 3089 3109 3119 3121 3137 3163 3167 3169 3181 3187 3191 3203 3209 3217 3221 3229 3251 3253 3257 3259 3271 3299 3301 3307 3313 3319 3323 3329 3331 3343 3347 3359 3361 3371 3373 3389 3391 3407 3413 
3433 3449 3457 3461 3463 3467 3469 3491 3499 3511 3517 3527 3529 3533 3539 3541 3547 3557 3559 3571 3581 3583 3593 3607 3613 3617 3623 3631 3637 3643 3659 3671 3673 3677 3691 3697 3701 3709 3719 3727 3733 3739 3761 3767 3769 3779 3793 3797 3803 3821 3823 3833 3847 3851 3853 3863 3877 3881 3889 3907 3911 3917 3919 3923 3929 3931 3943 3947 3967 3989 4001 4003 4007 4013 4019 4021 4027 4049 4051 4057 4073 4079 4091 4093 4099 4111 4127 4129 4133 4139 4153 4157 4159 4177 4201 4211 4217 4219 4229 4231 4241 4243 4253 4259 4261 4271 4273 4283 4289 4297 4327 4337 4339 4349 4357 4363 4373 4391 4397 4409 4421 4423 4441 4447 4451 4457 4463 4481 4483 4493 4507 4513 4517 4519 4523 4547 4549 4561 4567 4583 4591 4597 4603 4621 4637 4639 4643 4649 4651 4657 4663 4673 4679 4691 4703 4721 4723 4729 4733 4751 4759 4783 4787 4789 4793 4799 4801 4813 4817 4831 4861 4871 4877 4889 4903 4909 4919 4931 4933 4937 4943 4951 4957 4967 4969 4973 4987 4993 4999 5003 5009 5011 5021 5023 5039 5051 5059 5077 5081 5087 5099 5101 5107 5113 5119 5147 5153 5167 5171 5179 5189 5197 5209 5227 5231 5233 5237 5261 5273 5279 5281 5297 5303 5309 5323 5333 5347 5351 5381 5387 5393 5399 5407 5413 5417 5419 5431 5437 5441 5443 5449 5471 5477 5479 5483 5501 5503 5507 5519 5521 5527 5531 5557 5563 5569 5573 5581 5591 5623 5639 5641 5647 5651 5653 5657 5659 5669 5683 5689 5693 5701 5711 5717 5737 5741 5743 5749 5779 5783 5791 5801 5807 5813 5821 5827 5839 5843 5849 5851 5857 5861 5867 5869 5879 5881 5897 5903 5923 5927 5939 5953 5981 5987 6007 6011 6029 6037 6043 6047 6053 6067 6073 6079 6089 6091 6101 6113 6121 6131 6133 6143 6151 6163 6173 6197 6199 6203 6211 6217 6221 6229 6247 6257 6263 6269 6271 6277 6287 6299 6301 6311 6317 6323 6329 6337 6343 6353 6359 6361 6367 6373 6379 6389 6397 6421 6427 6449 6451 6469 6473 6481 6491 6521 6529 6547 6551 6553 6563 6569 6571 6577 6581 6599 6607 6619 6637 6653 6659 6661 6673 6679 6689 6691 6701 6703 6709 6719 6733 6737 6761 6763 6779 6781 6791 6793 6803 6823 6827 6829 6833 6841 6857 6863 6869 6871 6883 6899 6907 6911 6917 6947 6949 6959 6961 6967 6971 6977 6983 6991 6997 7001 7013 7019 7027 7039 7043 7057 7069 7079 7103 7109 7121 7127 7129 7151 7159 7177 7187 7193 7207 7211 7213 7219 7229 7237 7243 7247 7253 7283 7297 7307 7309 7321 7331 7333 7349 7351 7369 7393 7411 7417 7433 7451 7457 7459 7477 7481 7487 7489 7499 7507 7517 7523 7529 7537 7541 7547 7549 7559 7561 7573 7577 7583 7589 7591 7603 7607 7621 7639 7643 7649 7669 7673 7681 7687 7691 7699 7703 7717 7723 7727 7741 7753 7757 7759 7789 7793 7817 7823 7829 7841 7853 7867 7873 7877 7879 7883 7901 7907 7919 7927 7933 7937 7949 7951 7963 7993 8009 8011 8017 8039 8053 8059 8069 8081 8087 8089 8093 8101 8111 8117 8123 8147 8161 8167 8171 8179 8191 8209 8219 8221 8231 8233 8237 8243 8263 8269 8273 8287 8291 8293 8297 8311 8317 8329 8353 8363 8369 8377 8387 8389 8419 8423 8429 8431 8443 8447 8461 8467 8501 8513 8521 8527 8537 8539 8543 8563 8573 8581 8597 8599 8609 8623 8627 8629 8641 8647 8663 8669 8677 8681 8689 8693 8699 8707 8713 8719 8731 8737 8741 8747 8753 8761 8779 8783 8803 8807 8819 8821 8831 8837 8839 8849 8861 8863 8867 8887 8893 8923 8929 8933 8941 8951 8963 8969 8971 8999 9001 9007 9011 9013 9029 9041 9043 9049 9059 9067 9091 9103 9109 9127 9133 9137 9151 9157 9161 9173 9181 9187 9199 9203 9209 9221 9227 9239 9241 9257 9277 9281 9283 9293 9311 9319 9323 9337 9341 9343 9349 9371 9377 9391 9397 9403 9413 9419 9421 9431 9433 9437 9439 9461 9463 9467 9473 9479 9491 9497 9511 9521 9533 9539 9547 9551 9587 9601 9613 9619 9623 9629 9631 9643 
9649 9661 9677 9679 9689 9697 9719 9721 9733 9739 9743 9749 9767 9769 9781 9787 9791 9803 9811 9817 9829 9833 9839 9851 9857 9859 9871 9883 9887 9901 9907 9923 9929 9931 9941 9949 9967 9973 \n\nEDIT the version of indentation I ran:\noutput = open(\"output.dat\", 'w')\nfor i in range(2, 10000):\n prime = 1\n for j in range(2, i-1): \n if i%j == 0:\n prime = 0\n j = i-1\n if prime == 1:\n output.write(str(i) + \" \" )\noutput.close()\nprint \"writing finished\"\n\n", "Your second for should be nested in the first for.\nAlso, this looks like a homework question. It is not clear how your output is garbage - does it not compute what you want? Or is the output scrambled? Post a copy of the output so we can see!\n", "Don't you want your loops to be nested?\noutput = open(\"output.dat\", 'w')\nfor i in range(2, 10000):\n prime = 1\n for j in range(2, i-1): \n if i%j == 0:\n prime = 0\n j = i-1\n if prime == 1:\n output.write(str(i) + \" \" )\n output.close()\nprint \"writing finished\"\n\n", "so, you set prime to 1, 9998 times\nthen you use the final value of i (10000?, 10001?) as an end value\n....\nto summarize, you have serious indention problems....\n" ]
[ 3, 3, 1, 0, 0, 0 ]
[]
[]
[ "file", "python" ]
stackoverflow_0002699014_file_python.txt
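For contrast with the thread above, here is how the same program is often restructured: a helper with an early return, a square-root bound, and a real newline after each prime so plain text editors display the file sanely. This is an illustrative variant, not the accepted fix:

import math

def is_prime(n):
    # Trial division only needs to run up to sqrt(n): any factor pair
    # of n has one member at or below the square root.
    if n < 2:
        return False
    for j in range(2, int(math.sqrt(n)) + 1):
        if n % j == 0:
            return False
    return True

output = open("output.dat", "w")
for i in range(2, 10000):
    if is_prime(i):
        output.write("%d\n" % i)   # one prime per line, newline included
output.close()
print "writing finished"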
Q: Regex: Matching a space-joined list of words, excluding last whitespace How would I match a space separated list of words followed by whitespace and some optional numbers? I have this: >>> import re >>> m = re.match('(?P<words>(\S+\s+)+)(?P<num>\d+)?\r\n', 'Foo Bar 12345\r\n') >>> m.groupdict() {'num': '12345', 'words': 'Foo Bar '} I'd like the words group to not include the last whitespace(s) but I can't figure this one out. I could do a .strip() on the result but that's not as much fun :) Some strings to test and wanted result: 'Foo & Bar 555\r\n' => {'num': '555', 'words': 'Foo & Bar'} 'Hello World\r\n' => {'num': None, 'words': 'Hello World'} 'Spam 99\r\n' => {'num': 99, 'words': 'Spam'} 'Number 1 666\r\n' => {'num': 666, 'words': 'Number 1'} A: I'm a bit confused by your double capturing group, and the fact that you're using \w but want to match a non-word character like & (maybe you mean \S, non-spaces, where you say \w...?), but, maybe...: >>> import re >>> r = re.compile(r'(?P<words>\w+(?:\s+\S+)*?)\s*(?P<num>\d+)?\r\n') >>> for s in ('Foo & Bar 555\r\n', 'Hello World\r\n', 'Spam 99\r\n', ... 'Number 1 666\r\n'): ... print s, r.match(s).groupdict() ... Foo & Bar 555 {'num': '555', 'words': 'Foo & Bar'} Hello World {'num': None, 'words': 'Hello World'} Spam 99 {'num': '99', 'words': 'Spam'} Number 1 666 {'num': '666', 'words': 'Number 1'} >>>
Regex: Matching a space-joined list of words, excluding last whitespace
How would I match a space separated list of words followed by whitespace and some optional numbers? I have this: >>> import re >>> m = re.match('(?P<words>(\S+\s+)+)(?P<num>\d+)?\r\n', 'Foo Bar 12345\r\n') >>> m.groupdict() {'num': '12345', 'words': 'Foo Bar '} I'd like the words group to not include the last whitespace(s) but I can't figure this one out. I could do a .strip() on the result but that's not as much fun :) Some strings to test and wanted result: 'Foo & Bar 555\r\n' => {'num': '555', 'words': 'Foo & Bar'} 'Hello World\r\n' => {'num': None, 'words': 'Hello World'} 'Spam 99\r\n' => {'num': 99, 'words': 'Spam'} 'Number 1 666\r\n' => {'num': 666, 'words': 'Number 1'}
[ "I'm a bit confused by your double capturing group, and the fact that you're using \\w but want to match a non-word character like & (maybe you mean \\S, non-spaces, where you say \\w...?), but, maybe...:\n>>> import re\n>>> r = re.compile(r'(?P<words>\\w+(?:\\s+\\S+)*?)\\s*(?P<num>\\d+)?\\r\\n')\n>>> for s in ('Foo & Bar 555\\r\\n', 'Hello World\\r\\n', 'Spam 99\\r\\n',\n... 'Number 1 666\\r\\n'):\n... print s, r.match(s).groupdict()\n... \nFoo & Bar 555\n{'num': '555', 'words': 'Foo & Bar'}\nHello World\n{'num': None, 'words': 'Hello World'}\nSpam 99\n{'num': '99', 'words': 'Spam'}\nNumber 1 666\n{'num': '666', 'words': 'Number 1'}\n>>> \n\n" ]
[ 2 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0002699366_python_regex.txt
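A small wrapper around the pattern from the answer, converting the optional number to an int so the output matches the "wanted result" examples in the question; the sample lines are the question's own:

import re

line_re = re.compile(r'(?P<words>\w+(?:\s+\S+)*?)\s*(?P<num>\d+)?\r\n')

def parse(line):
    m = line_re.match(line)
    if m is None:
        return None
    d = m.groupdict()
    if d['num'] is not None:
        d['num'] = int(d['num'])   # '555' -> 555
    return d

print parse('Foo & Bar 555\r\n')   # {'num': 555, 'words': 'Foo & Bar'}
print parse('Hello World\r\n')     # {'num': None, 'words': 'Hello World'}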
Q: cx_Oracle and output variables I'm trying to do this again an Oracle 10 database: cursor = connection.cursor() lOutput = cursor.var(cx_Oracle.STRING) cursor.execute(""" BEGIN %(out)s := 'N'; END;""", {'out' : lOutput}) print lOutput.value but I'm getting DatabaseError: ORA-01036: illegal variable name/number Is it possible to define PL/SQL blocks in cx_Oracle this way? A: Yes, you can do anonymous PL/SQL blocks. Your bind variable for the output parameter is not in the correct format. It should be :out instead of %(out)s cursor = connection.cursor() lOutput = cursor.var(cx_Oracle.STRING) cursor.execute(""" BEGIN :out := 'N'; END;""", {'out' : lOutput}) print lOutput Which produces the output: <cx_Oracle.STRING with value 'N'>
cx_Oracle and output variables
I'm trying to do this against an Oracle 10 database: cursor = connection.cursor() lOutput = cursor.var(cx_Oracle.STRING) cursor.execute(""" BEGIN %(out)s := 'N'; END;""", {'out' : lOutput}) print lOutput.value but I'm getting DatabaseError: ORA-01036: illegal variable name/number Is it possible to define PL/SQL blocks in cx_Oracle this way?
[ "Yes, you can do anonymous PL/SQL blocks. Your bind variable for the output parameter is not in the correct format. It should be :out instead of %(out)s\ncursor = connection.cursor()\nlOutput = cursor.var(cx_Oracle.STRING)\ncursor.execute(\"\"\"\n BEGIN\n :out := 'N';\n END;\"\"\",\n {'out' : lOutput})\nprint lOutput\n\nWhich produces the output:\n<cx_Oracle.STRING with value 'N'>\n\n" ]
[ 9 ]
[]
[]
[ "cx_oracle", "oracle", "oracle10g", "python" ]
stackoverflow_0002698008_cx_oracle_oracle_oracle10g_python.txt
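The same bind-variable mechanism works for values computed inside the block; the connection string below is a placeholder, and getvalue() is how the bound result is read back on the Python side:

import cx_Oracle

connection = cx_Oracle.connect("user/password@tnsname")   # placeholder DSN
cursor = connection.cursor()

out_var = cursor.var(cx_Oracle.NUMBER)
cursor.execute("""
    BEGIN
        :out := :a + :b;
    END;""",
    {'out': out_var, 'a': 2, 'b': 3})

print out_var.getvalue()   # prints 5.0; NUMBER values come back as Python floats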
Q: python unittest howto I`d like to know how I could unit-test the following module. def download_distribution(url, tempdir): """ Method which downloads the distribution from PyPI """ print "Attempting to download from %s" % (url,) try: url_handler = urllib2.urlopen(url) distribution_contents = url_handler.read() url_handler.close() filename = get_file_name(url) file_handler = open(os.path.join(tempdir, filename), "w") file_handler.write(distribution_contents) file_handler.close() return True except ValueError, IOError: return False A: Vague question. If you're just looking for a primer for unit testing in general with a Python slant, I recommend Mark Pilgrim's "Dive Into Python" which has a chapter on unit testing with Python. Otherwise you need to clear up what specific issues you are having testing that code. A: Unit test propositioners will tell you that unit tests should be self contained, that is, they should not access the network or the filesystem (especially not in writing mode). Network and filesystem tests are beyond the scope of unit tests (though you might subject them to integration tests). Speaking generally, for such a case, I'd extract the urllib and file-writing codes to separate functions (which would not be unit-tested), and inject mock-functions during unit testing. I.e. (slightly abbreviated for better reading): def get_web_content(url): # Extracted code url_handler = urllib2.urlopen(url) content = url_handler.read() url_handler.close() return content def write_to_file(content, filename, tmpdir): # Extracted code file_handler = open(os.path.join(tempdir, filename), "w") file_handler.write(content) file_handler.close() def download_distribution(url, tempdir): # Original code, after extractions distribution_contents = get_web_content(url) filename = get_file_name(url) write_to_file(distribution_contents, filename, tmpdir) return True And, on the test file: import module_I_want_to_test def mock_web_content(url): return """Some fake content, useful for testing""" def mock_write_to_file(content, filename, tmpdir): # In this case, do nothing, as we don't do filesystem meddling while unit testing pass module_I_want_to_test.get_web_content = mock_web_content module_I_want_to_test.write_to_file = mock_write_to_file class SomeTests(unittest.Testcase): # And so on... And then I second Daniel's suggestion, you should read some more in-depth material on unit testing. A: To mock urllopen you can pre fetch some examples that you can then use in your unittests. Here's an example to get you started: def urlopen(url): urlclean = url[:url.find('?')] # ignore GET parameters files = { 'http://example.com/foo.xml': 'foo.xml', 'http://example.com/bar.xml': 'bar.xml', } return file(files[urlclean]) yourmodule.urllib.urlopen = urlopen
python unittest howto
I`d like to know how I could unit-test the following module. def download_distribution(url, tempdir): """ Method which downloads the distribution from PyPI """ print "Attempting to download from %s" % (url,) try: url_handler = urllib2.urlopen(url) distribution_contents = url_handler.read() url_handler.close() filename = get_file_name(url) file_handler = open(os.path.join(tempdir, filename), "w") file_handler.write(distribution_contents) file_handler.close() return True except ValueError, IOError: return False
[ "Vague question. If you're just looking for a primer for unit testing in general with a Python slant, I recommend Mark Pilgrim's \"Dive Into Python\" which has a chapter on unit testing with Python. Otherwise you need to clear up what specific issues you are having testing that code.\n", "Unit test propositioners will tell you that unit tests should be self contained, that is, they should not access the network or the filesystem (especially not in writing mode). Network and filesystem tests are beyond the scope of unit tests (though you might subject them to integration tests).\nSpeaking generally, for such a case, I'd extract the urllib and file-writing codes to separate functions (which would not be unit-tested), and inject mock-functions during unit testing. \nI.e. (slightly abbreviated for better reading):\ndef get_web_content(url):\n # Extracted code\n url_handler = urllib2.urlopen(url)\n content = url_handler.read()\n url_handler.close()\n return content\n\ndef write_to_file(content, filename, tmpdir):\n # Extracted code\n file_handler = open(os.path.join(tempdir, filename), \"w\")\n file_handler.write(content)\n file_handler.close()\n\ndef download_distribution(url, tempdir):\n # Original code, after extractions\n distribution_contents = get_web_content(url)\n filename = get_file_name(url)\n write_to_file(distribution_contents, filename, tmpdir)\n return True\n\nAnd, on the test file:\nimport module_I_want_to_test\n\ndef mock_web_content(url):\n return \"\"\"Some fake content, useful for testing\"\"\"\ndef mock_write_to_file(content, filename, tmpdir):\n # In this case, do nothing, as we don't do filesystem meddling while unit testing\n pass\n\nmodule_I_want_to_test.get_web_content = mock_web_content\nmodule_I_want_to_test.write_to_file = mock_write_to_file\n\nclass SomeTests(unittest.Testcase):\n # And so on...\n\nAnd then I second Daniel's suggestion, you should read some more in-depth material on unit testing.\n", "To mock urllopen you can pre fetch some examples that you can then use in your unittests. Here's an example to get you started:\ndef urlopen(url):\n urlclean = url[:url.find('?')] # ignore GET parameters\n files = {\n 'http://example.com/foo.xml': 'foo.xml',\n 'http://example.com/bar.xml': 'bar.xml',\n }\n return file(files[urlclean])\nyourmodule.urllib.urlopen = urlopen\n\n" ]
[ 5, 5, 0 ]
[]
[]
[ "python", "unit_testing" ]
stackoverflow_0002655697_python_unit_testing.txt
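One way the mock-injection idea from the answers can be put into an actual TestCase; "distmodule" is a stand-in name for the module that defines download_distribution, and the fake payload is obviously invented:

import StringIO
import tempfile
import unittest

import distmodule   # stand-in name for the module under test

class DownloadDistributionTests(unittest.TestCase):
    def setUp(self):
        # Swap the real network call for a canned in-memory "page",
        # remembering the original so tearDown can restore it.
        self._real_urlopen = distmodule.urllib2.urlopen
        distmodule.urllib2.urlopen = lambda url: StringIO.StringIO("fake tarball bytes")

    def tearDown(self):
        distmodule.urllib2.urlopen = self._real_urlopen

    def test_returns_true_and_writes_file(self):
        tempdir = tempfile.mkdtemp()
        result = distmodule.download_distribution(
            "http://pypi.example.org/fake-1.0.tar.gz", tempdir)
        self.assertTrue(result)

if __name__ == "__main__":
    unittest.main()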
Q: Is it possible to retrieve an uri chunk on AppEngine? Let's say i go to myblog.com/post/12. The /post handler is already defined, but how can i get the parameter being passed? 12 in this case is the post_id. I'm using the Python SDK. A: Sure. Example rule: ('/post/(\d+)', views.PostHandler) Example view: class PostHandler(BaseHandler): ''' Handler for viewing blog posts. ''' def get(self, id): blog_post = models.BlogPost.get_by_id(int(id))
Is it possible to retrieve a URI chunk on AppEngine?
Let's say I go to myblog.com/post/12. The /post handler is already defined, but how can I get the parameter being passed? 12 in this case is the post_id. I'm using the Python SDK.
[ "Sure.\nExample rule:\n('/post/(\\d+)', views.PostHandler)\n\nExample view:\nclass PostHandler(BaseHandler):\n ''' Handler for viewing blog posts. '''\n def get(self, id):\n blog_post = models.BlogPost.get_by_id(int(id))\n\n" ]
[ 3 ]
[]
[]
[ "google_app_engine", "python", "uri" ]
stackoverflow_0002700159_google_app_engine_python_uri.txt
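For completeness, the same capture-group routing wired up with the stock webapp framework that shipped with the Python SDK of that era; the answer above uses its own BaseHandler, so this is an adaptation rather than a copy:

from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class PostHandler(webapp.RequestHandler):
    def get(self, post_id):
        # The group captured by (\d+) arrives as a string argument.
        self.response.out.write("post id: %s" % post_id)

application = webapp.WSGIApplication([(r'/post/(\d+)', PostHandler)],
                                     debug=True)

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()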
Q: Python if statement not working as expected I'm searching for a string in a website and checking to see if the location of this string is in the expected location. I know the string starts at the 182nd character, and if I print temp it will even tell me that it is 182, however, the if statement says 182 is not 182. Some code f = urllib.urlopen(link) #store page contents in 's' s = f.read() f.close() temp = s.find('lettersandnumbers') if (htmlsize == "197"): #if ((s.find('lettersandnumbers')) == "182"): if (temp=="182"): print "Glorious" doStuff() else: print "HTML not correct. Aborting." else: print htmlsize print "File size is incorrect. Aborting." A: str.find returns integer, not string. String-integers comparison always returns False. A: Im not a python guru, but ill take a shot Try it like this if (temp == 182) Why? See SilentGhost answer. It involves types
Python if statement not working as expected
I'm searching for a string in a website and checking to see if the location of this string is in the expected location. I know the string starts at the 182nd character, and if I print temp it will even tell me that it is 182, however, the if statement says 182 is not 182. Some code f = urllib.urlopen(link) #store page contents in 's' s = f.read() f.close() temp = s.find('lettersandnumbers') if (htmlsize == "197"): #if ((s.find('lettersandnumbers')) == "182"): if (temp=="182"): print "Glorious" doStuff() else: print "HTML not correct. Aborting." else: print htmlsize print "File size is incorrect. Aborting."
[ "str.find returns integer, not string. String-integers comparison always returns False.\n", "Im not a python guru, but ill take a shot\nTry it like this\nif (temp == 182)\n\nWhy? See SilentGhost answer. It involves types\n" ]
[ 5, 3 ]
[]
[]
[ "python" ]
stackoverflow_0002700255_python.txt
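The type mismatch in a few lines, for anyone skimming the thread: str.find() returns an int, and in Python 2 an int compared to a str with == is simply False:

s = "some html with lettersandnumbers inside"
temp = s.find('lettersandnumbers')
print temp               # 15, an int, not a string
print temp == "15"       # False: an int never equals a str
print temp == 15         # True
print str(temp) == "15"  # also True, if you really want string comparison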
Q: Django and Reportlab Question I have written this small Django view to return pdf. @login_required def code_view(request,myid): try: deal = Deal.objects.get(id=myid) except: raise Http404 header = deal.header code = deal.code response = HttpResponse(mimetype='application/pdf') response['Content-Disposition'] = 'attachment; filename=code.pdf' p = canvas.Canvas(response) p.drawString(10, 800, header) p.drawString(10, 700, code) p.showPage() p.save() return response And my questions: Utf-8 characters are not shown correctly within the pdf. How can I include an image ? How can I include a very basic html such as: . <ul> <li>List One</li> <li>List Two</li> <li>List Three</li> </ul> A: You should move to the next level and use DocTemplates. Images are quite easy, but using bullets is really hard - you have to define styles and more! I use a set of classes like the below: # -*- coding: utf-8 -*- from django.utils.encoding import smart_str from reportlab.lib.colors import Color from reportlab.lib.pagesizes import A4 from reportlab.lib.styles import StyleSheet1, ParagraphStyle from reportlab.lib.units import cm from reportlab.pdfgen import canvas from reportlab.platypus.doctemplate import BaseDocTemplate, PageTemplate, \ _doNothing from reportlab.platypus.frames import Frame from reportlab.platypus.paragraph import Paragraph import copy import re from reportlab.platypus.flowables import KeepTogether, Image, PageBreak from htmlentitydefs import name2codepoint from atom.http_core import HttpResponse import tempfile def htmlentitydecode(s): return re.sub('&(%s);' % '|'.join(name2codepoint), lambda m: smart_str(unichr(name2codepoint[m.group(1)])), s) PS = ParagraphStyle stylesheet = StyleSheet1() stylesheet.add(PS(name='Normal', leading=15)) stylesheet.add(PS(name='Bullet', parent=stylesheet['Normal'], bulletFontName = 'Symbol', bulletIndent = 0, bulletFontSize = 13, bulletColor = Color(0.93,0,0), bulletOffsetY = -1.5, leftIndent = 15.8, firstLineIndent = 0, ), alias='bu') stylesheet.add(PS(name='Heading1', parent=stylesheet['Normal'], fontSize=18, spaceAfter=23.5), alias='h1') stylesheet.add(PS(name='Heading2', parent=stylesheet['Normal'], fontSize=14, spaceAfter=4), alias='h2') stylesheet.add(PS(name='Heading3', parent=stylesheet['Normal'], textColor=Color(0.93,0,0) ), alias='h3') stylesheet.add(PS(name='Heading4', parent=stylesheet['Heading3'], textColor='black'), alias='h4') BulletStyle = copy.deepcopy(stylesheet["Bullet"]) H1Style = copy.deepcopy(stylesheet["Heading1"]) H2Style = copy.deepcopy(stylesheet["Heading2"]) H3Style = copy.deepcopy(stylesheet["Heading3"]) H4Style = copy.deepcopy(stylesheet["Heading4"]) NormalStyle = copy.deepcopy(stylesheet["Normal"]) top_margin = A4[1] - 1.22*cm bottom_margin = 1.5*cm left_margin = 2.8*cm frame_width = 17.02*cm right_margin = left_margin + frame_width frame_height = 22.7*cm letter_top_margin = 25.0*cm letter_bottom_margin = 3.0*cm letter_left_margin = 2.5*cm letter_right_margin = A4[0] - 2.5*cm letter_frame_width = A4[0] - 5.0*cm letter_frame_height = letter_top_margin - letter_bottom_margin class LetterTemplate(BaseDocTemplate): _invalidInitArgs = ('pageTemplates',) def handle_pageBegin(self): self._handle_pageBegin() self._handle_nextPageTemplate('First') def build(self, flowables, onFirstPage=_doNothing, canvasmaker=canvas.Canvas): self._calc() frameT = Frame(letter_left_margin, letter_bottom_margin, letter_frame_width, letter_frame_height, leftPadding=0, bottomPadding=0, rightPadding=0, topPadding=0, id='normal') 
self.addPageTemplates([PageTemplate(id='First',frames=frameT, onPage=onFirstPage, pagesize=self.pagesize)]) if onFirstPage is _doNothing and hasattr(self,'onFirstPage'): self.pageTemplates[0].beforeDrawPage = self.onFirstPage BaseDocTemplate.build(self, flowables, canvasmaker=canvasmaker) class PdfA4Letter(object): def __init__(self, filename): self.title = filename self._keep_together = False self.elements = [] self._keep_together_elements = [] self.doc = LetterTemplate(filename,showBoundary=False) self.elements = [] def _process_text(self, txt): text_elems = [] # avoid us from user added html. txt = txt.replace('&lt;','&lang;').replace('&gt;','&rang;') txt = htmlentitydecode(smart_str(txt).replace('<p>', '').replace('</p>', '<br />')) # @todo: in some case the reegxp does not work -> hack txt = txt.replace('target="_blank"', '') # process text for part in re.split('<ul>|</ul>|<ol>|</ol>', txt): part = part.strip() if part.count('<li>') > 0: for item in re.split('<li>|</li>', part): item = item.strip() if len(item) > 0: text_elems.append(Paragraph(item, BulletStyle, bulletText=u'•')) else: text_elems.append(Paragraph(part, NormalStyle)) return text_elems def _store_flowable(self, flowable): if self._keep_together == False: self.elements.append(flowable) else: self._keep_together_elements.append(flowable) def start_keep_together(self): self.end_keep_together() self._keep_together = True def end_keep_together(self): self._keep_together = False if len(self._keep_together_elements) > 0: e = self._keep_together_elements self.elements.append(KeepTogether(e)) self._keep_together_elements = [] def newPage(self): self._store_flowable(PageBreak()) def blankline(self, cnt=1): self.text(cnt*'<br/>') def text(self, txt): for e in self._process_text(txt): self._store_flowable(e) def image(self, name, width, height, halign='CENTER'): im = Image(name, width=width, height=height) im.hAlign = halign self._store_flowable(im) def h1(self, txt, add_to_toc=True): self.newPage() self._store_flowable(Paragraph(txt, H1Style)) def h2(self, txt, add_to_toc=True): self._store_flowable(Paragraph(txt, H2Style)) def h3(self, txt, add_to_toc=False): self._store_flowable(Paragraph(txt, H3Style)) def h4(self, txt, add_to_toc=False): self._store_flowable(Paragraph(txt, H4Style)) def _drawPage(self, canvas, doc): canvas.setSubject('Letter Subject') canvas.setTitle('Letter Title') canvas.setAuthor('Me') def build(self): # flush elems self.end_keep_together() self.doc.build(self.elements, self._drawPage) You could then handle the pdf generation in your view, like this: def view(request): file = tempfile.NamedTemporaryFile() e = PdfA4Letter(file.name) ref = '/absolute/path/to/image.png' e.image(ref, width=frame_width, height=10*cm) e.h1((u'Über Mich')) e.h3('Next header') t = """ ascasc<br /> ascascasc<br /> <ul> <li>sdv1</li> <li>sdv2</li> <li>sdv3</li> </ul> ascasc<br /> ascasc<br /> """ e.text(t) e.blankline(2) e.end_keep_together() e.build() response = HttpResponse(mimetype='application/pdf') response['Content-Disposition'] = 'attachment; filename=gugus.pdf' response.write(file.read()) file.close() return response A: Reportlab can handle some basic HTML formatting (<b>, <i>), not sure if it can do lists. You could use pisa for HTML to PDF conversion. Then you could also use <img> tag for image inclusion (You need to install PIL for using image)
Django and Reportlab Question
I have written this small Django view to return pdf. @login_required def code_view(request,myid): try: deal = Deal.objects.get(id=myid) except: raise Http404 header = deal.header code = deal.code response = HttpResponse(mimetype='application/pdf') response['Content-Disposition'] = 'attachment; filename=code.pdf' p = canvas.Canvas(response) p.drawString(10, 800, header) p.drawString(10, 700, code) p.showPage() p.save() return response And my questions: Utf-8 characters are not shown correctly within the pdf. How can I include an image ? How can I include a very basic html such as: . <ul> <li>List One</li> <li>List Two</li> <li>List Three</li> </ul>
[ "You should move to the next level and use DocTemplates.\nImages are quite easy, but using bullets is really hard - you have to define styles and more!\nI use a set of classes like the below:\n# -*- coding: utf-8 -*-\n\nfrom django.utils.encoding import smart_str\nfrom reportlab.lib.colors import Color\nfrom reportlab.lib.pagesizes import A4\nfrom reportlab.lib.styles import StyleSheet1, ParagraphStyle\nfrom reportlab.lib.units import cm\nfrom reportlab.pdfgen import canvas\nfrom reportlab.platypus.doctemplate import BaseDocTemplate, PageTemplate, \\\n _doNothing\nfrom reportlab.platypus.frames import Frame\nfrom reportlab.platypus.paragraph import Paragraph\nimport copy\nimport re\nfrom reportlab.platypus.flowables import KeepTogether, Image, PageBreak\nfrom htmlentitydefs import name2codepoint\nfrom atom.http_core import HttpResponse\nimport tempfile\n\ndef htmlentitydecode(s):\n return re.sub('&(%s);' % '|'.join(name2codepoint), lambda m: smart_str(unichr(name2codepoint[m.group(1)])), s)\n\nPS = ParagraphStyle\nstylesheet = StyleSheet1()\n\nstylesheet.add(PS(name='Normal',\n leading=15))\nstylesheet.add(PS(name='Bullet',\n parent=stylesheet['Normal'],\n bulletFontName = 'Symbol',\n bulletIndent = 0,\n bulletFontSize = 13,\n bulletColor = Color(0.93,0,0),\n bulletOffsetY = -1.5,\n leftIndent = 15.8,\n firstLineIndent = 0,\n ), alias='bu')\nstylesheet.add(PS(name='Heading1',\n parent=stylesheet['Normal'],\n fontSize=18,\n spaceAfter=23.5), alias='h1')\nstylesheet.add(PS(name='Heading2',\n parent=stylesheet['Normal'],\n fontSize=14,\n spaceAfter=4), alias='h2')\nstylesheet.add(PS(name='Heading3',\n parent=stylesheet['Normal'],\n textColor=Color(0.93,0,0)\n ), alias='h3')\nstylesheet.add(PS(name='Heading4',\n parent=stylesheet['Heading3'],\n textColor='black'), alias='h4')\n\nBulletStyle = copy.deepcopy(stylesheet[\"Bullet\"])\nH1Style = copy.deepcopy(stylesheet[\"Heading1\"])\nH2Style = copy.deepcopy(stylesheet[\"Heading2\"])\nH3Style = copy.deepcopy(stylesheet[\"Heading3\"])\nH4Style = copy.deepcopy(stylesheet[\"Heading4\"])\nNormalStyle = copy.deepcopy(stylesheet[\"Normal\"])\n\ntop_margin = A4[1] - 1.22*cm\nbottom_margin = 1.5*cm\nleft_margin = 2.8*cm\nframe_width = 17.02*cm\n\nright_margin = left_margin + frame_width \nframe_height = 22.7*cm\n\nletter_top_margin = 25.0*cm\nletter_bottom_margin = 3.0*cm\nletter_left_margin = 2.5*cm\nletter_right_margin = A4[0] - 2.5*cm\nletter_frame_width = A4[0] - 5.0*cm\nletter_frame_height = letter_top_margin - letter_bottom_margin\n\n\nclass LetterTemplate(BaseDocTemplate):\n _invalidInitArgs = ('pageTemplates',)\n\n def handle_pageBegin(self):\n self._handle_pageBegin()\n self._handle_nextPageTemplate('First')\n\n def build(self, flowables, onFirstPage=_doNothing, canvasmaker=canvas.Canvas):\n self._calc()\n\n frameT = Frame(letter_left_margin, letter_bottom_margin, letter_frame_width, letter_frame_height,\n leftPadding=0, bottomPadding=0, rightPadding=0, topPadding=0,\n id='normal')\n\n self.addPageTemplates([PageTemplate(id='First',frames=frameT, onPage=onFirstPage, pagesize=self.pagesize)])\n\n if onFirstPage is _doNothing and hasattr(self,'onFirstPage'):\n self.pageTemplates[0].beforeDrawPage = self.onFirstPage\n\n BaseDocTemplate.build(self, flowables, canvasmaker=canvasmaker)\n\n\nclass PdfA4Letter(object):\n\n def __init__(self, filename):\n self.title = filename\n self._keep_together = False\n self.elements = []\n self._keep_together_elements = []\n self.doc = LetterTemplate(filename,showBoundary=False)\n self.elements = []\n\n def 
_process_text(self, txt):\n text_elems = []\n\n # avoid us from user added html.\n txt = txt.replace('&lt;','&lang;').replace('&gt;','&rang;')\n txt = htmlentitydecode(smart_str(txt).replace('<p>', '').replace('</p>', '<br />'))\n\n # @todo: in some case the reegxp does not work -> hack\n txt = txt.replace('target=\"_blank\"', '')\n # process text\n for part in re.split('<ul>|</ul>|<ol>|</ol>', txt):\n part = part.strip()\n if part.count('<li>') > 0:\n for item in re.split('<li>|</li>', part):\n item = item.strip()\n if len(item) > 0:\n text_elems.append(Paragraph(item, BulletStyle, bulletText=u'•'))\n else:\n text_elems.append(Paragraph(part, NormalStyle))\n return text_elems\n\n def _store_flowable(self, flowable): \n if self._keep_together == False:\n self.elements.append(flowable)\n else:\n self._keep_together_elements.append(flowable)\n\n def start_keep_together(self):\n self.end_keep_together()\n self._keep_together = True\n\n def end_keep_together(self):\n self._keep_together = False\n if len(self._keep_together_elements) > 0:\n e = self._keep_together_elements\n self.elements.append(KeepTogether(e))\n self._keep_together_elements = []\n\n def newPage(self):\n self._store_flowable(PageBreak())\n\n def blankline(self, cnt=1):\n self.text(cnt*'<br/>')\n\n def text(self, txt):\n for e in self._process_text(txt):\n self._store_flowable(e) \n\n def image(self, name, width, height, halign='CENTER'):\n im = Image(name, width=width, height=height)\n im.hAlign = halign\n self._store_flowable(im)\n\n def h1(self, txt, add_to_toc=True):\n self.newPage()\n self._store_flowable(Paragraph(txt, H1Style))\n\n def h2(self, txt, add_to_toc=True):\n self._store_flowable(Paragraph(txt, H2Style))\n\n def h3(self, txt, add_to_toc=False):\n self._store_flowable(Paragraph(txt, H3Style))\n\n def h4(self, txt, add_to_toc=False):\n self._store_flowable(Paragraph(txt, H4Style))\n\n def _drawPage(self, canvas, doc):\n canvas.setSubject('Letter Subject')\n canvas.setTitle('Letter Title')\n canvas.setAuthor('Me')\n\n def build(self):\n # flush elems\n self.end_keep_together()\n self.doc.build(self.elements, self._drawPage)\n\nYou could then handle the pdf generation in your view, like this:\ndef view(request):\n\n file = tempfile.NamedTemporaryFile()\n\n e = PdfA4Letter(file.name)\n\n ref = '/absolute/path/to/image.png'\n e.image(ref, width=frame_width, height=10*cm)\n e.h1((u'Über Mich'))\n e.h3('Next header')\n\n t = \"\"\"\n ascasc<br />\n ascascasc<br />\n <ul>\n <li>sdv1</li>\n <li>sdv2</li>\n <li>sdv3</li>\n </ul>\n ascasc<br />\n ascasc<br />\n \"\"\"\n\n e.text(t)\n e.blankline(2)\n e.end_keep_together()\n e.build()\n\n response = HttpResponse(mimetype='application/pdf')\n response['Content-Disposition'] = 'attachment; filename=gugus.pdf'\n response.write(file.read()) \n file.close()\n return response \n\n", "Reportlab can handle some basic HTML formatting (<b>, <i>), not sure if it can do lists. You could use pisa for HTML to PDF conversion. Then you could also use <img> tag for image inclusion (You need to install PIL for using image)\n" ]
[ 10, 2 ]
[]
[]
[ "django", "pdf", "python", "reportlab" ]
stackoverflow_0002467042_django_pdf_python_reportlab.txt
Q: Importing Classes Within a Module Currently, I have a parser with multiple classes that work together. For Instance: TreeParser creates multiple Product and Reactant modules which in turn create multiple Element classes. The TreeParser is called by a render method within the same module, which is called from the importer. Finally, if the package has dependencies (such as re and another another module within the same folder), where is the best place to require those modules? Within the __init__.py file or within the module itself? EDIT: When importing a part of a module that calls another def within the module, how do you call that def if it isn't imported? lib/toolset.py => def add(){ toolset.show("I'm Add"); } def show(text){print text}; if that file is called from main.py => import lib.toolset then, the show method wouldn't be loaded, or main.py => from lib.toolset import show wouldn't work. Can an import toolset be put at the top of toolset.py? A: I think this is the key statement in your question. I don't really want to add the module name in front of every call to the class My response: I hear what you're saying, but this is standard practice in Python. Any Python programmer reading code like "result = match(blah)" will presume you're calling a local function inside your own module. If you're actually talking about the function match() in the re module they'll expect to see "result = re.match(blah)". That's just how it is. If it helps, I didn't like this style either when I came to Python first, but now I appreciate that it removes any ambiguity over exactly which of the many functions called "match" I am calling, especially when I come back to read code that I wrote six months ago. A: I'm not really sure what your problem is, is it that you just want to type less? get a decent source editor with autocomplete! you can do import longmodulename as ln and use ln.something instead of longmodulename.something you can do from longmodulename import ( something, otherthing ) and use something directly import * is never a good idea, it messes with coding tools, breaks silently, makes readers wonder stuff was defined and so on ...
Importing Classes Within a Module
Currently, I have a parser with multiple classes that work together. For Instance: TreeParser creates multiple Product and Reactant modules which in turn create multiple Element classes. The TreeParser is called by a render method within the same module, which is called from the importer. Finally, if the package has dependencies (such as re and another another module within the same folder), where is the best place to require those modules? Within the __init__.py file or within the module itself? EDIT: When importing a part of a module that calls another def within the module, how do you call that def if it isn't imported? lib/toolset.py => def add(){ toolset.show("I'm Add"); } def show(text){print text}; if that file is called from main.py => import lib.toolset then, the show method wouldn't be loaded, or main.py => from lib.toolset import show wouldn't work. Can an import toolset be put at the top of toolset.py?
[ "I think this is the key statement in your question.\n\nI don't really want to add the module name in front of every call to the class\n\nMy response: I hear what you're saying, but this is standard practice in Python.\nAny Python programmer reading code like \"result = match(blah)\" will presume you're calling a local function inside your own module. If you're actually talking about the function match() in the re module they'll expect to see \"result = re.match(blah)\". That's just how it is.\nIf it helps, I didn't like this style either when I came to Python first, but now I appreciate that it removes any ambiguity over exactly which of the many functions called \"match\" I am calling, especially when I come back to read code that I wrote six months ago.\n", "I'm not really sure what your problem is, is it that you just want to type less?\n\nget a decent source editor with autocomplete!\nyou can do import longmodulename as ln and use ln.something instead of longmodulename.something\nyou can do from longmodulename import ( something, otherthing ) and use something directly\n\nimport * is never a good idea, it messes with coding tools, breaks silently, makes readers wonder stuff was defined and so on ...\n" ]
[ 3, 2 ]
[]
[]
[ "module", "namespaces", "python" ]
stackoverflow_0002699987_module_namespaces_python.txt
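A minimal runnable Python sketch of the toolset example from the question above (the original snippet is pseudo-code; the lib/ layout is assumed to contain an __init__.py so that lib is a package):

# lib/toolset.py -- names defined at module level are visible to every function in the module
def show(text):
    print(text)

def add():
    show("I'm Add")          # no "toolset." prefix needed inside toolset.py itself

# main.py
from lib import toolset      # or: import lib.toolset

toolset.add()                # prints: I'm Add
toolset.show("hello")        # prints: hello

So toolset.py does not need to import itself; ordinary name lookup inside the module already finds show().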
Q: Storing hierarchical (parent/child) data in Python/Django: MPTT alternative? I'm looking for a good way to store and use hierarchical (parent/child) data in Django. I've been using django-mptt, but it seems entirely incompatible with my brain - I end up with non-obvious bugs in non-obvious places, mostly when moving things around in the tree: I end up with inconsistent state, where a node and its parent will disagree on their relationship. My needs are simple: Given a node: find its root find its ancestors find its descendants With a tree: easily move nodes (ie. change parent) My trees will be smallish (at most 10k nodes over 20 levels, generally much much smaller, say 10 nodes with 1 or 2 levels). I have to think there has to be an easier way to do trees in python/django. Are there other approaches that do a better job of maintaining consistency? A: django-treebeard is another option. It has great documentation. I believe it meets all of your above requirements and includes some functions for checking the tree for problems and fixing those problems in the tree. Node.find_problems() https://tabo.pe/projects/django-treebeard/docs/1.60/api.html#treebeard.models.Node.find_problems Node.fix_tree() https://tabo.pe/projects/django-treebeard/docs/1.60/api.html#treebeard.models.Node.fix_tree
Storing hierarchical (parent/child) data in Python/Django: MPTT alternative?
I'm looking for a good way to store and use hierarchical (parent/child) data in Django. I've been using django-mptt, but it seems entirely incompatible with my brain - I end up with non-obvious bugs in non-obvious places, mostly when moving things around in the tree: I end up with inconsistent state, where a node and its parent will disagree on their relationship. My needs are simple: Given a node: find its root find its ancestors find its descendants With a tree: easily move nodes (ie. change parent) My trees will be smallish (at most 10k nodes over 20 levels, generally much much smaller, say 10 nodes with 1 or 2 levels). I have to think there has to be an easier way to do trees in python/django. Are there other approaches that do a better job of maintaining consistency?
[ "django-treebeard is another option. It has great documentation. I believe it meets all of your above requirements and includes some functions for checking the tree for problems and fixing those problems in the tree.\nNode.find_problems() https://tabo.pe/projects/django-treebeard/docs/1.60/api.html#treebeard.models.Node.find_problems\nNode.fix_tree() https://tabo.pe/projects/django-treebeard/docs/1.60/api.html#treebeard.models.Node.fix_tree\n" ]
[ 4 ]
[]
[]
[ "django", "django_mptt", "mptt", "python", "tree" ]
stackoverflow_0002699881_django_django_mptt_mptt_python_tree.txt
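A hedged sketch of what the django-treebeard answer above looks like in practice; the model and field names are illustrative, while the method names come from the treebeard 1.x API the answer links to:

from django.db import models
from treebeard.mp_tree import MP_Node

class Category(MP_Node):
    name = models.CharField(max_length=50)
    node_order_by = ['name']          # keeps siblings sorted automatically

# root = Category.add_root(name='Food')
# veg  = root.add_child(name='Vegetables')
# veg.get_root()                      # -> root
# veg.get_ancestors()                 # ancestors from the root down to the parent
# root.get_descendants()              # every node below root
# veg.move(other_node, pos='sorted-child')        # reparent a node
# Category.find_problems(); Category.fix_tree()   # the consistency helpers cited above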
Q: How to suppress error messages in rpy2 The following code does not work. It seems that the R warning message raises a python error. # enable use of python objects in rpy2 import rpy2.robjects.numpy2ri import numpy as np from rpy2.robjects import r # create an example array a = np.array([[5,2,5],[3,7,8]]) # this line leads to a warning message, which in turn raises an # error message if run within a script. result = r['chisq.test'](a) Running that code example in ipython works, however, running it inside a script raises the errorTypeError: 'module' object is unsubscriptable. I assume this is due to the warning message. What is the best way to avoid this problem? Thanks in advance! A: Put a print statement right before the error: print(r) result = r['chisq.test'](a) The error message TypeError: 'module' object is unsubscriptable is claiming that r is referencing a module. When you run the script with the print statement, you'll see something like <module 'rpy2' from '/usr/lib/python2.6/dist-packages/rpy2/__init__.pyc'> Traceback (most recent call last): File "/home/unutbu/pybin/test.py", line 14, in <module> result = r['chisq.test'](a) TypeError: 'module' object is unsubscriptable Note that the first line says that r is referencing the module rpy2. This should give you a clue as to what is going wrong. Once you find the name of the trouble-making module, check your import statements to see how r is getting reassigned to that module. For example, if you have from rpy2.robjects import r ... import rpy2 as r then the second import statement is overriding the first, and the name r is thereafter referencing the module rpy2 instead of rpy2.robjects.r.
How to suppress error messages in rpy2
The following code does not work. It seems that the R warning message raises a python error. # enable use of python objects in rpy2 import rpy2.robjects.numpy2ri import numpy as np from rpy2.robjects import r # create an example array a = np.array([[5,2,5],[3,7,8]]) # this line leads to a warning message, which in turn raises an # error message if run within a script. result = r['chisq.test'](a) Running that code example in ipython works, however, running it inside a script raises the errorTypeError: 'module' object is unsubscriptable. I assume this is due to the warning message. What is the best way to avoid this problem? Thanks in advance!
[ "Put a print statement right before the error:\nprint(r)\nresult = r['chisq.test'](a)\n\nThe error message TypeError: 'module' object is unsubscriptable is claiming that r is referencing a module. When you run the script with the print statement, you'll see something like\n<module 'rpy2' from '/usr/lib/python2.6/dist-packages/rpy2/__init__.pyc'>\nTraceback (most recent call last):\n File \"/home/unutbu/pybin/test.py\", line 14, in <module>\n result = r['chisq.test'](a)\nTypeError: 'module' object is unsubscriptable\n\nNote that the first line says that r is referencing the module rpy2. \nThis should give you a clue as to what is going wrong. Once you find the name of the trouble-making module, check your import statements to see how r is getting reassigned to that module.\nFor example, if you have \nfrom rpy2.robjects import r\n...\nimport rpy2 as r\n\nthen the second import statement is overriding the first, and the name r is thereafter referencing the module rpy2 instead of rpy2.robjects.r. \n" ]
[ 1 ]
[]
[]
[ "numpy", "python", "rpy2" ]
stackoverflow_0002700051_numpy_python_rpy2.txt
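The shadowing the answer above describes is easy to reproduce and to avoid; a small sketch (the chisq.test call mirrors the question, everything else is illustrative):

# The clash looks like this:
#   from rpy2.robjects import r   # r is the callable/subscriptable R instance
#   import rpy2 as r              # r is now the rpy2 package -> r['chisq.test'] fails
# Keeping the names distinct avoids it:
import rpy2.robjects as robjects
import rpy2.robjects.numpy2ri     # older rpy2 releases enable the numpy conversion on import;
                                  # newer ones need numpy2ri.activate()
import numpy as np

a = np.array([[5, 2, 5], [3, 7, 8]])
result = robjects.r['chisq.test'](a)
print(result)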
Q: Using Sphinx to create context-sensitive help files in HTML I am currently using AsciiDoc for documenting my software projects because it supports PDF and HTML help generation. I am currently running it through Cygwin so that the a2x toolchain functions properly. This works well for me but is a pain to setup on other Windows computers. I have been looking for alternative methods and recently revisited Sphinx. Noticing that it now produces HTML help files I gave it a try and it seems to work well in the small tests I performed. My question is, is there a way to specify map id's for context sensitive help in the text so that my Windows programs can call the proper help API and the file is launched and opened to the desired location? In AsciiDoc I am using pass::[<?dbhh topicname="_about" topicid="801"?>]. By using these constructs a context.h and alias.h are generated along with the other HTML help files (context sensitive help information). A: I do not know about AcsiiDoc much, but in Sphinx you can reference arbitrary locations by placing anchors where you need them. See :ref: role.
Using Sphinx to create context-sensitive help files in HTML
I am currently using AsciiDoc for documenting my software projects because it supports PDF and HTML help generation. I am currently running it through Cygwin so that the a2x toolchain functions properly. This works well for me but is a pain to setup on other Windows computers. I have been looking for alternative methods and recently revisited Sphinx. Noticing that it now produces HTML help files I gave it a try and it seems to work well in the small tests I performed. My question is, is there a way to specify map id's for context sensitive help in the text so that my Windows programs can call the proper help API and the file is launched and opened to the desired location? In AsciiDoc I am using pass::[<?dbhh topicname="_about" topicid="801"?>]. By using these constructs a context.h and alias.h are generated along with the other HTML help files (context sensitive help information).
[ "I do not know about AsciiDoc much, but in Sphinx you can reference arbitrary locations by placing anchors where you need them. See :ref: role.\n" ]
[ 2 ]
[]
[]
[ "asciidoc", "python", "python_sphinx" ]
stackoverflow_0002690732_asciidoc_python_python_sphinx.txt
Q: Installing python2.6 and assorted libraries on DreamHost I managed to install python2.6 on DreamHost following this guide. I also tried to easy_install "lxml" but it fails horribly. Anyone ever accomplished this? TIA A: You should try http://wiki.dreamhost.com/Django and http://wiki.dreamhost.com/Python#Building_a_custom_version_of_Python - it contains the most up to date info.
Installing python2.6 and assorted libraries on DreamHost
I managed to install python2.6 on DreamHost following this guide. I also tried to easy_install "lxml" but it fails horribly. Anyone ever accomplished this? TIA
[ "You should try http://wiki.dreamhost.com/Django and http://wiki.dreamhost.com/Python#Building_a_custom_version_of_Python - it contains the most up to date info.\n" ]
[ 0 ]
[]
[]
[ "dreamhost", "lxml", "python" ]
stackoverflow_0002694944_dreamhost_lxml_python.txt
Q: Python: need to get energies of charge pairs I am new to python. I have to make a program for a project that takes a PDB format file as input and returns a list of all the intra-chain and inter-chain charge pairs and their energies (using coulomb’s law assuming a dielectric constant of () of 40.0). For simplicity, the charged residues for this program are just Arg (CZ), Lys (NZ), Asp (CG) and Glu (CD) with the charge bearing atoms for each indicated in parentheses. The program should report any attractive or repulsive interactions within 8.0 Å. Here is some additional information needed for the program. Eij = energy of interaction between atoms i and j in kilocalories/mole (kcals/mol) qi = charge for atom i (+1 for Lys or Arg, -1 for Glu or Asp) rij = distance between atoms i and j in angstroms using the distance formula The output should adhere to the following format: First residue : Second residue Distance Energy Lys 10 Chain A: ASP 46 Chain A D= 4.76 ang E= -2.32 kcals/mol (For some reason I can't organize the top two rows, but the first row should be labels and below it the corresponding values.) I really have no idea how to tackle this problem, any and all help is greatly appreciated. I hope this is the right place to ask. Thank you in advance. Using python 2.5 A: Where exactly is your problem? Your description is much too general. The general idea is as follows: Load the PDB file and parse each line. That will give you a list of atoms and their (x, y, z) positions. Iterate over the list in a nested loop to compare each atom with each other. Compute the distance of the atom pair. If their distance is less than 8.0 Å, compute their charges. A: have you looked at solutions already done? http://biopython.org/wiki/Biopython http://pymmlib.sourceforge.net/ if you want to roll your own, you will have the implement databank format parser (which is trivial). Then something like this assuming you structure is in atoms (I know this is not what you after, but perhaps this will give the idea how to do it): for i in range(len(atoms)): for j in range(i): r = distance(i,j) if r < 8: Q += (atoms[i].q * atoms[j].q)/r however, you have to be careful with hydrogens oftentimes they are not provided explicitly, especially with NMR data
Python: need to get energies of charge pairs
I am new to python. I have to make a program for a project that takes a PDB format file as input and returns a list of all the intra-chain and inter-chain charge pairs and their energies (using coulomb’s law assuming a dielectric constant of () of 40.0). For simplicity, the charged residues for this program are just Arg (CZ), Lys (NZ), Asp (CG) and Glu (CD) with the charge bearing atoms for each indicated in parentheses. The program should report any attractive or repulsive interactions within 8.0 Å. Here is some additional information needed for the program. Eij = energy of interaction between atoms i and j in kilocalories/mole (kcals/mol) qi = charge for atom i (+1 for Lys or Arg, -1 for Glu or Asp) rij = distance between atoms i and j in angstroms using the distance formula The output should adhere to the following format: First residue : Second residue Distance Energy Lys 10 Chain A: ASP 46 Chain A D= 4.76 ang E= -2.32 kcals/mol (For some reason I can't organize the top two rows, but the first row should be labels and below it the corresponding values.) I really have no idea how to tackle this problem, any and all help is greatly appreciated. I hope this is the right place to ask. Thank you in advance. Using python 2.5
[ "Where exactly is your problem? Your description is much too general.\nThe general idea is as follows:\n\nLoad the PDB file and parse each line.\nThat will give you a list of atoms and their (x, y, z) positions.\nIterate over the list in a nested loop to compare each atom with each other.\nCompute the distance of the atom pair.\nIf their distance is less than 8.0 Å, compute their charges.\n\n", "have you looked at solutions already done?\nhttp://biopython.org/wiki/Biopython\nhttp://pymmlib.sourceforge.net/\nif you want to roll your own, you will have the implement databank format parser (which is trivial).\nThen something like this assuming you structure is in atoms (I know this is not what you after, but perhaps this will give the idea how to do it):\nfor i in range(len(atoms)):\n for j in range(i):\n r = distance(i,j)\n if r < 8: Q += (atoms[i].q * atoms[j].q)/r\n\nhowever, you have to be careful with hydrogens oftentimes they are not provided explicitly, especially with NMR data\n" ]
[ 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0002700236_python.txt
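A rough end-to-end sketch of the approach the answers above outline. The fixed PDB column offsets, the CHARGED lookup table and the ~332 kcal·Å/(mol·e²) Coulomb factor are assumptions made for illustration, not part of the original question:

import math

CHARGED = {('ARG', 'CZ'): +1, ('LYS', 'NZ'): +1,
           ('ASP', 'CG'): -1, ('GLU', 'CD'): -1}
EPSILON = 40.0
COULOMB = 332.0          # approx. conversion to kcal/mol with distances in angstroms

def charged_atoms(pdb_path):
    atoms = []
    for line in open(pdb_path):
        if not line.startswith('ATOM'):
            continue
        name, res = line[12:16].strip(), line[17:20].strip()
        if (res, name) not in CHARGED:
            continue
        chain, resseq = line[21], line[22:26].strip()
        xyz = (float(line[30:38]), float(line[38:46]), float(line[46:54]))
        atoms.append((res, resseq, chain, CHARGED[(res, name)], xyz))
    return atoms

def report_pairs(pdb_path, cutoff=8.0):
    atoms = charged_atoms(pdb_path)
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            (xi, yi, zi), (xj, yj, zj) = atoms[i][4], atoms[j][4]
            r = math.sqrt((xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2)
            if r <= cutoff:
                e = COULOMB * atoms[i][3] * atoms[j][3] / (EPSILON * r)
                print("%s %s Chain %s : %s %s Chain %s D= %.2f ang E= %.2f kcals/mol"
                      % (atoms[i][0], atoms[i][1], atoms[i][2],
                         atoms[j][0], atoms[j][1], atoms[j][2], r, e))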
Q: Bipartite matching in Python Does anybody know any module in Python that computes the best bipartite matching? I have tried the following two: munkres hungarian However, in my case, I have to deal with non-complete graph (i.e., there might not be an edge between two nodes), and therefore, there might not be a match if the node has no edge. The above two packages seem not to be able to deal with this. Any advice? A: Set cost to infinity or a large value for an edge that does not exist. You can then tell by the result whether an invalid edge was used.
Bipartite matching in Python
Does anybody know any module in Python that computes the best bipartite matching? I have tried the following two: munkres hungarian However, in my case, I have to deal with non-complete graph (i.e., there might not be an edge between two nodes), and therefore, there might not be a match if the node has no edge. The above two packages seem not to be able to deal with this. Any advice?
[ "Set cost to infinity or a large value for an edge that does not exist. You can then tell by the result whether an invalid edge was used.\n" ]
[ 5 ]
[]
[]
[ "bipartite", "graph", "python" ]
stackoverflow_0002700847_bipartite_graph_python.txt
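Following the answer above, a small sketch with the munkres package: missing edges get a huge cost and any assignment that lands on one is discarded afterwards (the cost matrix here is made up):

from munkres import Munkres

BIG = 10 ** 9                      # stand-in cost for "no edge"
cost = [[4,   BIG, 2  ],
        [BIG, 3,   BIG],
        [5,   BIG, BIG]]

assignment = Munkres().compute([row[:] for row in cost])   # list of (row, col) pairs
matching = [(i, j) for i, j in assignment if cost[i][j] < BIG]
print(matching)                    # only pairs that use real edges survive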
Q: Regular Expression Question I'm trying to use regular expression to extract the comments in the heading of a file. For example, the source code may look like: //This is an example file. //Please help me. #include "test.h" int main() //main function { ... } What I want to extract from the code are the first two lines, i.e. //This is an example file. //Please help me. Any idea? A: Why use regex? >>> f = file('/tmp/source') >>> for line in f.readlines(): ... if not line.startswith('//'): ... break ... print line ... A: >>> code="""//This is an example file. ... //Please help me. ... ... #include "test.h" ... int main() //main function ... { ... ... ... } ... """ >>> >>> import re >>> re.findall("^\s*//.*",code,re.MULTILINE) ['//This is an example file.', '//Please help me.'] >>> If you only need to match continuous comment lines at the top, you could use following. >>> re.search("^((?:\s*//.*\n)+)",code).group().strip().split("\n") ['//This is an example file.', '//Please help me.'] >>> A: this doesn't just get the first 2 comment lines, but mulitline and // comments at the back as well. Its not what you required though. data=open("file").read() for c in data.split("*/"): # multiline if "/*" in c: print ''.join(c.split("/*")[1:]) if "//" in c: for item in c.split("\n"): if "//" in c: print ''.join(item.split("//")[1:]) A: to extend the context into below considerations spaces in front of //... empty lines between each //... line import re code = """//This is an example file. a // Please help me. // ha #include "test.h" int main() //main function { ... }""" for s in re.finditer(r"^(\s*)(//.*)",code,re.MULTILINE): print(s.group(2)) >>> //This is an example file. // Please help me. // ha
Regular Expression Question
I'm trying to use regular expression to extract the comments in the heading of a file. For example, the source code may look like: //This is an example file. //Please help me. #include "test.h" int main() //main function { ... } What I want to extract from the code are the first two lines, i.e. //This is an example file. //Please help me. Any idea?
[ "Why use regex?\n>>> f = file('/tmp/source')\n>>> for line in f.readlines():\n... if not line.startswith('//'):\n... break\n... print line\n... \n\n", ">>> code=\"\"\"//This is an example file.\n... //Please help me.\n...\n... #include \"test.h\"\n... int main() //main function\n... {\n... ...\n... }\n... \"\"\"\n>>>\n>>> import re\n>>> re.findall(\"^\\s*//.*\",code,re.MULTILINE)\n['//This is an example file.', '//Please help me.']\n>>>\n\nIf you only need to match continuous comment lines at the top, you could use following.\n>>> re.search(\"^((?:\\s*//.*\\n)+)\",code).group().strip().split(\"\\n\")\n['//This is an example file.', '//Please help me.']\n>>>\n\n", "this doesn't just get the first 2 comment lines, but mulitline and // comments at the back as well. Its not what you required though.\ndata=open(\"file\").read()\nfor c in data.split(\"*/\"):\n # multiline\n if \"/*\" in c:\n print ''.join(c.split(\"/*\")[1:])\n if \"//\" in c:\n for item in c.split(\"\\n\"):\n if \"//\" in c:\n print ''.join(item.split(\"//\")[1:])\n\n", "to extend the context into below considerations\n\nspaces in front of //... \nempty lines between each //... line \n\n\nimport re\n\ncode = \"\"\"//This is an example file. \n a\n // Please help me.\n\n// ha\n\n#include \"test.h\"\nint main() //main function\n{\n ...\n}\"\"\"\n\nfor s in re.finditer(r\"^(\\s*)(//.*)\",code,re.MULTILINE):\n print(s.group(2))\n\n>>>\n//This is an example file. \n// Please help me.\n// ha\n\n" ]
[ 5, 2, 1, 0 ]
[]
[]
[ "python", "regex", "string" ]
stackoverflow_0002699417_python_regex_string.txt
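If only the contiguous comment block at the very top of the file is wanted, itertools.takewhile is another regex-free option (a small sketch; the filename is made up):

import itertools

source = open('test.cpp')
header = list(itertools.takewhile(lambda line: line.startswith('//'), source))
print(''.join(header))             # just the leading // lines, nothing else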
Q: Endless problems with a very simple python subprocess.Popen task I'd like python to send around a half-million integers in the range 0-255 each to an executable written in C++. This executable will then respond with a few thousand integers. Each on one line. This seems like it should be very simple to do with subprocess but i've had endless troubles. Right now im testing with code: // main() u32 num; std::cin >> num; u8* data = new u8[num]; for (u32 i = 0; i < num; ++i) std::cin >> data[i]; // test output / spit it back out for (u32 i = 0; i < num; ++i) std::cout << data[i] << std::endl; return 0; Building an array of strings ("data"), each like "255\n", in python and then using: output = proc.communicate("".join(data))[0] ...doesn't work (says stdin is closed, maybe too much at one time). Neither has using proc.stdin and proc.stdout worked. This should be so very simple, but I'm getting constant exceptions, and/or no output data returned to me. My Popen is currently: proc = Popen('aux/test_cpp_program', stdin=PIPE, stdout=PIPE, bufsize=1) Advise me before I pull my hair out. ;) A: This works perfectly for me: #include <iostream> int main() { int num; std::cin >> num; char* data = new char[num]; for (int i = 0; i < num; ++i) std::cin >> data[i]; // test output / spit it back out for (int i = 0; i < num; ++i) std::cout << data[i] << std::endl; return 0; } python: In [9]: from subprocess import Popen, PIPE In [10]: a = Popen('./a.out', stdin=PIPE, stdout=PIPE) In [11]: a.stdin.write("2\nab") In [12]: a.stdout.read() Out[12]: 'a\nb\n' Note that I added a delimiter (\n) between the number of bytes to write, this is the safest if you do not want your c++ cin to fail on sending something like 3,1,2,3 which would concatenate to 3123, expecting so many arguments. A: In C++, when you read a char or unsigned char from cin, it reads a single byte from stdin. However, you expect it to read a decimal representation of a number from 0 to 255. If you read an int instead it should read it correctly: unsigned int n; std::cin >> n; data[i] = static_cast<u8>(n); Or instead, you can have the Python code write the values as a sequence of bytes rather than digits, by using the chr function.
Endless problems with a very simple python subprocess.Popen task
I'd like python to send around a half-million integers in the range 0-255 each to an executable written in C++. This executable will then respond with a few thousand integers. Each on one line. This seems like it should be very simple to do with subprocess but i've had endless troubles. Right now im testing with code: // main() u32 num; std::cin >> num; u8* data = new u8[num]; for (u32 i = 0; i < num; ++i) std::cin >> data[i]; // test output / spit it back out for (u32 i = 0; i < num; ++i) std::cout << data[i] << std::endl; return 0; Building an array of strings ("data"), each like "255\n", in python and then using: output = proc.communicate("".join(data))[0] ...doesn't work (says stdin is closed, maybe too much at one time). Neither has using proc.stdin and proc.stdout worked. This should be so very simple, but I'm getting constant exceptions, and/or no output data returned to me. My Popen is currently: proc = Popen('aux/test_cpp_program', stdin=PIPE, stdout=PIPE, bufsize=1) Advise me before I pull my hair out. ;)
[ "This works perfectly for me:\n#include <iostream>\n\nint main()\n{\n int num;\n std::cin >> num;\n\n char* data = new char[num];\n for (int i = 0; i < num; ++i)\n std::cin >> data[i];\n\n // test output / spit it back out\n for (int i = 0; i < num; ++i)\n std::cout << data[i] << std::endl;\n\n return 0;\n}\n\npython:\nIn [9]: from subprocess import Popen, PIPE\nIn [10]: a = Popen('./a.out', stdin=PIPE, stdout=PIPE)\nIn [11]: a.stdin.write(\"2\\nab\")\nIn [12]: a.stdout.read()\nOut[12]: 'a\\nb\\n'\n\nNote that I added a delimiter (\\n) between the number of bytes to write, this is the safest if you do not want your c++ cin to fail on sending something like 3,1,2,3 which would concatenate to 3123, expecting so many arguments.\n", "In C++, when you read a char or unsigned char from cin, it reads a single byte from stdin. However, you expect it to read a decimal representation of a number from 0 to 255. If you read an int instead it should read it correctly:\nunsigned int n;\nstd::cin >> n;\ndata[i] = static_cast<u8>(n);\n\nOr instead, you can have the Python code write the values as a sequence of bytes rather than digits, by using the chr function.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "subprocess" ]
stackoverflow_0002701364_python_subprocess.txt
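A hedged sketch of the Python side, assuming the C++ program is changed to read each value with cin into an int as the second answer suggests; the executable path and values are placeholders:

from subprocess import Popen, PIPE

values = [0, 17, 255, 42]                      # stand-in for the half-million byte values
payload = "%d\n%s\n" % (len(values), "\n".join(str(v) for v in values))

proc = Popen(['aux/test_cpp_program'], stdin=PIPE, stdout=PIPE)
out, _ = proc.communicate(payload)             # write all input, close stdin, read all output
results = [int(line) for line in out.splitlines() if line.strip()]
print(results)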
Q: Python re.IGNORECASE being dynamic I'd like to do something like this: re.findall(r"(?:(?:\A|\W)" + 'Hello' + r"(?:\Z|\W))", 'hello world',re.I) And have re.I be dynamic, so I can do case-sensitive or insensitive comparisons on the fly. This works but is undocumented: re.findall(r"(?:(?:\A|\W)" + 'Hello' + r"(?:\Z|\W))", 'hello world',1) To set it to sensitive. Is there a Pythonic way to do this? My best thought so far is: if case_sensitive: regex_senstive = 1 else: regex_sensitive = re.I re.findall(r"(?:(?:\A|\W)" + 'Hello' + r"(?:\Z|\W))", 'hello world',regex_sensitive) A: To get the default behavior, you can use 0 for the flags parameter. You should not use 1, as it will set the undocumented re.TEMPLATE flag, which disables backtracking. So you can use: flags = 0 if case_sensitive else re.I re.findall(r'pattern', s, flags) The flags parameter is actually a combination of flags (re.I, re.M, etc.), with each flag represented by a single bit. When no bits are set (the value 0), the default behavior is used.
Python re.IGNORECASE being dynamic
I'd like to do something like this: re.findall(r"(?:(?:\A|\W)" + 'Hello' + r"(?:\Z|\W))", 'hello world',re.I) And have re.I be dynamic, so I can do case-sensitive or insensitive comparisons on the fly. This works but is undocumented: re.findall(r"(?:(?:\A|\W)" + 'Hello' + r"(?:\Z|\W))", 'hello world',1) To set it to sensitive. Is there a Pythonic way to do this? My best thought so far is: if case_sensitive: regex_senstive = 1 else: regex_sensitive = re.I re.findall(r"(?:(?:\A|\W)" + 'Hello' + r"(?:\Z|\W))", 'hello world',regex_sensitive)
[ "To get the default behavior, you can use 0 for the flags parameter. You should not use 1, as it will set the undocumented re.TEMPLATE flag, which disables backtracking.\nSo you can use:\nflags = 0 if case_sensitive else re.I\nre.findall(r'pattern', s, flags)\n\nThe flags parameter is actually a combination of flags (re.I, re.M, etc.), with each flag represented by a single bit. When no bits are set (the value 0), the default behavior is used.\n" ]
[ 2 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0002701844_python_regex.txt
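A small self-contained version of the idea in the answer above; the helper name is made up, and the comment notes that flags combine with | because they are plain bit masks:

import re

def find_word(word, text, case_sensitive=False):
    flags = 0 if case_sensitive else re.IGNORECASE
    # extra flags can be OR-ed in, e.g. flags |= re.MULTILINE
    return re.findall(r"(?:\A|\W)(%s)(?:\Z|\W)" % re.escape(word), text, flags)

print(find_word("Hello", "hello world"))                        # ['hello']
print(find_word("Hello", "hello world", case_sensitive=True))   # []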
Q: Fast iterating over first n items of an iterable (not a list) in python I'm looking for a pythonic way of iterating over first n items of an iterable (upd: not a list in a common case, as for lists things are trivial), and it's quite important to do this as fast as possible. This is how I do it now: count = 0 for item in iterable: do_something(item) count += 1 if count >= n: break Doesn't seem neat to me. Another way of doing this is: for item in itertools.islice(iterable, n): do_something(item) This looks good, the question is it fast enough to use with some generator(s)? For example: pair_generator = lambda iterable: itertools.izip(*[iter(iterable)]*2) for item in itertools.islice(pair_generator(iterable), n): so_something(item) Will it run fast enough as compared to the first method? Is there some easier way to do it? A: for item in itertools.islice(iterable, n): is the most obvious, easy way to do it. It works for arbitrary iterables and is O(n), like would be any sane solution. It's conceivable that another solution could have better performance; we wouldn't know without timing. I wouldn't recommend bothering with timing unless you profile your code and find this call to be a hotspot. Unless it's buries within an inner loop, it is highly doubtful that it will be. Premature optimization is the root of all evil. If I was going to look for alternate solutions, I would look at ones like for count, item in enumerate(iterable): if count > n: break ... and for i in xrange(n): item = next(iterator) .... I wouldn't guess these would help, but they seem to be worth trying if we really want to compare things. If I was stuck in a situation where I profiled and found this was a hotspot in an inner loop (is this really your situation?), I would also try to ease the name lookup from getting the islice attribute of the global iterools to binding the function to a local name already. These are things you only do after you've proven they'll help. People try doing them other times a lot. It doens't help make their programs appreciably faster; it just makes their programs worse. A: itertools tends to be the fastest solution, when directly applicable. Obviously, the only way to check is to benchmark -- e.g., save in aaa.py import itertools def doit1(iterable, n, do_something=lambda x: None): count = 0 for item in iterable: do_something(item) count += 1 if count >= n: break def doit2(iterable, n, do_something=lambda x: None): for item in itertools.islice(iterable, n): do_something(item) pair_generator = lambda iterable: itertools.izip(*[iter(iterable)]*2) def dd1(itrbl=range(44)): doit1(itrbl, 23) def dd2(itrbl=range(44)): doit2(itrbl, 23) and see...: $ python -mtimeit -s'import aaa' 'aaa.dd1()' 100000 loops, best of 3: 8.82 usec per loop $ python -mtimeit -s'import aaa' 'aaa.dd2()' 100000 loops, best of 3: 6.33 usec per loop so clearly, itertools is faster here -- benchmark with your own data to verify. BTW, I find timeit MUCH more usable from the command line, so that's how I always use it -- it then runs the right "order of magnitude" of loops for the kind of speeds you're specifically trying to measure, be those 10, 100, 1000, and so on -- here, to distinguish a microsecond and a half of difference, a hundred thousand loops is about right. A: If it's a list then you can use slicing: list[:n] A: You can use enumerate to write essentially the same loop you have, but in a more simple, Pythonic way: for idx, val in enumerate(iterableobj): if idx > n: break do_something(val) A: Of a list? 
Try for k in mylist[0:n]: # do stuff with k you can also use a comprehension if you need to my_new_list = [blah(k) for k in mylist[0:n]]
Fast iterating over first n items of an iterable (not a list) in python
I'm looking for a pythonic way of iterating over first n items of an iterable (upd: not a list in a common case, as for lists things are trivial), and it's quite important to do this as fast as possible. This is how I do it now: count = 0 for item in iterable: do_something(item) count += 1 if count >= n: break Doesn't seem neat to me. Another way of doing this is: for item in itertools.islice(iterable, n): do_something(item) This looks good, the question is it fast enough to use with some generator(s)? For example: pair_generator = lambda iterable: itertools.izip(*[iter(iterable)]*2) for item in itertools.islice(pair_generator(iterable), n): so_something(item) Will it run fast enough as compared to the first method? Is there some easier way to do it?
[ "for item in itertools.islice(iterable, n): is the most obvious, easy way to do it. It works for arbitrary iterables and is O(n), like would be any sane solution.\nIt's conceivable that another solution could have better performance; we wouldn't know without timing. I wouldn't recommend bothering with timing unless you profile your code and find this call to be a hotspot. Unless it's buries within an inner loop, it is highly doubtful that it will be. Premature optimization is the root of all evil.\n\nIf I was going to look for alternate solutions, I would look at ones like for count, item in enumerate(iterable): if count > n: break ... and for i in xrange(n): item = next(iterator) .... I wouldn't guess these would help, but they seem to be worth trying if we really want to compare things. If I was stuck in a situation where I profiled and found this was a hotspot in an inner loop (is this really your situation?), I would also try to ease the name lookup from getting the islice attribute of the global iterools to binding the function to a local name already. \nThese are things you only do after you've proven they'll help. People try doing them other times a lot. It doens't help make their programs appreciably faster; it just makes their programs worse.\n", "itertools tends to be the fastest solution, when directly applicable.\nObviously, the only way to check is to benchmark -- e.g., save in aaa.py\nimport itertools\n\ndef doit1(iterable, n, do_something=lambda x: None):\n count = 0\n for item in iterable:\n do_something(item)\n count += 1\n if count >= n: break\n\ndef doit2(iterable, n, do_something=lambda x: None):\n for item in itertools.islice(iterable, n):\n do_something(item)\n\npair_generator = lambda iterable: itertools.izip(*[iter(iterable)]*2)\n\ndef dd1(itrbl=range(44)): doit1(itrbl, 23)\ndef dd2(itrbl=range(44)): doit2(itrbl, 23)\n\nand see...:\n$ python -mtimeit -s'import aaa' 'aaa.dd1()'\n100000 loops, best of 3: 8.82 usec per loop\n$ python -mtimeit -s'import aaa' 'aaa.dd2()'\n100000 loops, best of 3: 6.33 usec per loop\n\nso clearly, itertools is faster here -- benchmark with your own data to verify.\nBTW, I find timeit MUCH more usable from the command line, so that's how I always use it -- it then runs the right \"order of magnitude\" of loops for the kind of speeds you're specifically trying to measure, be those 10, 100, 1000, and so on -- here, to distinguish a microsecond and a half of difference, a hundred thousand loops is about right.\n", "If it's a list then you can use slicing:\nlist[:n]\n\n", "You can use enumerate to write essentially the same loop you have, but in a more simple, Pythonic way:\n\nfor idx, val in enumerate(iterableobj):\n if idx > n:\n break\n do_something(val)\n\n", "Of a list? Try \nfor k in mylist[0:n]:\n # do stuff with k\n\nyou can also use a comprehension if you need to\nmy_new_list = [blah(k) for k in mylist[0:n]]\n\n" ]
[ 16, 6, 2, 2, 1 ]
[]
[]
[ "generator", "iterator", "performance", "python" ]
stackoverflow_0002702158_generator_iterator_performance_python.txt
Q: 'int' object is not callable I'm trying to define a simply Fraction class And I'm getting this error: python fraction.py Traceback (most recent call last): File "fraction.py", line 20, in <module> f.numerator(2) TypeError: 'int' object is not callable The code follows: class Fraction(object): def __init__( self, n=0, d=0 ): self.numerator = n self.denominator = d def get_numerator(self): return self.numerator def get_denominator(self): return self.denominator def numerator(self, n): self.numerator = n def denominator( self, d ): self.denominator = d def prints( self ): print "%d/%d" %(self.numerator, self.denominator) if __name__ == "__main__": f = Fraction() f.numerator(2) f.denominator(5) f.prints() I thought it was because I had numerator(self) and numerator(self, n) but now I know Python doesn't have method overloading ( function overloading ) so I renamed to get_numerator but that's not the problems. What could it be? A: You're using numerator as both a method name (def numerator(...)) and member variable name (self.numerator = n). Use set_numerator and set_denominator for the method names and it will work. By the way, Python 2.6 has a built-in fraction class. A: You can't overload the name numerator to refer to both the member variable and the method. When you set self.numerator = n, you're overwriting the reference to the method, and so when you call f.numerator(2), it's trying to do a method call on the member variable, which is an int, and Python doesn't let you do that. It's like saying x = 2; x(4) -- it just doesn't make any sense. You should rename the setter methods to set_numerator and set_denominator to remove this naming conflict. A: You are using numerator as both a method name and a name for an instance attribute. Since methods are stored on the class, when you lookup that attribute you get the number, not the method. (Python will look up attributes on the instance before looking at the class.) That is to say that on the line where you say f.numerator(2), it looks up f.numerator and finds that it is 0, then tries to call that 0, which obviously shouldn't work. If you have any practical purpose for this code, you can use the stdlib fractions module: http://docs.python.org/library/fractions.html This is new in Python 2.6. If I needed to represent fractions but was using an earlier version of Python, I'd probably use sympy's Rational type. A more practical default value for denominator is probably 1. (That way Fraction(5) would be five, not some undefined operation tending towards infinity.) Rather than a prints method, it would be more typical to define __str__ and just print your object. Your methods are just getting and setting an attribute. In Python, we generally do not use getters and setters—we just let users set our attributes. You're coming from a Java background, where one of the basic rules is always to use getter and setter methods rather than let users access attributes. The rationale for this rule is that if, at some future date, you needed to do more than just get and set (you needed to process the data), it would require an API change. Since in Python we have properties, we would not need an API change in that instance so we can safely avoid the boilerplate and cruft of setters and getters. It wouldn't hurt to inherit numbers.Rational (Python 2.6 and up), which lets your class automatically do several things numbers are expected to. You will have to implement everything it needs you to, but then it will automatically make a lot more work. 
Check out http://docs.python.org/library/numbers.html to learn more. Spoiler alert: class Fraction(object): """Don't forget the docstring.....""" def __init__(self, numerator=0, denominator=1): self.numerator = numerator self.denominator = denominator def __str__(self): return "%d / %d" % (self.numerator, self.denominator) # I probably want to implement a lot of arithmetic and stuff! if __name__ == "__main__": f = Fraction(2, 5) # If I wanted to change the numerator or denominator at this point, # I'd just do `f.numerator = 4` or whatever. print f
'int' object is not callable
I'm trying to define a simply Fraction class And I'm getting this error: python fraction.py Traceback (most recent call last): File "fraction.py", line 20, in <module> f.numerator(2) TypeError: 'int' object is not callable The code follows: class Fraction(object): def __init__( self, n=0, d=0 ): self.numerator = n self.denominator = d def get_numerator(self): return self.numerator def get_denominator(self): return self.denominator def numerator(self, n): self.numerator = n def denominator( self, d ): self.denominator = d def prints( self ): print "%d/%d" %(self.numerator, self.denominator) if __name__ == "__main__": f = Fraction() f.numerator(2) f.denominator(5) f.prints() I thought it was because I had numerator(self) and numerator(self, n) but now I know Python doesn't have method overloading ( function overloading ) so I renamed to get_numerator but that's not the problems. What could it be?
[ "You're using numerator as both a method name (def numerator(...)) and member variable name (self.numerator = n). Use set_numerator and set_denominator for the method names and it will work.\nBy the way, Python 2.6 has a built-in fraction class.\n", "You can't overload the name numerator to refer to both the member variable and the method. When you set self.numerator = n, you're overwriting the reference to the method, and so when you call f.numerator(2), it's trying to do a method call on the member variable, which is an int, and Python doesn't let you do that. It's like saying x = 2; x(4) -- it just doesn't make any sense.\nYou should rename the setter methods to set_numerator and set_denominator to remove this naming conflict.\n", "\nYou are using numerator as both a method name and a name for an instance attribute. Since methods are stored on the class, when you lookup that attribute you get the number, not the method. (Python will look up attributes on the instance before looking at the class.)\nThat is to say that on the line where you say f.numerator(2), it looks up f.numerator and finds that it is 0, then tries to call that 0, which obviously shouldn't work.\nIf you have any practical purpose for this code, you can use the stdlib fractions module: http://docs.python.org/library/fractions.html\n\nThis is new in Python 2.6. If I needed to represent fractions but was using an earlier version of Python, I'd probably use sympy's Rational type.\n\nA more practical default value for denominator is probably 1. (That way Fraction(5) would be five, not some undefined operation tending towards infinity.)\nRather than a prints method, it would be more typical to define __str__ and just print your object.\nYour methods are just getting and setting an attribute. In Python, we generally do not use getters and setters—we just let users set our attributes.\n\nYou're coming from a Java background, where one of the basic rules is always to use getter and setter methods rather than let users access attributes. The rationale for this rule is that if, at some future date, you needed to do more than just get and set (you needed to process the data), it would require an API change. Since in Python we have properties, we would not need an API change in that instance so we can safely avoid the boilerplate and cruft of setters and getters.\n\nIt wouldn't hurt to inherit numbers.Rational (Python 2.6 and up), which lets your class automatically do several things numbers are expected to. You will have to implement everything it needs you to, but then it will automatically make a lot more work. Check out Check out http://docs.python.org/library/numbers.html to learn more.\n\n\nSpoiler alert:\nclass Fraction(object):\n \"\"\"Don't forget the docstring.....\"\"\"\n\n def __init__(self, numerator=0, denominator=1):\n self.numerator = numerator\n self.denominator = denominator\n\n def __str__(self):\n return \"%d / %d\" % (self.numerator, self.denominator)\n\n # I probably want to implement a lot of arithmetic and stuff!\n\nif __name__ == \"__main__\":\n f = Fraction(2, 5)\n # If I wanted to change the numerator or denominator at this point, \n # I'd just do `f.numerator = 4` or whatever.\n print f\n\n" ]
[ 18, 8, 7 ]
[]
[]
[ "python" ]
stackoverflow_0002702344_python.txt
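For completeness, the standard library class mentioned in the answers above (Python 2.6+) already covers the example without any custom code:

from fractions import Fraction

f = Fraction(2, 5)
print(f)                   # 2/5
print(f.numerator)         # 2
print(f.denominator)       # 5
print(f + Fraction(1, 5))  # 3/5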
Q: Generating Mouse-Keyboard combination events in python I want to be able to do a combination of keypresses and mouseclicks simultaneously, as in for example Control+LeftClick At the moment I am able to do Control and then a left click with the following code: import win32com, win32api, win32con def CopyBox( x, y): time.sleep(.2) wsh = win32com.client.Dispatch("WScript.Shell") wsh.SendKeys("^") win32api.SetCursorPos((x,y)) win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, x, y, 0, 0) win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, x, y, 0, 0) What this does is press control on the keyboard, then it clicks. I need it to keep the controll pressed longer and return while it's still pressed to continue running the code. Is there a maybe lower level way of saying press the key and then later in the code tell it to lift up the key such as like what the mouse is doing? A: to press control: win32api.keybd_event(win32con.VK_CONTROL, 0, win32con.KEYEVENTF_EXTENDEDKEY, 0) to release it: win32api.keybd_event(win32con.VK_CONTROL, 0, win32con.KEYEVENTF_EXTENDEDKEY | win32con.KEYEVENTF_KEYUP, 0) so your code will look like this: import win32api, win32con def CopyBox(x, y): time.sleep(.2) win32api.keybd_event(win32con.VK_CONTROL, 0, win32con.KEYEVENTF_EXTENDEDKEY, 0) win32api.SetCursorPos((x,y)) win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, x, y, 0, 0) win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, x, y, 0, 0) win32api.keybd_event(win32con.VK_CONTROL, 0, win32con.KEYEVENTF_KEYUP, 0)
Generating Mouse-Keyboard combination events in python
I want to be able to do a combination of keypresses and mouseclicks simultaneously, as in for example Control+LeftClick At the moment I am able to do Control and then a left click with the following code: import win32com, win32api, win32con def CopyBox( x, y): time.sleep(.2) wsh = win32com.client.Dispatch("WScript.Shell") wsh.SendKeys("^") win32api.SetCursorPos((x,y)) win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, x, y, 0, 0) win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, x, y, 0, 0) What this does is press control on the keyboard, then it clicks. I need it to keep the controll pressed longer and return while it's still pressed to continue running the code. Is there a maybe lower level way of saying press the key and then later in the code tell it to lift up the key such as like what the mouse is doing?
[ "to press control:\nwin32api.keybd_event(win32con.VK_CONTROL, 0, win32con.KEYEVENTF_EXTENDEDKEY, 0)\n\nto release it:\nwin32api.keybd_event(win32con.VK_CONTROL, 0, win32con.KEYEVENTF_EXTENDEDKEY | win32con.KEYEVENTF_KEYUP, 0)\n\nso your code will look like this:\nimport win32api, win32con\ndef CopyBox(x, y):\n time.sleep(.2)\n win32api.keybd_event(win32con.VK_CONTROL, 0, win32con.KEYEVENTF_EXTENDEDKEY, 0)\n win32api.SetCursorPos((x,y))\n win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, x, y, 0, 0)\n win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, x, y, 0, 0)\n win32api.keybd_event(win32con.VK_CONTROL, 0, win32con.KEYEVENTF_KEYUP, 0)\n\n" ]
[ 3 ]
[]
[]
[ "combinations", "keyboard_hook", "mouseevent", "python" ]
stackoverflow_0002702617_combinations_keyboard_hook_mouseevent_python.txt
Q: best way to find out type I have a dict val_dict - {'val1': 'abcd', 'val': '1234', 'val3': '1234.00', 'val4': '1abcd 2gfff'} All the values to my keys are string. So my question is how to find out type for my values in the dict. I mean if i say`int(val_dict['val1']) will give me error. Basically what I am trying to do is find out if the string is actual string or int or float.` if int( val_dict['val1'): dosomething else if float(val_dict['val1']): dosomething thanks A: Maybe this: is_int = True try: as_int = int (val_dict['val1']) except ValueError: is_int = False as_float = float (val_dict['val1']) if is_int: ... else: ... You can get rid of is_int, but then there will be a lot of code (all float value handling) in try...except and I'd feel uneasy about that. A: All the values are of course "actual strings" (you can do with them all you can possibly do with strings!), but I think most respondents know what you mean -- you want to try converting each value to several possible types in turn ('int' then 'float' is specifically what you name, but couldn't there be others...?) and return and use the first conversion that succeeds. This is of course best encapsulated in a function, away from your application logic. If the best match for your needs is just to do the conversion and return the "best converted value" (and they'll all be used similarly), then: def best_convert(s, types=(int, float)): for t in types: try: return t(s) except ValueError: continue return s if you want to do something different in each case, then: def dispatch(s, defaultfun, typesandfuns): for t, f in typesandfuns: try: v = t(s) except ValueError: continue else: return f(v) return defaultfun(s) to be called, e.g, as r = dispatch(s, asstring, ((int, asint), (float, asfloat))) if the functions to be called on "nonconvertible strings", ones convertible to int, and ones convertible to float but not int, are respectively asstring, asint, asfloat. I do not recommend putting the "structural" "try converting to these various types in turn and act accordingly" code in an inextricable mixture with your "application logic" -- this is a clear case for neatly layering the two aspects, with good structure and separation. A: you can determine if the string will convert to an int or float very easily, without using exceptions # string has nothing but digits, so it's an int if string.isdigit(): int(string) # string has nothing but digits and one decimal point, so it's a float elif string.replace('.', '', 1).isdigit(): float(string)
best way to find out type
I have a dict val_dict - {'val1': 'abcd', 'val': '1234', 'val3': '1234.00', 'val4': '1abcd 2gfff'} All the values to my keys are string. So my question is how to find out type for my values in the dict. I mean if i say`int(val_dict['val1']) will give me error. Basically what I am trying to do is find out if the string is actual string or int or float.` if int( val_dict['val1'): dosomething else if float(val_dict['val1']): dosomething thanks
[ "Maybe this:\nis_int = True\ntry:\n as_int = int (val_dict['val1'])\nexcept ValueError:\n is_int = False\n as_float = float (val_dict['val1'])\n\nif is_int:\n ...\nelse:\n ...\n\nYou can get rid of is_int, but then there will be a lot of code (all float value handling) in try...except and I'd feel uneasy about that.\n", "All the values are of course \"actual strings\" (you can do with them all you can possibly do with strings!), but I think most respondents know what you mean -- you want to try converting each value to several possible types in turn ('int' then 'float' is specifically what you name, but couldn't there be others...?) and return and use the first conversion that succeeds.\nThis is of course best encapsulated in a function, away from your application logic. If the best match for your needs is just to do the conversion and return the \"best converted value\" (and they'll all be used similarly), then:\ndef best_convert(s, types=(int, float)):\n for t in types:\n try: return t(s)\n except ValueError: continue\n return s\n\nif you want to do something different in each case, then:\ndef dispatch(s, defaultfun, typesandfuns):\n for t, f in typesandfuns:\n try: \n v = t(s)\n except ValueError:\n continue\n else:\n return f(v)\n return defaultfun(s)\n\nto be called, e.g, as\nr = dispatch(s, asstring, ((int, asint), (float, asfloat)))\n\nif the functions to be called on \"nonconvertible strings\", ones convertible to int, and ones convertible to float but not int, are respectively asstring, asint, asfloat.\nI do not recommend putting the \"structural\" \"try converting to these various types in turn and act accordingly\" code in an inextricable mixture with your \"application logic\" -- this is a clear case for neatly layering the two aspects, with good structure and separation.\n", "you can determine if the string will convert to an int or float very easily, without using exceptions\n# string has nothing but digits, so it's an int\nif string.isdigit():\n int(string)\n\n# string has nothing but digits and one decimal point, so it's a float\nelif string.replace('.', '', 1).isdigit():\n float(string)\n\n" ]
[ 3, 2, 0 ]
[ "A simple solution, if you don't have too many formats, could involve checking the format of each value.\ndef intlike(value):\n return value.isdigit()\ndef floatlike(value):\n import re\n return re.match(\"^\\d+\\.\\d+$\")\n\nif intlike(val_dict['val1']):\n dosomething(int(val_dict['val1']))\nelif floatlike(val_dict['val1']):\n somethingelse(float(val_dict['val1']))\nelse:\n entirelydifferent()\n\nHowever, it really is easier to use Python's exception framework for certain complex formats:\ndef floatlike(value):\n try:\n float(value)\n except ValueError:\n result = False\n else:\n result = True\n return result\n\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0002701426_python.txt
Q: Operations on Python hashes I've got a rather strange problem. For a Distributed Hash Table I need to be able to do some simple math operations on MD5 hashes. These include a sum (numeric sum represented by the hash) and a modulo operation. Now I'm wondering what the best way to implement these operations is. I'm using hashlib to calculate the hashes, but since the hashes I get are then string, how do I calculate with them? A: You can use the hexdigest() method to get hexadecimal digits, and then convert them to a number: >>> h = hashlib.md5('data') >>> int(h.hexdigest(), 16) 188041611063492600696317361555123480284L If you already have the output of digest(), you can convert it to hexadecimal digits: >>> hexDig = ''.join('%02x' % ord(x) for x in h.digest()) >>> int(hexDig, 16) 188041611063492600696317361555123480284L Edit: For the second case, it's actually easier to convert using .encode('hex') or binascii.hexlify: >>> int(h.digest().encode('hex'), 16) 188041611063492600696317361555123480284L >>> int(binascii.hexlify(h.digest()), 16) 188041611063492600696317361555123480284L
Operations on Python hashes
I've got a rather strange problem. For a Distributed Hash Table I need to be able to do some simple math operations on MD5 hashes. These include a sum (numeric sum represented by the hash) and a modulo operation. Now I'm wondering what the best way to implement these operations is. I'm using hashlib to calculate the hashes, but since the hashes I get are then string, how do I calculate with them?
[ "You can use the hexdigest() method to get hexadecimal digits, and then convert them to a number:\n>>> h = hashlib.md5('data')\n>>> int(h.hexdigest(), 16)\n188041611063492600696317361555123480284L\n\nIf you already have the output of digest(), you can convert it to hexadecimal digits:\n>>> hexDig = ''.join('%02x' % ord(x) for x in h.digest())\n>>> int(hexDig, 16)\n188041611063492600696317361555123480284L\n\nEdit:\nFor the second case, it's actually easier to convert using .encode('hex') or binascii.hexlify:\n>>> int(h.digest().encode('hex'), 16)\n188041611063492600696317361555123480284L\n>>> int(binascii.hexlify(h.digest()), 16)\n188041611063492600696317361555123480284L\n\n" ]
[ 35 ]
[]
[]
[ "dht", "hash", "hashlib", "math", "python" ]
stackoverflow_0002702751_dht_hash_hashlib_math_python.txt
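A short sketch connecting the answers above back to the DHT use case in the question: turn the MD5 digest into an integer, then do ordinary arithmetic on it. The ring size and keys are made-up values.

import hashlib

NUM_NODES = 16  # hypothetical number of nodes in the ring

def hash_as_int(key):
    return int(hashlib.md5(key).hexdigest(), 16)

def node_for(key):
    # modulo maps the 128-bit hash onto a node index
    return hash_as_int(key) % NUM_NODES

print node_for('some-key')
print (hash_as_int('a') + hash_as_int('b')) % NUM_NODES  # a "sum" of two hashes, reduced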
Q: Check if the internet cannot be accessed in Python I have an app that makes a HTTP GET request to a particular URL on the internet. But when the network is down (say, no public wifi - or my ISP is down, or some such thing), I get the following traceback at urllib2.urlopen: 70, in get u = urllib2.urlopen(req) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 126, in urlopen return _opener.open(url, data, timeout) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 391, in open response = self._open(req, data) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 409, in _open '_open', req) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 369, in _call_chain result = func(*args) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 1161, in http_open return self.do_open(httplib.HTTPConnection, req) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 1136, in do_open raise URLError(err) URLError: <urlopen error [Errno 8] nodename nor servname provided, or not known> I want to print a friendly error to the user telling him that his network maybe down instead of this unfriendly "nodename nor servname provided" error message. Sure I can catch URLError, but that would catch every url error, not just the one related to network downtime. I am not a purist, so even an error message like "The server example.com cannot be reached; either the server is indeed having problems or your network connection is down" would be nice. How do I go about selectively catching such errors? (For a start, if DNS resolution fails at urllib2.urlopen, that can be reasonably assumed as network inaccessibility? If so, how do I "catch" it in the except block?) A: You should wrap the request in a try/except statement so that you catch the fault and then let them know. try: u = urllib2.urlopen(req) except HTTPError as e: #inform them of the specific error here (based off the error code) except URLError as e: #inform them of the specific error here except Exception as e: #inform them that a general error has occurred A: urllib2 - The Missing Manual has a good section on how to handle URLError and HTTPError exceptions and how to differentiate the conditions that caused them. A: How about catching URLError, then testing the reason attribute? If the reason isn't one you're interested in, re-throw the URLError and handle it somewhere else. Alternatively, you could try httplib2. Its ServerNotFoundError exception would probably suit your needs.
Check if the internet cannot be accessed in Python
I have an app that makes an HTTP GET request to a particular URL on the internet. But when the network is down (say, no public wifi - or my ISP is down, or some such thing), I get the following traceback at urllib2.urlopen: 70, in get u = urllib2.urlopen(req) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 126, in urlopen return _opener.open(url, data, timeout) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 391, in open response = self._open(req, data) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 409, in _open '_open', req) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 369, in _call_chain result = func(*args) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 1161, in http_open return self.do_open(httplib.HTTPConnection, req) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 1136, in do_open raise URLError(err) URLError: <urlopen error [Errno 8] nodename nor servname provided, or not known> I want to print a friendly error to the user telling him that his network may be down instead of this unfriendly "nodename nor servname provided" error message. Sure, I can catch URLError, but that would catch every URL error, not just the one related to network downtime. I am not a purist, so even an error message like "The server example.com cannot be reached; either the server is indeed having problems or your network connection is down" would be nice. How do I go about selectively catching such errors? (For a start, if DNS resolution fails at urllib2.urlopen, that can be reasonably assumed as network inaccessibility? If so, how do I "catch" it in the except block?)
[ "You should wrap the request in a try/except statement so that you catch the fault and then let them know.\ntry:\n u = urllib2.urlopen(req)\nexcept HTTPError as e:\n #inform them of the specific error here (based off the error code)\nexcept URLError as e:\n #inform them of the specific error here\nexcept Exception as e:\n #inform them that a general error has occurred \n\n", "urllib2 - The Missing Manual has a good section on how to handle URLError and HTTPError exceptions and how to differentiate the conditions that caused them. \n", "How about catching URLError, then testing the reason attribute? If the reason isn't one you're interested in, re-throw the URLError and handle it somewhere else.\nAlternatively, you could try httplib2. Its ServerNotFoundError exception would probably suit your needs.\n" ]
[ 8, 1, 1 ]
[]
[]
[ "exception", "networking", "python", "urllib2" ]
stackoverflow_0002702802_exception_networking_python_urllib2.txt
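A hedged sketch of the reason-checking approach from the last answer above: catch URLError and look at its reason attribute to tell DNS/connection failures apart from HTTP-level errors. The exact error classes and errno values can differ by platform, so treat this as a starting point rather than a complete classification.

import socket
import urllib2

def fetch(url):
    try:
        return urllib2.urlopen(url, timeout=10).read()
    except urllib2.HTTPError as e:
        print "The server answered, but with an error: HTTP %s" % e.code
    except urllib2.URLError as e:
        # For network-level failures, e.reason is usually a socket error;
        # gaierror means name resolution failed, which suggests the network is down.
        if isinstance(e.reason, socket.gaierror):
            print "Could not resolve the host - your network connection may be down."
        else:
            print "Could not reach the server: %s" % e.reason
    return None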
Q: Intelligent search and generation of Java code, preferrably using Python? Basically, I do lots of one-off code generation, large-scale refactorings, etc. etc. in Java. My tool language of choice is Python, but I'll take whatever solutions you can offer. Here is a simplified illustration of what I would like, in a pseudocode Generating an implementation for an interface search within my project: for each Interface as iName: write class(name=iName+"Impl", implements=iName) search within the body of iName: for each Method as mName: write method(name=mName, body="// TODO implement this...") Basically, the tool I'm searching for would allow me to: parse files according to their Java structure ("search for interfaces") search for words contextualized by language elements and types ("variables of type SomeClass", "doStuff() method calls on SomeClass instances") to run searches with structural context ("within the body of the current result") easily replace or generate code (with helpers to generate, as above, or functions for replacing, "rename the interface to Foo", "insert the line Blah.Blah()", etc.) The point is, I don't want to spend a lot of time writing these things, as they are usually throwaway. But sometimes I need something just a little smarter than what grep offers. It wouldn't be too hard to write up a simplistic version of this, but if I'm going to use something like this at all, I'd expect it to be robust. Any suggestions of a tool/library that will help me accomplish this? Edit to add some clarification Python is definitely not necessary; I'll take whatever is that. I merely suggest it incase there are choices. This is to be used in combination with IDE refactoring; sometimes it just doesn't do everything I want. In instances where I'm using for code generation (as above), it's for augmenting the output of other code generators. e.g. a library we use outputs a tonne of interfaces, and we need to make standard implementations of each one to mesh it to our codebase. A: First, I am not aware of any tool or libraries implemented in Python that specifically designed for refactoring Java code, and a Google search did not give me any leads. Second, I would posit that writing such a decent tool or library for refactoring Java in Python would be a large task. You would have to implement a Java compiler front-end (lexer/parser, AST builder and type analyser) in Python, then figure out how to integrate this with a program editor. I'm not surprised that nobody has done this ... given that mature alternatives already exist. Thirdly, doing refactoring without a full analysis of the source code (but uses pattern matching for example) will be incapable of doing complex refactoring, and will is likely to make mistakes in edge cases that the implementor did not think of. I expect that is the level at which the OP is currently operating ... Given that bleak outlook, what are the alternatives: One alternative is to use one of the existing Java IDEs (e.g. NetBeans, Eclipse, IDEA. etc) as a refactoring tool. The OP won't be able to extend the capabilities of such a tool in Python code, but the chances are that he won't really need to. I expect that at least one of these IDEs does 95% of what he needs, and (if he is realistic) that should be good enough. Especially when you consider that IDEs have lots of incidental features that help make refactoring easier; e.g. 
structured editing, undo/redo, incremental compilation, intelligent code completion, intelligent searching, type and call hierarchy views, and so on. (Aside ... if existing IDEs are not good enough (@WizardOfOdds - only the OP can make that call!!), it would make more sense to try to extend the refactoring capability of an existing IDE than start again in a different implementation language.) Depending on what he is actually doing, model-driven code generation may be another alternative. For instance, if the refactoring is happening because he is frequently creating and recreating his object model(s), then an alternative is to code the models in some modeling language and generate his code from those models. My tool of choice when doing this kind of thing is Eclipse EMF and related technologies. The EMF technologies include generation of editors, XML serialization, persistence, queries, model to model transformation and so on. I have used EMF to implement and roll out projects with object models consisting of 50 to 100 distinct classes with complex relationships and validation requirements. EMF's support for merging source code edits when you regenerate from an updated model is a key feature. A: If you are coding in Java, I strongly recommend that you use NetBeans IDE. It has this kind of refactoring support builtin. Eclipse also supports this kind of thing (although I prefer NetBeans). Both projects are open source, so if you want to see how they perform this refactoring, you can look at their source code. A: Java has its fair share of criticism these days but in the area of tooling - it isn't justified. We are spoiled for choice; Eclipse, Netbeans, Intellij are the big three IDEs. All of them offer excellent levels of searching and Refactoring. Eclipse has the edge on Netbeans I think and Intellij is often ahead of Eclipse You can also use static analysis tools such as FindBugs, CheckTyle etc to find issues - i.e. excessively long methods and classes, overly complex code. If you really want to leverage your Python skills - take a look at Jython. Its a Python interpreter written in Java.
Intelligent search and generation of Java code, preferrably using Python?
Basically, I do lots of one-off code generation, large-scale refactorings, etc. etc. in Java. My tool language of choice is Python, but I'll take whatever solutions you can offer. Here is a simplified illustration of what I would like, in pseudocode: Generating an implementation for an interface search within my project: for each Interface as iName: write class(name=iName+"Impl", implements=iName) search within the body of iName: for each Method as mName: write method(name=mName, body="// TODO implement this...") Basically, the tool I'm searching for would allow me to: parse files according to their Java structure ("search for interfaces") search for words contextualized by language elements and types ("variables of type SomeClass", "doStuff() method calls on SomeClass instances") to run searches with structural context ("within the body of the current result") easily replace or generate code (with helpers to generate, as above, or functions for replacing, "rename the interface to Foo", "insert the line Blah.Blah()", etc.) The point is, I don't want to spend a lot of time writing these things, as they are usually throwaway. But sometimes I need something just a little smarter than what grep offers. It wouldn't be too hard to write up a simplistic version of this, but if I'm going to use something like this at all, I'd expect it to be robust. Any suggestions of a tool/library that will help me accomplish this? Edit to add some clarification Python is definitely not necessary; I'll take whatever there is. I merely suggest it in case there are choices. This is to be used in combination with IDE refactoring; sometimes it just doesn't do everything I want. In instances where I'm using it for code generation (as above), it's for augmenting the output of other code generators. e.g. a library we use outputs a tonne of interfaces, and we need to make standard implementations of each one to mesh it to our codebase.
[ "First, I am not aware of any tool or libraries implemented in Python that specifically designed for refactoring Java code, and a Google search did not give me any leads.\nSecond, I would posit that writing such a decent tool or library for refactoring Java in Python would be a large task. You would have to implement a Java compiler front-end (lexer/parser, AST builder and type analyser) in Python, then figure out how to integrate this with a program editor. I'm not surprised that nobody has done this ... given that mature alternatives already exist.\nThirdly, doing refactoring without a full analysis of the source code (but uses pattern matching for example) will be incapable of doing complex refactoring, and will is likely to make mistakes in edge cases that the implementor did not think of. I expect that is the level at which the OP is currently operating ...\nGiven that bleak outlook, what are the alternatives:\nOne alternative is to use one of the existing Java IDEs (e.g. NetBeans, Eclipse, IDEA. etc) as a refactoring tool. The OP won't be able to extend the capabilities of such a tool in Python code, but the chances are that he won't really need to. I expect that at least one of these IDEs does 95% of what he needs, and (if he is realistic) that should be good enough. Especially when you consider that IDEs have lots of incidental features that help make refactoring easier; e.g. structured editing, undo/redo, incremental compilation, intelligent code completion, intelligent searching, type and call hierarchy views, and so on.\n(Aside ... if existing IDEs are not good enough (@WizardOfOdds - only the OP can make that call!!), it would make more sense to try to extend the refactoring capability of an existing IDE than start again in a different implementation language.)\nDepending on what he is actually doing, model-driven code generation may be another alternative. For instance, if the refactoring is happening because he is frequently creating and recreating his object model(s), then an alternative is to code the models in some modeling language and generate his code from those models. My tool of choice when doing this kind of thing is Eclipse EMF and related technologies. The EMF technologies include generation of editors, XML serialization, persistence, queries, model to model transformation and so on. I have used EMF to implement and roll out projects with object models consisting of 50 to 100 distinct classes with complex relationships and validation requirements. EMF's support for merging source code edits when you regenerate from an updated model is a key feature.\n", "If you are coding in Java, I strongly recommend that you use NetBeans IDE. It has this kind of refactoring support builtin. Eclipse also supports this kind of thing (although I prefer NetBeans). Both projects are open source, so if you want to see how they perform this refactoring, you can look at their source code.\n", "Java has its fair share of criticism these days but in the area of tooling - it isn't justified.\nWe are spoiled for choice; Eclipse, Netbeans, Intellij are the big three IDEs. All of them offer excellent levels of searching and Refactoring. Eclipse has the edge on Netbeans I think and Intellij is often ahead of Eclipse\nYou can also use static analysis tools such as FindBugs, CheckTyle etc to find issues - i.e. excessively long methods and classes, overly complex code.\nIf you really want to leverage your Python skills - take a look at Jython. Its a Python interpreter written in Java. \n" ]
[ 2, 0, 0 ]
[]
[]
[ "code_generation", "java", "parsing", "python" ]
stackoverflow_0002702315_code_generation_java_parsing_python.txt
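For completeness, a deliberately naive Python sketch of the throwaway generator described in the question's pseudocode. It uses regular expressions instead of a real Java parser, so it only copes with simple, single-interface files; the src/*.java layout and the regexes are assumptions.

import glob
import re

iface_re = re.compile(r'\binterface\s+(\w+)')
method_re = re.compile(r'\b[\w<>\[\], ]+\s+(\w+)\s*\([^)]*\)\s*;')

for path in glob.glob('src/*.java'):        # hypothetical flat source layout
    src = open(path).read()
    match = iface_re.search(src)
    if not match:
        continue
    name = match.group(1)
    out = ['public class %sImpl implements %s {' % (name, name)]
    for meth in method_re.findall(src):
        out.append('    // TODO implement %s()...' % meth)
    out.append('}')
    open('%sImpl.java' % name, 'w').write('\n'.join(out))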
Q: How do I add a method with a decorator to a class in python? How do I add a method with a decorator to a class? I tried def add_decorator( cls ): @dec def update(self): pass cls.update = update usage add_decorator( MyClass ) MyClass.update() but MyClass.update does not have the decorator @dec did not apply to update I'm trying to use this with orm.reconstructor in sqlalchemy. A: If you want class decorator in python >= 2.6 you can do this def funkyDecorator(cls): cls.funky = 1 @funkyDecorator class MyClass(object): pass or in python 2.5 MyClass = funkyDecorator(MyClass) But looks like you are interested in method decorator, for which you can do this def logDecorator(func): def wrapper(*args, **kwargs): print "Before", func.__name__ ret = func(*args, **kwargs) print "After", func.__name__ return ret return wrapper class MyClass(object): @logDecorator def mymethod(self): print "xxx" MyClass().mymethod() Output: Before mymethod xxx After mymethod So in short you have to just put @orm.reconstructor before method definition A: In the class that represents your SQL record, from sqlalchemy.orm import reconstructor class Thing(object): @reconstructor def reconstruct(self): pass
How do I add a method with a decorator to a class in python?
How do I add a method with a decorator to a class? I tried def add_decorator( cls ): @dec def update(self): pass cls.update = update usage add_decorator( MyClass ) MyClass.update() but MyClass.update does not have the decorator; @dec was not applied to update. I'm trying to use this with orm.reconstructor in SQLAlchemy.
[ "If you want class decorator in python >= 2.6 you can do this\ndef funkyDecorator(cls):\n cls.funky = 1\n\n@funkyDecorator\nclass MyClass(object):\n pass\n\nor in python 2.5\nMyClass = funkyDecorator(MyClass)\n\nBut looks like you are interested in method decorator, for which you can do this\ndef logDecorator(func):\n\n def wrapper(*args, **kwargs):\n print \"Before\", func.__name__\n ret = func(*args, **kwargs)\n print \"After\", func.__name__\n return ret\n\n return wrapper\n\nclass MyClass(object):\n\n @logDecorator\n def mymethod(self):\n print \"xxx\"\n\n\nMyClass().mymethod()\n\nOutput:\nBefore mymethod\nxxx\nAfter mymethod\n\nSo in short you have to just put @orm.reconstructor before method definition\n", "In the class that represents your SQL record,\nfrom sqlalchemy.orm import reconstructor\n\nclass Thing(object):\n @reconstructor\n def reconstruct(self):\n pass\n\n" ]
[ 7, 0 ]
[]
[]
[ "decorator", "python" ]
stackoverflow_0002703182_decorator_python.txt
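A small, self-contained version of the pattern in the question, using a stand-in dec since the real decorator (orm.reconstructor) is not shown; with a plain wrapping decorator the approach does work, so if it still fails with orm.reconstructor the issue is likely in how SQLAlchemy discovers reconstructors rather than in the decorator mechanics.

def dec(func):
    # stand-in decorator; the real one in the question would be orm.reconstructor
    def wrapper(*args, **kwargs):
        print "calling", func.__name__
        return func(*args, **kwargs)
    return wrapper

def add_decorator(cls):
    @dec
    def update(self):
        print "updating", self
    cls.update = update

class MyClass(object):
    pass

add_decorator(MyClass)
MyClass().update()   # prints "calling update" first, so @dec was applied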
Q: PyFacebook with Pylons I'd like to implement PyFacebook in my Python + Pylons application. Where should I include the package? What's the cleanest way to import it? What directory should I put the files in? Thanks! A: Most of your libraries are on your pythonpath, which mostly is lib/site-packages. You should just install those and most installers will make sure they're on your python-path. Then you should be able to import them normally.
PyFacebook with Pylons
I'd like to implement PyFacebook in my Python + Pylons application. Where should I include the package? What's the cleanest way to import it? What directory should I put the files in? Thanks!
[ "Most of your libraries are on your pythonpath, which mostly is lib/site-packages. You should just install those and most installers will make sure they're on your python-path. Then you should be able to import them normally.\n" ]
[ 1 ]
[]
[]
[ "facebook", "pyfacebook", "pylons", "python" ]
stackoverflow_0002703540_facebook_pyfacebook_pylons_python.txt
Q: Gtk: How can I get a part of a file in a textview with scrollbars relating to the full file I'm trying to make a very large file editor (where the editor only stores a part of the buffer in memory at a time), but I'm stuck while building my textview object. Basically- I know that I have to be able to update the text view buffer dynamically, and I don't know hot to get the scrollbars to relate to the full file while the textview contains only a small buffer of the file. I've played with Gtk.Adjustment on a Gtk.ScrolledWindow and ScrollBars, but though I can extend the range of the scrollbars, they still apply to the range of the buffer and not the filesize (which I try to set via Gtk.Adjustment parameters) when I load into textview. I need to have a widget that "knows" that it is looking at a part of a file, and can load/unload buffers as necessary to view different parts of the file. So far, I believe I'll respond to the "change_view" to calculate when I'm off, or about to be off the current buffer and need to load the next, but I don't know how to get the scrollbars to have the top relate to the beginning of the file, and the bottom relate to the end of the file, rather than to the loaded buffer in textview. Any help would be greatly appreciated, thanks! A: You probably should create your own Gtk.TextBuffer implementation, as the default one relies on storing whole buffer in memory. A: I agree with el.pescado's answer, but you could also try to fake it. Count the number of lines in the file you're editing. Put one screenful of text in the buffer and fill the rest with newlines so the buffer has the same number of lines as the file. Then connect to the changed signal of the vertical adjustment of the scrolled window that contains the text view, this will notify you whenever the window is scrolled. When that happens, replace the text you already had in the buffer with newlines, and load the section you are now looking at. You can tell which line numbers you are supposed to be looking at in a text view with this code (may have bugs, I'm doing this from memory and translating into Python on the fly): visible_rect = textview.get_visible_rect() top = textview.get_iter_at_location(visible_rect.x, visible_rect.y) bottom = textview.get_iter_at_location(visible_rect.x, visible_rect.y + visible_rect.height) top_line, bottom_line = top.get_line(), bottom.get_line()
Gtk: How can I get a part of a file in a textview with scrollbars relating to the full file
I'm trying to make a very large file editor (where the editor only stores a part of the buffer in memory at a time), but I'm stuck while building my textview object. Basically, I know that I have to be able to update the text view buffer dynamically, and I don't know how to get the scrollbars to relate to the full file while the textview contains only a small buffer of the file. I've played with Gtk.Adjustment on a Gtk.ScrolledWindow and ScrollBars, but though I can extend the range of the scrollbars, they still apply to the range of the buffer and not the filesize (which I try to set via Gtk.Adjustment parameters) when I load into textview. I need to have a widget that "knows" that it is looking at a part of a file, and can load/unload buffers as necessary to view different parts of the file. So far, I believe I'll respond to the "change_view" to calculate when I'm off, or about to be off the current buffer and need to load the next, but I don't know how to get the scrollbars to have the top relate to the beginning of the file, and the bottom relate to the end of the file, rather than to the loaded buffer in textview. Any help would be greatly appreciated, thanks!
[ "You probably should create your own Gtk.TextBuffer implementation, as the default one relies on storing whole buffer in memory.\n", "I agree with el.pescado's answer, but you could also try to fake it. Count the number of lines in the file you're editing. Put one screenful of text in the buffer and fill the rest with newlines so the buffer has the same number of lines as the file. \nThen connect to the changed signal of the vertical adjustment of the scrolled window that contains the text view, this will notify you whenever the window is scrolled. When that happens, replace the text you already had in the buffer with newlines, and load the section you are now looking at.\nYou can tell which line numbers you are supposed to be looking at in a text view with this code (may have bugs, I'm doing this from memory and translating into Python on the fly):\nvisible_rect = textview.get_visible_rect()\ntop = textview.get_iter_at_location(visible_rect.x, visible_rect.y)\nbottom = textview.get_iter_at_location(visible_rect.x, visible_rect.y + visible_rect.height)\ntop_line, bottom_line = top.get_line(), bottom.get_line()\n\n" ]
[ 1, 0 ]
[]
[]
[ "gtk", "pygtk", "python" ]
stackoverflow_0002698533_gtk_pygtk_python.txt
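A rough PyGTK sketch of the second answer's suggestion: listen to the scrolled window's vertical adjustment and swap file chunks in and out when it changes. The chunk-loading function is left as a stub and the widget setup is only illustrative.

import gtk

textview = gtk.TextView()
scrolled = gtk.ScrolledWindow()
scrolled.add(textview)

def load_chunk_for(fraction):
    # stub: seek to roughly fraction * file_size and replace the buffer text here
    pass

def on_scroll(adjustment):
    top_fraction = adjustment.value / max(adjustment.upper, 1.0)
    load_chunk_for(top_fraction)

scrolled.get_vadjustment().connect("value-changed", on_scroll)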
Q: What is the incoming email address used on google-app-engine? I'm reading this article : http://code.google.com/intl/zh-CN/appengine/docs/python/mail/receivingmail.html I'd like to know, is this the right article to read to deal with mail from others sent to me ? My Gmail is zjm1126@gmail.com, so when someone sends email to zjm1126@gmail.com, can I do something automatically with the incoming mail? Update: The article say: Your app can receive email at addresses of the following form: string@appid.appspotmail.com Where do I set this? A: is article used to deal with mail from others send to me ? Yes and my gmail is zjm1126@gmail.com , so someone send email to zjm1126@gmail.com,i can do something automatically use incoming mail ,yes ? No (unless you configure GMail to forward it to the address the article tells you to use) where to set this ??? Nowhere. You are given your appid when you sign up.
What is the incoming email address used on google-app-engine?
I'm reading this article: http://code.google.com/intl/zh-CN/appengine/docs/python/mail/receivingmail.html I'd like to know, is this the right article to read to deal with mail from others sent to me? My Gmail is zjm1126@gmail.com, so when someone sends email to zjm1126@gmail.com, can I do something automatically with the incoming mail? Update: The article says: Your app can receive email at addresses of the following form: string@appid.appspotmail.com Where do I set this?
[ "\nis article used to deal with mail from others send to me ?\n\nYes\n\nand my gmail is zjm1126@gmail.com , so someone send email to zjm1126@gmail.com,i can do something automatically use incoming mail ,yes ?\n\nNo (unless you configure GMail to forward it to the address the article tells you to use)\n\nwhere to set this ???\n\nNowhere. You are given your appid when you sign up.\n" ]
[ 1 ]
[]
[]
[ "email", "google_app_engine", "python" ]
stackoverflow_0002703709_email_google_app_engine_python.txt
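A hedged sketch of a handler for the string@appid.appspotmail.com addresses mentioned above, based on the App Engine Python SDK of that era; the script name and log message are placeholders.

# app.yaml needs, roughly:
#   inbound_services:
#   - mail
#   handlers:
#   - url: /_ah/mail/.+
#     script: handle_incoming_email.py

import logging
from google.appengine.ext import webapp
from google.appengine.ext.webapp.mail_handlers import InboundMailHandler
from google.appengine.ext.webapp.util import run_wsgi_app

class LogSenderHandler(InboundMailHandler):
    def receive(self, mail_message):
        logging.info("Received a message from %s with subject %r",
                     mail_message.sender, mail_message.subject)

application = webapp.WSGIApplication([LogSenderHandler.mapping()], debug=True)

if __name__ == "__main__":
    run_wsgi_app(application)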
Q: How to give color to your code in open-office-writer? I have code blocks written in Open Office Write and want colorize it. How can I do this? EDIT: When I copy syntax-highlighted code back to open office writer it becomes black again. How can I change this? A: I think you need to take a look at coooder plugin for LibreOffice(OpenOffice). A: You could try pygments. A: I know this works for MS Word, but it may also work for Open Office. http://www.planetb.ca/2008/11/syntax-highlight-code-in-word-documents/
How to give color to your code in open-office-writer?
I have code blocks written in OpenOffice Writer and want to colorize them. How can I do this? EDIT: When I copy syntax-highlighted code back into OpenOffice Writer it becomes black again. How can I change this?
[ "I think you need to take a look at coooder plugin for LibreOffice(OpenOffice).\n", "You could try pygments.\n", "I know this works for MS Word, but it may also work for Open Office.\nhttp://www.planetb.ca/2008/11/syntax-highlight-code-in-word-documents/\n" ]
[ 2, 1, 0 ]
[]
[]
[ "colors", "openoffice_writer", "python", "syntax_highlighting" ]
stackoverflow_0002703675_colors_openoffice_writer_python_syntax_highlighting.txt
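One way to act on the pygments suggestion above: render the code to standalone HTML with inline colours, open it in a browser, and copy the result into Writer so the highlighting survives the paste. File names are placeholders.

from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter

code = open('snippet.py').read()                               # placeholder input
html = highlight(code, PythonLexer(), HtmlFormatter(full=True, noclasses=True))
open('snippet.html', 'w').write(html)
# Open snippet.html in a browser, select all, copy, and paste into Writer.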
Q: Problem building PyGTK on CentOS I am trying to build PyGTK on CentOS for a non-standard Python (2.6, vs the out-of-the-box 2.4). It requires that I first build pygobject. pygobject-2.18.0 fails at the configure step. The error messages is as follows: checking for GLIB - version >= 2.14.0... no *** Could not run GLIB test program, checking why... *** The test program failed to compile or link. See the file config.log for the *** exact error that occured. This usually means GLIB is incorrectly installed. configure: error: maybe you want the pygobject-2-4 branch? I have downloaded, built and successfully installed glib. The config.log file contains the following output: conftest.c:27:18: error: glib.h: No such file or directory conftest.c: In function 'main': conftest.c:33: error: 'glib_major_version' undeclared (first use in this function) conftest.c:33: error: (Each undeclared identifier is reported only once conftest.c:33: error: for each function it appears in.) conftest.c:33: error: 'glib_minor_version' undeclared (first use in this function) conftest.c:33: error: 'glib_micro_version' undeclared (first use in this function) configure:13844: $? = 1 What am I doing wrong? A: Looks like your glib version is not up to date. In gentoo, following versions apply in PyGTK 2.16.0: glib 2.8.0 pygobject-2.16.1 pycairo 2.0.1
Problem building PyGTK on CentOS
I am trying to build PyGTK on CentOS for a non-standard Python (2.6, vs the out-of-the-box 2.4). It requires that I first build pygobject. pygobject-2.18.0 fails at the configure step. The error message is as follows: checking for GLIB - version >= 2.14.0... no *** Could not run GLIB test program, checking why... *** The test program failed to compile or link. See the file config.log for the *** exact error that occured. This usually means GLIB is incorrectly installed. configure: error: maybe you want the pygobject-2-4 branch? I have downloaded, built and successfully installed glib. The config.log file contains the following output: conftest.c:27:18: error: glib.h: No such file or directory conftest.c: In function 'main': conftest.c:33: error: 'glib_major_version' undeclared (first use in this function) conftest.c:33: error: (Each undeclared identifier is reported only once conftest.c:33: error: for each function it appears in.) conftest.c:33: error: 'glib_minor_version' undeclared (first use in this function) conftest.c:33: error: 'glib_micro_version' undeclared (first use in this function) configure:13844: $? = 1 What am I doing wrong?
[ "Looks like your glib version is not up to date.\nIn gentoo, following versions apply in PyGTK 2.16.0:\n\nglib 2.8.0\npygobject-2.16.1\npycairo 2.0.1\n\n" ]
[ 4 ]
[]
[]
[ "pygobject", "pygtk", "python" ]
stackoverflow_0002642238_pygobject_pygtk_python.txt
Q: Building proper link with spaces I have the following code in Python: linkHTML = "<a href=\"page?q=%s\">click here</a>" % strLink The problem is that when strLink has spaces in it the link shows up as <a href="page?q=with space">click here</a> I can use strLink.replace(" ","+") But I am sure there are other characters which can cause errors. I tried using urllib.quote(strLink) But it doesn't seem to help. Thanks! Joel A: Make sure you use the urllib.quote_plus(string[, safe]) to replace spaces with plus sign. urllib.quote_plus(string[, safe]) Like quote(), but also replaces spaces by plus signs, as required for quoting HTML form values when building up a query string to go into a URL. Plus signs in the original string are escaped unless they are included in safe. It also does not have safe default to '/'. from http://docs.python.org/library/urllib.html#urllib.quote_plus Ideally you'd be using the urllib.urlencode function and passing it a sequence of key/value pairs like {["q","with space"],["s","with space & other"]} etc. A: As well as quote_plus(*), you also need to HTML-encode any text you output to HTML. Otherwise < and & symbols will be markup, with potential security consequences. (OK, you're not going to get < in a URL, but you definitely are going to get &, so just one parameter name that matches an HTML entity name and your string's messed up. html= '<a href="page?q=%s">click here</a>' % cgi.escape(urllib.quote_plus(q)) *: actually plain old quote is fine too; I don't know what wasn't working for you, but it is a perfectly good way of URL-encoding strings. It converts spaces to %20 which is also valid, and valid in path parts too. quote_plus is optimal for generating query strings, but otherwise, when in doubt, quote is safest.
Building proper link with spaces
I have the following code in Python: linkHTML = "<a href=\"page?q=%s\">click here</a>" % strLink The problem is that when strLink has spaces in it the link shows up as <a href="page?q=with space">click here</a> I can use strLink.replace(" ","+") But I am sure there are other characters which can cause errors. I tried using urllib.quote(strLink) But it doesn't seem to help. Thanks! Joel
[ "Make sure you use the urllib.quote_plus(string[, safe]) to replace spaces with plus sign.\nurllib.quote_plus(string[, safe])\n\n\nLike quote(), but also replaces spaces\n by plus signs, as required for quoting\n HTML form values when building up a\n query string to go into a URL. Plus\n signs in the original string are\n escaped unless they are included in\n safe. It also does not have safe\n default to '/'.\n\nfrom http://docs.python.org/library/urllib.html#urllib.quote_plus\nIdeally you'd be using the urllib.urlencode function and passing it a sequence of key/value pairs like {[\"q\",\"with space\"],[\"s\",\"with space & other\"]} etc.\n", "As well as quote_plus(*), you also need to HTML-encode any text you output to HTML. Otherwise < and & symbols will be markup, with potential security consequences. (OK, you're not going to get < in a URL, but you definitely are going to get &, so just one parameter name that matches an HTML entity name and your string's messed up.\nhtml= '<a href=\"page?q=%s\">click here</a>' % cgi.escape(urllib.quote_plus(q))\n\n*: actually plain old quote is fine too; I don't know what wasn't working for you, but it is a perfectly good way of URL-encoding strings. It converts spaces to %20 which is also valid, and valid in path parts too. quote_plus is optimal for generating query strings, but otherwise, when in doubt, quote is safest.\n" ]
[ 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0002703638_python.txt
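Combining the two answers above into one snippet: URL-encode the query value first, then HTML-escape the result before placing it inside the anchor tag.

import cgi
import urllib

strLink = 'some value & more'
query = urllib.urlencode({'q': strLink})          # 'q=some+value+%26+more'
linkHTML = '<a href="page?%s">click here</a>' % cgi.escape(query)
print linkHTML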
Q: Paver 0.8.1 compatibility with python 2.6 Does anyone manage to bootstrap its development area using paver with python 2.6 ? I have install python 2.6, install paver with easy_install-2.6, everything looks fine. But when I try to launch the bootstrap method it raises an urllib2.HTTPError (: HTTP Error 404: Not Found) while trying to download http://pypi.python.org/packages/2.6/s/setuptools/setuptools-0.6c8-py2.6.egg. I have tryed to add the correct setuptools EGG file (which is 0.6c9) in the support-files directory, bootstrap.py find the EGG file, but doesn't seem to use it because it still try to download the 0.6c8 version which is no more available. Any ideas how to solve this issue ? Thanks in advance Bertrand A: You should try newer version. =) http://www.blueskyonmars.com/projects/paver/
Paver 0.8.1 compatibility with python 2.6
Has anyone managed to bootstrap their development area using Paver with Python 2.6? I have installed Python 2.6 and installed Paver with easy_install-2.6, and everything looks fine. But when I try to launch the bootstrap method it raises an urllib2.HTTPError (: HTTP Error 404: Not Found) while trying to download http://pypi.python.org/packages/2.6/s/setuptools/setuptools-0.6c8-py2.6.egg. I have tried to add the correct setuptools EGG file (which is 0.6c9) in the support-files directory; bootstrap.py finds the EGG file, but doesn't seem to use it, because it still tries to download the 0.6c8 version, which is no longer available. Any ideas how to solve this issue? Thanks in advance, Bertrand
[ "You should try newer version. =) http://www.blueskyonmars.com/projects/paver/\n" ]
[ 2 ]
[]
[]
[ "bootstrapper", "build_environment", "python" ]
stackoverflow_0000178300_bootstrapper_build_environment_python.txt
Q: Multiply with find and replace Can regular expressions be used to perform arithmetic? Such as find all numbers in a file and multiply them by a scalar value. A: You can achieve this using re.sub() with a callback: import re def repl(matchobj): i = int(matchobj.group(0)) return str(i * 2) print re.sub(r'\d+', repl, '1 a20 300c') Output: 2 a40 600c From the docs: re.sub(pattern, repl, string[, count]) If repl is a function, it is called for every non-overlapping occurrence of pattern. The function takes a single match object argument, and returns the replacement string. A: In perl you can do this with the /e modifier. This causes the substitution part of the expression be evaluated. Assuming $line contains a line of the file my $scalar= 4; $line =~ s/([\d]+)/$1*$scalar/ge; Applying this to every line will do the job for you. For example applying this to a $line containing "foo2 bar25 baz", transforms it to "foo8 bar100 baz" A: I prepared a small script which uses re.finditer to find all the integers (you can change the regexp so that it can deal with floats or scientific notation) and then use map to return a list of scaled numbers. import re def scale(fact): """This function returns a lambda which will scale a number by a factor 'fact'""" return lambda val: fact * val def find_and_scale(file, fact): """This function will find all the numbers (integers) in a file and return a list of all such numbers scaled by a factor 'fact'""" num = re.compile('(\d+)') scaling = scale(fact) f = open(file, 'r').read() numbers = [int(m.group(1)) for m in num.finditer(f)] return map(scaling, numbers) if __name__ == "__main__": import sys if len(sys.argv) != 3: print "usage: %s file factor" % sys.argv[0] sys.exit(-1) numbers = find_and_scale(sys.argv[1], int(sys.argv[2])) for number in numbers: print "%d " % number If you have a file whose numbers you want to scale by a factor fact, you call the script from the command line as python script.py file fact and it will print to STDOUT all the scaled numbers. Of course, you can do something more useful if you wanted... A: Regular expressions themselves can't - they're all about text - so sed can't directly. It's easy enough to do something like that in a full scripting language like python or perl, though. A: To those of you who doubt that sed can do arithmetic I offer this counter-example. This one is even wilder.
Multiply with find and replace
Can regular expressions be used to perform arithmetic? For example, can they find all numbers in a file and multiply them by a scalar value?
[ "You can achieve this using re.sub() with a callback:\nimport re\n\ndef repl(matchobj):\n i = int(matchobj.group(0))\n return str(i * 2)\n\nprint re.sub(r'\\d+', repl, '1 a20 300c')\n\nOutput:\n2 a40 600c\n\nFrom the docs:\n\nre.sub(pattern, repl, string[,\n count])\nIf repl is a function, it is called\n for every non-overlapping occurrence\n of pattern. The function takes a\n single match object argument, and\n returns the replacement string.\n\n", "In perl you can do this with the /e modifier. This causes the substitution part of the expression be evaluated. Assuming $line contains a line of the file \n my $scalar= 4;\n $line =~ s/([\\d]+)/$1*$scalar/ge;\n\nApplying this to every line will do the job for you. For example applying this to a \n$line containing \"foo2 bar25 baz\", transforms it to \"foo8 bar100 baz\"\n", "I prepared a small script which uses re.finditer to find all the integers (you can change the regexp so that it can deal with floats or scientific notation) and then use map to return a list of scaled numbers.\nimport re\n\ndef scale(fact):\n \"\"\"This function returns a lambda which will scale a number by a \n factor 'fact'\"\"\"\n return lambda val: fact * val\n\ndef find_and_scale(file, fact):\n \"\"\"This function will find all the numbers (integers) in a file and \n return a list of all such numbers scaled by a factor 'fact'\"\"\"\n num = re.compile('(\\d+)')\n scaling = scale(fact)\n f = open(file, 'r').read()\n numbers = [int(m.group(1)) for m in num.finditer(f)]\n return map(scaling, numbers)\n\nif __name__ == \"__main__\":\n import sys\n if len(sys.argv) != 3:\n print \"usage: %s file factor\" % sys.argv[0]\n sys.exit(-1)\n numbers = find_and_scale(sys.argv[1], int(sys.argv[2]))\n for number in numbers:\n print \"%d \" % number\n\nIf you have a file whose numbers you want to scale by a factor fact, you call the script from the command line as python script.py file fact and it will print to STDOUT all the scaled numbers. Of course, you can do something more useful if you wanted...\n", "Regular expressions themselves can't - they're all about text - so sed can't directly. It's easy enough to do something like that in a full scripting language like python or perl, though.\n", "To those of you who doubt that sed can do arithmetic I offer this counter-example. This one is even wilder.\n" ]
[ 8, 4, 2, 1, 1 ]
[ "Ayman Hourieh's answer can be reduced to be a little bit simpler, and imo more readable:\n>>> import re\n>>> repl = lambda m: str(int(m.group(0)) * 2)\n>>> print re.sub(r'\\d+', repl, '1 a20 300c')\n2 a40 600c\n\n" ]
[ -1 ]
[ "python", "regex", "sed" ]
stackoverflow_0002701063_python_regex_sed.txt
Q: A lightweight protocol for Python and Erlang interaction What protocol preferred to use for interaction between Python-code and Erlang-code over Internet? ASN.1 would be ideally for me, but its implementation in Python cannot generate encoder/decoder out from notation. A: Did you check Google's protocol buffers? It is very easy to use and there is an Erlang implementation available A: Well, you could use JSON or BERT. JSON is easily reable by humans, as it is ASCII only. To send binary data, you need to encode them (e.g. with base64). Another solution would be using BERT. BERT is based on the "erlang external binary format" for serialization, so the erlang side is pretty simple ;) python: http://github.com/samuel/python-bert Erlang: http://github.com/mojombo/bert.erl A: Also, you might want to have a look to Apache Thrift, an IDL supporting both Python and Erlang.
A lightweight protocol for Python and Erlang interaction
What protocol is preferred for interaction between Python code and Erlang code over the Internet? ASN.1 would be ideal for me, but its implementation in Python cannot generate an encoder/decoder from the notation.
[ "Did you check Google's protocol buffers?\nIt is very easy to use and there is an Erlang implementation available\n", "Well, you could use JSON or BERT.\nJSON is easily reable by humans, as it is ASCII only. To send binary data, you need to encode them (e.g. with base64).\nAnother solution would be using BERT. BERT is based on the \"erlang external binary format\" for serialization, so the erlang side is pretty simple ;)\n\npython: http://github.com/samuel/python-bert\nErlang: http://github.com/mojombo/bert.erl\n\n", "Also, you might want to have a look to Apache Thrift, an IDL supporting both Python and Erlang.\n" ]
[ 5, 4, 4 ]
[]
[]
[ "asn.1", "erlang", "python" ]
stackoverflow_0002701397_asn.1_erlang_python.txt
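If plain JSON is acceptable, the Python side can stay very small. This sketch length-prefixes each JSON message so the Erlang peer can read it with {packet, 4}-style framing; the host, port and payload are made up, and a real client would loop on recv until all bytes arrive.

import json
import socket
import struct

def send_message(sock, obj):
    payload = json.dumps(obj)
    sock.sendall(struct.pack('>I', len(payload)) + payload)

def recv_message(sock):
    (length,) = struct.unpack('>I', sock.recv(4))
    return json.loads(sock.recv(length))

s = socket.create_connection(('localhost', 9000))   # hypothetical Erlang node
send_message(s, {'op': 'ping', 'args': [1, 2, 3]})
print recv_message(s)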
Q: What's the easiest way to get my facebook status and photos using python? I just want to import my facebook status and photos to my personal django website but all the examples and documentation i can find are for developing facebook applications. A simple rss feed would be enough but it doesnt seem to exist in facebook. Do i really have to create a full facebook app to do this? A: A simple facebook application isn't that hard ... excluding trying to decipher the soup on developers.facebook.com. The "problem" is that you need to get an application key, application secret, and sometimes a session key in order to access the web services. Unless someone is sharing a service to do just that (I haven't looked, and you'd need to trust them) then the only way to fulfill the requirements are to create an application. However, the application key/application secret don't actually require that you write anything. They will show up in the Facebook Developer Application (the application that allows you to edit your applications...) Now, all you need is a session key (however, a session key is not always required, see the Understanding Sessions link below) -- and hopefully a permanent one. To do this, ask for the extended offline_access permission**. If you grant that to an application then it can get a session for you whenever it feels like it (or rather, the session does not follow the one-hour expiration policies for that application). Extended permissions. Understanding Sessions. Oh, but ignore that 'auth.renewOfflineSession(UID)' example -- the method doesn't exist. I told you the "developer" documentation was soup :-) You can use the URL in format: http://www.facebook.com/tos.php?api_key=YOURAPIKEY&req_perms=offline_access to request the permission of yourself. Now see the links below :-) Extra information in: **I'm not entirely sure if new changes to the FB policy affect forever-sessions, but this link seems more than relevant to the task at hand: http://blog.jylin.com/2009/10/01/loading-wall-posts-using-facebookstream_get/ Getting offline_access to work with Facebook Facebook offline access step-by-step (You need never post/share your facebook application -- you can keep it in sandbox mode forever.) A: Probably. Anything that bypassed authentication would be a fairly large privacy issue. A: With the release of the new graph api, this is pretty simple once you get your oauth token. Unfortunately you will need to create an app, but it can be a rather small one to get your oauth token so facebook can authorize your requests. You can use the python sdk here: http://github.com/facebook/python-sdk/ Once you have your token, you make a call to: https://graph.facebook.com/[your profile]/statuses?token=[your token] And you will get json back. If you first login to facebook and then go to the documentation page you can see the working example by clicking on the statuses link in the connections table. http://developers.facebook.com/docs/reference/api/user
What's the easiest way to get my facebook status and photos using python?
I just want to import my Facebook status and photos into my personal Django website, but all the examples and documentation I can find are for developing Facebook applications. A simple RSS feed would be enough, but it doesn't seem to exist in Facebook. Do I really have to create a full Facebook app to do this?
[ "A simple facebook application isn't that hard ... excluding trying to decipher the soup on developers.facebook.com.\nThe \"problem\" is that you need to get an application key, application secret, and sometimes a session key in order to access the web services. Unless someone is sharing a service to do just that (I haven't looked, and you'd need to trust them) then the only way to fulfill the requirements are to create an application. However, the application key/application secret don't actually require that you write anything. They will show up in the Facebook Developer Application (the application that allows you to edit your applications...)\nNow, all you need is a session key (however, a session key is not always required, see the Understanding Sessions link below) -- and hopefully a permanent one. To do this, ask for the extended offline_access permission**. If you grant that to an application then it can get a session for you whenever it feels like it (or rather, the session does not follow the one-hour expiration policies for that application). Extended permissions. Understanding Sessions. Oh, but ignore that 'auth.renewOfflineSession(UID)' example -- the method doesn't exist. I told you the \"developer\" documentation was soup :-)\nYou can use the URL in format:\nhttp://www.facebook.com/tos.php?api_key=YOURAPIKEY&req_perms=offline_access to request the permission of yourself. Now see the links below :-)\nExtra information in:\n**I'm not entirely sure if new changes to the FB policy affect forever-sessions, but this link seems more than relevant to the task at hand:\nhttp://blog.jylin.com/2009/10/01/loading-wall-posts-using-facebookstream_get/\nGetting offline_access to work with Facebook\nFacebook offline access step-by-step\n(You need never post/share your facebook application -- you can keep it in sandbox mode forever.)\n", "Probably. Anything that bypassed authentication would be a fairly large privacy issue. \n", "With the release of the new graph api, this is pretty simple once you get your oauth token. Unfortunately you will need to create an app, but it can be a rather small one to get your oauth token so facebook can authorize your requests. You can use the python sdk here: http://github.com/facebook/python-sdk/\nOnce you have your token, you make a call to: https://graph.facebook.com/[your profile]/statuses?token=[your token]\nAnd you will get json back. \nIf you first login to facebook and then go to the documentation page you can see the working example by clicking on the statuses link in the connections table.\nhttp://developers.facebook.com/docs/reference/api/user\n" ]
[ 2, 0, 0 ]
[]
[]
[ "django", "facebook", "pyfacebook", "python" ]
stackoverflow_0002582627_django_facebook_pyfacebook_python.txt
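A bare-bones version of the Graph API call described in the last answer above; the access token is a placeholder obtained through the small app/OAuth step, and the field names reflect the Graph API of that period.

import json
import urllib2

ACCESS_TOKEN = 'PASTE-YOUR-TOKEN-HERE'   # placeholder

def graph(path):
    url = 'https://graph.facebook.com/%s?access_token=%s' % (path, ACCESS_TOKEN)
    return json.loads(urllib2.urlopen(url).read())

for status in graph('me/statuses').get('data', []):
    print status.get('message')

for photo in graph('me/photos').get('data', []):
    print photo.get('source')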
Q: What is the easiest, most concise way to make selected attributes in an instance be readonly? In Python, I want to make selected instance attributes of a class be readonly to code outside of the class. I want there to be no way outside code can alter the attribute, except indirectly by invoking methods on the instance. I want the syntax to be concise. What is the best way? (I give my current best answer below...) A: You should use the @property decorator. >>> class a(object): ... def __init__(self, x): ... self.x = x ... @property ... def xval(self): ... return self.x ... >>> b = a(5) >>> b.xval 5 >>> b.xval = 6 Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: can't set attribute A: class C(object): def __init__(self): self.fullaccess = 0 self.__readonly = 22 # almost invisible to outside code... # define a publicly visible, read-only version of '__readonly': readonly = property(lambda self: self.__readonly) def inc_readonly( self ): self.__readonly += 1 c=C() # prove regular attribute is RW... print "c.fullaccess = %s" % c.fullaccess c.fullaccess = 1234 print "c.fullaccess = %s" % c.fullaccess # prove 'readonly' is a read-only attribute print "c.readonly = %s" % c.readonly try: c.readonly = 3 except AttributeError: print "Can't change c.readonly" print "c.readonly = %s" % c.readonly # change 'readonly' indirectly... c.inc_readonly() print "c.readonly = %s" % c.readonly This outputs: $ python ./p.py c.fullaccess = 0 c.fullaccess = 1234 c.readonly = 22 Can't change c.readonly c.readonly = 22 c.readonly = 23 My fingers itch to be able to say @readonly self.readonly = 22 i.e., use a decorator on an attribute. It would be so clean... A: Here's how: class whatever(object): def __init__(self, a, b, c, ...): self.__foobar = 1 self.__blahblah = 2 foobar = property(lambda self: self.__foobar) blahblah = property(lambda self: self.__blahblah) (Assuming foobar and blahblah are the attributes you want to be read-only.) Prepending two underscores to an attribute name effectively hides it from outside the class, so the internal versions won't be accessible from the outside. This only works for new-style classes inheriting from object since it depends on property. On the other hand... this is a pretty silly thing to do. Keeping variables private seems to be an obsession that comes from C++ and Java. Your users should use the public interface to your class because it's well-designed, not because you force them to. Edit: Looks like Kevin already posted a similar version. A: There is no real way to do this. There are ways to make it more 'difficult', but there's no concept of completely hidden, inaccessible class attributes. If the person using your class can't be trusted to follow the API docs, then that's their own problem. Protecting people from doing stupid stuff just means that they will do far more elaborate, complicated, and damaging stupid stuff to try to do whatever they shouldn't have been doing in the first place. 
A: You could use a metaclass that auto-wraps methods (or class attributes) that follow a naming convention into properties (shamelessly taken from Unifying Types and Classes in Python 2.2: class autoprop(type): def __init__(cls, name, bases, dict): super(autoprop, cls).__init__(name, bases, dict) props = {} for name in dict.keys(): if name.startswith("_get_") or name.startswith("_set_"): props[name[5:]] = 1 for name in props.keys(): fget = getattr(cls, "_get_%s" % name, None) fset = getattr(cls, "_set_%s" % name, None) setattr(cls, name, property(fget, fset)) This allows you to use: class A: __metaclass__ = autosuprop def _readonly(self): return __x A: I am aware that William Keller is the cleanest solution by far.. but here's something I came up with.. class readonly(object): def __init__(self, attribute_name): self.attribute_name = attribute_name def __get__(self, instance, instance_type): if instance != None: return getattr(instance, self.attribute_name) else: raise AttributeError("class %s has no attribute %s" % (instance_type.__name__, self.attribute_name)) def __set__(self, instance, value): raise AttributeError("attribute %s is readonly" % self.attribute_name) And here's the usage example class a(object): def __init__(self, x): self.x = x xval = readonly("x") Unfortunately this solution can't handle private variables (__ named variables).
What is the easiest, most concise way to make selected attributes in an instance be readonly?
In Python, I want to make selected instance attributes of a class be readonly to code outside of the class. I want there to be no way outside code can alter the attribute, except indirectly by invoking methods on the instance. I want the syntax to be concise. What is the best way? (I give my current best answer below...)
[ "You should use the @property decorator.\n>>> class a(object):\n... def __init__(self, x):\n... self.x = x\n... @property\n... def xval(self):\n... return self.x\n... \n>>> b = a(5)\n>>> b.xval\n5\n>>> b.xval = 6\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: can't set attribute\n\n", "class C(object):\n\n def __init__(self):\n\n self.fullaccess = 0\n self.__readonly = 22 # almost invisible to outside code...\n\n # define a publicly visible, read-only version of '__readonly':\n readonly = property(lambda self: self.__readonly)\n\n def inc_readonly( self ):\n self.__readonly += 1\n\nc=C()\n\n# prove regular attribute is RW...\nprint \"c.fullaccess = %s\" % c.fullaccess\nc.fullaccess = 1234\nprint \"c.fullaccess = %s\" % c.fullaccess\n\n# prove 'readonly' is a read-only attribute\nprint \"c.readonly = %s\" % c.readonly\ntry:\n c.readonly = 3\nexcept AttributeError:\n print \"Can't change c.readonly\"\nprint \"c.readonly = %s\" % c.readonly\n\n# change 'readonly' indirectly...\nc.inc_readonly()\nprint \"c.readonly = %s\" % c.readonly\n\nThis outputs:\n$ python ./p.py\nc.fullaccess = 0\nc.fullaccess = 1234\nc.readonly = 22\nCan't change c.readonly\nc.readonly = 22\nc.readonly = 23\n\nMy fingers itch to be able to say\n @readonly\n self.readonly = 22\n\ni.e., use a decorator on an attribute. It would be so clean...\n", "Here's how:\nclass whatever(object):\n def __init__(self, a, b, c, ...):\n self.__foobar = 1\n self.__blahblah = 2\n\n foobar = property(lambda self: self.__foobar)\n blahblah = property(lambda self: self.__blahblah)\n\n(Assuming foobar and blahblah are the attributes you want to be read-only.) Prepending two underscores to an attribute name effectively hides it from outside the class, so the internal versions won't be accessible from the outside. This only works for new-style classes inheriting from object since it depends on property.\nOn the other hand... this is a pretty silly thing to do. Keeping variables private seems to be an obsession that comes from C++ and Java. Your users should use the public interface to your class because it's well-designed, not because you force them to.\nEdit: Looks like Kevin already posted a similar version.\n", "There is no real way to do this. There are ways to make it more 'difficult', but there's no concept of completely hidden, inaccessible class attributes.\nIf the person using your class can't be trusted to follow the API docs, then that's their own problem. Protecting people from doing stupid stuff just means that they will do far more elaborate, complicated, and damaging stupid stuff to try to do whatever they shouldn't have been doing in the first place.\n", "You could use a metaclass that auto-wraps methods (or class attributes) that follow a naming convention into properties (shamelessly taken from Unifying Types and Classes in Python 2.2:\nclass autoprop(type):\n def __init__(cls, name, bases, dict):\n super(autoprop, cls).__init__(name, bases, dict)\n props = {}\n for name in dict.keys():\n if name.startswith(\"_get_\") or name.startswith(\"_set_\"):\n props[name[5:]] = 1\n for name in props.keys():\n fget = getattr(cls, \"_get_%s\" % name, None)\n fset = getattr(cls, \"_set_%s\" % name, None)\n setattr(cls, name, property(fget, fset))\n\nThis allows you to use:\nclass A:\n __metaclass__ = autosuprop\n def _readonly(self):\n return __x\n\n", "I am aware that William Keller is the cleanest solution by far.. but here's something I came up with.. 
\nclass readonly(object):\n def __init__(self, attribute_name):\n self.attribute_name = attribute_name\n\n def __get__(self, instance, instance_type):\n if instance != None:\n return getattr(instance, self.attribute_name)\n else:\n raise AttributeError(\"class %s has no attribute %s\" % \n (instance_type.__name__, self.attribute_name))\n\n def __set__(self, instance, value):\n raise AttributeError(\"attribute %s is readonly\" % \n self.attribute_name)\n\nAnd here's the usage example\nclass a(object):\n def __init__(self, x):\n self.x = x\n xval = readonly(\"x\")\n\nUnfortunately this solution can't handle private variables (__ named variables).\n" ]
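A compact way to combine the property idea and the name-mangling trick from the answers above; the class name Point and its attribute are invented for illustration:

class Point(object):
    def __init__(self, x):
        self.__x = x              # name-mangled, so outside code will not hit it by accident
    @property
    def x(self):                  # no setter is defined, so the attribute is effectively read-only
        return self.__x

p = Point(3)
print p.x                         # 3
p.x = 5                           # raises AttributeError: can't set attribute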
[ 7, 2, 2, 1, 0, 0 ]
[]
[]
[ "attributes", "python", "readonly" ]
stackoverflow_0000125034_attributes_python_readonly.txt
Q: Intercept method calls in Python I'm implementing a RESTful web service in python and would like to add some QOS logging functionality by intercepting function calls and logging their execution time and so on. Basically i thought of a class from which all other services can inherit, that automatically overrides the default method implementations and wraps them in a logger function. What's the best way to achieve this? A: Something like this? This implictly adds a decorator to your method (you can also make an explicit decorator based on this if you prefer that): class Foo(object): def __getattribute__(self,name): attr = object.__getattribute__(self, name) if hasattr(attr, '__call__'): def newfunc(*args, **kwargs): print('before calling %s' %attr.__name__) result = attr(*args, **kwargs) print('done calling %s' %attr.__name__) return result return newfunc else: return attr when you now try something like: class Bar(Foo): def myFunc(self, data): print("myFunc: %s"% data) bar = Bar() bar.myFunc(5) You'll get: before calling myFunc myFunc: 5 done calling myFunc A: What if you write a decorator on each functions ? Here is an example on python's wiki. Do you use any web framework for doing your webservice ? Or are you doing everything by hand ?
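A rough sketch of the QOS-logging use case itself, built from the decorator suggestion plus a timer; the names log_calls and MyService are invented for illustration, not taken from any framework:

import functools
import time

def log_calls(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            return func(*args, **kwargs)
        finally:
            print '%s took %.4f s' % (func.__name__, time.time() - start)
    return wrapper

class MyService(object):
    @log_calls
    def handle(self, request):
        pass                      # the real service method would do its work here

MyService().handle(None)          # prints something like: handle took 0.0000 s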
Intercept method calls in Python
I'm implementing a RESTful web service in python and would like to add some QOS logging functionality by intercepting function calls and logging their execution time and so on. Basically I thought of a class from which all other services can inherit, that automatically overrides the default method implementations and wraps them in a logger function. What's the best way to achieve this?
[ "Something like this? This implictly adds a decorator to your method (you can also make an explicit decorator based on this if you prefer that):\nclass Foo(object):\n def __getattribute__(self,name):\n attr = object.__getattribute__(self, name)\n if hasattr(attr, '__call__'):\n def newfunc(*args, **kwargs):\n print('before calling %s' %attr.__name__)\n result = attr(*args, **kwargs)\n print('done calling %s' %attr.__name__)\n return result\n return newfunc\n else:\n return attr\n\nwhen you now try something like:\nclass Bar(Foo):\n def myFunc(self, data):\n print(\"myFunc: %s\"% data)\n\nbar = Bar()\nbar.myFunc(5)\n\nYou'll get:\nbefore calling myFunc\nmyFunc: 5\ndone calling myFunc\n\n", "What if you write a decorator on each functions ? Here is an example on python's wiki.\nDo you use any web framework for doing your webservice ? Or are you doing everything by hand ?\n" ]
[ 73, 5 ]
[]
[]
[ "python" ]
stackoverflow_0002704434_python.txt
Q: Why is '\x' invalid in Python? I was experimenting with '\' characters, using '\a\b\c...' just to enumerate for myself which characters Python interprets as control characters, and to what. Here's what I found: \a - BELL \b - BACKSPACE \f - FORMFEED \n - LINEFEED \r - RETURN \t - TAB \v - VERTICAL TAB Most of the other characters I tried, '\g', '\s', etc. just evaluate to the 2-character string of a backslash and the given character. I understand this is intentional, and makes sense to me. But '\x' is a problem. When my script reaches this source line: val = "\x" I get: ValueError: invalid \x escape What is so special about '\x'? Why is it treated differently from the other non-escaped characters? A: There is a table listing all the escape codes and their meanings in the documentation. Escape Sequence Meaning Notes \xhh Character with hex value hh (4,5) Notes: 4. Unlike in Standard C, exactly two hex digits are required. 5. In a string literal, hexadecimal and octal escapes denote the byte with the given value; it is not necessary that the byte encodes a character in the source character set. In a Unicode literal, these escapes denote a Unicode character with the given value. A: \xhh is used to represent hex escape characters. A: x is used to define (one byte) hexadecimal literals in strings, for example: '\x61' will evaluate to 'a', because 61 is the hexadecimal value of 97, which represents a in ASCII A: \x is missing the hex character you want to match against: \xnn -> \x1B A: You're not giving the full escape sequence: \xhh... The hexadecimal value hh, where hh stands for a sequence of hexadecimal digits (‘0’–‘9’, and either ‘A’–‘F’ or ‘a’–‘f’). Like the same construct in ISO C, the escape sequence continues until the first nonhexadecimal digit is seen. (c.e.) However, using more than two hexadecimal digits produces undefined results. (The ‘\x’ escape sequence is not allowed in POSIX awk.) From: http://www.gnu.org/software/gawk/manual/html_node/Escape-Sequences.html
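A quick interactive illustration of the two-hex-digit rule the answers describe (byte values chosen arbitrarily):

>>> '\x41'            # exactly two hex digits: the byte 0x41
'A'
>>> '\x4'             # fewer than two digits is rejected when the literal is compiled
ValueError: invalid \x escape
>>> u'\u0041'         # Unicode escapes use \u with four hex digits instead
u'A'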
Why is '\x' invalid in Python?
I was experimenting with '\' characters, using '\a\b\c...' just to enumerate for myself which characters Python interprets as control characters, and to what. Here's what I found: \a - BELL \b - BACKSPACE \f - FORMFEED \n - LINEFEED \r - RETURN \t - TAB \v - VERTICAL TAB Most of the other characters I tried, '\g', '\s', etc. just evaluate to the 2-character string of a backslash and the given character. I understand this is intentional, and makes sense to me. But '\x' is a problem. When my script reaches this source line: val = "\x" I get: ValueError: invalid \x escape What is so special about '\x'? Why is it treated differently from the other non-escaped characters?
[ "There is a table listing all the escape codes and their meanings in the documentation.\n\nEscape Sequence Meaning Notes\n\\xhh Character with hex value hh (4,5)\n\n\nNotes:\n4. Unlike in Standard C, exactly two hex digits are required.\n 5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a character\n in the source character set. In a Unicode literal, these escapes denote a\n Unicode character with the given value.\n\n", "\\xhh is used to represent hex escape characters.\n", "x is used to define (one byte) hexadecimal literals in strings, for example:\n'\\x61'\n\nwill evaluate to 'a', because 61 is the hexadecimal value of 97, which represents a in ASCII\n", "\\x is missing the hex character you want to match against: \\xnn -> \\x1B\n", "You're not giving the full escape sequence:\n\n\\xhh...\nThe hexadecimal value hh, where hh stands for a sequence of\nhexadecimal digits (‘0’–‘9’, and either ‘A’–‘F’ or ‘a’–‘f’). Like the\nsame construct in ISO C, the escape sequence continues until the first\nnonhexadecimal digit is seen. (c.e.) However, using more than two\nhexadecimal digits produces undefined results. (The ‘\\x’ escape\nsequence is not allowed in POSIX awk.)\n\nFrom: http://www.gnu.org/software/gawk/manual/html_node/Escape-Sequences.html\n" ]
[ 21, 6, 6, 4, 1 ]
[]
[]
[ "ascii", "backslash", "control_characters", "python", "string" ]
stackoverflow_0002704654_ascii_backslash_control_characters_python_string.txt
Q: List comprehension, map, and numpy.vectorize performance I have a function foo(i) that takes an integer and takes a significant amount of time to execute. Will there be a significant performance difference between any of the following ways of initializing a: a = [foo(i) for i in xrange(100)] a = map(foo, range(100)) vfoo = numpy.vectorize(foo) a = vfoo(range(100)) (I don't care whether the output is a list or a numpy array.) Is there a better way? A: Why are you optimizing this? Have you written working, tested code, then examined your algorithm profiled your code and found that optimizing this will have an effect? Are you doing this in a deep inner loop where you found you are spending your time? If not, don't bother. You'll only know which works fastest for you by timing it. To time it in a useful way, you'll have to specialize it to your actual use case. For example, you can get noticeable performance differences between a function call in a list comprehension versus an inline expression; it isn't clear whether you really wanted the former or if you reduced it to that to make your cases similar. You say that it doesn't matter whether you end up with a numpy array or a list, but if you're doing this kind of micro-optimization it does matter, since those will perform differently when you use them afterward. Putting your finger on that could be tricky, so hopefully it will turn out the whole problem is moot as premature. It is typically better to simply use the right tool for the job for clarity, readability, and so forth. It is rare that I would have a hard time deciding between these things. If I needed numpy arrays, I would use them. I would use these for storing large homogeneous arrays or multidimensional data. I use them a lot, but rarely where I think I'd want to use a list. If I was using these, I'd do my best to write my functions already vectorized so I didn't have to use numpy.vectorize. For example, times_five below can be used on a numpy array with no decoration. If I didn't have cause to use numpy, that is to say if I wasn't solving numerical math problems or using special numpy features or storing multidimensional arrays or whatever... If I had an already-existing function, I would use map. That's what it's for. If I had an operation that fit inside a small expression and I didn't need a function, I'd use a list comprehension. If I just wanted to do the operation for all the cases but didn't actually need to store the result, I'd use a plain for loop. In many cases, I'd actually use map and list comprehensions' lazy equivalents: itertools.imap and generator expressions. These can reduce memory usage by a factor of n in some cases and can avoid performing unnecessary operations sometimes. If it does turn out this is where performance problems lie, getting this sort of thing right is tricky. It is very common that people time the wrong toy case for their actual problems. Worse, it is extremely common people make dumb general rules based on it. 
Consider the following cases (timeme.py is posted below) python -m timeit "from timeme import x, times_five; from numpy import vectorize" "vectorize(times_five)(x)" 1000 loops, best of 3: 924 usec per loop python -m timeit "from timeme import x, times_five" "[times_five(item) for item in x]" 1000 loops, best of 3: 510 usec per loop python -m timeit "from timeme import x, times_five" "map(times_five, x)" 1000 loops, best of 3: 484 usec per loop A naïve obsever would conclude that map is the best-performing of these options, but the answer is still "it depends". Consider the power of using the benefits of the tools you are using: list comprehensions let you avoid defining simple functions; numpy lets you vectorize things in C if you're doing the right things. python -m timeit "from timeme import x, times_five" "[item + item + item + item + item for item in x]" 1000 loops, best of 3: 285 usec per loop python -m timeit "import numpy; x = numpy.arange(1000)" "x + x + x + x + x" 10000 loops, best of 3: 39.5 usec per loop But that's not all—there's more. Consider the power of an algorithm change. It can be even more dramatic. python -m timeit "from timeme import x, times_five" "[5 * item for item in x]" 10000 loops, best of 3: 147 usec per loop python -m timeit "import numpy; x = numpy.arange(1000)" "5 * x" 100000 loops, best of 3: 16.6 usec per loop Sometimes an algorithm change can be even more effective. This will be more and more effective as the numbers get bigger. python -m timeit "from timeme import square, x" "map(square, x)" 10 loops, best of 3: 41.8 msec per loop python -m timeit "from timeme import good_square, x" "map(good_square, x)" 1000 loops, best of 3: 370 usec per loop And even now, this all may have little bearing on your actual problem. It looks like numpy is so great if you can use it right, but it has its limitations: none of these numpy examples used actual Python objects in the arrays. That complicates what must be done; a lot even. And what if we do get to use C datatypes? These are less robust than Python objects. They aren't nullable. The integers overflow. You have to do some extra work to retrieve them. They're statically typed. Sometimes these things prove to be problems, even unexpected ones. So there you go: a definitive answer. "It depends." # timeme.py x = xrange(1000) def times_five(a): return a + a + a + a + a def square(a): if a == 0: return 0 value = a for i in xrange(a - 1): value += a return value def good_square(a): return a ** 2 A: First comment: don't mix usage of xrange() or range() in your samples... doing so invalidates your question as you're comparing apples and oranges. I second @Gabe's notion that if you have many large data structures, numpy should win overall... just keep in mind most of the time C is faster than Python, but then again, most of the time, PyPy is faster than CPython. :-) As far as listcomps vs. map() calls go... one makes 101 function calls while the other one makes 102. 
meaning you won't see a significant difference in timing, as shown below using the timeit module as @Mike has suggested: List Comprehension $ python -m timeit "def foo(x):pass; [foo(i) for i in range(100)]" 1000000 loops, best of 3: 0.216 usec per loop $ python -m timeit "def foo(x):pass; [foo(i) for i in range(100)]" 1000000 loops, best of 3: 0.21 usec per loop $ python -m timeit "def foo(x):pass; [foo(i) for i in range(100)]" 1000000 loops, best of 3: 0.212 usec per loop map() function call $ python -m timeit "def foo(x):pass; map(foo, range(100))" 1000000 loops, best of 3: 0.216 usec per loop $ python -m timeit "def foo(x):pass; map(foo, range(100))" 1000000 loops, best of 3: 0.214 usec per loop $ python -m timeit "def foo(x):pass; map(foo, range(100))" 1000000 loops, best of 3: 0.215 usec per loop With that said however, unless you are planning on using the lists that you create from either of these techniques, try avoid them (using lists) completely. IOW, if all you're doing is iterating over them, it's not worth the memory consumption (and possibly creating a potentially massive list in memory) when you only care to look at each element one at a time just discard the list as soon as you're done. In such cases, I highly recommend the use of generator expressions instead as they don't create the entire list in memory... it is a more memory-friendly, lazy iterative way of looping through elements to process w/o creating a largish array in memory. The best part is that its syntax is nearly identical to that of listcomps: a = (foo(i) for i in range(100)) 2.x users only: along the lines of more iteration, change all the range() calls to xrange() for any older 2.x code then switch to range() when porting to Python 3 where xrange() replaces and is renamed to range(). A: If the function itself takes a significant amount of time to execute, it's irrelevant how you map its output to an array. Once you start getting into arrays of millions of numbers, though, numpy can save you a significant amount of memory. A: The list comprehension is the fastest, then the map, then the numpy on my machine. The numpy code is quite a bit slower actually than the other two, but that the difference is much less if you use numpy.arange instead of range (or xrange) as I did in the times listed below. Also, if you use psyco, the list comprehension is sped up while the other two were slowed down for me. I also used larger arrays of numbers than in your code and my foo function just computed the square root. Here are some typical times. Without psyco: list comprehension: 47.5581952455 ms map: 51.9082732582 ms numpy.vectorize: 57.9601876775 ms With psyco: list comprehension: 30.4318844993 ms map: 96.4504427239 ms numpy.vectorize: 99.5858691538 ms I used Python 2.6.4 and the timeit module. Based on these results, I would say that it probably doesn't really make a difference which one you choose for the initialization. I would probably choose the numpy one or the list comprehension based on the speed, but ultimately you should let what you are doing with the array afterwards guide your choice.
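If you would rather time the three variants from inside a script than with python -m timeit, a small sketch using the timeit module; foo here is only a cheap stand-in for the real, slow function:

import timeit

setup = """
import numpy
def foo(i):
    return i * i          # stand-in for the real foo
"""
for stmt in ("[foo(i) for i in xrange(100)]",
             "map(foo, range(100))",
             "numpy.vectorize(foo)(range(100))"):
    print stmt, timeit.timeit(stmt, setup=setup, number=10000)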
List comprehension, map, and numpy.vectorize performance
I have a function foo(i) that takes an integer and takes a significant amount of time to execute. Will there be a significant performance difference between any of the following ways of initializing a: a = [foo(i) for i in xrange(100)] a = map(foo, range(100)) vfoo = numpy.vectorize(foo) a = vfoo(range(100)) (I don't care whether the output is a list or a numpy array.) Is there a better way?
[ "\nWhy are you optimizing this? Have you written working, tested code, then examined your algorithm profiled your code and found that optimizing this will have an effect? Are you doing this in a deep inner loop where you found you are spending your time? If not, don't bother.\nYou'll only know which works fastest for you by timing it. To time it in a useful way, you'll have to specialize it to your actual use case. For example, you can get noticeable performance differences between a function call in a list comprehension versus an inline expression; it isn't clear whether you really wanted the former or if you reduced it to that to make your cases similar.\nYou say that it doesn't matter whether you end up with a numpy array or a list, but if you're doing this kind of micro-optimization it does matter, since those will perform differently when you use them afterward. Putting your finger on that could be tricky, so hopefully it will turn out the whole problem is moot as premature.\nIt is typically better to simply use the right tool for the job for clarity, readability, and so forth. It is rare that I would have a hard time deciding between these things.\n\nIf I needed numpy arrays, I would use them. I would use these for storing large homogeneous arrays or multidimensional data. I use them a lot, but rarely where I think I'd want to use a list.\n\n\nIf I was using these, I'd do my best to write my functions already vectorized so I didn't have to use numpy.vectorize. For example, times_five below can be used on a numpy array with no decoration.\n\nIf I didn't have cause to use numpy, that is to say if I wasn't solving numerical math problems or using special numpy features or storing multidimensional arrays or whatever...\n\n\nIf I had an already-existing function, I would use map. That's what it's for.\nIf I had an operation that fit inside a small expression and I didn't need a function, I'd use a list comprehension.\nIf I just wanted to do the operation for all the cases but didn't actually need to store the result, I'd use a plain for loop.\nIn many cases, I'd actually use map and list comprehensions' lazy equivalents: itertools.imap and generator expressions. These can reduce memory usage by a factor of n in some cases and can avoid performing unnecessary operations sometimes.\n\n\n\n\nIf it does turn out this is where performance problems lie, getting this sort of thing right is tricky. It is very common that people time the wrong toy case for their actual problems. Worse, it is extremely common people make dumb general rules based on it.\nConsider the following cases (timeme.py is posted below)\npython -m timeit \"from timeme import x, times_five; from numpy import vectorize\" \"vectorize(times_five)(x)\"\n1000 loops, best of 3: 924 usec per loop\n\npython -m timeit \"from timeme import x, times_five\" \"[times_five(item) for item in x]\"\n1000 loops, best of 3: 510 usec per loop\n\npython -m timeit \"from timeme import x, times_five\" \"map(times_five, x)\"\n1000 loops, best of 3: 484 usec per loop\n\nA naïve obsever would conclude that map is the best-performing of these options, but the answer is still \"it depends\". Consider the power of using the benefits of the tools you are using: list comprehensions let you avoid defining simple functions; numpy lets you vectorize things in C if you're doing the right things. 
\npython -m timeit \"from timeme import x, times_five\" \"[item + item + item + item + item for item in x]\"\n1000 loops, best of 3: 285 usec per loop\n\npython -m timeit \"import numpy; x = numpy.arange(1000)\" \"x + x + x + x + x\"\n10000 loops, best of 3: 39.5 usec per loop\n\nBut that's not all—there's more. Consider the power of an algorithm change. It can be even more dramatic.\npython -m timeit \"from timeme import x, times_five\" \"[5 * item for item in x]\"\n10000 loops, best of 3: 147 usec per loop\n\npython -m timeit \"import numpy; x = numpy.arange(1000)\" \"5 * x\"\n100000 loops, best of 3: 16.6 usec per loop\n\nSometimes an algorithm change can be even more effective. This will be more and more effective as the numbers get bigger.\npython -m timeit \"from timeme import square, x\" \"map(square, x)\"\n10 loops, best of 3: 41.8 msec per loop\n\npython -m timeit \"from timeme import good_square, x\" \"map(good_square, x)\"\n1000 loops, best of 3: 370 usec per loop\n\nAnd even now, this all may have little bearing on your actual problem. It looks like numpy is so great if you can use it right, but it has its limitations: none of these numpy examples used actual Python objects in the arrays. That complicates what must be done; a lot even. And what if we do get to use C datatypes? These are less robust than Python objects. They aren't nullable. The integers overflow. You have to do some extra work to retrieve them. They're statically typed. Sometimes these things prove to be problems, even unexpected ones. \nSo there you go: a definitive answer. \"It depends.\"\n\n# timeme.py\n\nx = xrange(1000)\n\ndef times_five(a):\n return a + a + a + a + a\n\ndef square(a):\n if a == 0:\n return 0\n\n value = a\n for i in xrange(a - 1):\n value += a\n return value\n\ndef good_square(a):\n return a ** 2\n\n", "First comment: don't mix usage of xrange() or range() in your samples... doing so invalidates your question as you're comparing apples and oranges.\nI second @Gabe's notion that if you have many large data structures, numpy should win overall... just keep in mind most of the time C is faster than Python, but then again, most of the time, PyPy is faster than CPython. :-)\nAs far as listcomps vs. map() calls go... one makes 101 function calls while the other one makes 102. meaning you won't see a significant difference in timing, as shown below using the timeit module as @Mike has suggested:\n\nList Comprehension\n$ python -m timeit \"def foo(x):pass; [foo(i) for i in range(100)]\"\n1000000 loops, best of 3: 0.216 usec per loop\n$ python -m timeit \"def foo(x):pass; [foo(i) for i in range(100)]\"\n1000000 loops, best of 3: 0.21 usec per loop\n$ python -m timeit \"def foo(x):pass; [foo(i) for i in range(100)]\"\n1000000 loops, best of 3: 0.212 usec per loop\nmap() function call\n$ python -m timeit \"def foo(x):pass; map(foo, range(100))\"\n1000000 loops, best of 3: 0.216 usec per loop\n$ python -m timeit \"def foo(x):pass; map(foo, range(100))\"\n1000000 loops, best of 3: 0.214 usec per loop\n$ python -m timeit \"def foo(x):pass; map(foo, range(100))\"\n1000000 loops, best of 3: 0.215 usec per loop\n\nWith that said however, unless you are planning on using the lists that you create from either of these techniques, try avoid them (using lists) completely. 
IOW, if all you're doing is iterating over them, it's not worth the memory consumption (and possibly creating a potentially massive list in memory) when you only care to look at each element one at a time just discard the list as soon as you're done.\nIn such cases, I highly recommend the use of generator expressions instead as they don't create the entire list in memory... it is a more memory-friendly, lazy iterative way of looping through elements to process w/o creating a largish array in memory. The best part is that its syntax is nearly identical to that of listcomps:\na = (foo(i) for i in range(100))\n\n2.x users only: along the lines of more iteration, change all the range() calls to xrange() for any older 2.x code then switch to range() when porting to Python 3 where xrange() replaces and is renamed to range().\n", "If the function itself takes a significant amount of time to execute, it's irrelevant how you map its output to an array. Once you start getting into arrays of millions of numbers, though, numpy can save you a significant amount of memory.\n", "The list comprehension is the fastest, then the map, then the numpy on my machine. The numpy code is quite a bit slower actually than the other two, but that the difference is much less if you use numpy.arange instead of range (or xrange) as I did in the times listed below. Also, if you use psyco, the list comprehension is sped up while the other two were slowed down for me. I also used larger arrays of numbers than in your code and my foo function just computed the square root. Here are some typical times.\nWithout psyco:\nlist comprehension: 47.5581952455 ms\nmap: 51.9082732582 ms\nnumpy.vectorize: 57.9601876775 ms\n\nWith psyco:\nlist comprehension: 30.4318844993 ms\nmap: 96.4504427239 ms\nnumpy.vectorize: 99.5858691538 ms\n\nI used Python 2.6.4 and the timeit module.\nBased on these results, I would say that it probably doesn't really make a difference which one you choose for the initialization. I would probably choose the numpy one or the list comprehension based on the speed, but ultimately you should let what you are doing with the array afterwards guide your choice.\n" ]
[ 24, 13, 7, 4 ]
[]
[]
[ "list_comprehension", "numpy", "performance", "python" ]
stackoverflow_0002703310_list_comprehension_numpy_performance_python.txt
Q: Can I create threads in App Engine using Python? Can this code create threads in Google App Engine? If no, why not? class LogText(db.Model): content = db.StringProperty(multiline=True) class MyThread(threading.Thread): def __init__(self,threadname): threading.Thread.__init__(self, name=threadname) def run(self,request): log=LogText() log.content=request.POST.get('content',None) log.put() def Log(request): thr = MyThread('haha') thr.run(request) return HttpResponse('') A: App Engine does not allow you to create new threads, probably because primarily the goal of App Engine is to build simple request-response apps, and threads are usually not considered "simple". Managing threads for an app to prevent abuse (accidental or otherwise) would be difficult, or impossible, for App Engine to do, so they just disallow them entirely.
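Separate from the App Engine restriction, note that the posted code never starts a thread anyway, because it calls run() directly. In ordinary CPython (outside App Engine) the difference looks like this:

import threading

def work():
    print 'running in', threading.current_thread().name

t1 = threading.Thread(target=work, name='worker-1')
t1.run()        # just calls work() synchronously in the current thread
t2 = threading.Thread(target=work, name='worker-2')
t2.start()      # actually spawns a new thread (this is what App Engine disallows)
t2.join()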
Can I create threads in App Engine using Python?
Can this code create threads in Google App Engine? If no, why not? class LogText(db.Model): content = db.StringProperty(multiline=True) class MyThread(threading.Thread): def __init__(self,threadname): threading.Thread.__init__(self, name=threadname) def run(self,request): log=LogText() log.content=request.POST.get('content',None) log.put() def Log(request): thr = MyThread('haha') thr.run(request) return HttpResponse('')
[ "App Engine does not allow you to create new threads, probably because primarily the goal of App Engine is to build simple request-response apps, and threads are usually not considered \"simple\".\nManaging threads for an app to prevent abuse (accidental or otherwise) would be difficult, or impossible, for App Engine to do, so they just disallow them entirely.\n" ]
[ 2 ]
[]
[]
[ "google_app_engine", "multithreading", "python" ]
stackoverflow_0002702888_google_app_engine_multithreading_python.txt
Q: Api to analyse complex graph I am looking for an API (preferably in python) that could be used to analyze complex networks. Basically I want to find things like: Average shortest path, Degree distribution Giant Component local clustering coefficient, global clustering coefficient etc.. Thanks A: I would suggest Networkx and PyGraphViz. I've used them for a similar (but not as complex) graphing project in python and I love it. A: The boost graph library has Python bindings. A: I've used igraph on Linux. It started to grind on 64k nodes but that graph was becoming unwieldy any way. Not sure about performance next to PyGraphViz but now you have a plenty of options.
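A hedged sketch of how the listed quantities map onto NetworkX calls; the function names are from NetworkX's documented API, but exact names and return types vary between versions, so treat it as a starting point:

import networkx as nx

G = nx.erdos_renyi_graph(100, 0.05)                    # toy random graph for illustration

degree_hist = nx.degree_histogram(G)                   # degree distribution
avg_clustering = nx.average_clustering(G)              # global (average) clustering coefficient
local_clustering = nx.clustering(G)                    # per-node local clustering coefficients
giant = max(nx.connected_components(G), key=len)       # nodes of the giant component
avg_path = nx.average_shortest_path_length(G.subgraph(giant))   # only defined on a connected graph

print len(giant), avg_clustering, avg_path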
Api to analyse complex graph
I am looking for an API (preferably in python) that could be used to analyze complex networks. Basically I want to find things like: Average shortest path, Degree distribution Giant Component local clustering coefficient, global clustering coefficient etc.. Thanks
[ "I would suggest Networkx and PyGraphViz. I've used them for a similar (but not as complex) graphing project in python and I love it.\n", "The boost graph library has Python bindings.\n", "I've used igraph on Linux. It started to grind on 64k nodes but that graph was becoming unwieldy any way. \nNot sure about performance next to PyGraphViz but now you have a plenty of options.\n" ]
[ 3, 0, 0 ]
[]
[]
[ "api", "graph", "python", "social_networking" ]
stackoverflow_0002701101_api_graph_python_social_networking.txt
Q: Why isn't the regular expression's "non-capturing" group working? In the snippet below, the non-capturing group "(?:aaa)" should be ignored in the matching result, The result should be "_bbb" only. However, I get "aaa_bbb" in the matching result; only when I specify group(2) does it show "_bbb". >>> import re >>> s = "aaa_bbb" >>> print(re.match(r"(?:aaa)(_bbb)", s).group()) aaa_bbb A: I think you're misunderstanding the concept of a "non-capturing group". The text matched by a non-capturing group still becomes part of the overall regex match. Both the regex (?:aaa)(_bbb) and the regex (aaa)(_bbb) return aaa_bbb as the overall match. The difference is that the first regex has one capturing group which returns _bbb as its match, while the second regex has two capturing groups that return aaa and _bbb as their respective matches. In your Python code, to get _bbb, you'd need to use group(1) with the first regex, and group(2) with the second regex. The main benefit of non-capturing groups is that you can add them to a regex without upsetting the numbering of the capturing groups in the regex. They also offer (slightly) better performance as the regex engine doesn't have to keep track of the text matched by non-capturing groups. If you really want to exclude aaa from the overall regex match then you need to use lookaround. In this case, positive lookbehind does the trick: (?<=aaa)_bbb. With this regex, group() returns _bbb in Python. No capturing groups needed. My recommendation is that if you have the ability to use capturing groups to get part of the regex match, use that method instead of lookaround. A: group() and group(0) will return the entire match. Subsequent groups are actual capture groups. >>> print (re.match(r"(?:aaa)(_bbb)", string1).group(0)) aaa_bbb >>> print (re.match(r"(?:aaa)(_bbb)", string1).group(1)) _bbb >>> print (re.match(r"(?:aaa)(_bbb)", string1).group(2)) Traceback (most recent call last): File "<stdin>", line 1, in ? IndexError: no such group If you want the same behavior than group(): " ".join(re.match(r"(?:aaa)(_bbb)", string1).groups()) A: Try: print(re.match(r"(?:aaa)(_bbb)", string1).group(1)) group() is same as group(0) and Group 0 is always present and it's the whole RE match. A: TFM: class re.MatchObject group([group1, ...]) Returns one or more subgroups of the match. If there is a single argument, the result is a single string; if there are multiple arguments, the result is a tuple with one item per argument. Without arguments, group1 defaults to zero (the whole match is returned). If a groupN argument is zero, the corresponding return value is the entire matching string. A: You have to specify group(1) to get just the part captured by the parenthesis (_bbb in this case). group() without parameters will return the whole string the complete regular expression matched, no matter if some parts of it were additionally captured by parenthesis or not. A: Use the groups method on the match object instead of group. It returns a list of all capture buffers. The group method with no argument is returning the entire match of the regular expression.
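To make the lookbehind suggestion from the first answer concrete, using the same toy string as the question:

>>> import re
>>> s = "aaa_bbb"
>>> re.match(r"(?:aaa)(_bbb)", s).group(1)    # capture group 1 holds just _bbb
'_bbb'
>>> re.search(r"(?<=aaa)_bbb", s).group()     # lookbehind keeps aaa out of the overall match
'_bbb'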
Why isn't the regular expression's "non-capturing" group working?
In the snippet below, the non-capturing group "(?:aaa)" should be ignored in the matching result. The result should be "_bbb" only. However, I get "aaa_bbb" in the matching result; only when I specify group(2) does it show "_bbb". >>> import re >>> s = "aaa_bbb" >>> print(re.match(r"(?:aaa)(_bbb)", s).group()) aaa_bbb
[ "I think you're misunderstanding the concept of a \"non-capturing group\". The text matched by a non-capturing group still becomes part of the overall regex match.\nBoth the regex (?:aaa)(_bbb) and the regex (aaa)(_bbb) return aaa_bbb as the overall match. The difference is that the first regex has one capturing group which returns _bbb as its match, while the second regex has two capturing groups that return aaa and _bbb as their respective matches. In your Python code, to get _bbb, you'd need to use group(1) with the first regex, and group(2) with the second regex.\nThe main benefit of non-capturing groups is that you can add them to a regex without upsetting the numbering of the capturing groups in the regex. They also offer (slightly) better performance as the regex engine doesn't have to keep track of the text matched by non-capturing groups.\nIf you really want to exclude aaa from the overall regex match then you need to use lookaround. In this case, positive lookbehind does the trick: (?<=aaa)_bbb. With this regex, group() returns _bbb in Python. No capturing groups needed.\nMy recommendation is that if you have the ability to use capturing groups to get part of the regex match, use that method instead of lookaround.\n", "group() and group(0) will return the entire match. Subsequent groups are actual capture groups.\n>>> print (re.match(r\"(?:aaa)(_bbb)\", string1).group(0))\naaa_bbb\n>>> print (re.match(r\"(?:aaa)(_bbb)\", string1).group(1))\n_bbb\n>>> print (re.match(r\"(?:aaa)(_bbb)\", string1).group(2))\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in ?\nIndexError: no such group\n\nIf you want the same behavior than group():\n\" \".join(re.match(r\"(?:aaa)(_bbb)\", string1).groups())\n", "Try:\nprint(re.match(r\"(?:aaa)(_bbb)\", string1).group(1))\n\ngroup() is same as group(0) and Group 0 is always present and it's the whole RE match.\n", "TFM:\nclass re.MatchObject\ngroup([group1, ...])\nReturns one or more subgroups of the match. If there is a single argument, the result is a single string; if there are multiple arguments, the result is a tuple with one item per argument. Without arguments, group1 defaults to zero (the whole match is returned). If a groupN argument is zero, the corresponding return value is the entire matching string.\n", "You have to specify group(1) to get just the part captured by the parenthesis (_bbb in this case).\ngroup() without parameters will return the whole string the complete regular expression matched, no matter if some parts of it were additionally captured by parenthesis or not.\n", "Use the groups method on the match object instead of group. It returns a list of all capture buffers. The group method with no argument is returning the entire match of the regular expression.\n" ]
[ 129, 61, 3, 3, 0, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0002703029_python_regex.txt
Q: is the sender of google-app-engine allow my own gmail my gmail is zjm1126@gmail.com i can only use zjm1126@gmail.com in the sender=".." ,yes ?? from google.appengine.api import mail message = mail.EmailMessage(sender="hahahaha@gmail.com", subject="Your account has been approved") message.to = "zjm1126@qq.com" message.body = """ Dear Albert: Your example.com account has been approved. You can now visit http://www.example.com/ and sign in using your Google Account to access new features. Please let us know if you have any questions. The example.com Team """ message.send() thanks A: There are two kinds of FROM addresses allowed by GAE's e-mail API: The currently authenticated Google user of your app (if your app uses Google auth and someone's logged in) The Google address of any administrator of the app engine app (e.g. you, as the owner) "If you want to send email on behalf of the application but do not want to use a single administrator's personal Google Account as the sender, you can create a new Google Account for the application using any valid email address, then add the new account as an administrator for the application. To add an account as an administrator, see the "Developers" section of the Admin Console." http://code.google.com/appengine/docs/java/mail/overview.html
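A hedged sketch of the first option from the answer, sending as the currently signed-in user via the documented mail.send_mail helper; the recipient address is a placeholder:

from google.appengine.api import mail, users

user = users.get_current_user()
if user:
    mail.send_mail(sender=user.email(),        # must be the signed-in user or an app administrator
                   to="someone@example.com",
                   subject="Your account has been approved",
                   body="...")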
Does the sender of a google-app-engine email have to be my own gmail?
My gmail is zjm1126@gmail.com. Can I only use zjm1126@gmail.com in the sender=".." argument, yes? from google.appengine.api import mail message = mail.EmailMessage(sender="hahahaha@gmail.com", subject="Your account has been approved") message.to = "zjm1126@qq.com" message.body = """ Dear Albert: Your example.com account has been approved. You can now visit http://www.example.com/ and sign in using your Google Account to access new features. Please let us know if you have any questions. The example.com Team """ message.send() thanks
[ "There are two kinds of FROM addresses allowed by GAE's e-mail API:\n\nThe currently authenticated Google user of your app (if your app uses Google auth and someone's logged in)\nThe Google address of any administrator of the app engine app (e.g. you, as the owner)\n\n\"If you want to send email on behalf of the application but do not want to use a single administrator's personal Google Account as the sender, you can create a new Google Account for the application using any valid email address, then add the new account as an administrator for the application. To add an account as an administrator, see the \"Developers\" section of the Admin Console.\"\nhttp://code.google.com/appengine/docs/java/mail/overview.html\n" ]
[ 3 ]
[]
[]
[ "email", "google_app_engine", "python" ]
stackoverflow_0002705816_email_google_app_engine_python.txt
Q: Code Coverage and Unit Testing of Python Code I have already visited Preferred Python unit-testing framework. I am not just looking at Python Unit Testing Framework, but also code coverage with respect to unit tests. So far I have only come across coverage.py. Is there any better option? An interesting option for me is to integrate cpython, unit testing of Python code and code coverage of Python code with Visual Studio 2008 through plugins (something similar to IronPython Studio). What can be done to achieve this? I look forward to suggestions. A: We use this Django coverage integration, but instead of using the default coverage.py reporting, we generate some simple HTML: Colorize Python source using the built-in tokenizer. A: PyDev seems to allow code coverage from within Eclipse. I've yet to find how to integrate that with my own (rather complex) build process, so I use Ned Batchelder's coverage.py at the command line. A: There is also figleaf which I think is based on Ned Batchelder's coverage.py. We use nose as the driver for the testing. It all works pretty well. We write our unit tests using the built-in unittest and doctest modules. A: NetBeans' new Python support has tightly integrated code coverage support - more info here. A: If you want interactive code coverage, where you can see your coverage stats change in real time, take a look at Python Coverage Validator. A: Testoob has a neat "--coverage" command-line option to generate a coverage report.
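For coverage.py itself the command-line workflow is short; a typical session (the script name is a placeholder) looks roughly like this:

$ coverage run run_my_tests.py     # execute the suite under the tracer
$ coverage report -m               # per-module summary, with the missing line numbers
$ coverage html                    # browsable HTML report written to ./htmlcov
$ nosetests --with-coverage        # or let nose's built-in coverage plugin drive it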
Code Coverage and Unit Testing of Python Code
I have already visited Preferred Python unit-testing framework. I am not just looking at Python Unit Testing Framework, but also code coverage with respect to unit tests. So far I have only come across coverage.py. Is there any better option? An interesting option for me is to integrate cpython, unit testing of Python code and code coverage of Python code with Visual Studio 2008 through plugins (something similar to IronPython Studio). What can be done to achieve this? I look forward to suggestions.
[ "We use this Django coverage integration, but instead of using the default coverage.py reporting, we generate some simple HTML: \nColorize Python source using the built-in tokenizer.\n", "PyDev seems to allow code coverage from within Eclipse. \nI've yet to find how to integrate that with my own (rather complex) build process, so I use Ned Batchelder's coverage.py at the command line.\n", "There is also figleaf which I think is based on Ned Batchelder's coverage.py. We use nose as the driver for the testing. It all works pretty well. We write our unit tests using the built-in unittest and doctest modules.\n", "NetBeans' new Python support has tightly integrated code coverage support - more info here.\n", "If you want interactive code coverage, where you can see your coverage stats change in real time, take a look at Python Coverage Validator.\n", "Testoob has a neat \"--coverage\" command-line option to generate a coverage report.\n" ]
[ 5, 4, 2, 2, 1, 0 ]
[]
[]
[ "code_coverage", "python", "unit_testing", "visual_studio_2008" ]
stackoverflow_0000272188_code_coverage_python_unit_testing_visual_studio_2008.txt
Q: Decorator Module Standard I was wondering if it's frowned upon to use the decorator module that comes with python. Should I be creating decorators using the original means or is it considered okay practice to use the module? A: the decorator module in pypi is a third party module from Michele Simionato. It does not belong to the python standard library. In most cases you dont need this module to work with decorators. Still it provides you with some useful tools that can simplify some uses of decorators. In any case it is a nice module to learn about decorators A: I'm not sure of what you mean by the "decorator module." But if you care about properly mimicking the wrapped function while using minimal boilerplate, you should take a look at the functools module. Couple of reasons for "properly" wrapping functions off the top of my head: (2.x, not sure of 3.x) - Pickling objects with decorated methods Compatibility with any metaprogramming
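For the "original means", the standard library's functools module covers the usual needs; a small sketch with invented names:

import functools

def logged(func):
    @functools.wraps(func)             # keeps the wrapped function's __name__ and __doc__
    def wrapper(*args, **kwargs):
        print 'calling', func.__name__
        return func(*args, **kwargs)
    return wrapper

@logged
def add(a, b):
    return a + b

print add(1, 2)                        # prints "calling add" and then 3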
Decorator Module Standard
I was wondering if it's frowned upon to use the decorator module that comes with python. Should I be creating decorators using the original means or is it considered okay practice to use the module?
[ "the decorator module in pypi is a third party module from Michele Simionato. It does not belong to the python standard library.\nIn most cases you dont need this module to work with decorators.\nStill it provides you with some useful tools that can simplify some uses of decorators. In any case it is a nice module to learn about decorators\n", "I'm not sure of what you mean by the \"decorator module.\" But if you care about properly mimicking the wrapped function while using minimal boilerplate, you should take a look at the functools module.\nCouple of reasons for \"properly\" wrapping functions off the top of my head:\n\n(2.x, not sure of 3.x) - Pickling objects with decorated methods\nCompatibility with any metaprogramming\n\n" ]
[ 3, 0 ]
[]
[]
[ "decorator", "python" ]
stackoverflow_0002701772_decorator_python.txt
Q: High memory usage only when multiprocessing I am trying to use python's multiprocessing library to hopefully gain some performance. Specifically I am using its map function. Now, for some reason when I swap it out with its single processed counterpart I don't get high memory usage. But using the multiprocessing version of map causes my memory to go through the roof. For the record I am doing something which can easily hog up loads of memory, but what would the difference be between the two to cause such a stark difference? A: You realize that multiprocessing does not use threads, yes? I say this because you mention a "single threaded counterpart". Are you sending a lot of data through multiprocessing's map? A likely cause is the serialization multiprocessing has to do in many cases. multiprocessing uses pickle, which does typically take up more memory than the data it's pickling. (In some cases, specifically on systems with fork() where new processes are created when you call the map method, it can avoid the serialization, but whenever it needs to send new data to existing process it cannot do so.) Since with multiprocessing all of the actual work is done in separate processes, the memory of your main process should not be affected by the actual operations you perform. The total use of memory does go up by quite a bit, however, because each worker process has a copy of the data you sent across. This is sometimes copy-on-write memory (in the same cases as not serializing) on systems that have CoW, but Python's use of memory is such that this quickly becomes written to, and thus copied.
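A small sketch of the copy issue and one common mitigation: build the large object once before the pool is created, so only small items and results cross the process boundary. The data below is fake and just stands in for whatever is large in your program:

import multiprocessing

big_table = dict((i, i * i) for i in range(100000))    # created before the pool exists

def worker(key):
    # On fork()-based platforms each child inherits big_table without pickling it;
    # only the keys and the results travel between processes.
    return big_table[key] * 2

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=4)
    results = pool.map(worker, range(100000), chunksize=1000)
    print len(results)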
High memory usage only when multiprocessing
I am trying to use python's multiprocessing library to hopefully gain some performance. Specifically I am using its map function. Now, for some reason when I swap it out with its single processed counterpart I don't get high memory usage. But using the multiprocessing version of map causes my memory to go through the roof. For the record I am doing something which can easily hog up loads of memory, but what would the difference be between the two to cause such a stark difference?
[ "You realize that multiprocessing does not use threads, yes? I say this because you mention a \"single threaded counterpart\".\nAre you sending a lot of data through multiprocessing's map? A likely cause is the serialization multiprocessing has to do in many cases. multiprocessing uses pickle, which does typically take up more memory than the data it's pickling. (In some cases, specifically on systems with fork() where new processes are created when you call the map method, it can avoid the serialization, but whenever it needs to send new data to existing process it cannot do so.)\nSince with multiprocessing all of the actual work is done in separate processes, the memory of your main process should not be affected by the actual operations you perform. The total use of memory does go up by quite a bit, however, because each worker process has a copy of the data you sent across. This is sometimes copy-on-write memory (in the same cases as not serializing) on systems that have CoW, but Python's use of memory is such that this quickly becomes written to, and thus copied.\n" ]
[ 4 ]
[]
[]
[ "multiprocessing", "python" ]
stackoverflow_0002705968_multiprocessing_python.txt
Q: How to pick a chunksize for python multiprocessing with large datasets I am attempting to to use python to gain some performance on a task that can be highly parallelized using http://docs.python.org/library/multiprocessing. When looking at their library they say to use chunk size for very long iterables. Now, my iterable is not long, one of the dicts that it contains is huge: ~100000 entries, with tuples as keys and numpy arrays for values. How would I set the chunksize to handle this and how can I transfer this data quickly? Thank you. A: The only way to handle this single large item in multiple workers at once is by splitting it up. multiprocessing works by dividing up the work in units, but the smallest unit you can feed it is one object -- it can't know how to split up a single object in a way that's sensible. You have to do it yourself, instead. Instead of sending over the dicts to be worked on, split up the dicts to smaller work units and send those over instead. If you can't split the dict because all the data is interdependent, then you can't really split up the work either.
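A sketch of the "split it up yourself" advice: carve the big dict into smaller dicts and map a worker over those. The chunk size is arbitrary, and plain lists stand in for the numpy arrays:

import multiprocessing

def split_dict(d, chunk_size):
    items = d.items()
    return [dict(items[i:i + chunk_size]) for i in xrange(0, len(items), chunk_size)]

def process_chunk(chunk):
    # stand-in for the real per-entry computation
    return dict((key, sum(values)) for key, values in chunk.items())

if __name__ == '__main__':
    big = dict(((i, i), [i, i + 1]) for i in xrange(100000))
    pool = multiprocessing.Pool()
    partial_results = pool.map(process_chunk, split_dict(big, 5000))
    print len(partial_results)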
How to pick a chunksize for python multiprocessing with large datasets
I am attempting to use python to gain some performance on a task that can be highly parallelized using http://docs.python.org/library/multiprocessing. When looking at their library they say to use chunk size for very long iterables. Now, my iterable is not long, one of the dicts that it contains is huge: ~100000 entries, with tuples as keys and numpy arrays for values. How would I set the chunksize to handle this and how can I transfer this data quickly? Thank you.
[ "The only way to handle this single large item in multiple workers at once is by splitting it up. multiprocessing works by dividing up the work in units, but the smallest unit you can feed it is one object -- it can't know how to split up a single object in a way that's sensible. You have to do it yourself, instead. Instead of sending over the dicts to be worked on, split up the dicts to smaller work units and send those over instead. If you can't split the dict because all the data is interdependent, then you can't really split up the work either.\n" ]
[ 3 ]
[]
[]
[ "large_data_volumes", "multiprocessing", "python" ]
stackoverflow_0002705953_large_data_volumes_multiprocessing_python.txt
Q: python operation not permitted (graphtecprint) I'm running a python program. When it get's to these lines: f = open("/dev/bus/usb/007/005", "r") x = fcntl.ioctl(f.fileno(), 0x84005001, '\x00' * 256) It fails saying: IOError: [Errno 1] Operation not permitted What could be causing this problem? A: file system permissions? what does ls -l /dev/bus/usb/007/005 say? does cat /dev/bus/usb/007/005 work or does it report the same error? A: The third argument to fcntl.ioctl, as documented here, should be either a 1024-byte string (not just 256), or, better, a possibly even-larger writeable buffer -- the underlying object could be an array.array of bytes. Unfortunately you need to know in advance how much space the result will need, but you can play it safe with a few KB (that ioctl seems to be the "get device id" code, but I'm not sure what the max result length could be).
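If the permission check comes back clean, the writable-buffer variant from the second answer would look roughly like this; whether this particular request succeeds still depends on the device node and your access rights:

import array
import fcntl

f = open("/dev/bus/usb/007/005", "r+")          # the process needs write access to the node
buf = array.array('B', [0] * 1024)              # writable 1 KB buffer, per the size hint above
fcntl.ioctl(f.fileno(), 0x84005001, buf, 1)     # mutate_flag=1 lets the kernel fill buf in place
print buf[:16]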
python operation not permitted (graphtecprint)
I'm running a python program. When it gets to these lines: f = open("/dev/bus/usb/007/005", "r") x = fcntl.ioctl(f.fileno(), 0x84005001, '\x00' * 256) It fails saying: IOError: [Errno 1] Operation not permitted What could be causing this problem?
[ "file system permissions?\nwhat does ls -l /dev/bus/usb/007/005 say?\ndoes cat /dev/bus/usb/007/005 work or does it report the same error?\n", "The third argument to fcntl.ioctl, as documented here, should be either a 1024-byte string (not just 256), or, better, a possibly even-larger writeable buffer -- the underlying object could be an array.array of bytes. Unfortunately you need to know in advance how much space the result will need, but you can play it safe with a few KB (that ioctl seems to be the \"get device id\" code, but I'm not sure what the max result length could be).\n" ]
[ 1, 0 ]
[]
[]
[ "file_io", "linux", "python", "usb" ]
stackoverflow_0002705974_file_io_linux_python_usb.txt
Q: Using arrays with other arrays in Python Trying to find an efficient way to extract all instances of items in an array out of another. For example array1 = ["abc", "def", "ghi", "jkl"] array2 = ["abc", "ghi", "456", "789"] Array 1 is an array of items that need to be extracted out of array 2. Thus, array 2 should be modified to ["456", "789"] I know how to do this, but no in an efficient manner. A: These are lists, not arrays. (The word "array" means different things to different people, but in python the objects call themselves lists, and that's that; there are other modules that provide objects that call themselves arrays, such as array and numpy) To answer your question, the easiest way is to not modify array2 at all. Use a list comprehension: set1 = set(array1) array2 = [e for e in array2 if e not in set1] (the set makes this O(n) instead of O(n^2)) If you absolutely must mutate array2 (because it exists elsewhere), you can use slice assignment: array2[:] = [e for e in array2 if e not in set1] It's just as efficient, but kind of nasty. edit: as Mark Byers points out, this only works if array1 only contains hashable elements (such as strings, numbers, etc.). A: If your lists can't contain duplicates and you don't care about the order then you should be using sets instead of lists (by the way, they are called lists, not arrays). Then what you want is both fast and trivial to implement: >>> set1 = set(["abc", "def", "ghi", "jkl"]) >>> set2 = set(["abc", "ghi", "456", "789"]) >>> set2 - set1 set(['456', '789']) If list2 can contain duplicates or the order matters then you can still make list1 a set to speed up the lookups: >>> list1 = ["abc", "def", "ghi", "jkl"] >>> list2 = ["abc", "ghi", "456", "789"] >>> set1 = set(list1) >>> [a for a in list2 if a not in set1] ['456', '789'] Note that this requires that the items are hashable but runs in close to O(n) time. If the items are not hashable but they are orderable then you could sort list1 and use a binary search to find items in it. This gives O(n log(n)) time. If your items are neither hashable not orderable then you will need to resort to the slow O(n*n) simple linear search for each element. A: The straightforward way would be something like; array2 = [i for i in array2 if i not in array1] list comprehensions is what you need here
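One detail from the first answer that is easy to miss: the slice assignment matters when other code already holds a reference to the same list. A tiny demonstration with the question's data:

blacklist = set(["abc", "def", "ghi", "jkl"])
items = ["abc", "ghi", "456", "789"]
alias = items                                   # another name for the same list object
items[:] = [x for x in items if x not in blacklist]
print alias                                     # ['456', '789'], updated in place, so the alias sees it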
Using arrays with other arrays in Python
Trying to find an efficient way to extract all instances of items in an array out of another. For example array1 = ["abc", "def", "ghi", "jkl"] array2 = ["abc", "ghi", "456", "789"] Array 1 is an array of items that need to be extracted out of array 2. Thus, array 2 should be modified to ["456", "789"] I know how to do this, but not in an efficient manner.
[ "These are lists, not arrays. (The word \"array\" means different things to different people, but in python the objects call themselves lists, and that's that; there are other modules that provide objects that call themselves arrays, such as array and numpy)\nTo answer your question, the easiest way is to not modify array2 at all. Use a list comprehension:\nset1 = set(array1)\narray2 = [e for e in array2 if e not in set1]\n\n(the set makes this O(n) instead of O(n^2))\nIf you absolutely must mutate array2 (because it exists elsewhere), you can use slice assignment:\narray2[:] = [e for e in array2 if e not in set1]\n\nIt's just as efficient, but kind of nasty.\nedit: as Mark Byers points out, this only works if array1 only contains hashable elements (such as strings, numbers, etc.).\n", "If your lists can't contain duplicates and you don't care about the order then you should be using sets instead of lists (by the way, they are called lists, not arrays). Then what you want is both fast and trivial to implement:\n>>> set1 = set([\"abc\", \"def\", \"ghi\", \"jkl\"])\n>>> set2 = set([\"abc\", \"ghi\", \"456\", \"789\"])\n>>> set2 - set1\nset(['456', '789'])\n\nIf list2 can contain duplicates or the order matters then you can still make list1 a set to speed up the lookups:\n>>> list1 = [\"abc\", \"def\", \"ghi\", \"jkl\"]\n>>> list2 = [\"abc\", \"ghi\", \"456\", \"789\"]\n>>> set1 = set(list1)\n>>> [a for a in list2 if a not in set1]\n['456', '789']\n\nNote that this requires that the items are hashable but runs in close to O(n) time.\nIf the items are not hashable but they are orderable then you could sort list1 and use a binary search to find items in it. This gives O(n log(n)) time.\nIf your items are neither hashable not orderable then you will need to resort to the slow O(n*n) simple linear search for each element.\n", "The straightforward way would be something like;\narray2 = [i for i in array2 if i not in array1]\n\nlist comprehensions is what you need here\n" ]
[ 6, 3, 0 ]
[]
[]
[ "arrays", "extract", "python" ]
stackoverflow_0002706440_arrays_extract_python.txt
Q: How to import *.pyc file from different version of python? I used python 2.5 and imported a file named "irit.py" from C:\util\Python25\Lib\site-packages directory. This files imports the file "_irit.pyc which is in the same directory. It worked well and did what I wanted. Than, I tried the same thing with python version 2.6.4. "irit.py" which is in C:\util\Python26\Lib\site-packages was imported, but "_irit.pyc" (which is in the same directory of 26, like before) hasn't been found. I got the error message: File "C:\util\Python26\lib\site-packages\irit.py", line 5, in import _irit ImportError: DLL load failed: The specified module could not be found. Can someone help me understand the problem and how to fix it?? Thanks, Almog. A: "DLL load failed" can't directly refer to the .pyc, since that's a bytecode file, not a DLL; a DLL would be .pyd on Windows. So presumably that _irit.pyc bytecode file tries to import some .pyd and that .pyd is not available in a 2.6-compatible version in the appropriate directory. Unfortunately it also appears that the source file _irit.py isn't around either, so the error messages end up less informative that they could be. I'd try to run python -v, which gives verbose messages on all module loading and unloading actions -- maybe that will let you infer the name of the missing .pyd when you compare its behavior in 2.5 and 2.6. A: Pyc files are not guaranteed to be compatible across python versions, so even if you fix the missing dll, you could still run in to problems.
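To follow the python -v suggestion, one way to compare what the two interpreters try to load (the paths come from the question; -v writes its import trace to stderr, and the redirection syntax below is for the Windows cmd shell):

C:\> C:\util\Python25\python.exe -v -c "import irit" 2> py25_import_log.txt
C:\> C:\util\Python26\python.exe -v -c "import irit" 2> py26_import_log.txt

Comparing the two logs should show which .pyd (if any) _irit tries to pull in under 2.6 and where the lookup fails.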
How to import *.pyc file from different version of python?
I used python 2.5 and imported a file named "irit.py" from the C:\util\Python25\Lib\site-packages directory. This file imports the file "_irit.pyc" which is in the same directory. It worked well and did what I wanted. Then, I tried the same thing with python version 2.6.4. "irit.py", which is in C:\util\Python26\Lib\site-packages, was imported, but "_irit.pyc" (which is in the same directory for 2.6, like before) hasn't been found. I got the error message: File "C:\util\Python26\lib\site-packages\irit.py", line 5, in import _irit ImportError: DLL load failed: The specified module could not be found. Can someone help me understand the problem and how to fix it? Thanks, Almog.
[ "\"DLL load failed\" can't directly refer to the .pyc, since that's a bytecode file, not a DLL; a DLL would be .pyd on Windows. So presumably that _irit.pyc bytecode file tries to import some .pyd and that .pyd is not available in a 2.6-compatible version in the appropriate directory. Unfortunately it also appears that the source file _irit.py isn't around either, so the error messages end up less informative that they could be. I'd try to run python -v, which gives verbose messages on all module loading and unloading actions -- maybe that will let you infer the name of the missing .pyd when you compare its behavior in 2.5 and 2.6.\n", "Pyc files are not guaranteed to be compatible across python versions, so even if you fix the missing dll, you could still run in to problems.\n" ]
[ 5, 1 ]
[]
[]
[ "import", "pyc", "python", "version" ]
stackoverflow_0002705304_import_pyc_python_version.txt
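A minimal diagnostic sketch for the bytecode-compatibility point above: the first four bytes of a .pyc file are a magic number identifying the interpreter version that wrote it, and imp.get_magic() returns the magic number of the running interpreter. The path below is the one from the question; note the DLL error itself points at a missing or incompatible .pyd rather than the .pyc.

import imp

def pyc_matches_interpreter(path):
    # Compare the .pyc file's magic number with the current interpreter's.
    with open(path, 'rb') as f:
        file_magic = f.read(4)
    return file_magic == imp.get_magic()

print pyc_matches_interpreter(r'C:\util\Python26\Lib\site-packages\_irit.pyc')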
Q: super() in Python 2.x without args Trying to convert super(B, self).method() into a simple nice bubble() call. Did it, see below! Is it possible to get reference to class B in this example? class A(object): pass class B(A): def test(self): test2() class C(B): pass import inspect def test2(): frame = inspect.currentframe().f_back cls = frame.[?something here?] # cls here should == B (class) c = C() c.test() Basically, C is child of B, B is child of A. Then we create c of type C. Then the call to c.test() actually calls B.test() (via inheritance), which calls to test2(). test2() can get the parent frame frame; code reference to method via frame.f_code; self via frame.f_locals['self']; but type(frame.f_locals['self']) is C (of course), but not B, where method is defined. Any way to get B? A: Found a shorter way to do super(B, self).test() -> bubble() from below. (Works with multiple inheritance, doesn't require arguments, correcly behaves with sub-classes) The solution was to use inspect.getmro(type(back_self)) (where back_self is a self from callee), then iterating it as cls with method_name in cls.__dict__ and verifying that the code reference we have is the one in this class (realized in find_class_by_code_object(self) nested function). bubble() can be easily extended with *args, **kwargs. import inspect def bubble(*args, **kwargs): def find_class_by_code_object(back_self, method_name, code): for cls in inspect.getmro(type(back_self)): if method_name in cls.__dict__: method_fun = getattr(cls, method_name) if method_fun.im_func.func_code is code: return cls frame = inspect.currentframe().f_back back_self = frame.f_locals['self'] method_name = frame.f_code.co_name for _ in xrange(5): code = frame.f_code cls = find_class_by_code_object(back_self, method_name, code) if cls: super_ = super(cls, back_self) return getattr(super_, method_name)(*args, **kwargs) try: frame = frame.f_back except: return class A(object): def test(self): print "A.test()" class B(A): def test(self): # instead of "super(B, self).test()" we can do bubble() class C(B): pass c = C() c.test() # works! b = B() b.test() # works! If anyone has a better idea, let's hear it. Known bug: (thanks doublep) If C.test = B.test --> "infinite" recursion. Although that seems un-realistic for child class to actually have a method, that has been ='ed from parent's one. Known bug2: (thanks doublep) Decorated methods won't work (probably unfixable, since decorator returns a closure)... Fixed decorator proble with for _ in xrange(5): ... frame = frame.f_back - will handle up to 5 decorators, increase if needed. I love Python! Performance is 5 times worse than super() call, but we are talking about 200K calls vs a million calls per second, if this isn't in your tightest loops - no reason to worry. A: Although this code should never be used for any normal purpose. For the sake of answering the question, here's something working ;) import inspect def test2(): funcname = inspect.stack()[1][3] frame = inspect.currentframe().f_back self = frame.f_locals['self'] return contains(self.__class__, funcname) def contains(class_, funcname): if funcname in class_.__dict__: return class_ for class_ in class_.__bases__: class_ = contains(class_, funcname) if class_: return class_
super() in Python 2.x without args
Trying to convert super(B, self).method() into a simple nice bubble() call. Did it, see below! Is it possible to get reference to class B in this example? class A(object): pass class B(A): def test(self): test2() class C(B): pass import inspect def test2(): frame = inspect.currentframe().f_back cls = frame.[?something here?] # cls here should == B (class) c = C() c.test() Basically, C is child of B, B is child of A. Then we create c of type C. Then the call to c.test() actually calls B.test() (via inheritance), which calls to test2(). test2() can get the parent frame frame; code reference to method via frame.f_code; self via frame.f_locals['self']; but type(frame.f_locals['self']) is C (of course), but not B, where method is defined. Any way to get B?
[ "Found a shorter way to do super(B, self).test() -> bubble() from below. \n(Works with multiple inheritance, doesn't require arguments, correcly behaves with sub-classes)\nThe solution was to use inspect.getmro(type(back_self)) (where back_self is a self from callee), then iterating it as cls with method_name in cls.__dict__ and verifying that the code reference we have is the one in this class (realized in find_class_by_code_object(self) nested function). \nbubble() can be easily extended with *args, **kwargs.\nimport inspect\ndef bubble(*args, **kwargs):\n def find_class_by_code_object(back_self, method_name, code):\n for cls in inspect.getmro(type(back_self)):\n if method_name in cls.__dict__:\n method_fun = getattr(cls, method_name)\n if method_fun.im_func.func_code is code:\n return cls\n\n frame = inspect.currentframe().f_back\n back_self = frame.f_locals['self']\n method_name = frame.f_code.co_name\n\n for _ in xrange(5):\n code = frame.f_code\n cls = find_class_by_code_object(back_self, method_name, code)\n if cls:\n super_ = super(cls, back_self)\n return getattr(super_, method_name)(*args, **kwargs)\n try:\n frame = frame.f_back\n except:\n return\n\n\n\nclass A(object):\n def test(self):\n print \"A.test()\"\n\nclass B(A):\n def test(self):\n # instead of \"super(B, self).test()\" we can do\n bubble()\n\nclass C(B):\n pass\n\nc = C()\nc.test() # works!\n\nb = B()\nb.test() # works!\n\nIf anyone has a better idea, let's hear it.\nKnown bug: (thanks doublep) If C.test = B.test --> \"infinite\" recursion. Although that seems un-realistic for child class to actually have a method, that has been ='ed from parent's one.\nKnown bug2: (thanks doublep) Decorated methods won't work (probably unfixable, since decorator returns a closure)... Fixed decorator proble with for _ in xrange(5): ... frame = frame.f_back - will handle up to 5 decorators, increase if needed. I love Python!\nPerformance is 5 times worse than super() call, but we are talking about 200K calls vs a million calls per second, if this isn't in your tightest loops - no reason to worry.\n", "Although this code should never be used for any normal purpose. For the sake of answering the question, here's something working ;)\nimport inspect\n\ndef test2():\n funcname = inspect.stack()[1][3]\n frame = inspect.currentframe().f_back\n self = frame.f_locals['self']\n\n return contains(self.__class__, funcname)\n\ndef contains(class_, funcname):\n if funcname in class_.__dict__:\n return class_\n\n for class_ in class_.__bases__:\n class_ = contains(class_, funcname)\n if class_:\n return class_\n\n" ]
[ 3, 0 ]
[]
[]
[ "python" ]
stackoverflow_0002706623_python.txt
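For reference, Python 3 builds this behaviour in: a zero-argument super() call inside a method fills in the class and instance automatically, so no frame inspection is needed. A minimal sketch (Python 3 syntax):

class B(A):
    def test(self):
        super().test()   # equivalent to super(B, self).test() in Python 2.x

In Python 2.x the explicit two-argument form, or a helper like bubble() above, remains necessary.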
Q: Source for Names to use in web scraping Can anyone suggest a good source of names that I can use to help analyze some tables on web pages. The first column of the tables I am scraping have names alone, names and titles or just titles. The names can be as varied as John Smith to Vikram Saksena. I have been poking around for a compiled list of words that can be found in proper names. Edited I have tried the name set from the Census and it has so much garbage in it that its not worth working with. A: Download the Febrl project source code. It's data folder contains tables for names (given/middle/surnames/etc). You may have to massage the data for your own needs. For surnames you can check around for U.S. Census data. I don't have the link right now, but know I've used the common U.S. surnames from that source before.
Source for Names to use in web scraping
Can anyone suggest a good source of names that I can use to help analyze some tables on web pages? The first column of the tables I am scraping has names alone, names and titles, or just titles. The names can be as varied as John Smith and Vikram Saksena. I have been poking around for a compiled list of words that can be found in proper names. Edited I have tried the name set from the Census and it has so much garbage in it that it's not worth working with.
[ "Download the Febrl project source code.\nIt's data folder contains tables for names (given/middle/surnames/etc). You may have to massage the data for your own needs.\nFor surnames you can check around for U.S. Census data. I don't have the link right now, but know I've used the common U.S. surnames from that source before.\n" ]
[ 1 ]
[]
[]
[ "python", "web_scraping" ]
stackoverflow_0002706786_python_web_scraping.txt
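A minimal sketch of how such a name list might be used once obtained, e.g. to flag table cells that look like personal names; 'us_surnames.txt' is a made-up file name standing in for whichever census or Febrl list is downloaded, assumed to contain one lower-case surname per line:

surnames = set(line.strip().lower() for line in open('us_surnames.txt'))

def looks_like_name(cell_text):
    words = cell_text.lower().split()
    return any(word in surnames for word in words)

print looks_like_name('Vikram Saksena')   # True if 'saksena' appears in the list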
Q: I want the actual file name that is returned by a PHP script I am writing a python script that downloads a file given by a URL. Unfortuneatly the URL is in the form of a PHP script i.e. www.website.com/generatefilename.php?file=5233 If you visit the link in a browser, you are prompted to download the actual file and extension. I need to send this link to the downloader, but I can't send the downloader the PHP link. How would I get the full file name in a usable variable? A: What you need to do is examine the Content-Disposition header sent by the PHP script. it will look something like: Content-Disposition: attachment; filename=theFilenameYouWant As to how you actually examine that header it depends on the python code you're currently using to fetch the URL. If you post some code I'll be able to give a more detailed answer. A: urllib.urlretrieve(URL, directory + "\\" + filename + "." + extension) This saved the file generated by the PHP file to the designated folder with the designated name and extension. I configured the downloader to automatically check this folder for new files, so this solution works for me.
I want the actual file name that is returned by a PHP script
I am writing a python script that downloads a file given by a URL. Unfortunately the URL is in the form of a PHP script, i.e. www.website.com/generatefilename.php?file=5233 If you visit the link in a browser, you are prompted to download the actual file and extension. I need to send this link to the downloader, but I can't send the downloader the PHP link. How would I get the full file name in a usable variable?
[ "What you need to do is examine the Content-Disposition header sent by the PHP script. it will look something like:\nContent-Disposition: attachment; filename=theFilenameYouWant\nAs to how you actually examine that header it depends on the python code you're currently using to fetch the URL. If you post some code I'll be able to give a more detailed answer.\n", "urllib.urlretrieve(URL, directory + \"\\\\\" + filename + \".\" + extension)\n\nThis saved the file generated by the PHP file to the designated folder with the designated name and extension. I configured the downloader to automatically check this folder for new files, so this solution works for me.\n" ]
[ 2, 0 ]
[]
[]
[ "php", "python", "scripting", "url" ]
stackoverflow_0002705856_php_python_scripting_url.txt
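A minimal sketch of reading that header with the standard library (Python 2's urllib2); it assumes the PHP script really does send a Content-Disposition header, and uses the placeholder URL from the question:

import urllib2

response = urllib2.urlopen('http://www.website.com/generatefilename.php?file=5233')
disposition = response.info().getheader('Content-Disposition')   # may be None
filename = None
if disposition and 'filename=' in disposition:
    filename = disposition.split('filename=')[-1].strip('" ;')
print filename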
Q: how to scrape html generated by javascript using python? I want to scrape the html generated by javascript , just like what you can see in Firebug. UPDATE: I've found this article: http://blog.motane.lu/2009/07/07/downloading-a-pages-content-with-python-and-webkit/ which use PyQt to solve the problem and it works well for me. BUT another problem occur: I have to login the website first, but I don't know how to simulate login in PyQt .... :( A: Have a look at this article which describes using Windmill to do scrape a page after Javascript has been executed by the browser. This article will show how to extract the desired information using the same three steps when the web page is not written directly using HTML, but is auto-generated using JavaScript to update the DOM tree. They have some examples I am sure you can easily adapt. A: To be precise with terminology, Javascript does not generate HTML. Javascript generates and manipulates the DOM in your browser. The Firebug is showing you HTML representation of that DOM so that it would be readable. The HTML does not actually exist. :) I don't think an out-of-the box easy solution exists. You may want to look at this blog post and comments that have some pointers. A: You could use python spidermonkey, which is a python wrapper to Firefox's engine: http://code.google.com/p/python-spidermonkey/ But the project seems a bit immature.
how to scrape html generated by javascript using python?
I want to scrape the html generated by javascript, just like what you can see in Firebug. UPDATE: I've found this article: http://blog.motane.lu/2009/07/07/downloading-a-pages-content-with-python-and-webkit/ which uses PyQt to solve the problem and it works well for me. BUT another problem occurs: I have to log in to the website first, and I don't know how to simulate the login in PyQt.... :(
[ "Have a look at this article which describes using Windmill to do scrape a page after Javascript has been executed by the browser.\n\nThis article will show how to extract the desired information using the same three steps when the web page is not written directly using HTML, but is auto-generated using JavaScript to update the DOM tree.\n\nThey have some examples I am sure you can easily adapt.\n", "To be precise with terminology, Javascript does not generate HTML. Javascript generates and manipulates the DOM in your browser. The Firebug is showing you HTML representation of that DOM so that it would be readable. The HTML does not actually exist. :)\nI don't think an out-of-the box easy solution exists. You may want to look at this blog post and comments that have some pointers.\n", "You could use python spidermonkey, which is a python wrapper to Firefox's engine: http://code.google.com/p/python-spidermonkey/\nBut the project seems a bit immature.\n" ]
[ 5, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0002707108_python.txt
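One possible way to handle the login, sticking with the QWebPage approach from the linked article: load the login page, use evaluateJavaScript() to fill in and submit the form, then read the DOM of the page that loads next. The URL and element ids below are made-up placeholders, and real sites (redirects, CSRF tokens) may need extra steps:

import sys
from PyQt4.QtCore import QUrl
from PyQt4.QtGui import QApplication
from PyQt4.QtWebKit import QWebPage

app = QApplication(sys.argv)
page = QWebPage()
state = {'logged_in': False}

def on_load_finished(ok):
    if not state['logged_in']:
        # Fill and submit the login form via JavaScript.
        page.mainFrame().evaluateJavaScript(
            "document.getElementById('username').value = 'me';"
            "document.getElementById('password').value = 'secret';"
            "document.forms[0].submit();")
        state['logged_in'] = True
    else:
        # The second loadFinished is the post-login page; dump the rendered DOM.
        print unicode(page.mainFrame().toHtml())
        app.quit()

page.loadFinished.connect(on_load_finished)
page.mainFrame().load(QUrl('http://example.com/login'))
app.exec_()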
Q: Problems inserting file data into sqlite database using python I'm trying to open an image file in python and add that data to an sqlite table. I created the table using: "CREATE TABLE "images" ("id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL , "description" VARCHAR, "image" BLOB );" I am trying to add the image to the db using: imageFile = open(imageName, 'rb') b = sqlite3.Binary(imageFile.read()) targetCursor.execute("INSERT INTO images (image) values(?)", (b,)) targetCursor.execute("SELECT id from images") for id in targetCursor: imageid= id[0] targetCursor.execute("INSERT INTO %s (questionID,imageID) values(?,?)" % table, (questionId, imageid)) When I print the value of 'b' it looks like binary data but when I call: 'select image from images where id = 1' I get '????' printed to the console. Anyone know what I'm doing wrong? A: It works for me with Python 2.6.4, pysqlite (sqlite3.version) 2.4.1, and a png test image. You have to unpack the tuple. >>> import sqlite3 >>> conn = sqlite3.connect(":memory:") >>> targetCursor = conn.cursor() >>> imageName = "blue.png" >>> imageFile = open(imageName, 'rb') >>> b = sqlite3.Binary(imageFile.read()) >>> print b �PNG ▒ IHDR@% ��sRGB��� pHYs ��▒tIME� 0�\"▒'S�A�:hVO\��8�}^c��"]IEND�B`� >>> targetCursor.execute("create table images (id integer primary key, image BLOB)") <sqlite3.Cursor object at 0xb7688e00> >>> targetCursor.execute("insert into images (image) values(?)", (b,)) <sqlite3.Cursor object at 0xb7688e00> >>> targetCursor.execute("SELECT image from images where id = 1") <sqlite3.Cursor object at 0xb7688e00> >>> for image, in targetCursor: ... print image ... �PNG ▒ IHDR@% ��sRGB��� pHYs ��▒tIME� 0�\"▒'S�A�:hVO\��8�}^c��"]IEND�B`� A: Yeah its weird, when I query the database in python, after inserting the binary data, it shows me the data was successfully inserted (it spews binary data to the screen). When I do: sqlite3 database_file.sqlite "SELECT image from images" on the command line, that's when I see the '????'. Perhaps that's just how the 'sqlite3' command prints binary data? That doesn't seem right. I'm using python 2.6.1
Problems inserting file data into sqlite database using python
I'm trying to open an image file in python and add that data to an sqlite table. I created the table using: "CREATE TABLE "images" ("id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL , "description" VARCHAR, "image" BLOB );" I am trying to add the image to the db using: imageFile = open(imageName, 'rb') b = sqlite3.Binary(imageFile.read()) targetCursor.execute("INSERT INTO images (image) values(?)", (b,)) targetCursor.execute("SELECT id from images") for id in targetCursor: imageid= id[0] targetCursor.execute("INSERT INTO %s (questionID,imageID) values(?,?)" % table, (questionId, imageid)) When I print the value of 'b' it looks like binary data but when I call: 'select image from images where id = 1' I get '????' printed to the console. Anyone know what I'm doing wrong?
[ "It works for me with Python 2.6.4, pysqlite (sqlite3.version) 2.4.1, and a png test image. You have to unpack the tuple.\n>>> import sqlite3 \n>>> conn = sqlite3.connect(\":memory:\") \n>>> targetCursor = conn.cursor() \n>>> imageName = \"blue.png\" \n>>> imageFile = open(imageName, 'rb') \n>>> b = sqlite3.Binary(imageFile.read()) \n>>> print b \n�PNG \n▒ \nIHDR@% \n ��sRGB��� pHYs \n\n ��▒tIME�\n0�\\\"▒'S�A�:hVO\\��8�}^c��\"]IEND�B`�\n>>> targetCursor.execute(\"create table images (id integer primary key, image BLOB)\")\n<sqlite3.Cursor object at 0xb7688e00>\n>>> targetCursor.execute(\"insert into images (image) values(?)\", (b,))\n<sqlite3.Cursor object at 0xb7688e00>\n>>> targetCursor.execute(\"SELECT image from images where id = 1\")\n<sqlite3.Cursor object at 0xb7688e00>\n>>> for image, in targetCursor:\n... print image\n...\n�PNG\n▒\nIHDR@%\n ��sRGB��� pHYs\n\n ��▒tIME�\n0�\\\"▒'S�A�:hVO\\��8�}^c��\"]IEND�B`�\n\n", "Yeah its weird, when I query the database in python, after inserting the binary data, it shows me the data was successfully inserted (it spews binary data to the screen). When I do: sqlite3 database_file.sqlite \"SELECT image from images\" on the command line, that's when I see the '????'. Perhaps that's just how the 'sqlite3' command prints binary data? That doesn't seem right. I'm using python 2.6.1\n" ]
[ 2, 0 ]
[]
[]
[ "blob", "python", "sqlite" ]
stackoverflow_0002707070_blob_python_sqlite.txt
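A quick way to check that the bytes really made it into the table unchanged is to read the BLOB back and write it to a file, rather than printing it (the '????' is just how the terminal renders binary data). 'roundtrip.png' is an arbitrary output name:

import sqlite3

conn = sqlite3.connect('database_file.sqlite')
cur = conn.cursor()
cur.execute("SELECT image FROM images WHERE id = 1")
blob = cur.fetchone()[0]            # a buffer of raw bytes, not printable text
with open('roundtrip.png', 'wb') as out:
    out.write(str(blob))            # buffer -> byte string; compare with the original image
conn.close()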
Q: There is a system alert of (13, 'Permission denied'), how to solve that? def upload_file(request, step_id): def handle_uploaded_file (file): current_step = Step.objects.get(pk=step_id) current_project = Project.objects.get(pk=current_step.project.pk) path = "%s/upload/file/%s/%s" % (settings.MEDIA_ROOT, current_project.project_no, current_step.name) if not os.path.exists (path): os.makedirs(path) fd = open(path) for chunk in file.chunks(): fd.write(chunk) fd.close() if request.method == 'POST': form = UploadFileForm(request.POST, request.FILES) if form.is_valid(): handle_uploaded_file(request.FILES['file']) return HttpResponseRedirect('/success/url/') else: form = UploadFileForm() return render_to_response('projects/upload_file.html', { 'step_id': step_id, 'form': form, }) A: Make sure path has the necessary permissions. The user running the python/django process needs to have write permissions. chmod the path to 0777 - this isn't a good mode for production, but it will quickly verify if filesystem permissions are the root of the problem.
There is a system alert of (13, 'Permission denied'), how to solve that?
def upload_file(request, step_id): def handle_uploaded_file (file): current_step = Step.objects.get(pk=step_id) current_project = Project.objects.get(pk=current_step.project.pk) path = "%s/upload/file/%s/%s" % (settings.MEDIA_ROOT, current_project.project_no, current_step.name) if not os.path.exists (path): os.makedirs(path) fd = open(path) for chunk in file.chunks(): fd.write(chunk) fd.close() if request.method == 'POST': form = UploadFileForm(request.POST, request.FILES) if form.is_valid(): handle_uploaded_file(request.FILES['file']) return HttpResponseRedirect('/success/url/') else: form = UploadFileForm() return render_to_response('projects/upload_file.html', { 'step_id': step_id, 'form': form, })
[ "Make sure path has the necessary permissions. The user running the python/django process needs to have write permissions. chmod the path to 0777 - this isn't a good mode for production, but it will quickly verify if filesystem permissions are the root of the problem.\n" ]
[ 2 ]
[]
[]
[ "django", "file_upload", "python" ]
stackoverflow_0002707344_django_file_upload_python.txt
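Besides the filesystem permissions, note that the handler as posted passes the directory path to open() and omits a write mode; opening a directory path can itself raise IOError (on Windows, errno 13), so it would fail even on a writable directory. A corrected excerpt of handle_uploaded_file (the rest of the view unchanged):

if not os.path.exists(path):
    os.makedirs(path)
fd = open(os.path.join(path, file.name), 'wb+')   # a file inside the directory, opened for writing
for chunk in file.chunks():
    fd.write(chunk)
fd.close()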
Q: Python: x-y-plot with matplotlib I want to plot some data. The first column contains the x-data. But matplotlib doesn't plot this. Where is my mistake? import numpy as np from numpy import cos from scipy import * from pylab import plot, show, ylim, yticks from matplotlib import * from pprint import pprint n1 = 1.0 n2 = 1.5 #alpha, beta, intensity data = [ [10, 22, 4.3], [20, 42, 4.2], [30, 62, 3.6], [40, 83, 1.3], [45, 102, 2.8], [50, 123, 3.0], [60, 143, 3.2], [70, 163, 3.8], ] for i in range(len(data)): rhotang1 = (n1 * cos(data[i][0]) - n2 * cos(data[i][1])) rhotang2 = (n1 * cos(data[i][0]) + n2 * cos(data[i][1])) rhotang = rhotang1 / rhotang2 data[i].append(rhotang) #append 4th value pprint(data) x = data[:][0] y1 = data[:][2] y3 = data[:][3] plot(x, y1, x, y3) show() EDIT: http://paste.pocoo.org/show/205534/ But it doesn't work. A: You can do this by converting data to a numpy array: data = np.array(data) # insert this new line after your appends pprint(data) x = data[:,0] # use the multidimensional slicing notation y1 = data[:,2] y3 = data[:,3] plot(x, y1, x, y3) A few additional points: You can do the calculation in a more clear and vectorized way using numpy, like this data = np.array(data) rhotang1 = n1*cos(data[:,0]) - n2*cos(data[:,1]) rhotang2 = n1*cos(data[:,0]) + n2*cos(data[:,1]) y3 = rhotang1 / rhotang2 As you wrote it, your calculation may not give what you want since cos etc take radians as their inputs and your numbers look like degrees. A: x = data[:][0] y1 = data[:][2] y3 = data[:][3] These lines don't do what you think. First they take a slice of the array which is the whole array (that is, just a copy), then they pull out the 0th, 2nd or 3rd ROW from that array, not column. You could try x = [row[0] for row in x] etc. A: Try this: #fresnel formula import numpy as np from numpy import cos from scipy import * from pylab import plot, show, ylim, yticks from matplotlib import * from pprint import pprint n1 = 1.0 n2 = 1.5 #alpha, beta, intensity data = np.array([ [10, 22, 4.3], [20, 42, 4.2], [30, 62, 3.6], [40, 83, 1.3], [45, 102, 2.8], [50, 123, 3.0], [60, 143, 3.2], [70, 163, 3.8], ]) # Populate arrays x = np.array([row[0] for row in data]) y1 = np.array([row[1] for row in data]) rhotang1 = n1*cos(data[:,0]) - n2*cos(data[:,1]) rhotang2 = n1*cos(data[:,0]) + n2*cos(data[:,1]) y3 = rhotang1 / rhotang2 plot(x, y1, 'r--', x, y3, 'g--') show()
Python: x-y-plot with matplotlib
I want to plot some data. The first column contains the x-data. But matplotlib doesn't plot this. Where is my mistake? import numpy as np from numpy import cos from scipy import * from pylab import plot, show, ylim, yticks from matplotlib import * from pprint import pprint n1 = 1.0 n2 = 1.5 #alpha, beta, intensity data = [ [10, 22, 4.3], [20, 42, 4.2], [30, 62, 3.6], [40, 83, 1.3], [45, 102, 2.8], [50, 123, 3.0], [60, 143, 3.2], [70, 163, 3.8], ] for i in range(len(data)): rhotang1 = (n1 * cos(data[i][0]) - n2 * cos(data[i][1])) rhotang2 = (n1 * cos(data[i][0]) + n2 * cos(data[i][1])) rhotang = rhotang1 / rhotang2 data[i].append(rhotang) #append 4th value pprint(data) x = data[:][0] y1 = data[:][2] y3 = data[:][3] plot(x, y1, x, y3) show() EDIT: http://paste.pocoo.org/show/205534/ But it doesn't work.
[ "You can do this by converting data to a numpy array:\ndata = np.array(data) # insert this new line after your appends\n\npprint(data)\nx = data[:,0] # use the multidimensional slicing notation\ny1 = data[:,2]\ny3 = data[:,3]\nplot(x, y1, x, y3)\n\nA few additional points:\nYou can do the calculation in a more clear and vectorized way using numpy, like this\ndata = np.array(data)\nrhotang1 = n1*cos(data[:,0]) - n2*cos(data[:,1])\nrhotang2 = n1*cos(data[:,0]) + n2*cos(data[:,1])\ny3 = rhotang1 / rhotang2\n\nAs you wrote it, your calculation may not give what you want since cos etc take radians as their inputs and your numbers look like degrees.\n", "x = data[:][0]\ny1 = data[:][2]\ny3 = data[:][3]\n\nThese lines don't do what you think.\nFirst they take a slice of the array which is the whole array (that is, just a copy), then they pull out the 0th, 2nd or 3rd ROW from that array, not column.\nYou could try\nx = [row[0] for row in x]\n\netc.\n", "Try this:\n#fresnel formula\n\nimport numpy as np\nfrom numpy import cos\nfrom scipy import *\nfrom pylab import plot, show, ylim, yticks\nfrom matplotlib import *\nfrom pprint import pprint\n\nn1 = 1.0\nn2 = 1.5\n\n#alpha, beta, intensity\ndata = np.array([\n [10, 22, 4.3],\n [20, 42, 4.2],\n [30, 62, 3.6],\n [40, 83, 1.3],\n [45, 102, 2.8],\n [50, 123, 3.0],\n [60, 143, 3.2],\n [70, 163, 3.8],\n ])\n\n# Populate arrays\nx = np.array([row[0] for row in data])\ny1 = np.array([row[1] for row in data])\nrhotang1 = n1*cos(data[:,0]) - n2*cos(data[:,1])\nrhotang2 = n1*cos(data[:,0]) + n2*cos(data[:,1])\ny3 = rhotang1 / rhotang2\n\nplot(x, y1, 'r--', x, y3, 'g--')\nshow()\n\n" ]
[ 5, 2, 0 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0002699353_matplotlib_python.txt
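One more small sketch for the degrees-versus-radians point: numpy's cos expects radians, so if alpha and beta are in degrees they should be converted first. This builds on the np.array(data) conversion and the n1, n2 constants from the answers above:

import numpy as np

alpha = np.radians(data[:, 0])    # degrees -> radians
beta = np.radians(data[:, 1])
y3 = (n1*np.cos(alpha) - n2*np.cos(beta)) / (n1*np.cos(alpha) + n2*np.cos(beta))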
Q: How to slice a list of objects in association of the object attributes I have a list of fixtures.Each fixture has a home club and a away club attribute.I want to slice the list in association of its home club and away club.The sliced list should be of homeclub items and awayclub items. Easier way to implement this is to first slice a list of fixtures.Then make a new list of the corresponding Home Clubs and Away Clubs.I wanted to know if we can do this one step. A: It's not very clear what you're trying to do, but this code will take the first five fixtures, and return a list of tuples, each of which contains a home and an away value of the respective object: result = [(i.home, i.away) for i in fixtures[:5]] This will separate the two into two lists: homes = [i.home for i in fixtures[:5]] aways = [i.away for i in fixtures[:5]] Or on one line: homes, aways = [i.home for i in fixtures[:5]], [i.away for i in fixtures[:5]] A: Not quite the answer you were after, but (assuming [(home1, away1), (home2, away2), ...]) this is about as simple as you'll get. homes = [h for h,a in fixtures] aways = [a for h,a in fixtures] A: Sure, with a bit of work: def split(fixture): home, away = [], [] for i, f in enumerate(fixture): if i >= 5: home.append(f.home) away.append(f.away) return home, away Or: home, away = zip(*itertools.imap(operator.attrgetter('home', 'away'), itertools.islice(fixture, 5, None)))
How to slice a list of objects in association of the object attributes
I have a list of fixtures. Each fixture has a home club and an away club attribute. I want to slice the list in association with its home club and away club. The sliced list should be of homeclub items and awayclub items. The easier way to implement this is to first slice a list of fixtures, then make a new list of the corresponding Home Clubs and Away Clubs. I wanted to know if we can do this in one step.
[ "It's not very clear what you're trying to do, but this code will take the first five fixtures, and return a list of tuples, each of which contains a home and an away value of the respective object:\nresult = [(i.home, i.away) for i in fixtures[:5]]\n\nThis will separate the two into two lists:\nhomes = [i.home for i in fixtures[:5]]\naways = [i.away for i in fixtures[:5]]\n\nOr on one line:\nhomes, aways = [i.home for i in fixtures[:5]], [i.away for i in fixtures[:5]]\n\n", "Not quite the answer you were after, but (assuming [(home1, away1), (home2, away2), ...]) this is about as simple as you'll get.\nhomes = [h for h,a in fixtures]\naways = [a for h,a in fixtures]\n\n", "Sure, with a bit of work:\ndef split(fixture):\n home, away = [], []\n for i, f in enumerate(fixture):\n if i >= 5:\n home.append(f.home)\n away.append(f.away)\n return home, away\n\nOr:\nhome, away = zip(*itertools.imap(operator.attrgetter('home', 'away'),\n itertools.islice(fixture, 5, None)))\n\n" ]
[ 3, 0, 0 ]
[]
[]
[ "list", "object", "python" ]
stackoverflow_0002707413_list_object_python.txt
Q: What category of combinatorial problems appear on the logic games section of the LSAT? EDIT: See Solving "Who owns the Zebra" programmatically? for a similar class of problem There's a category of logic problem on the LSAT that goes like this: Seven consecutive time slots for a broadcast, numbered in chronological order I through 7, will be filled by six song tapes-G, H, L, O, P, S-and exactly one news tape. Each tape is to be assigned to a different time slot, and no tape is longer than any other tape. The broadcast is subject to the following restrictions: L must be played immediately before O. The news tape must be played at some time after L. There must be exactly two time slots between G and P, regardless of whether G comes before P or whether G comes after P. I'm interested in generating a list of permutations that satisfy the conditions as a way of studying for the test and as a programming challenge. However, I'm not sure what class of permutation problem this is. I've generalized the type problem as follows: Given an n-length array A: How many ways can a set of n unique items be arranged within A? Eg. How many ways are there to rearrange ABCDEFG? If the length of the set of unique items is less than the length of A, how many ways can the set be arranged within A if items in the set may occur more than once? Eg. ABCDEF => AABCDEF; ABBCDEF, etc. How many ways can a set of unique items be arranged within A if the items of the set are subject to "blocking conditions"? My thought is to encode the restrictions and then use something like Python's itertools to generate the permutations. Thoughts and suggestions are welcome. A: This is easy to solve (a few lines of code) as an integer program. Using a tool like the GNU Linear Programming Kit, you specify your constraints in a declarative manner and let the solver come up with the best solution. Here's an example of a GLPK program. You could code this using a general-purpose programming language like Python, but this is the type of thing you'll see in the first few chapters of an integer programming textbook. The most efficient algorithms have already been worked out by others. EDIT: to answer Merjit's question: Define: matrix Y where Y_(ij) = 1 if tape i is played before tape j, and 0 otherwise. vector C, where C_i indicates the time slot when i is played (e.g. 1,2,3,4,5,6,7) Large constant M (look up the term for "big M" in an optimization textbook) Minimize the sum of the vector C subject to the following constraints: Y_(ij) != Y_(ji) // If i is before j, then j must not be before i C_j < C_k + M*Y_(kj) // the time slot of j is greater than the time slot of k only if Y_(kj) = 1 C_O - C_L = 1 // L must be played immediately before O C_N > C_L // news tape must be played at some time after L |C_G - C_P| = 2 // You will need to manipulate this a bit to make it a linear constraint That should get you most of the way there. You want to write up the above constraints in the MathProg language's syntax (as shown in the links), and make sure I haven't left out any constraints. Then run the GLPK solver on the constraints and see what it comes up with. A: Okay, so the way I see it, there are two ways to approach this problem: Go about writing a program that will approach this problem head first. This is going to be difficult. But combinatorics teaches us that the easier way to do this is to count all permutations and subtract the ones that don't satisfy your constraints. I would go with number 2. 
You can find all permutations of a given string or list by using this algorithm. Using this algorithm, you can get a list of all permutations. You can now apply a number of filters on this list by checking for the various constraints of the problem. def L_before_O(s): return (s.index('L') - s.index('O') == 1) def N_after_L(s): return (s.index('L') < s.index('N')) def G_and_P(s): return (abs(s.index('G') - s.index('P')) == 2) def all_perms(s): #this is from the link if len(s) <=1: yield s else: for perm in all_perms(s[1:]): for i in range(len(perm)+1): yield perm[:i] + s[0:1] + perm[i:] def get_the_answer(): permutations = [i for i in all_perms('GHLOPSN')] #N is the news tape a = [i for i in permutations if L_before_O(i)] b = [i for i in a if N_after_L(i)] c = [i for i in b if G_and_P(i)] return c I haven't tested this, but this is general idea of how I would go about coding such a question. Hope this helps
What category of combinatorial problems appear on the logic games section of the LSAT?
EDIT: See Solving "Who owns the Zebra" programmatically? for a similar class of problem There's a category of logic problem on the LSAT that goes like this: Seven consecutive time slots for a broadcast, numbered in chronological order I through 7, will be filled by six song tapes-G, H, L, O, P, S-and exactly one news tape. Each tape is to be assigned to a different time slot, and no tape is longer than any other tape. The broadcast is subject to the following restrictions: L must be played immediately before O. The news tape must be played at some time after L. There must be exactly two time slots between G and P, regardless of whether G comes before P or whether G comes after P. I'm interested in generating a list of permutations that satisfy the conditions as a way of studying for the test and as a programming challenge. However, I'm not sure what class of permutation problem this is. I've generalized the type problem as follows: Given an n-length array A: How many ways can a set of n unique items be arranged within A? Eg. How many ways are there to rearrange ABCDEFG? If the length of the set of unique items is less than the length of A, how many ways can the set be arranged within A if items in the set may occur more than once? Eg. ABCDEF => AABCDEF; ABBCDEF, etc. How many ways can a set of unique items be arranged within A if the items of the set are subject to "blocking conditions"? My thought is to encode the restrictions and then use something like Python's itertools to generate the permutations. Thoughts and suggestions are welcome.
[ "This is easy to solve (a few lines of code) as an integer program. Using a tool like the GNU Linear Programming Kit, you specify your constraints in a declarative manner and let the solver come up with the best solution. Here's an example of a GLPK program.\nYou could code this using a general-purpose programming language like Python, but this is the type of thing you'll see in the first few chapters of an integer programming textbook. The most efficient algorithms have already been worked out by others.\nEDIT: to answer Merjit's question:\nDefine:\n\nmatrix Y where Y_(ij) = 1 if tape i\nis played before tape j, and 0\notherwise. \nvector C, where C_i\nindicates the time slot when i is\nplayed (e.g. 1,2,3,4,5,6,7) \nLarge\nconstant M (look up the term for\n\"big M\" in an optimization textbook)\n\nMinimize the sum of the vector C subject to the following constraints:\nY_(ij) != Y_(ji) // If i is before j, then j must not be before i\nC_j < C_k + M*Y_(kj) // the time slot of j is greater than the time slot of k only if Y_(kj) = 1\nC_O - C_L = 1 // L must be played immediately before O\nC_N > C_L // news tape must be played at some time after L\n|C_G - C_P| = 2 // You will need to manipulate this a bit to make it a linear constraint\n\nThat should get you most of the way there. You want to write up the above constraints in the MathProg language's syntax (as shown in the links), and make sure I haven't left out any constraints. Then run the GLPK solver on the constraints and see what it comes up with.\n", "Okay, so the way I see it, there are two ways to approach this problem:\n\nGo about writing a program that will approach this problem head first. This is going to be difficult.\nBut combinatorics teaches us that the easier way to do this is to count all permutations and subtract the ones that don't satisfy your constraints.\n\nI would go with number 2.\nYou can find all permutations of a given string or list by using this algorithm. Using this algorithm, you can get a list of all permutations. You can now apply a number of filters on this list by checking for the various constraints of the problem.\ndef L_before_O(s):\n return (s.index('L') - s.index('O') == 1)\n\ndef N_after_L(s):\n return (s.index('L') < s.index('N'))\n\ndef G_and_P(s):\n return (abs(s.index('G') - s.index('P')) == 2)\n\ndef all_perms(s): #this is from the link\n if len(s) <=1:\n yield s\n else:\n for perm in all_perms(s[1:]):\n for i in range(len(perm)+1):\n yield perm[:i] + s[0:1] + perm[i:]\n\ndef get_the_answer():\n permutations = [i for i in all_perms('GHLOPSN')] #N is the news tape\n a = [i for i in permutations if L_before_O(i)]\n b = [i for i in a if N_after_L(i)]\n c = [i for i in b if G_and_P(i)]\n return c\n\nI haven't tested this, but this is general idea of how I would go about coding such a question.\nHope this helps\n" ]
[ 1, 0 ]
[]
[]
[ "combinations", "combinatorics", "puzzle", "python" ]
stackoverflow_0002707619_combinations_combinatorics_puzzle_python.txt
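A compact version of the brute-force approach using itertools, as mentioned in the question; it follows the same reading of the G/P constraint as the filter functions above (a positional difference of exactly 2):

from itertools import permutations

tapes = 'GHLOPSN'    # N stands for the news tape

def valid(order):
    pos = dict((tape, i) for i, tape in enumerate(order))
    return (pos['O'] - pos['L'] == 1 and          # L immediately before O
            pos['N'] > pos['L'] and               # news some time after L
            abs(pos['G'] - pos['P']) == 2)        # two slots apart, either direction

schedules = [order for order in permutations(tapes) if valid(order)]
print len(schedules), schedules[0]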
Q: Text-based one-on-one chat with Flash interface: what to power the backend? I'm building a website where I hook people up so that they can anonymously vent to strangers. You either choose to be a listener, or a talker, and then you get catapulted into a one-on-one chat room. The reason for the app's construction is because you often can't vent to friends, because your deepest vulnerabilities can often be leveraged against you later on. (Like it or not, this is a part of human nature. Sad.) I'm looking for some insight into how I should architect everything. I found this neat tutorial, http://giantflyingsaucer.com/blog/?p=875, which suggests using python & stackless + flash. Someone else suggested I should try using p2p sockets, but I don't even know where to begin to look for info on that. Any other suggestions? I'd like to keep it simple. :^) A: Unless you expect super high load, this is simple enough that it doesn't really matter what you use on the backend: just pick something you're comfortable with. PHP, Python, Ruby, Even a bash script using CGI - your skill level with the language is likely to make more difference that the language features themselves. A: I would use an XMPP server like ejabberd or OpenFire to power the backend. XMPP contains everything you need for creating chat/real-time applications. You can use a Flex/Flash Actionscript library like Actionscript 3 XIFF to communicate with the XMPP server. A: Flash is user-unfriendly for UI (forms, etc) and it is relatively easy to do what you want using HTML and Javascript on the front-end. One possible approach for reading the messages would be to regularly do an Ajax request from the server for any new messages. Format the new message and insert it into the DOM. You will probably need to answer at least these questions before you continue, though: 1) Are you recreating IRQ (everyone sees your posts), or is this a random one-to-one chat, like chatroulette? 1a) Is this a way for a specific person to talk to another specific person, or is this more like twitter? 2) What is your plan for scaling up if this idea takes off? Memcached should probably be a method of last-resort ("bandaid over a bullet-hole"). What's your roadmap for eventually handling a large volume of messages? 3) Is there any way to ignore users? Talk to certain users? Hide your rants from users? A: Hey Zach I had to create a socket server for a flash game I made. I built my server in C#, but I would use whatever language your familiar with. If you let me know what your most comfortable with I could try to help find a good tutorial. The one thing I spent many hours on was getting flash to work from a website with a socket server. With the newer versions of Flash you need to send back a policy file. In my case this needed to be the first chunk of data sent back to the client when they connected to the socket server. Not sure what to tell you about structuring the back end. I need to know a little bit more about your programming experience. I had an array of all user connections, and was placing them in different "Rooms" so they could play each other. So just some simple arrays and understanding how to send messages to the clients would help you here. If you have any familiarity with C# I would have no problem sending you the source code for my socket server.
Text-based one-on-one chat with Flash interface: what to power the backend?
I'm building a website where I hook people up so that they can anonymously vent to strangers. You either choose to be a listener, or a talker, and then you get catapulted into a one-on-one chat room. The reason for the app's construction is because you often can't vent to friends, because your deepest vulnerabilities can often be leveraged against you later on. (Like it or not, this is a part of human nature. Sad.) I'm looking for some insight into how I should architect everything. I found this neat tutorial, http://giantflyingsaucer.com/blog/?p=875, which suggests using python & stackless + flash. Someone else suggested I should try using p2p sockets, but I don't even know where to begin to look for info on that. Any other suggestions? I'd like to keep it simple. :^)
[ "Unless you expect super high load, this is simple enough that it doesn't really matter what you use on the backend: just pick something you're comfortable with. PHP, Python, Ruby, Even a bash script using CGI - your skill level with the language is likely to make more difference that the language features themselves.\n", "I would use an XMPP server like ejabberd or OpenFire to power the backend. XMPP contains everything you need for creating chat/real-time applications. You can use a Flex/Flash Actionscript library like Actionscript 3 XIFF to communicate with the XMPP server.\n", "Flash is user-unfriendly for UI (forms, etc) and it is relatively easy to do what you want using HTML and Javascript on the front-end.\nOne possible approach for reading the messages would be to regularly do an Ajax request from the server for any new messages. Format the new message and insert it into the DOM.\nYou will probably need to answer at least these questions before you continue, though:\n1) Are you recreating IRQ (everyone sees your posts), or is this a random one-to-one chat, like chatroulette?\n1a) Is this a way for a specific person to talk to another specific person, or is this more like twitter?\n2) What is your plan for scaling up if this idea takes off? Memcached should probably be a method of last-resort (\"bandaid over a bullet-hole\"). What's your roadmap for eventually handling a large volume of messages?\n3) Is there any way to ignore users? Talk to certain users? Hide your rants from users?\n", "Hey Zach I had to create a socket server for a flash game I made. I built my server in C#, but I would use whatever language your familiar with. If you let me know what your most comfortable with I could try to help find a good tutorial.\nThe one thing I spent many hours on was getting flash to work from a website with a socket server. With the newer versions of Flash you need to send back a policy file. In my case this needed to be the first chunk of data sent back to the client when they connected to the socket server.\nNot sure what to tell you about structuring the back end. I need to know a little bit more about your programming experience. I had an array of all user connections, and was placing them in different \"Rooms\" so they could play each other. So just some simple arrays and understanding how to send messages to the clients would help you here.\nIf you have any familiarity with C# I would have no problem sending you the source code for my socket server.\n" ]
[ 3, 2, 1, 1 ]
[]
[]
[ "actionscript", "chat", "flash", "python" ]
stackoverflow_0002691955_actionscript_chat_flash_python.txt
Q: Django ORM and multiprocessing I am using Django ORM in my python script in a decoupled fashion i.e. it's not running in context of a normal Django Project. I am also using the multi processing module. And different process in turn are making queries. The process ran successfully for an hr and exited with this message "IOError: [Errno 32] Broken pipe" Upon futhur diagnosis and debugging this error pops up when I call save() on the model instance. I am wondering Is Django ORM Process save ? Why would this error arise else ? Cheers Ankur Found the Answer I was calling a return after starting the process. This error sneaked in as i did a small cut and paste of a function. A: It's a little hard to say without more information, but the problem is probably caused by having an open database connection as you spawn new processes, and then trying to use that database connection in the separate processes. Don't re-use database connections from the parent process in multiprocessing workers you spawn; always recreate database connections.
Django ORM and multiprocessing
I am using the Django ORM in my python script in a decoupled fashion, i.e. it's not running in the context of a normal Django project. I am also using the multiprocessing module, and the different processes in turn are making queries. The process ran successfully for an hour and exited with this message: "IOError: [Errno 32] Broken pipe" Upon further diagnosis and debugging, this error pops up when I call save() on the model instance. I am wondering: is the Django ORM process-safe? Why else would this error arise? Cheers Ankur Found the Answer I was calling a return after starting the process. This error sneaked in as I did a small cut and paste of a function.
[ "It's a little hard to say without more information, but the problem is probably caused by having an open database connection as you spawn new processes, and then trying to use that database connection in the separate processes. Don't re-use database connections from the parent process in multiprocessing workers you spawn; always recreate database connections.\n" ]
[ 0 ]
[]
[]
[ "django", "django_models", "message_queue", "multiprocessing", "python" ]
stackoverflow_0002707811_django_django_models_message_queue_multiprocessing_python.txt
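A minimal sketch of the "recreate connections" advice with the multiprocessing module; MyModel and chunks are placeholders for whatever model and work-splitting the script actually uses:

import multiprocessing
from django.db import connection

def worker(pks):
    # The first ORM call in this process opens a fresh connection of its own.
    for obj in MyModel.objects.filter(pk__in=pks):
        obj.save()

if __name__ == '__main__':
    connection.close()    # drop the parent's connection before forking the workers
    jobs = [multiprocessing.Process(target=worker, args=(chunk,)) for chunk in chunks]
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()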
Q: Python. Strange class attributes behavior >>> class Abcd: ... a = '' ... menu = ['a', 'b', 'c'] ... >>> a = Abcd() >>> b = Abcd() >>> a.a = 'a' >>> b.a = 'b' >>> a.a 'a' >>> b.a 'b' It's all correct and each object has own 'a', but... >>> a.menu.pop() 'c' >>> a.menu ['a', 'b'] >>> b.menu ['a', 'b'] How could this happen? And how to use list as class attribute? A: This is because the way you're initializing the menu property is setting all of the instances to point to the same list, as opposed to different lists with the same value. Instead, use the __init__ member function of the class to initialize values, thus creating a new list and assigning that list to the property for that particular instance of the class: class Abcd: def __init__(self): self.a = '' self.menu = ['a', 'b', 'c'] A: See class-objects in the tutorial, and notice the use of self. Use instance attributes, not class attributes (and also, new style classes) : >>> class Abcd(object): ... def __init__(self): ... self.a = '' ... self.menu = ['a','b','c'] ... >>> a=Abcd() >>> b=Abcd() >>> a.a='a' >>> b.a='b' >>> a.a 'a' >>> b.a 'b' >>> a.menu.pop() 'c' >>> a.menu ['a', 'b'] >>> b.menu ['a', 'b', 'c'] >>> A: because variables in Python are just "labels" both Abcd.menu and a.menu reference the same list object. in your case you should assign the label to a new object, not modify the object inplace. You can run a.menu = a.menu[:-1] instead of a.menu.pop() to feel the difference
Python. Strange class attributes behavior
>>> class Abcd: ... a = '' ... menu = ['a', 'b', 'c'] ... >>> a = Abcd() >>> b = Abcd() >>> a.a = 'a' >>> b.a = 'b' >>> a.a 'a' >>> b.a 'b' It's all correct and each object has own 'a', but... >>> a.menu.pop() 'c' >>> a.menu ['a', 'b'] >>> b.menu ['a', 'b'] How could this happen? And how to use list as class attribute?
[ "This is because the way you're initializing the menu property is setting all of the instances to point to the same list, as opposed to different lists with the same value.\nInstead, use the __init__ member function of the class to initialize values, thus creating a new list and assigning that list to the property for that particular instance of the class:\nclass Abcd:\n def __init__(self):\n self.a = ''\n self.menu = ['a', 'b', 'c']\n\n", "See class-objects in the tutorial, and notice the use of self.\nUse instance attributes, not class attributes (and also, new style classes) :\n>>> class Abcd(object):\n... def __init__(self):\n... self.a = ''\n... self.menu = ['a','b','c']\n... \n>>> a=Abcd()\n>>> b=Abcd()\n>>> a.a='a'\n>>> b.a='b'\n>>> a.a\n'a'\n>>> b.a\n'b'\n>>> a.menu.pop()\n'c'\n>>> a.menu\n['a', 'b']\n>>> b.menu\n['a', 'b', 'c']\n>>> \n\n", "because variables in Python are just \"labels\"\nboth Abcd.menu and a.menu reference the same list object.\nin your case you should assign the label to a new object, \nnot modify the object inplace.\nYou can run\na.menu = a.menu[:-1]\n\ninstead of\na.menu.pop()\n\nto feel the difference\n" ]
[ 7, 4, 0 ]
[]
[]
[ "attributes", "class", "python" ]
stackoverflow_0002707472_attributes_class_python.txt
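A quick way to see the sharing directly, using the original Abcd class: the class-level list is one object reachable from every instance until an assignment creates a per-instance attribute.

>>> a = Abcd()
>>> b = Abcd()
>>> a.menu is b.menu    # both names resolve to the same class-level list
True
>>> a.a = 'a'           # assignment creates a new instance attribute on a only
>>> a.a is b.a
False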
Q: Socket: Get user information How can I get information about a user's PC connected to my socket A: a socket is a "virtual" channel established between to electronic devices through a network (a bunch of wires). the only informations available about a remote host are those published on the network. the basic informations are those provided in the TCP/IP headers, namely the remote IP address, the size of the receive buffer, and a bunch of useless flags. for any other informations, you will have to request from other services. a reverse DNS lookup will get you a name associated with the IP address. a traceroute will tell you what is the path to the remote computer (or at least to a machine acting as a gateway/proxy to the remote host). a Geolocation request can give you an approximate location of the remote computer. if the remote host is a server itself accessible to the internet through a registered domain name, a WHOIS request can give you the name of the person in charge of the domain. on a LAN (Local Area Network: a home or enterprise network), an ARP or RARP request will get you a MAC address and many more informations (as much as the network administrator put when they configured the network), possibly the exact location of the computer. there are many many more informations available, but only if they were published. if you know what you are looking for and where to query those informations, you can be very successful. if the remote host is quite hidden and uses some simple stealth technics (anonymous proxy) you will get nothing relevant. A: Look here. See "# Echo server program" section. conn, addr = s.accept() print 'Connected by', addr I am unsure if this is what you are looking for, hth. A: You could try asking identd about the connection, but a lot of hosts don't run that or only put up info there that you can't use.
Socket: Get user information
How can I get information about a user's PC connected to my socket?
[ "a socket is a \"virtual\" channel established between to electronic devices through a network (a bunch of wires). the only informations available about a remote host are those published on the network.\nthe basic informations are those provided in the TCP/IP headers, namely the remote IP address, the size of the receive buffer, and a bunch of useless flags. for any other informations, you will have to request from other services.\na reverse DNS lookup will get you a name associated with the IP address. a traceroute will tell you what is the path to the remote computer (or at least to a machine acting as a gateway/proxy to the remote host). a Geolocation request can give you an approximate location of the remote computer. if the remote host is a server itself accessible to the internet through a registered domain name, a WHOIS request can give you the name of the person in charge of the domain. on a LAN (Local Area Network: a home or enterprise network), an ARP or RARP request will get you a MAC address and many more informations (as much as the network administrator put when they configured the network), possibly the exact location of the computer.\nthere are many many more informations available, but only if they were published. if you know what you are looking for and where to query those informations, you can be very successful. if the remote host is quite hidden and uses some simple stealth technics (anonymous proxy) you will get nothing relevant.\n", "Look here. See \"# Echo server program\" section.\nconn, addr = s.accept()\nprint 'Connected by', addr\n\nI am unsure if this is what you are looking for, hth.\n", "You could try asking identd about the connection, but a lot of hosts don't run that or only put up info there that you can't use.\n" ]
[ 3, 1, 0 ]
[]
[]
[ "python", "sockets" ]
stackoverflow_0002707599_python_sockets.txt
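A small sketch of the reverse-DNS lookup mentioned above, starting from the (ip, port) pair that accept() returns (s being a listening socket as in the echo-server example):

import socket

conn, addr = s.accept()
ip = addr[0]
try:
    hostname, aliases, _ = socket.gethostbyaddr(ip)   # reverse DNS; may not be configured
except socket.herror:
    hostname = None
print ip, hostname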
Q: Python Pari Library? Pari/GP is an excellent library for functions relating to number theory. The problem is that there doesn't seem to be an up to date wrapper for python anywhere around, (pari-python uses an old version of pari) and I'm wondering if anyone knows of some other library/wrapper that is similar to pari or one that uses pari. I'm aware of SAGE, but it's far too large for my needs. GMPY is excellent as well, but there are some intrinsic pari functions that I miss, and I'd much rather use python than the provided GP environment. NZMATH, mpmath, scipy and sympy were all taken into consideration as well. On a related note, does anyone have any suggestions on loading the pari dll itself and using the functions contained in it? I've tried to very little success, other than loading it and learning about function pointers. A: Actually, pari-python works with the latest stable release of PARI. And it is very easy to use: >>> from pari import * >>> fibonacci(100) 354224848179261915075 >>> intnum(0,1,lambda x:x**2) 0.3333333333333333333333333333 >>>
Python Pari Library?
Pari/GP is an excellent library for functions relating to number theory. The problem is that there doesn't seem to be an up to date wrapper for python anywhere around, (pari-python uses an old version of pari) and I'm wondering if anyone knows of some other library/wrapper that is similar to pari or one that uses pari. I'm aware of SAGE, but it's far too large for my needs. GMPY is excellent as well, but there are some intrinsic pari functions that I miss, and I'd much rather use python than the provided GP environment. NZMATH, mpmath, scipy and sympy were all taken into consideration as well. On a related note, does anyone have any suggestions on loading the pari dll itself and using the functions contained in it? I've tried to very little success, other than loading it and learning about function pointers.
[ "Actually, pari-python works with the latest stable release of PARI. And it is very easy to use:\n>>> from pari import *\n>>> fibonacci(100)\n354224848179261915075\n>>> intnum(0,1,lambda x:x**2)\n0.3333333333333333333333333333\n>>> \n\n" ]
[ 5 ]
[]
[]
[ "pari", "python" ]
stackoverflow_0002506087_pari_python.txt
Q: Strange Syntax Parsing Error in Python? Am I missing something here? Why shouldn't the code under the "Broken" section work? I'm using Python 2.6. #!/usr/bin/env python def func(a,b,c): print a,b,c #Working: Example #1: p={'c':3} func(1, b=2, c=3, ) #Working: Example #2: func(1, b=2, **p) #Broken: Example #3: func(1, b=2, **p, ) A: This is the relevant bit from the grammar: arglist: (argument ',')* (argument [','] |'*' test (',' argument)* [',' '**' test] |'**' test) The first line here allows putting a comma after the last parameter when not using varargs/kwargs (this is why your first example works). However, you are not allowed to place a comma after the kwargs parameter if it is specified, as shown in the second and third lines. By the way, here is an interesting thing shown by the grammar: These are both legal: f(a=1, b=2, c=3,) f(*v, a=1, b=2, c=3) but this is not: f(*v, a=1, b=2, c=3,) It makes sense not to allow a comma after **kwargs, since it must always be the last parameter. I don't know why the language designers chose not to allow my last example though - maybe an oversight? A: Python usually allows extra commas at the end of comma-lists (in argument lists and container literals). The main goal for this is to make code generation slightly easier (you don't have to special-case the last item or double-special-case a singleton tuple). In the definition of the grammar, **kwargs is pulled out separately and without an extra optional comma. It wouldn't ever help with anything practical like code generation (**kwargs will always be the last thing so you do not have to special-case anything) as far as I can imagine, so I don't know why Python would support it.
Strange Syntax Parsing Error in Python?
Am I missing something here? Why shouldn't the code under the "Broken" section work? I'm using Python 2.6. #!/usr/bin/env python def func(a,b,c): print a,b,c #Working: Example #1: p={'c':3} func(1, b=2, c=3, ) #Working: Example #2: func(1, b=2, **p) #Broken: Example #3: func(1, b=2, **p, )
[ "This is the relevant bit from the grammar:\narglist: (argument ',')* (argument [',']\n |'*' test (',' argument)* [',' '**' test] \n |'**' test)\n\nThe first line here allows putting a comma after the last parameter when not using varargs/kwargs (this is why your first example works). However, you are not allowed to place a comma after the kwargs parameter if it is specified, as shown in the second and third lines.\nBy the way, here is an interesting thing shown by the grammar:\nThese are both legal:\nf(a=1, b=2, c=3,)\nf(*v, a=1, b=2, c=3)\n\nbut this is not:\nf(*v, a=1, b=2, c=3,)\n\nIt makes sense not to allow a comma after **kwargs, since it must always be the last parameter. I don't know why the language designers chose not to allow my last example though - maybe an oversight?\n", "Python usually allows extra commas at the end of comma-lists (in argument lists and container literals). The main goal for this is to make code generation slightly easier (you don't have to special-case the last item or double-special-case a singleton tuple).\nIn the definition of the grammar, **kwargs is pulled out separately and without an extra optional comma. It wouldn't ever help with anything practical like code generation (**kwargs will always be the last thing so you do not have to special-case anything) as far as I can imagine, so I don't know why Python would support it.\n" ]
[ 9, 5 ]
[]
[]
[ "python", "syntax_error" ]
stackoverflow_0002708614_python_syntax_error.txt
Q: xlwt data garbled I retrieve the data of chinese characters from DB and write the data into excel by xlwt, code as below: ws0.write(0,0, unicode(cell, 'big5')) It is ok under Windows, but when I deloyed it under Linux, the data in excel garbled, Could you help to do with it? A: It would help if you posted the code that you actually ran. Assuming that ws0 is a Worksheet object, the correct syntax is ws0.write(row_index, column_index, unicode_text). What does cell refer to, and how did you extract it from what database? What does "the data in excel garbled" mean? What are you using on Linux to view the contents of the XLS file? What did you actually see on the screen? Can you get Chinese characters displayed properly on Linux with other software? Try typing this at the Python interactive prompt on Linux: >>> import xlwt >>> b = xlwt.Workbook() >>> s = b.add_sheet('zh') >>> big5_text = '\xa7A\xa6n\xa1I' >>> u_text = big5_text.decode('big5') >>> s.write(0, 0, u_text) >>> b.save('nihao.xls') Then try opening the XLS file with OpenOffice Calc ... what do you see? Update (1) "The code that you ran" must have been more than 1 line; please show it. (2) Please run the small piece of code I gave you, and report the results. If that works, we can concentrate on things like how you are getting what data out of what database [Please answer that question too] (3) Please answer the question about displaying Chinese under Linux. (4) Consider that seeing "???" instead of Chinese (or whatever) characters is usually the result of unicode_text.encode('some_encoding', 'replace') (or other code with the same intent) with an inappropriate encoding (for example, 'ascii') -- perhaps preceded by a similar decode. xlwt does unicode_text.encode() to store your unicode strings in the file; it uses 'latin1' or 'utf_16le' as required for the encoding, and 'strict', not 'replace', for the next arg. If Excel is showing you "???", it is likely that the data is already garbled before you feed it to xlwt. What does print repr(cell) tell you? (5) If you run the same versions of xlwt and Python with the same input data and the same Python script, the output file from Linux should be identical byte-for-byte with the output file from Windows. Differences in xlwt and Python versions are rather unlikely to make the files differ. Please compare the files that result from the short script that I gave you, using a binary comparison (for example, fc /b ... in a Windows "Command Prompt" window). Please state the versions of Python and xlwt that you are using in each environment. (6) Please consider switching over to the usual forum for xlwt questions ... that way you can easily send me files to look at if necessary, and I get e-mail when a new posting is made, instead of having to poll a website at intervals ...
xlwt data garbled
I retrieve Chinese-character data from the DB and write it into Excel with xlwt, code as below: ws0.write(0,0, unicode(cell, 'big5')) It is OK under Windows, but when I deployed it under Linux, the data in excel garbled. Could you help me with it?
[ "It would help if you posted the code that you actually ran. Assuming that ws0 is a Worksheet object, the correct syntax is ws0.write(row_index, column_index, unicode_text).\nWhat does cell refer to, and how did you extract it from what database?\nWhat does \"the data in excel garbled\" mean? What are you using on Linux to view the contents of the XLS file? What did you actually see on the screen? Can you get Chinese characters displayed properly on Linux with other software?\nTry typing this at the Python interactive prompt on Linux:\n>>> import xlwt\n>>> b = xlwt.Workbook()\n>>> s = b.add_sheet('zh')\n>>> big5_text = '\\xa7A\\xa6n\\xa1I'\n>>> u_text = big5_text.decode('big5')\n>>> s.write(0, 0, u_text)\n>>> b.save('nihao.xls')\n\nThen try opening the XLS file with OpenOffice Calc ... what do you see?\nUpdate\n(1) \"The code that you ran\" must have been more than 1 line; please show it.\n(2) Please run the small piece of code I gave you, and report the results. If that works, we can concentrate on things like how you are getting what data out of what database [Please answer that question too]\n(3) Please answer the question about displaying Chinese under Linux.\n(4) Consider that seeing \"???\" instead of Chinese (or whatever) characters is usually the result of unicode_text.encode('some_encoding', 'replace') (or other code with the same intent) with an inappropriate encoding (for example, 'ascii') -- perhaps preceded by a similar decode. xlwt does unicode_text.encode() to store your unicode strings in the file; it uses 'latin1' or 'utf_16le' as required for the encoding, and 'strict', not 'replace', for the next arg. If Excel is showing you \"???\", it is likely that the data is already garbled before you feed it to xlwt. What does print repr(cell) tell you?\n(5) If you run the same versions of xlwt and Python with the same input data and the same Python script, the output file from Linux should be identical byte-for-byte with the output file from Windows. Differences in xlwt and Python versions are rather unlikely to make the files differ. Please compare the files that result from the short script that I gave you, using a binary comparison (for example, fc /b ... in a Windows \"Command Prompt\" window). Please state the versions of Python and xlwt that you are using in each environment.\n(6) Please consider switching over to the usual forum for xlwt questions ... that way you can easily send me files to look at if necessary, and I get e-mail when a new posting is made, instead of having to poll a website at intervals ...\n" ]
[ 0 ]
[]
[]
[ "python", "xlwt" ]
stackoverflow_0002708530_python_xlwt.txt
Q: Python script repeated auto start up I am designing a python web app, where people can have an email sent to them on a particular day. So a user puts in his emai and date in a form and it gets stored in my database. My script would then search through the database looking for all records of todays date, retrive the email, sends them out and deletes the entry from the table. Is it possible to have a setup, where the script starts up automatically at a give time, say 1 pm everyday, sends out the email and then quits? If I have a continuously running script, i might go over the CPU limit of my shared web hosting. Or is the effect negligible? Ali A: Is it possible to have a setup, where the script starts up automatically at a give time, say 1 pm everyday, sends out the email and then quits? It's surely possible in general, but it entirely depends on what your shared web hosting provider is offering you. For these purposes, you'd use some kind of cron in any version or variant of Unix, Google App Engine, and so on. But since you tell us nothing about your provider and what services it offers you, we can't guess whether it makes such functionality available at all, or in what form. (Incidentally: this isn't really a programming question, so, if you want to post more details and get help, you might have better luck at serverfault.com, the companion site to stackoverflow.com that deals with system administration questions).
Python script repeated auto start up
I am designing a python web app, where people can have an email sent to them on a particular day. So a user puts in his email and a date in a form and it gets stored in my database. My script would then search through the database looking for all records with today's date, retrieve the emails, send them out and delete the entries from the table. Is it possible to have a setup, where the script starts up automatically at a give time, say 1 pm everyday, sends out the email and then quits? If I have a continuously running script, I might go over the CPU limit of my shared web hosting. Or is the effect negligible? Ali
[ "\nIs it possible to have a setup, where\n the script starts up automatically at\n a give time, say 1 pm everyday, sends\n out the email and then quits?\n\nIt's surely possible in general, but it entirely depends on what your shared web hosting provider is offering you. For these purposes, you'd use some kind of cron in any version or variant of Unix, Google App Engine, and so on. But since you tell us nothing about your provider and what services it offers you, we can't guess whether it makes such functionality available at all, or in what form.\n(Incidentally: this isn't really a programming question, so, if you want to post more details and get help, you might have better luck at serverfault.com, the companion site to stackoverflow.com that deals with system administration questions).\n" ]
[ 3 ]
[]
[]
[ "email", "python" ]
stackoverflow_0002708705_email_python.txt
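If the shared host does expose cron (many non-Windows shared hosts do, often through a "scheduled tasks" control-panel page), the usual pattern is a crontab entry that fires the script once a day and lets it exit; the paths below are placeholders:

0 13 * * * /usr/bin/python /home/username/send_due_emails.py >> /home/username/send_due_emails.log 2>&1

The script then runs only for the few seconds it takes to select today's rows, send the messages and delete them, so there is no long-running process to count against a CPU quota.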
Q: How to setup RAM disk drive using python or WMI? The background of my question is associated with Tesseract, the free OCR engine (1985-1995 by HP, now hosting in Google). It specifically requires an input file and an output file; the argument only takes filename (not stream / binary string), so in order to use the wrapper API such as pytesser and / or python-tesser.py, the OCR temp files must be created. I, however, have a lot of images need to OCR; frequent disk write and remove is inevitable (and of course the performance hit). The only choice I could think about is changing the wrapper class and point the temp file to RAM disk, which bring this problem up. If you have better solution, please let me know. Thanks a lot. -M A: Are you on linux? You could try to send a file to the program through a pipe and refer to /dev/fd/0 -- it's the standard input's pathname for the current process. It should work if the application does not seek() through it. A: By searching at Google, I found a possible solution (that does not include WMI, but you can use it through subprocess): Download the devcon utility, kind of a command-line device manager. Then, you can use something like: subprocess.call( ("path_to_devcon\\devcon.exe", "INSTALL", "ramdisk.inf", "ramdisk") ) I hope this gives you a start.
How to setup RAM disk drive using python or WMI?
The background of my question is associated with Tesseract, the free OCR engine (1985-1995 by HP, now hosted by Google). It specifically requires an input file and an output file; the arguments only take filenames (not streams / binary strings), so in order to use a wrapper API such as pytesser and / or python-tesser.py, the OCR temp files must be created. I, however, have a lot of images that need OCR; frequent disk writes and removals are inevitable (and of course the performance hit). The only option I could think of is changing the wrapper class and pointing the temp file to a RAM disk, which brings this problem up. If you have a better solution, please let me know. Thanks a lot. -M
[ "Are you on linux? You could try to send a file to the program through a pipe and refer to /dev/fd/0 -- it's the standard input's pathname for the current process. It should work if the application does not seek() through it.\n", "By searching at Google, I found a possible solution (that does not include WMI, but you can use it through subprocess):\nDownload the devcon utility, kind of a command-line device manager.\nThen, you can use something like:\nsubprocess.call( (\"path_to_devcon\\\\devcon.exe\", \"INSTALL\", \"ramdisk.inf\", \"ramdisk\") )\n\nI hope this gives you a start.\n" ]
[ 0, 0 ]
[]
[]
[ "ocr", "python", "tesseract", "wmi" ]
stackoverflow_0002699318_ocr_python_tesseract_wmi.txt
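On Linux there is often no need to create a RAM disk at all: /dev/shm is a tmpfs (RAM-backed) mount on most distributions, so pointing the wrapper's temporary files there gets the same effect. A rough sketch, assuming the tesseract binary is on the PATH and, as the versions of that era did, appends ".txt" to the output base name it is given:

import os
import subprocess
import tempfile

TMP_DIR = '/dev/shm' if os.path.isdir('/dev/shm') else None   # fall back to the default tmp dir

def ocr_image(image_path):
    out = tempfile.NamedTemporaryFile(suffix='.txt', dir=TMP_DIR, delete=False)
    out.close()
    base = out.name[:-4]                    # tesseract appends the '.txt' itself
    subprocess.call(['tesseract', image_path, base])
    try:
        with open(base + '.txt') as f:
            return f.read()
    finally:
        os.remove(base + '.txt')

The temp file still "exists" as far as tesseract is concerned, but it lives in memory, so the churn of creating and deleting one per image never touches the disk.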
Q: Help to run it in the background Here's a simple python daemon I can't manage to run as a background process: #!/usr/bin/env python import socket host = '' port = 843 backlog = 5 size = 1024 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind((host,port)) s.listen(backlog) while 1: client, address = s.accept() data = client.recv(size) if data == '<policy-file-request/>\0': client.send('<?xml version="1.0"?><cross-domain-policy><allow-access-from domain="*" to-ports="*"/></cross-domain-policy>') client.close() It's a socket policy file server (you may have heard of the restiction Adope put on socket connection - http://www.adobe.com/devnet/flashplayer/articles/socket_policy_files.html); that works well when gets run like an "ordinary" process - "python that_server.py", - but I get problem to run it in the background. Running like so: "that_server.py &", - does not work. EDIT: Here's what I got from the shell: ircd@smoky43g:~$ ls server.py ircd@smoky43g:~$ sudo nohup python server.py & [8] 19817 ircd@smoky43g:~$ [8]+ Stopped sudo nohup python server.py ircd@smoky43g:~$ I run it then just press the enter button - and it says 'stopped'. What is the problem? Without the sudo command, the similiar happen: ircd@smoky43g:~$ nohup python server.py & [9] 20341 ircd@smoky43g:~$ nohup: ignoring input and appending output to `nohup.out' [9] Exit 1 nohup python server.py ircd@smoky43g:~$ EDIT 2: I fount this in the nohup.out file: python: can't open file 'sudo': [Errno 2] No such file or directory Traceback (most recent call last): File "server.py", line 10, in <module> s.bind((host,port)) File "<string>", line 1, in bind socket.error: [Errno 13] Permission denied UPDATE: I have managed to run it using the root account, but could not as the ircd user (that belongs to the suddoers). And the question now is why not? A: Try nohup python that_server.py & Also, You're trying to use a port below 1024 which will require privileged/root access. Try a higher port. A: Where is the output going? nohup.out? What's in there? Is there an exception trace? A: I instrumented your code and it works fine here: $ cat server.py #!/usr/bin/env python import socket import sys host = '' port = 843 backlog = 5 size = 1024 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) print >> sys.stderr, 'socket' s.bind((host,port)) print >> sys.stderr, 'bind' s.listen(backlog) print >> sys.stderr, 'listen' while 1: try: client, address = s.accept() print >> sys.stderr, 'accept' data = client.recv(size) print >> sys.stderr, 'recv' # ignore data because I can't type a '\0' client.send('<?xml version="1.0"?><cross-domain-policy><allow-access-from domain="*" to-ports="*"/></cross-domain-policy>') client.close() print >> sys.stderr, 'close' except Exception as e: print e; s.close(); print >> sys.stderr, 'close' sys.exit(1); $ sudo nohup python server.py & [1] 11218 nohup: ignoring input and appending output to `nohup.out' $ jobs [1]+ Running sudo nohup python server.py & # a couple of telnets to 843 $ jobs [1]+ Running sudo nohup python server.py & $ sudo kill 11218 $ sudo cat nohup.out socket bind listen accept recv close accept recv close A: The problem is probably permissions, as noted in your update. You are connecting to port 843 which is a privileged port -- you need to be root to open that (search Stack Overflow for some other techniques). Sudoers doesn't matter as you aren't using sudo. 
The real problem here is you aren't seeing the error, which would probably have made this easy to figure out (or at least point you to the right problem). You might want to do: (python server.py > output.txt 2>&1) & This will redirect output to a file before putting the process in the background. A: If the root of this issue is to run the script even after you leave the session, you could use a screen command . With screen you can attach and detach sessions.
Help to run it in the background
Here's a simple python daemon I can't manage to run as a background process: #!/usr/bin/env python import socket host = '' port = 843 backlog = 5 size = 1024 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind((host,port)) s.listen(backlog) while 1: client, address = s.accept() data = client.recv(size) if data == '<policy-file-request/>\0': client.send('<?xml version="1.0"?><cross-domain-policy><allow-access-from domain="*" to-ports="*"/></cross-domain-policy>') client.close() It's a socket policy file server (you may have heard of the restiction Adope put on socket connection - http://www.adobe.com/devnet/flashplayer/articles/socket_policy_files.html); that works well when gets run like an "ordinary" process - "python that_server.py", - but I get problem to run it in the background. Running like so: "that_server.py &", - does not work. EDIT: Here's what I got from the shell: ircd@smoky43g:~$ ls server.py ircd@smoky43g:~$ sudo nohup python server.py & [8] 19817 ircd@smoky43g:~$ [8]+ Stopped sudo nohup python server.py ircd@smoky43g:~$ I run it then just press the enter button - and it says 'stopped'. What is the problem? Without the sudo command, the similiar happen: ircd@smoky43g:~$ nohup python server.py & [9] 20341 ircd@smoky43g:~$ nohup: ignoring input and appending output to `nohup.out' [9] Exit 1 nohup python server.py ircd@smoky43g:~$ EDIT 2: I fount this in the nohup.out file: python: can't open file 'sudo': [Errno 2] No such file or directory Traceback (most recent call last): File "server.py", line 10, in <module> s.bind((host,port)) File "<string>", line 1, in bind socket.error: [Errno 13] Permission denied UPDATE: I have managed to run it using the root account, but could not as the ircd user (that belongs to the suddoers). And the question now is why not?
[ "Try\nnohup python that_server.py &\n\nAlso,\nYou're trying to use a port below 1024 which will require privileged/root access. Try a higher port.\n", "Where is the output going? nohup.out? What's in there? Is there an exception trace?\n", "I instrumented your code and it works fine here:\n$ cat server.py\n#!/usr/bin/env python \n\nimport socket \nimport sys\n\nhost = '' \nport = 843 \nbacklog = 5 \nsize = 1024 \ns = socket.socket(socket.AF_INET, socket.SOCK_STREAM) \nprint >> sys.stderr, 'socket'\ns.bind((host,port)) \nprint >> sys.stderr, 'bind'\ns.listen(backlog) \nprint >> sys.stderr, 'listen'\nwhile 1: \n try:\n client, address = s.accept() \n print >> sys.stderr, 'accept'\n data = client.recv(size) \n print >> sys.stderr, 'recv'\n # ignore data because I can't type a '\\0'\n client.send('<?xml version=\"1.0\"?><cross-domain-policy><allow-access-from domain=\"*\" to-ports=\"*\"/></cross-domain-policy>') \n client.close()\n print >> sys.stderr, 'close'\n except Exception as e:\n print e;\n s.close();\n print >> sys.stderr, 'close'\n sys.exit(1);\n$ sudo nohup python server.py &\n[1] 11218\nnohup: ignoring input and appending output to `nohup.out'\n$ jobs\n[1]+ Running sudo nohup python server.py &\n# a couple of telnets to 843\n$ jobs\n[1]+ Running sudo nohup python server.py &\n$ sudo kill 11218\n$ sudo cat nohup.out\nsocket\nbind\nlisten\naccept\nrecv\nclose\naccept\nrecv\nclose\n\n", "The problem is probably permissions, as noted in your update. You are connecting to port 843 which is a privileged port -- you need to be root to open that (search Stack Overflow for some other techniques). Sudoers doesn't matter as you aren't using sudo.\nThe real problem here is you aren't seeing the error, which would probably have made this easy to figure out (or at least point you to the right problem). You might want to do:\n(python server.py > output.txt 2>&1) &\n\nThis will redirect output to a file before putting the process in the background.\n", "If the root of this issue is to run the script even after you leave the session, you could use a screen command . With screen you can attach and detach sessions. \n" ]
[ 1, 1, 1, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0002708212_python.txt
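Since the permission error comes purely from binding a port below 1024, one common pattern — sketched below, assuming the daemon is started as root via sudo and that the ircd account should own the process afterwards — is to bind the privileged port first and then drop root before entering the accept loop:

import os
import pwd
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('', 843))                  # only root may bind ports < 1024
s.listen(5)

if os.getuid() == 0:               # give up root once the socket is bound
    ircd = pwd.getpwnam('ircd')
    os.setgid(ircd.pw_gid)
    os.setuid(ircd.pw_uid)

# ... the policy-file accept loop from the question continues here unchanged

Started with sudo nohup python server.py & (and output redirected somewhere readable), the listener keeps the socket it opened as root, but the long-running process no longer holds root privileges.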
Q: Stopping long-running requests in Pylons I'm working on an application using Pylons and I was wondering if there was a way to make sure it doesn't spend way too much time handling one request. That is, I would like to find a way to put a timer on each request such that when too much time elapses, the request just stops (and possibly returns some kind of error). The application is supposed to allow users to run some complex calculations but I would like to make sure that if a calculation starts taking too much time, we stop it to allow other calculations to take place. A: Rather than terminate a request with an error, a better approach might be to perform long-running calculations in a separate thread (or threads) or process (or processes): When the calculation request is received, it is added to a queue and identified with a unique id. You redirect to a results page referencing the unique ID, which can have a "Please wait, calculating" message and a refresh button (or auto-refresh via a meta tag). The thread or process which does the calculation pops requests from the queue, updates the final result (and perhaps progress information too), which the results page handler will present to the user when refreshed. When the calculation is complete, the returned refresh page will have no refresh button or refresh tag, but just show the final result.
Stopping long-running requests in Pylons
I'm working on an application using Pylons and I was wondering if there was a way to make sure it doesn't spend way too much time handling one request. That is, I would like to find a way to put a timer on each request such that when too much time elapses, the request just stops (and possibly returns some kind of error). The application is supposed to allow users to run some complex calculations but I would like to make sure that if a calculation starts taking too much time, we stop it to allow other calculations to take place.
[ "Rather than terminate a request with an error, a better approach might be to perform long-running calculations in a separate thread (or threads) or process (or processes):\n\nWhen the calculation request is received, it is added to a queue and identified with a unique id. You redirect to a results page referencing the unique ID, which can have a \"Please wait, calculating\" message and a refresh button (or auto-refresh via a meta tag).\nThe thread or process which does the calculation pops requests from the queue, updates the final result (and perhaps progress information too), which the results page handler will present to the user when refreshed.\nWhen the calculation is complete, the returned refresh page will have no refresh button or refresh tag, but just show the final result.\n\n" ]
[ 3 ]
[]
[]
[ "pylons", "python" ]
stackoverflow_0002709371_pylons_python.txt
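A bare-bones version of that queue-plus-worker idea, independent of any particular framework; run_calculation is a stand-in for whatever the existing controller code does, and submit/poll are what the two controllers described above would call:

import Queue
import threading
import uuid

jobs = Queue.Queue()
results = {}                        # job_id -> result, or None while still running

def run_calculation(*args):         # replace with the real (slow) calculation
    return sum(args)

def worker():
    while True:
        job_id, args = jobs.get()
        results[job_id] = run_calculation(*args)

t = threading.Thread(target=worker)
t.setDaemon(True)
t.start()

def submit(args):
    job_id = str(uuid.uuid4())
    results[job_id] = None
    jobs.put((job_id, args))
    return job_id                   # redirect the user to /result/<job_id>

def poll(job_id):
    return results[job_id]          # None means "still calculating, refresh later"

This doesn't forcibly stop a runaway calculation, but it keeps the web workers free, which is usually the real goal; a hard timeout would mean moving the work into a separate process that can be killed.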
Q: OR in regular expression? I have text file with several thousands lines. I want to parse this file into database and decided to write a regexp. Here's part of file: blablabla checked=12 unchecked=1 blablabla unchecked=13 blablabla checked=14 As a result, I would like to get something like (12,1) (0,13) (14,0) Is it possible? A: It's simplest to use two different regexes to pull the two numbers out: r" checked=(\d+)" and r" unchecked=(\d+)". A: import re lines = ["blablabla checked=12 unchecked=1", "blablabla unchecked=13"] p1 = re.compile('checked=(\d)+\sunchecked=(\d)') p2 = re.compile('checked=(\d)') p3 = re.compile('unchecked=(\d)') for line in lines: m = p1.search(line) if m: print m.group(1), m.group(2) else: m = p2.search(line) if m: print m.group(1), "0" else: m = p2.search(line) if m: print "0", m.group(1) A: import re s = """blablabla checked=12 unchecked=1 blablabla unchecked=13 blablabla checked=14""" regex = re.compile(r"blablabla (?:(?:checked=)(\d+))? ?(?:(?:unchecked=)(\d+))?") for line in s.splitlines(): print regex.match(line).groups() This gives you strings (or None if not found), but the idea should be clear. A: An alternative approach: import sys import re r = re.compile(r"((?:un)?checked)=(\d+)") for line in open(sys.argv[1]): d = dict( r.findall(line) ) print d Output: {'checked': '12', 'unchecked': '1'} {'unchecked': '13'} {'checked': '14'} A: This is more generic and reusable, I believe: import re def tuple_producer(input_lines, attributes): """Extract specific attributes from lines 'blabla attribute=value …'""" for line in input_lines: line_attributes= {} for match in re.finditer("(\w+)=(\d+)", line): line_attributes[match.group(1)]= int(match.group(2)) # int cast yield tuple( line_attributes.get(attribute, 0) # int constant for attribute in wanted_attributes) >>> lines= """blablabla checked=12 unchecked=1 blablabla unchecked=13 blablabla checked=14""".split("\n") >>> list(tuple_producer(lines, ("checked", "unchecked"))) [(12, 1), (0, 13), (14, 0)] # and an irrelevant example >>> list(tuple_producer(lines, ("checked", "inexistant"))) [(12, 0), (0, 0), (14, 0)] Note the conversion to integer; if it's undesirable, remove the int casting, and also convert the 0 int constant to "0".
OR in regular expression?
I have a text file with several thousand lines. I want to parse this file into a database and decided to write a regexp. Here's part of the file: blablabla checked=12 unchecked=1 blablabla unchecked=13 blablabla checked=14 As a result, I would like to get something like (12,1) (0,13) (14,0) Is it possible?
[ "It's simplest to use two different regexes to pull the two numbers out: r\" checked=(\\d+)\" and r\" unchecked=(\\d+)\".\n", "import re\n\nlines = [\"blablabla checked=12 unchecked=1\", \"blablabla unchecked=13\"]\n\np1 = re.compile('checked=(\\d)+\\sunchecked=(\\d)')\np2 = re.compile('checked=(\\d)')\np3 = re.compile('unchecked=(\\d)')\nfor line in lines:\n m = p1.search(line)\n if m:\n print m.group(1), m.group(2)\n else:\n m = p2.search(line)\n if m:\n print m.group(1), \"0\"\n else:\n m = p2.search(line)\n if m:\n print \"0\", m.group(1)\n\n", "import re\n\ns = \"\"\"blablabla checked=12 unchecked=1\nblablabla unchecked=13\nblablabla checked=14\"\"\"\n\nregex = re.compile(r\"blablabla (?:(?:checked=)(\\d+))? ?(?:(?:unchecked=)(\\d+))?\")\n\nfor line in s.splitlines():\n print regex.match(line).groups()\n\nThis gives you strings (or None if not found), but the idea should be clear.\n", "An alternative approach:\nimport sys\nimport re\n\nr = re.compile(r\"((?:un)?checked)=(\\d+)\")\n\nfor line in open(sys.argv[1]):\n d = dict( r.findall(line) )\n print d\n\nOutput:\n{'checked': '12', 'unchecked': '1'}\n{'unchecked': '13'}\n{'checked': '14'}\n\n", "This is more generic and reusable, I believe:\nimport re\n\ndef tuple_producer(input_lines, attributes):\n \"\"\"Extract specific attributes from lines 'blabla attribute=value …'\"\"\"\n for line in input_lines:\n line_attributes= {}\n for match in re.finditer(\"(\\w+)=(\\d+)\", line):\n line_attributes[match.group(1)]= int(match.group(2)) # int cast\n yield tuple(\n line_attributes.get(attribute, 0) # int constant\n for attribute in wanted_attributes)\n\n\n>>> lines= \"\"\"blablabla checked=12 unchecked=1\nblablabla unchecked=13\nblablabla checked=14\"\"\".split(\"\\n\")\n>>> list(tuple_producer(lines, (\"checked\", \"unchecked\")))\n[(12, 1), (0, 13), (14, 0)]\n\n# and an irrelevant example\n>>> list(tuple_producer(lines, (\"checked\", \"inexistant\")))\n[(12, 0), (0, 0), (14, 0)]\n\nNote the conversion to integer; if it's undesirable, remove the int casting, and also convert the 0 int constant to \"0\".\n" ]
[ 6, 1, 1, 1, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0002697828_python_regex.txt
Q: Python script to calculate aded combinations from a dictionary I am trying to write a script that will take a dictionary of items, each containing properties of values from 0 - 10, and add the various elements to select which combination of items achieve the desired totals. I also need the script to do this, using only items that have the same "slot" in common. For example: item_list = { 'item_1': {'slot': 'top', 'prop_a': 2, 'prop_b': 0, 'prop_c': 2, 'prop_d': 1 }, 'item_2': {'slot': 'top', 'prop_a': 5, 'prop_b': 0, 'prop_c': 1, 'prop_d':-1 }, 'item_3': {'slot': 'top', 'prop_a': 2, 'prop_b': 5, 'prop_c': 2, 'prop_d':-2 }, 'item_4': {'slot': 'mid', 'prop_a': 5, 'prop_b': 5, 'prop_c':-5, 'prop_d': 0 }, 'item_5': {'slot': 'mid', 'prop_a':10, 'prop_b': 0, 'prop_c':-5, 'prop_d': 0 }, 'item_6': {'slot': 'mid', 'prop_a':-5, 'prop_b': 2, 'prop_c': 3, 'prop_d': 5 }, 'item_7': {'slot': 'bot', 'prop_a': 1, 'prop_b': 3, 'prop_c':-4, 'prop_d': 4 }, 'item_8': {'slot': 'bot', 'prop_a': 2, 'prop_b': 2, 'prop_c': 0, 'prop_d': 0 }, 'item_9': {'slot': 'bot', 'prop_a': 3, 'prop_b': 1, 'prop_c': 4, 'prop_d':-4 }, } The script would then need to select which combinations from the "item_list" dict that using 1 item per "slot" that would achieve a desired result when added. For example, if the desired result was: 'prop_a': 3, 'prop_b': 3, 'prop_c': 8, 'prop_d': 0, the script would select 'item_2', 'item_6', and 'item_9', along with any other combination that worked. 'item_2': {'slot': 'top', 'prop_a': 5, 'prop_b': 0, 'prop_c': 1, 'prop_d':-1 } 'item_6': {'slot': 'mid', 'prop_a':-5, 'prop_b': 2, 'prop_c': 3, 'prop_d': 5 } 'item_9': {'slot': 'bot', 'prop_a': 3, 'prop_b': 1, 'prop_c': 4, 'prop_d':-4 } 'total': 'prop_a': 3, 'prop_b': 3, 'prop_c': 8, 'prop_d': 0 Any ideas how to accomplish this? It does not need to be in python, or even a thorough script, but just an explanation on how to do this in theory would be enough for me. I have tried working out looping through every combination, but that seems to very quickly get our of hand and unmanageable. The actual script will need to do this for about 1,000 items using 20 different "slots", each with 8 properties. Thanks for the help! A: Since the properties can have both positive and negative values, and you need all satisfactory combinations, I believe there is no "essential" optimization possible -- that is, no polynomial-time solution (assuming P != NP...;-). All solutions will come down to enumerating all the one-per-slot combinations and checking the final results, with very minor tweaks possible that may save you some percent effort here or there, but nothing really big. If you have 1000 items in 20 possible slots, say equally distributed at about 50 items per slot, there are around 50**20 possibilities overall, i.e, 9536743164062500000000000000000000 -- about 10**34 (a myriad billions of billions of billions...). You cannot, in general, "prune" any subtree from the "all-solutions search", because no matter the prop values when you have a hypothetical pick for the first 20-p slots, there still might be a pick of the remaining p slots that could satisfy the constraint (or, more than one). If you could find an exact polynomial-time solution for this, a NP-complete problem, you'd basically have revolutionized modern mathematics and computer science -- Turing prizes and Field medals would only be the start of the consequent accolades. This is not very likely. 
To get down to a feasible problem, you'll have to relax your requirements in some ways (accept the possibility of finding just a subset of the solutions, accept a probabilistic rather than deterministic approach, accept approximate solutions, ...). Once you do, some small optimizations may make sense -- for example, start with summing constants (equal to one more than the smallest negative value of each propriety) to all the property values and targets, so that every prop value is > 0 -- now you can sort the slots by (e.g.) value for some property, or the sum of all properties, and do some pruning based on the knowledge that adding one more slot to a partial hypothetical solution will increase each cumulative prop value by at least X and the total by at least Y (so you can prune that branch if either condition makes the running totals exceed the target). This kind of heuristic approximation need not make the big-O behavior any better, in general, but it may reduce the expected multiplier value by enough to get the problem closer to being computationally feasible. But it's not even worth looking for such clever little tricks if there's no requirement relaxation possible: in that case, the problem will stay computationally unfeasible, so looking for clever little optimizations would not be practically productive anyway. A: This problem is essentially a generalization of the subset-sum problem (which is NP-complete, yes) to multiple dimensions. To restate the problem (to make sure we're solving the same problem): you have 1000 items divided into 20 classes (which you call slots). Each item has an integer value in [-10,10] for each of 8 properties; thus each item can be considered to have a value which is an 8-dimensional vector. You want to pick one item from each slot, so that the total value (adding these 8-dimensional vectors) is a given vector. In the example you gave, you have 4 dimensions, and the 9 items in 3 classes have values (2,0,2,1), (5,0,1,-1), … etc., and you want to pick one item from each class to make the sum (3,3,8,0). Right? Brute-force First, there is the brute-force search that enumerates all possibilities. Assuming your 1000 items are divided equally into the 20 classes (so you have 50 in each), you have 50 choices for each class, which means you'd have to check 5020=9536743164062500000000000000000000 choices (and for each of them you need to add up the 20 elements along each of the 8 coordinates and check, so the running time would be ∝ 5020·20·8): this is not feasible. Dynamic-programming, single-shot Then there is the dynamic-programming solution, which is different, and in practice often works where brute-force is infeasible, but in this case unfortunately seems infeasible as well. (You'd improve it exponentially if you got better bounds on your "property values".) The idea here is to keep track of one way of reaching each possible sum. The sum of 20 numbers from [-10,10] lies in [-200,200], so there are "only" 4008=655360000000000000000 possible sums for your 8-dimensional vector. (This is a tiny fraction of the other search space, but it's no consolation to you. You can also take, for each "property", the difference between the sums of [largest item in each class] and [smallest item in each class] to replace the 400 with a smaller number.) The idea of the dynamic-programming algorithm is the following. 
Let last[(a,b,c,d,e,f,g,h)][k] denote one item you can take from the kth class (along with one item each from the first k-1 classes) to make the sum exactly (a,b,c,d,e,f,g,h). Then, pseudocode: for k=1 to 20: for each item i in class k: for each vector v for which last[v][k-1] is not null: last[v + value(i)][k] = i Then, if your desired final sum is s, you pick item last[s][k] from the kth class, item last[s-value(i)][k-1] from the (k-1)th class, and so on. This takes time ∝ 20·50·4008·8 in the worst case (only a loose upper bound, not a tight analysis). Dynamic-programming, separately So much for "perfect" solutions. However, if you allow heuristic solutions and those that "will most likely work in practice", you can do better (even for solving the problem exactly). For instance, you can solve the problem separately for each of the 8 dimensions. This is even easier to implement, takes only time ∝ 20·50·400·8=3200000 in the worst case, and you can do it quite easily. If you keep last[][] as a list, instead of a single element, then at the end you have (effectively) a list of subsets which achieve the given sum for that coordinate (in "product form"). In practice, not many subsets may add up exactly to the sum you want, so you can start with the coordinate for which the number of subsets is smallest, then try each of those subsets for the other 7 coordinates. The complexity of this step depends on the data in the problem, but I suspect (or one can hope) that either (1) there will be very few sets with equal sums, in which case this intersection will whittle down the number of sets to check, or (2) there will be many sets with a given sum, in which case you'll find one quite early. In any case, doing the dynamic-programming separately for each coordinate first is definitely going to allow you to search over a much smaller space in the second stage. Approximate algorithms If you don't need the sums to be exactly equal and will accept sums that are within a certain factor of your required sum, there is a well-known idea used to get an FPTAS (fully polynomial-time approximation scheme) for the subset-sum problem, which runs in time polynomial in (number of items, etc.) and 1/ε. I've exhausted my time to explain this, but you can look it up — basically, it just replaces the 4008 space by a smaller one, by e.g. rounding numbers up to the nearest multiple of 5, or whatever. A: This sounds like a variation of the Knapsack problem, which is commonly solved with dynamic programming. But, you could probably write a fairly simple solution (but slower) using recursion: def GetItemsForSlot(item_list, slot): return [ (k,v) for (k,v) in item_list.items() if v['slot'] == slot] def SubtractWeights(current_weights, item_weights): remaining_weights = {} for (k,v) in current_weights.items(): remaining_weights[k] = current_weights[k] - item_weights[k] return remaining_weights def AllWeightsAreZero(remaining_weights): return not [v for v in remaining_weights.values() if v != 0] def choose_items(item_list, remaining_weights, available_slots, accumulated_items=[ ]): print "choose_items: ", remaining_weights, available_slots, \ accumulated_items # Base case: we have no more available slots. if not available_slots: if AllWeightsAreZero(remaining_weights): # This is a solution. print "SOLUTION FOUND: ", accumulated_items return else: # This had remaining weight, not a solution. 
return # Pick the next available_slot slot = available_slots[0] # Iterate over each item for this slot, checking to see if they're in a # solution. for name, properties in GetItemsForSlot(item_list, slot): choose_items(item_list, # pass the items recursively SubtractWeights(remaining_weights, properties), available_slots[1:], # pass remaining slots accumulated_items + [name]) # Add this item if __name__ == "__main__": total_weights = { 'prop_a': 3, 'prop_b': 3, 'prop_c': 8, 'prop_d': 0 } choose_items(item_list, total_weights, ["top", "mid", "bot"]) This was tested, and seemed to work. No promises though :) Keeping slot & prop_a as properties of the same object made it a little harder to work with. I'd suggest using classes instead of a dictionary to make the code easier to understand. A: I have tried working out looping through every combination, but that seems to very quickly get our of hand and unmanageable. The actual script will need to do this for about 1,000 items using 20 different "slots", each with 8 properties. It might help your thinking to load the structure in a nice object hierarchy first and then solve it piecewise. Example: class Items(dict): def find(self, **clauses): # TODO! class Slots(dict): # TODO! items = Items() for item, slots in item_list.items(): items[item] = Slots(slots) # consider abstracting out slot based on location (top, mid, bot) too print items.find(prop_a=3, prop_b=3, prop_c=8, prop_d=0)
Python script to calculate added combinations from a dictionary
I am trying to write a script that will take a dictionary of items, each containing properties of values from 0 - 10, and add the various elements to select which combination of items achieve the desired totals. I also need the script to do this, using only items that have the same "slot" in common. For example: item_list = { 'item_1': {'slot': 'top', 'prop_a': 2, 'prop_b': 0, 'prop_c': 2, 'prop_d': 1 }, 'item_2': {'slot': 'top', 'prop_a': 5, 'prop_b': 0, 'prop_c': 1, 'prop_d':-1 }, 'item_3': {'slot': 'top', 'prop_a': 2, 'prop_b': 5, 'prop_c': 2, 'prop_d':-2 }, 'item_4': {'slot': 'mid', 'prop_a': 5, 'prop_b': 5, 'prop_c':-5, 'prop_d': 0 }, 'item_5': {'slot': 'mid', 'prop_a':10, 'prop_b': 0, 'prop_c':-5, 'prop_d': 0 }, 'item_6': {'slot': 'mid', 'prop_a':-5, 'prop_b': 2, 'prop_c': 3, 'prop_d': 5 }, 'item_7': {'slot': 'bot', 'prop_a': 1, 'prop_b': 3, 'prop_c':-4, 'prop_d': 4 }, 'item_8': {'slot': 'bot', 'prop_a': 2, 'prop_b': 2, 'prop_c': 0, 'prop_d': 0 }, 'item_9': {'slot': 'bot', 'prop_a': 3, 'prop_b': 1, 'prop_c': 4, 'prop_d':-4 }, } The script would then need to select which combinations from the "item_list" dict that using 1 item per "slot" that would achieve a desired result when added. For example, if the desired result was: 'prop_a': 3, 'prop_b': 3, 'prop_c': 8, 'prop_d': 0, the script would select 'item_2', 'item_6', and 'item_9', along with any other combination that worked. 'item_2': {'slot': 'top', 'prop_a': 5, 'prop_b': 0, 'prop_c': 1, 'prop_d':-1 } 'item_6': {'slot': 'mid', 'prop_a':-5, 'prop_b': 2, 'prop_c': 3, 'prop_d': 5 } 'item_9': {'slot': 'bot', 'prop_a': 3, 'prop_b': 1, 'prop_c': 4, 'prop_d':-4 } 'total': 'prop_a': 3, 'prop_b': 3, 'prop_c': 8, 'prop_d': 0 Any ideas how to accomplish this? It does not need to be in python, or even a thorough script, but just an explanation on how to do this in theory would be enough for me. I have tried working out looping through every combination, but that seems to very quickly get our of hand and unmanageable. The actual script will need to do this for about 1,000 items using 20 different "slots", each with 8 properties. Thanks for the help!
[ "Since the properties can have both positive and negative values, and you need all satisfactory combinations, I believe there is no \"essential\" optimization possible -- that is, no polynomial-time solution (assuming P != NP...;-). All solutions will come down to enumerating all the one-per-slot combinations and checking the final results, with very minor tweaks possible that may save you some percent effort here or there, but nothing really big.\nIf you have 1000 items in 20 possible slots, say equally distributed at about 50 items per slot, there are around 50**20 possibilities overall, i.e, 9536743164062500000000000000000000 -- about 10**34 (a myriad billions of billions of billions...). You cannot, in general, \"prune\" any subtree from the \"all-solutions search\", because no matter the prop values when you have a hypothetical pick for the first 20-p slots, there still might be a pick of the remaining p slots that could satisfy the constraint (or, more than one).\nIf you could find an exact polynomial-time solution for this, a NP-complete problem, you'd basically have revolutionized modern mathematics and computer science -- Turing prizes and Field medals would only be the start of the consequent accolades. This is not very likely.\nTo get down to a feasible problem, you'll have to relax your requirements in some ways (accept the possibility of finding just a subset of the solutions, accept a probabilistic rather than deterministic approach, accept approximate solutions, ...).\nOnce you do, some small optimizations may make sense -- for example, start with summing constants (equal to one more than the smallest negative value of each propriety) to all the property values and targets, so that every prop value is > 0 -- now you can sort the slots by (e.g.) value for some property, or the sum of all properties, and do some pruning based on the knowledge that adding one more slot to a partial hypothetical solution will increase each cumulative prop value by at least X and the total by at least Y (so you can prune that branch if either condition makes the running totals exceed the target). This kind of heuristic approximation need not make the big-O behavior any better, in general, but it may reduce the expected multiplier value by enough to get the problem closer to being computationally feasible.\nBut it's not even worth looking for such clever little tricks if there's no requirement relaxation possible: in that case, the problem will stay computationally unfeasible, so looking for clever little optimizations would not be practically productive anyway.\n", "This problem is essentially a generalization of the subset-sum problem (which is NP-complete, yes) to multiple dimensions. To restate the problem (to make sure we're solving the same problem): you have 1000 items divided into 20 classes (which you call slots). Each item has an integer value in [-10,10] for each of 8 properties; thus each item can be considered to have a value which is an 8-dimensional vector. You want to pick one item from each slot, so that the total value (adding these 8-dimensional vectors) is a given vector.\nIn the example you gave, you have 4 dimensions, and the 9 items in 3 classes have values (2,0,2,1), (5,0,1,-1), … etc., and you want to pick one item from each class to make the sum (3,3,8,0). Right?\nBrute-force\nFirst, there is the brute-force search that enumerates all possibilities. 
Assuming your 1000 items are divided equally into the 20 classes (so you have 50 in each), you have 50 choices for each class, which means you'd have to check 5020=9536743164062500000000000000000000 choices (and for each of them you need to add up the 20 elements along each of the 8 coordinates and check, so the running time would be ∝ 5020·20·8): this is not feasible.\nDynamic-programming, single-shot\nThen there is the dynamic-programming solution, which is different, and in practice often works where brute-force is infeasible, but in this case unfortunately seems infeasible as well. (You'd improve it exponentially if you got better bounds on your \"property values\".) The idea here is to keep track of one way of reaching each possible sum. The sum of 20 numbers from [-10,10] lies in [-200,200], so there are \"only\" 4008=655360000000000000000 possible sums for your 8-dimensional vector. (This is a tiny fraction of the other search space, but it's no consolation to you. You can also take, for each \"property\", the difference between the sums of [largest item in each class] and [smallest item in each class] to replace the 400 with a smaller number.) The idea of the dynamic-programming algorithm is the following.\n\nLet last[(a,b,c,d,e,f,g,h)][k] denote one item you can take from the kth class (along with one item each from the first k-1 classes) to make the sum exactly (a,b,c,d,e,f,g,h).\nThen, pseudocode:\nfor k=1 to 20:\n for each item i in class k:\n for each vector v for which last[v][k-1] is not null:\n last[v + value(i)][k] = i\n\n\nThen, if your desired final sum is s, you pick item last[s][k] from the kth class, item last[s-value(i)][k-1] from the (k-1)th class, and so on. This takes time ∝ 20·50·4008·8 in the worst case (only a loose upper bound, not a tight analysis).\nDynamic-programming, separately\nSo much for \"perfect\" solutions. However, if you allow heuristic solutions and those that \"will most likely work in practice\", you can do better (even for solving the problem exactly). For instance, you can solve the problem separately for each of the 8 dimensions. This is even easier to implement, takes only time ∝ 20·50·400·8=3200000 in the worst case, and you can do it quite easily. If you keep last[][] as a list, instead of a single element, then at the end you have (effectively) a list of subsets which achieve the given sum for that coordinate (in \"product form\"). In practice, not many subsets may add up exactly to the sum you want, so you can start with the coordinate for which the number of subsets is smallest, then try each of those subsets for the other 7 coordinates. The complexity of this step depends on the data in the problem, but I suspect (or one can hope) that either (1) there will be very few sets with equal sums, in which case this intersection will whittle down the number of sets to check, or (2) there will be many sets with a given sum, in which case you'll find one quite early.\nIn any case, doing the dynamic-programming separately for each coordinate first is definitely going to allow you to search over a much smaller space in the second stage.\nApproximate algorithms\nIf you don't need the sums to be exactly equal and will accept sums that are within a certain factor of your required sum, there is a well-known idea used to get an FPTAS (fully polynomial-time approximation scheme) for the subset-sum problem, which runs in time polynomial in (number of items, etc.) and 1/ε. 
I've exhausted my time to explain this, but you can look it up — basically, it just replaces the 4008 space by a smaller one, by e.g. rounding numbers up to the nearest multiple of 5, or whatever.\n", "This sounds like a variation of the Knapsack problem, which is commonly solved with dynamic programming.\nBut, you could probably write a fairly simple solution (but slower) using recursion:\ndef GetItemsForSlot(item_list, slot):\n return [ (k,v) for (k,v) in item_list.items() if v['slot'] == slot]\n\ndef SubtractWeights(current_weights, item_weights):\n remaining_weights = {}\n for (k,v) in current_weights.items():\n remaining_weights[k] = current_weights[k] - item_weights[k]\n return remaining_weights\n\ndef AllWeightsAreZero(remaining_weights):\n return not [v for v in remaining_weights.values() if v != 0]\n\ndef choose_items(item_list, remaining_weights, available_slots,\n accumulated_items=[ ]):\n print \"choose_items: \", remaining_weights, available_slots, \\\n accumulated_items\n # Base case: we have no more available slots.\n if not available_slots:\n if AllWeightsAreZero(remaining_weights):\n # This is a solution.\n print \"SOLUTION FOUND: \", accumulated_items\n return\n else:\n # This had remaining weight, not a solution.\n return\n\n # Pick the next available_slot\n slot = available_slots[0]\n # Iterate over each item for this slot, checking to see if they're in a\n # solution.\n for name, properties in GetItemsForSlot(item_list, slot):\n choose_items(item_list, # pass the items recursively\n SubtractWeights(remaining_weights, properties),\n available_slots[1:], # pass remaining slots\n accumulated_items + [name]) # Add this item\n\n\n\n\nif __name__ == \"__main__\":\n total_weights = {\n 'prop_a': 3,\n 'prop_b': 3,\n 'prop_c': 8,\n 'prop_d': 0\n }\n\n choose_items(item_list, total_weights, [\"top\", \"mid\", \"bot\"])\n\nThis was tested, and seemed to work. No promises though :)\nKeeping slot & prop_a as properties of the same object made it a little harder to work with. I'd suggest using classes instead of a dictionary to make the code easier to understand.\n", "\nI have tried working out looping through every combination, but that seems to very quickly get our of hand and unmanageable. The actual script will need to do this for about 1,000 items using 20 different \"slots\", each with 8 properties.\n\nIt might help your thinking to load the structure in a nice object hierarchy first and then solve it piecewise.\nExample:\nclass Items(dict):\n def find(self, **clauses):\n # TODO!\n\nclass Slots(dict):\n # TODO!\n\nitems = Items()\nfor item, slots in item_list.items():\n items[item] = Slots(slots)\n # consider abstracting out slot based on location (top, mid, bot) too\n\nprint items.find(prop_a=3, prop_b=3, prop_c=8, prop_d=0)\n\n" ]
[ 7, 4, 3, 1 ]
[]
[]
[ "algorithm", "combinations", "combinatorics", "language_agnostic", "python" ]
stackoverflow_0002708913_algorithm_combinations_combinatorics_language_agnostic_python.txt
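For what it's worth, the layered dynamic programme described in the second answer is short to write down concretely. This sketch keeps one representative combination per reachable sum (so it finds a solution if one exists, not all of them); item_list is the dictionary from the question:

def solve(item_list, slots, target):
    props = sorted(target)                       # fixed ordering of the property names
    layers = {(0,) * len(props): []}             # reachable partial sum -> one item list
    for slot in slots:
        items_here = [(n, v) for n, v in item_list.items() if v['slot'] == slot]
        nxt = {}
        for partial, chosen in layers.items():
            for name, v in items_here:
                s = tuple(p + v[k] for p, k in zip(partial, props))
                if s not in nxt:                 # keep a single representative per sum
                    nxt[s] = chosen + [name]
        layers = nxt
    return layers.get(tuple(target[k] for k in props))

print solve(item_list, ['top', 'mid', 'bot'],
            {'prop_a': 3, 'prop_b': 3, 'prop_c': 8, 'prop_d': 0})

On the question's data this prints ['item_2', 'item_6', 'item_9']. The work per slot is bounded by the number of distinct stored sums rather than the number of raw combinations, which is exactly the saving that answer describes; enumerating every valid combination instead would mean keeping a list of item lists per sum and accepting the blow-up that entails.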
Q: Setting the vim color theme for highlighted braces How do you change the vim color scheme for highlighted braces? I'm looking to actually edit the .vim theme file to make the change permanent. Regards, Craig A: The automatic highlight colour for matching brackets is called MatchParen. You can change the colour in your .vimrc by doing eg: highlight MatchParen cterm=bold ctermfg=cyan A: After reading the FAQ, I can answer my own question. :) 24.9. Is there a built-in function to syntax-highlight the corresponding matching bracket? No. Vim doesn't support syntax-highlighting matching brackets. You can try using the plugin developed by Charles Campbell: http://vim.sourceforge.net/tips/tip.php?tip_id=177 You can jump to a matching bracket using the '%' key. You can set the 'showmatch' option to temporarily jump to a matching bracket when in insert mode.
Setting the vim color theme for highlighted braces
How do you change the vim color scheme for highlighted braces? I'm looking to actually edit the .vim theme file to make the change permanent. Regards, Craig
[ "The automatic highlight colour for matching brackets is called MatchParen. You can change the colour in your .vimrc by doing eg:\nhighlight MatchParen cterm=bold ctermfg=cyan\n\n", "After reading the FAQ, I can answer my own question. :)\n\n24.9. Is there a built-in function to syntax-highlight the corresponding\n matching bracket?\nNo. Vim doesn't support syntax-highlighting matching brackets. You can try\n using the plugin developed by Charles Campbell:\nhttp://vim.sourceforge.net/tips/tip.php?tip_id=177\nYou can jump to a matching bracket using the '%' key. You can set the\n 'showmatch' option to temporarily jump to a matching bracket when in insert\n mode.\n\n" ]
[ 14, 3 ]
[]
[]
[ "python", "vim" ]
stackoverflow_0002709064_python_vim.txt
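To bake the same change into a colour scheme file rather than .vimrc, the same sort of highlight command can be dropped into the scheme itself (for example a personal copy saved as ~/.vim/colors/mytheme.vim — the path and colour values here are just an illustration):

highlight MatchParen cterm=bold ctermfg=cyan guifg=cyan

Note that switching to a different colour scheme discards the setting, so if you change schemes often it is more robust to keep the line in .vimrc, or re-apply it from an autocmd on the ColorScheme event.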
Q: setup.py adding options (aka setup.py --enable-feature ) I'm looking for a way to include some feature in a python (extension) module in installation phase. In a practical manner: I have a python library that has 2 implementations of the same function, one internal (slow) and one that depends from an external library (fast, in C). I want that this library is optional and can be activated at compile/install time using a flag like: python setup.py install # (it doesn't include the fast library) python setup.py --enable-fast install I have to use Distutils, however all solution are well accepted! A: The docs for distutils include a section on extending the standard functionality. The relevant suggestion seems to be to subclass the relevant classes from the distutils.command.* modules (such as build_py or install) and tell setup to use your new versions (through the cmdclass argument, which is a dictionary mapping commands to classes which are to be used to execute them). See the source of any of the command classes (e.g. the install command) to get a good idea of what one has to do to add a new option. A: An example of exactly what you want is the sqlalchemy's cextensions, which are there specifically for the same purpose - faster C implementation. In order to see how SA implemented it you need to look at 2 files: 1) setup.py. As you can see from the extract below, they handle the cases with setuptools and distutils: try: from setuptools import setup, Extension, Feature except ImportError: from distutils.core import setup, Extension Feature = None Later there is a check if Feature: and the extension is configured properly for each case using variable extra, which is later added to the setup() function. 2) base.py: here look at how BaseRowProxy is defined: try: from sqlalchemy.cresultproxy import BaseRowProxy except ImportError: class BaseRowProxy(object): #.... So basically once C extensions are installed (using --with-cextensions flag during setup), the C implementation will be used. Otherwise, pure Python implementation of the class/function is used.
setup.py adding options (aka setup.py --enable-feature)
I'm looking for a way to include an optional feature in a Python (extension) module at installation time. In practical terms: I have a Python library that has 2 implementations of the same function, one internal (slow) and one that depends on an external library (fast, in C). I want this library to be optional, activated at compile/install time using a flag like: python setup.py install # (it doesn't include the fast library) python setup.py --enable-fast install I have to use Distutils, however all solutions are welcome!
[ "The docs for distutils include a section on extending the standard functionality. The relevant suggestion seems to be to subclass the relevant classes from the distutils.command.* modules (such as build_py or install) and tell setup to use your new versions (through the cmdclass argument, which is a dictionary mapping commands to classes which are to be used to execute them). See the source of any of the command classes (e.g. the install command) to get a good idea of what one has to do to add a new option.\n", "An example of exactly what you want is the sqlalchemy's cextensions, which are there specifically for the same purpose - faster C implementation. In order to see how SA implemented it you need to look at 2 files:\n1) setup.py. As you can see from the extract below, they handle the cases with setuptools and distutils:\ntry:\n from setuptools import setup, Extension, Feature\nexcept ImportError:\n from distutils.core import setup, Extension\n Feature = None\n\nLater there is a check if Feature: and the extension is configured properly for each case using variable extra, which is later added to the setup() function.\n2) base.py: here look at how BaseRowProxy is defined:\ntry:\n from sqlalchemy.cresultproxy import BaseRowProxy\nexcept ImportError:\n class BaseRowProxy(object):\n #....\n\nSo basically once C extensions are installed (using --with-cextensions flag during setup), the C implementation will be used. Otherwise, pure Python implementation of the class/function is used.\n" ]
[ 4, 2 ]
[]
[]
[ "distutils", "packaging", "python" ]
stackoverflow_0002709278_distutils_packaging_python.txt
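Neither answer above spells out the flag wiring itself, so here is a minimal sketch of one way to do it by subclassing the build_ext command, in the spirit of the distutils-subclassing suggestion. The package name mylib, the extension mylib._fast, and src/fast.c are invented for illustration, and because the flag is attached to the build_ext command the invocation becomes python setup.py build_ext --enable-fast install rather than a truly global option:

# setup.py -- sketch only; names are hypothetical
from distutils.core import setup, Extension
from distutils.command.build_ext import build_ext

class build_ext_optional(build_ext):
    # build_ext with an extra --enable-fast boolean flag
    user_options = build_ext.user_options + [
        ('enable-fast', None, 'also build the optional C speedup extension'),
    ]
    boolean_options = build_ext.boolean_options + ['enable-fast']

    def initialize_options(self):
        build_ext.initialize_options(self)
        self.enable_fast = False

    def build_extensions(self):
        if not self.enable_fast:
            # drop the optional extension unless --enable-fast was given
            self.extensions = [e for e in self.extensions
                               if e.name != 'mylib._fast']
        build_ext.build_extensions(self)

setup(
    name='mylib',
    version='0.1',
    packages=['mylib'],
    ext_modules=[Extension('mylib._fast', sources=['src/fast.c'])],
    cmdclass={'build_ext': build_ext_optional},
)

At import time the library then falls back exactly as in the SQLAlchemy example: try to import the compiled module and, on ImportError, use the pure-Python implementation.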
Q: Conventional Approaches for Passing Data to Back-End? I'm fairly new to web development, so please pardon the painfully newbie question that's about to follow. My computer science class group and I are developing a web application for class, which is built in Python (under Django) and uses jQuery on the front end. It's primarily an Ajax-ified application, and passing data from the backend to the front end is done through Ajax calls to specific URLs which return in the JSON format. This is probably a stupid question, but what's the conventional approach for passing data in the opposite direction? We don't want to reload the page or anything, so is it an Ajax pass going the other way or something? A: You simply send an AJAX request and pack all the data into POST request params, then read it in Django. An example of a basic voting app in Django + Ajax: http://lethain.com/entry/2007/dec/11/two-faced-django-part-5-jquery-ajax/
Conventional Approaches for Passing Data to Back-End?
I'm fairly new to web development, so please pardon the painfully newbie question that's about to follow. My computer science class group and I are developing a web application for class, which is built in Python (under Django) and uses jQuery on the front end. It's primarily an Ajax-ified application, and passing data from the backend to the front end is done through Ajax calls to specific URLs which return in the JSON format. This is probably a stupid question, but what's the conventional approach for passing data in the opposite direction? We don't want to reload the page or anything, so is it an Ajax pass going the other way or something?
[ "You simply send AJAX request and pack all the data to POST request params then read it in Django.\nExample of basic voting app in django + ajax: http://lethain.com/entry/2007/dec/11/two-faced-django-part-5-jquery-ajax/\n" ]
[ 3 ]
[]
[]
[ "ajax", "django", "jquery", "python" ]
stackoverflow_0002709893_ajax_django_jquery_python.txt
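To make the answer above concrete, here is a rough sketch of the server side of such a call; the /vote/ URL, the parameter names, and the view are invented for illustration, and the front end would post to it with something like jQuery's $.post():

# views.py -- hypothetical Django view that reads the POSTed params
# and replies with JSON, so the page never reloads
import json

from django.http import HttpResponse, HttpResponseNotAllowed

def vote(request):
    if request.method != 'POST':
        return HttpResponseNotAllowed(['POST'])
    item_id = request.POST.get('item_id')
    direction = request.POST.get('direction', 'up')
    # ... update the relevant model here ...
    payload = {'ok': True, 'item_id': item_id, 'direction': direction}
    return HttpResponse(json.dumps(payload), content_type='application/json')

Hook the view up in urls.py as usual; the jQuery side is then a single call along the lines of $.post('/vote/', {item_id: 3, direction: 'up'}, callback).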
Q: Python regular expressions assigning to named groups When you use variables (is that the correct word?) in Python regular expressions like this: "blah (?P<value>\w+)" ("value" would be the variable), how could you make the variable's value be the text from after "blah " to the end of the line, or up to a certain character, without paying any attention to the actual content of the variable? For example, this is pseudo-code for what I want: >>> import re >>> p = re.compile("say (?P<value>continue_until_text_after_assignment_is_recognized) endsay") >>> m = p.match("say Hello hi yo endsay") >>> m.group('value') 'Hello hi yo' Note: The title is probably not understandable. That is because I didn't know how to say it. Sorry if I caused any confusion. A: For that you'd want a regular expression of "say (?P<value>.+) endsay" The period matches any character, and the plus sign indicates that that should be repeated one or more times... so .+ means any sequence of one or more characters. When you put endsay at the end, the regular expression engine will make sure that whatever it matches does in fact end with that string. A: You need to specify what you want to match if the text is, for example, say hello there and endsay but some more endsay If you want to match the whole hello there and endsay but some more substring, @David's answer is correct. Otherwise, to match just hello there and, the pattern needs to be: say (?P<value>.+?) endsay with a question mark after the plus sign to make it non-greedy (by default it's greedy, gobbling up all it possibly can while allowing an overall match; non-greedy means it gobbles as little as possible, again while allowing an overall match).
Python regular expressions assigning to named groups
When you use variables (is that the correct word?) in Python regular expressions like this: "blah (?P<value>\w+)" ("value" would be the variable), how could you make the variable's value be the text from after "blah " to the end of the line, or up to a certain character, without paying any attention to the actual content of the variable? For example, this is pseudo-code for what I want: >>> import re >>> p = re.compile("say (?P<value>continue_until_text_after_assignment_is_recognized) endsay") >>> m = p.match("say Hello hi yo endsay") >>> m.group('value') 'Hello hi yo' Note: The title is probably not understandable. That is because I didn't know how to say it. Sorry if I caused any confusion.
[ "For that you'd want a regular expression of \n\"say (?P<value>.+) endsay\"\n\nThe period matches any character, and the plus sign indicates that that should be repeated one or more times... so .+ means any sequence of one or more characters. When you put endsay at the end, the regular expression engine will make sure that whatever it matches does in fact end with that string.\n", "You need to specify what you want to match if the text is, for example,\nsay hello there and endsay but some more endsay\n\nIf you want to match the whole hello there and endsay but some more substring, @David's answer is correct. Otherwise, to match just hello there and, the pattern needs to be:\nsay (?P<value>.+?) endsay\n\nwith a question mark after the plus sign to make it non-greedy (by default it's greedy, gobbling up all it possibly can while allowing an overall match; non-greedy means it gobbles as little as possible, again while allowing an overall match).\n" ]
[ 12, 10 ]
[]
[]
[ "python", "regex", "variable_assignment", "variables" ]
stackoverflow_0002710486_python_regex_variable_assignment_variables.txt
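A short interactive session illustrating the greedy/non-greedy difference the second answer describes, using that answer's example text:

>>> import re
>>> text = "say hello there and endsay but some more endsay"
>>> # greedy: .+ grabs as much as possible while still leaving a final " endsay"
>>> re.match(r"say (?P<value>.+) endsay", text).group("value")
'hello there and endsay but some more'
>>> # non-greedy: .+? stops at the first " endsay"
>>> re.match(r"say (?P<value>.+?) endsay", text).group("value")
'hello there and'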
Q: How to install python physics engine I want a python physics engine that works on Mac and makes it easy to simulate physics. I have VPython and it works fine, but it is not quite what I want. VPython just shows visual elements and all the physics is in formulas. I looked at the documentation for PyODE and it looked more like what I want. It allowed you to add forces to masses and have worlds and things like that. When I tried to install PyODE (I am using a Mac), it didn't work. One reason was that I didn't have pyrex (I do have Cython, so maybe there is some way to have it use that?), but the other was that I didn't have ode installed. I looked and realized that PyODE is dependent on ode. I tried to install ode but that didn't work. Is there some documentation or binary or something that makes it easy to install PyODE on a Mac? Or is there a similar module? Edit: This is the error I received when trying to install PyODE: sh: ode-config: command not found sh: ode-config: command not found WARNING: <ode/ode.h> not found. You may have to adjust INC_DIRS. INFO: Creating ode_trimesh.c pyrexc -o ode_trimesh.c -I. -Isrc src/ode.pyx sh: pyrexc: command not found ERROR: An error occured while generating the C source file. I got this error because pyrex and ode weren't installed. There was no documentation for installing ode on Mac, so there were no error messages for what I tried to do, but the errors stayed the same for PyODE, so ode wasn't installed. A: You can easily install ODE on your Mac with darwinports -- instructions here. You can easily list PyODE versions for darwinports -- then pick the right one for your chosen Python version -- by entering PyODE on the "search in darwinports" text box, and similarly for Pyrex (Cython is not 100% compatible with Pyrex, so it may not be worth the bother to tweak things for it... even though Cython tends to be better;-). Note that it will be easiest if you also install a Python version with darwinports rather than sticking to the one Apple supplies (the darwinports version will be more up-to-date and will have plenty more extensions available that might be more work to install on the Apple-supplied "system" Python). A: One of the errors indicates that you are missing pyrex. Perhaps try installing that first via darwinports, then work on the include directories.
How to install python physics engine
I want a python physics engine that works on Mac and makes it easy to simulate physics. I have VPython and it works fine, but it is not quite what I want. VPython just shows visual elements and all the physics is in formulas. I looked at the documentation for PyODE and it looked more like what I want. It allowed you to add forces to masses and have worlds and things like that. When I tried to install PyODE (I am using a Mac), it didn't work. One reason was that I didn't have pyrex (I do have Cython, so maybe there is some way to have it use that?), but the other was that I didn't have ode installed. I looked and realized that PyODE is dependent on ode. I tried to install ode but that didn't work. Is there some documentation or binary or something that makes it easy to install PyODE on a Mac? Or is there a similar module? Edit: This is the error I received when trying to install PyODE: sh: ode-config: command not found sh: ode-config: command not found WARNING: <ode/ode.h> not found. You may have to adjust INC_DIRS. INFO: Creating ode_trimesh.c pyrexc -o ode_trimesh.c -I. -Isrc src/ode.pyx sh: pyrexc: command not found ERROR: An error occured while generating the C source file. I got this error because pyrex and ode weren't installed. There was no documentation for installing ode on Mac, so there were no error messages for what I tried to do, but the errors stayed the same for PyODE, so ode wasn't installed.
[ "You can easily install ODE on your Mac with darwinports -- instructions here. You can easily list PyODE versions for darwinports -- then pick the right one for your chosen Python version -- by entering PyODE on the \"search in darwinports\" text box, and similarly for Pyrex (Cython is not 100% compatible with Pyrex, so it may not be worth the bother to tweak things for it... even though Cython tends to be better;-). Note that it will be easiest if you also install a Python version with darwinports rather than sticking to the one Apple supplies (the darwinports version will be more up-to-date and will have plenty more extensions available that might be more work to install on the Apple-supplied \"system\" Python).\n", "One of the errors indicates that you are missing pyrex. Perhaps try installing that first via darwinports, then work on the include directories.\n" ]
[ 2, 0 ]
[]
[]
[ "installation", "ode_library", "physics", "python" ]
stackoverflow_0002710173_installation_ode_library_physics_python.txt
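For completeness, once ode and PyODE are installed (e.g. via darwinports as the first answer suggests), the forces-on-masses style of simulation the question asks about looks roughly like the following; this is based on the standard PyODE tutorial, so treat the exact calls as approximate if your PyODE version differs:

import ode

world = ode.World()
world.setGravity((0.0, -9.81, 0.0))

# a 5 cm sphere dropped from 2 m
body = ode.Body(world)
mass = ode.Mass()
mass.setSphere(2500.0, 0.05)      # density, radius
body.setMass(mass)
body.setPosition((0.0, 2.0, 0.0))

dt = 0.04
for step in range(5):
    x, y, z = body.getPosition()
    print "t=%.2f  y=%.3f" % (step * dt, y)
    world.step(dt)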
Q: How can I validate form data using Google App Engine? I have no idea about this. A: You can use Django's form library to validate. Google has an article on it. http://code.google.com/appengine/articles/djangoforms.html
How can I validate form data using Google App Engine?
I have no idea about this.
[ "You can use Django's form library to validate. Google has an article on it.\nhttp://code.google.com/appengine/articles/djangoforms.html\n" ]
[ 3 ]
[]
[]
[ "django", "forms", "google_app_engine", "python", "validation" ]
stackoverflow_0002710636_django_forms_google_app_engine_python_validation.txt
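A rough sketch in the spirit of the article linked in the answer, using the djangoforms module bundled with the App Engine SDK; the Item model, its properties, and the save_item helper are invented for illustration:

from google.appengine.ext import db
from google.appengine.ext.db import djangoforms

class Item(db.Model):
    name = db.StringProperty(required=True)
    quantity = db.IntegerProperty(default=1)

class ItemForm(djangoforms.ModelForm):
    class Meta:
        model = Item

def save_item(post_data):
    # post_data would be self.request.POST in a webapp handler
    form = ItemForm(data=post_data)
    if form.is_valid():
        entity = form.save(commit=False)
        entity.put()          # validated data goes to the datastore
        return None
    return form.errors        # dict of field name -> error messages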
Q: Can I use Django templatetags on Google App Engine? My Django site has many templatetags directories, can I use Django templatetags on Google App Engine? A: Yes.
Can I use Django templatetags on Google App Engine?
My Django site has many templatetags directories, can I use Django templatetags on Google App Engine?
[ "Yes.\n" ]
[ 4 ]
[]
[]
[ "django", "google_app_engine", "python", "templatetags" ]
stackoverflow_0002710832_django_google_app_engine_python_templatetags.txt
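As a concrete illustration of that "Yes": with the webapp framework's bundled Django templates, one plausible way to expose a custom filter is sketched below; the customtags module and the shout filter are purely illustrative, and a full Django setup on App Engine would instead use ordinary templatetags packages:

# customtags.py
from google.appengine.ext.webapp import template

register = template.create_template_register()

def shout(value):
    # trivial example filter
    return value.upper() + "!"

register.filter('shout', shout)

Then, before rendering (e.g. near the top of main.py), call template.register_template_library('customtags'); after that, {{ name|shout }} is available in templates.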