content
stringlengths
85
101k
title
stringlengths
0
150
question
stringlengths
15
48k
answers
list
answers_scores
list
non_answers
list
non_answers_scores
list
tags
list
name
stringlengths
35
137
Q: Pandas groupby cumulative sum start from 0 I have the following pandas DataFrame (without the last column): name day show-in-appointment previous-missed-appointments 0 Jack 2020/01/01 show 0 1 Jack 2020/01/02 no-show 0 2 Jill 2020/01/02 no-show 0 3 Jack 2020/01/03 show 1 4 Jill 2020/01/03 show 1 5 Jill 2020/01/04 no-show 1 6 Jack 2020/01/04 show 1 7 Jill 2020/01/05 show 2 8 jack 2020/01/06 no-show 1 9 jack 2020/01/07 show 2 I want to add the last column as the cumulative sum of no-show appointments (sum of previous no-shows for each person). for each person in the new column that is called (previous-missed-appointments), it should start from 0. Here is the data for easier reproducibility: df = pd.DataFrame( data=np.asarray([ ['Jack', 'Jack', 'Jill', 'Jack', 'Jill', 'Jill', 'Jack', 'Jill', 'jack', 'jack'], [ '2020/01/01', '2020/01/02', '2020/01/02', '2020/01/03', '2020/01/03', '2020/01/04', '2020/01/04', '2020/01/05', '2020/01/06', '2020/01/07', ], ['show', 'no-show', 'no-show', 'show', 'show', 'no-show', 'show', 'show', 'no-show', 'show'], ]).T, columns=['name', 'day', 'show-in-appointment'], ) I tried various combos of df.groupby and df.agg(lambda x: cumsum(x)) to no avail. A: import pandas as pd df.name = df.name.str.capitalize() df['order'] = df.index df.day = pd.to_datetime(df.day) df['noshow'] = df['show-in-appointment'].map({'show': 0, 'no-show': 1}) df = df.sort_values(by=['name', 'day']) df['previous-missed-appointments'] = df.groupby('name').noshow.cumsum() df.loc[df.noshow == 1, 'previous-missed-appointments'] -= 1 df = df.sort_values(by='order') df = df.drop(columns=['noshow', 'order']) A: I think the two main methods you can use are groupby and cumsum Have a look at the code below: df.sort_values(by=['name', 'date'], inplace=True, ignore_index=True) df['check'] = np.where(df['show-in-appointment']=='no-show', 1.0, 0.0) df['previous-miss'] = df.groupby('name')['check'].cumsum()
Pandas groupby cumulative sum start from 0
I have the following pandas DataFrame (without the last column): name day show-in-appointment previous-missed-appointments 0 Jack 2020/01/01 show 0 1 Jack 2020/01/02 no-show 0 2 Jill 2020/01/02 no-show 0 3 Jack 2020/01/03 show 1 4 Jill 2020/01/03 show 1 5 Jill 2020/01/04 no-show 1 6 Jack 2020/01/04 show 1 7 Jill 2020/01/05 show 2 8 jack 2020/01/06 no-show 1 9 jack 2020/01/07 show 2 I want to add the last column as the cumulative sum of no-show appointments (sum of previous no-shows for each person). for each person in the new column that is called (previous-missed-appointments), it should start from 0. Here is the data for easier reproducibility: df = pd.DataFrame( data=np.asarray([ ['Jack', 'Jack', 'Jill', 'Jack', 'Jill', 'Jill', 'Jack', 'Jill', 'jack', 'jack'], [ '2020/01/01', '2020/01/02', '2020/01/02', '2020/01/03', '2020/01/03', '2020/01/04', '2020/01/04', '2020/01/05', '2020/01/06', '2020/01/07', ], ['show', 'no-show', 'no-show', 'show', 'show', 'no-show', 'show', 'show', 'no-show', 'show'], ]).T, columns=['name', 'day', 'show-in-appointment'], ) I tried various combos of df.groupby and df.agg(lambda x: cumsum(x)) to no avail.
[ "import pandas as pd\n\ndf.name = df.name.str.capitalize()\ndf['order'] = df.index\ndf.day = pd.to_datetime(df.day)\ndf['noshow'] = df['show-in-appointment'].map({'show': 0, 'no-show': 1})\ndf = df.sort_values(by=['name', 'day'])\ndf['previous-missed-appointments'] = df.groupby('name').noshow.cumsum()\ndf.loc[df.noshow == 1, 'previous-missed-appointments'] -= 1\ndf = df.sort_values(by='order')\ndf = df.drop(columns=['noshow', 'order'])\n\n", "I think the two main methods you can use are groupby and cumsum\nHave a look at the code below:\ndf.sort_values(by=['name', 'date'], inplace=True, ignore_index=True)\ndf['check'] = np.where(df['show-in-appointment']=='no-show', 1.0, 0.0)\ndf['previous-miss'] = df.groupby('name')['check'].cumsum()\n\n" ]
[ 1, 0 ]
[]
[]
[ "cumsum", "dataframe", "group_by", "pandas", "python" ]
stackoverflow_0074467226_cumsum_dataframe_group_by_pandas_python.txt
Q: Using variables in multiple python files I am trying to use variables created in one file and still use them in another file without having to run all of the code from the first file. Would I be better off saving the variables to a text file and calling the text file in my second python file? Ex. File #1 name = input('What is your name') job = input('How do you earn your money') user_input = name, job File #2 I want to be able to call the input from the first file for name without having to import all of rest of the code in file #1 A: I think the answer depends on the nature of the variables you want to share between files. For your case, I think a reasonable solution might be to import one module into another, e.g. # File1.py # ... def name_input(): name = input('What is your name') return name def job_input(): job = input('How do you earn your money') return job # ... And in another file: # File2.py from File1 import name_input # ... name = name_input() # ... Though when the module is imported, the whole thing is executed (e.g. appending name = name_input() to File1.py and running File2.py, you'll be asked for your name twice). See the Python docs on modules for more information. An alternative, as you suggested in your question would be to store the variable outside of your Python code files. This can be done either through environment variables, text files (and reading and parsing whatever format you have it stored in), or INI-like files using Python's configparser. A: Other answer is good advice for someone learning Python. But, I will answer the question: Would I be better off saving the variables to a text file and calling the text file in my second python file? Maybe "No". If it is about Python variables, using pickle might be a better alternative. See the docs. Example based on other SO answer: import pickle name = input('What is your name') job = input('How do you earn your money') user_input = name, job with open('filename.pickle', 'wb') as handle: pickle.dump(user_input , handle) with open('filename.pickle', 'rb') as handle: user_input_unpickled = pickle.load(handle) print(user_input == user_input_unpickled ) Another option is to use JSON or whatever data format is better to store the data (csv, npz, hdf5, parquet, etc).
Using variables in multiple python files
I am trying to use variables created in one file and still use them in another file without having to run all of the code from the first file. Would I be better off saving the variables to a text file and calling the text file in my second python file? Ex. File #1 name = input('What is your name') job = input('How do you earn your money') user_input = name, job File #2 I want to be able to call the input from the first file for name without having to import all of rest of the code in file #1
[ "I think the answer depends on the nature of the variables you want to share between files.\nFor your case, I think a reasonable solution might be to import one module into another, e.g.\n# File1.py\n# ...\ndef name_input():\n name = input('What is your name')\n return name\n\ndef job_input():\n job = input('How do you earn your money')\n return job\n# ...\n\nAnd in another file:\n# File2.py\nfrom File1 import name_input\n# ...\nname = name_input()\n# ...\n\nThough when the module is imported, the whole thing is executed (e.g. appending name = name_input() to File1.py and running File2.py, you'll be asked for your name twice). See the Python docs on modules for more information.\nAn alternative, as you suggested in your question would be to store the variable outside of your Python code files.\nThis can be done either through environment variables, text files (and reading and parsing whatever format you have it stored in), or INI-like files using Python's configparser.\n", "Other answer is good advice for someone learning Python. But, I will answer the question:\n\nWould I be better off saving the variables to a text file and calling the text file in my second python file?\n\nMaybe \"No\". If it is about Python variables, using pickle might be a better alternative.\nSee the docs.\nExample based on other SO answer:\nimport pickle\n\nname = input('What is your name')\njob = input('How do you earn your money')\nuser_input = name, job\n\nwith open('filename.pickle', 'wb') as handle:\n pickle.dump(user_input , handle)\n\nwith open('filename.pickle', 'rb') as handle:\n user_input_unpickled = pickle.load(handle)\n\nprint(user_input == user_input_unpickled )\n\n\nAnother option is to use JSON or whatever data format is better to store the data (csv, npz, hdf5, parquet, etc).\n" ]
[ 0, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074463992_python_python_3.x.txt
Q: Join a list of strings in python and wrap each string in quotation marks I've got: words = ['hello', 'world', 'you', 'look', 'nice'] I want to have: '"hello", "world", "you", "look", "nice"' What's the easiest way to do this with Python? A: Update 2021: With f strings in Python3 >>> words = ['hello', 'world', 'you', 'look', 'nice'] >>> ', '.join(f'"{w}"' for w in words) '"hello", "world", "you", "look", "nice"' Original Answer (Supports Python 2.6+) >>> words = ['hello', 'world', 'you', 'look', 'nice'] >>> ', '.join('"{0}"'.format(w) for w in words) '"hello", "world", "you", "look", "nice"' A: You can try this : str(words)[1:-1] A: you may also perform a single format call >>> words = ['hello', 'world', 'you', 'look', 'nice'] >>> '"{0}"'.format('", "'.join(words)) '"hello", "world", "you", "look", "nice"' Update: Some benchmarking (performed on a 2009 mbp): >>> timeit.Timer("""words = ['hello', 'world', 'you', 'look', 'nice'] * 100; ', '.join('"{0}"'.format(w) for w in words)""").timeit(1000) 0.32559704780578613 >>> timeit.Timer("""words = ['hello', 'world', 'you', 'look', 'nice'] * 100; '"{}"'.format('", "'.join(words))""").timeit(1000) 0.018904924392700195 So it seems that format is actually quite expensive Update 2: following @JCode's comment, adding a map to ensure that join will work, Python 2.7.12 >>> timeit.Timer("""words = ['hello', 'world', 'you', 'look', 'nice'] * 100; ', '.join('"{0}"'.format(w) for w in words)""").timeit(1000) 0.08646488189697266 >>> timeit.Timer("""words = ['hello', 'world', 'you', 'look', 'nice'] * 100; '"{}"'.format('", "'.join(map(str, words)))""").timeit(1000) 0.04855608940124512 >>> timeit.Timer("""words = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] * 100; ', '.join('"{0}"'.format(w) for w in words)""").timeit(1000) 0.17348504066467285 >>> timeit.Timer("""words = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] * 100; '"{}"'.format('", "'.join(map(str, words)))""").timeit(1000) 0.06372308731079102 A: >>> ', '.join(['"%s"' % w for w in words]) A: An updated version of @jamylak answer with F Strings (for python 3.6+), I've used backticks for a string used for a SQL script. keys = ['foo', 'bar' , 'omg'] ', '.join(f'`{k}`' for k in keys) # result: '`foo`, `bar`, `omg`' A: words = ['hello', 'world', 'you', 'look', 'nice'] S = "" for _ in range(len(words)-1): S+=f'"{words[_]}"'+', ' S +=f'"{words[len(words)-1]}"' print("'"+S+"'") OUTPUT: '"hello", "world", "you", "look", "nice"' A: find a faster way '"' + '","'.join(words) + '"' test in Python 2.7: words = ['hello', 'world', 'you', 'look', 'nice'] print '"' + '","'.join(words) + '"' print str(words)[1:-1] print '"{0}"'.format('", "'.join(words)) t = time() * 1000 range10000 = range(100000) for i in range10000: '"' + '","'.join(words) + '"' print time() * 1000 - t t = time() * 1000 for i in range10000: str(words)[1:-1] print time() * 1000 - t for i in range10000: '"{0}"'.format('", "'.join(words)) print time() * 1000 - t The resulting output is: # "hello", "world", "you", "look", "nice" # 'hello', 'world', 'you', 'look', 'nice' # "hello", "world", "you", "look", "nice" # 39.6000976562 # 166.892822266 # 220.110839844 A: There are two possible solution and for me we should choose wisely... items = ['A', 'B', 'C'] # Fast since we used only string concat. print("\"" + ("\",\"".join(items)) + "\"") # Slow since we used loop here. print(",".join(["\"{item}\"".format(item=item) for item in items]))
Join a list of strings in python and wrap each string in quotation marks
I've got: words = ['hello', 'world', 'you', 'look', 'nice'] I want to have: '"hello", "world", "you", "look", "nice"' What's the easiest way to do this with Python?
[ "Update 2021: With f strings in Python3\n>>> words = ['hello', 'world', 'you', 'look', 'nice']\n>>> ', '.join(f'\"{w}\"' for w in words)\n'\"hello\", \"world\", \"you\", \"look\", \"nice\"'\n\nOriginal Answer (Supports Python 2.6+)\n>>> words = ['hello', 'world', 'you', 'look', 'nice']\n>>> ', '.join('\"{0}\"'.format(w) for w in words)\n'\"hello\", \"world\", \"you\", \"look\", \"nice\"'\n\n", "You can try this :\nstr(words)[1:-1]\n\n", "you may also perform a single format call\n>>> words = ['hello', 'world', 'you', 'look', 'nice']\n>>> '\"{0}\"'.format('\", \"'.join(words))\n'\"hello\", \"world\", \"you\", \"look\", \"nice\"'\n\nUpdate: Some benchmarking (performed on a 2009 mbp):\n>>> timeit.Timer(\"\"\"words = ['hello', 'world', 'you', 'look', 'nice'] * 100; ', '.join('\"{0}\"'.format(w) for w in words)\"\"\").timeit(1000)\n0.32559704780578613\n\n>>> timeit.Timer(\"\"\"words = ['hello', 'world', 'you', 'look', 'nice'] * 100; '\"{}\"'.format('\", \"'.join(words))\"\"\").timeit(1000)\n0.018904924392700195\n\nSo it seems that format is actually quite expensive\nUpdate 2: following @JCode's comment, adding a map to ensure that join will work, Python 2.7.12\n>>> timeit.Timer(\"\"\"words = ['hello', 'world', 'you', 'look', 'nice'] * 100; ', '.join('\"{0}\"'.format(w) for w in words)\"\"\").timeit(1000)\n0.08646488189697266\n\n>>> timeit.Timer(\"\"\"words = ['hello', 'world', 'you', 'look', 'nice'] * 100; '\"{}\"'.format('\", \"'.join(map(str, words)))\"\"\").timeit(1000)\n0.04855608940124512\n\n>>> timeit.Timer(\"\"\"words = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] * 100; ', '.join('\"{0}\"'.format(w) for w in words)\"\"\").timeit(1000)\n0.17348504066467285\n\n>>> timeit.Timer(\"\"\"words = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] * 100; '\"{}\"'.format('\", \"'.join(map(str, words)))\"\"\").timeit(1000)\n0.06372308731079102\n\n", ">>> ', '.join(['\"%s\"' % w for w in words])\n\n", "An updated version of @jamylak answer with F Strings (for python 3.6+), I've used backticks for a string used for a SQL script.\nkeys = ['foo', 'bar' , 'omg']\n', '.join(f'`{k}`' for k in keys)\n# result: '`foo`, `bar`, `omg`'\n\n", "words = ['hello', 'world', 'you', 'look', 'nice']\nS = \"\"\nfor _ in range(len(words)-1):\n S+=f'\"{words[_]}\"'+', '\nS +=f'\"{words[len(words)-1]}\"'\nprint(\"'\"+S+\"'\")\n\nOUTPUT:\n'\"hello\", \"world\", \"you\", \"look\", \"nice\"'\n\n", "find a faster way\n'\"' + '\",\"'.join(words) + '\"'\n\ntest in Python 2.7:\n words = ['hello', 'world', 'you', 'look', 'nice']\n\n print '\"' + '\",\"'.join(words) + '\"'\n print str(words)[1:-1]\n print '\"{0}\"'.format('\", \"'.join(words))\n\n t = time() * 1000\n range10000 = range(100000)\n\n for i in range10000:\n '\"' + '\",\"'.join(words) + '\"'\n\n print time() * 1000 - t\n t = time() * 1000\n\n for i in range10000:\n str(words)[1:-1]\n print time() * 1000 - t\n\n for i in range10000:\n '\"{0}\"'.format('\", \"'.join(words))\n\n print time() * 1000 - t\n\nThe resulting output is:\n# \"hello\", \"world\", \"you\", \"look\", \"nice\"\n# 'hello', 'world', 'you', 'look', 'nice'\n# \"hello\", \"world\", \"you\", \"look\", \"nice\"\n# 39.6000976562\n# 166.892822266\n# 220.110839844\n\n", "There are two possible solution and for me we should choose wisely...\nitems = ['A', 'B', 'C']\n\n# Fast since we used only string concat.\nprint(\"\\\"\" + (\"\\\",\\\"\".join(items)) + \"\\\"\")\n\n# Slow since we used loop here.\nprint(\",\".join([\"\\\"{item}\\\"\".format(item=item) for item in items]))\n\n" ]
[ 279, 69, 55, 8, 4, 1, 0, 0 ]
[ "# Python3 without for loop\nconc_str = \"'{}'\".format(\"','\".join(['a', 'b', 'c']))\nprint(conc_str) \n\n# \"'a', 'b', 'c'\"\n\n" ]
[ -1 ]
[ "list", "python", "string" ]
stackoverflow_0012007686_list_python_string.txt
Q: coverting list of string coordinates into list of coordinates without string I have a list flat_list =['53295,-46564.2', '53522.6,-46528.4', '54792.9,-46184', '55258.7,-46512.9', '55429.4,-48356.9', '53714.5,-50762.8'] How can I convert it into [[53295,-46564.2], [53522.6,-46528.4], [54792.9,-46184], [55258.7,-46512.9], [55429.4,-48356.9], [53714.5,-50762.8]] I tried l = [i.strip("'") for i in flat_list] nothing works. l = [i.strip("'") for i in flat_list] coords = [map(float,i.split(",")) for i in flat_list] print(coords) gives me <map object at 0x7f7a7715d2b0> A: Why complicate things? Without any builtins such as map and itertools, this approach with a nested list comprehension should be a relatively simple and efficient one. flat_list = ['53295,-46564.2', '53522.6,-46528.4', '54792.9,-46184', '55258.7,-46512.9', '55429.4,-48356.9', '53714.5,-50762.8'] result = [float(f) for pair in flat_list for f in pair.split(',')] print(result) Output: [53295.0, -46564.2, 53522.6, -46528.4, 54792.9, -46184.0, 55258.7, -46512.9, 55429.4, -48356.9, 53714.5, -50762.8] To instead end up with a list of lists, you can change the order of the for statements and then add braces around the sub-list for each str.split result, as shown below: flat_list = ['53295,-46564.2', '53522.6,-46528.4', '54792.9,-46184', '55258.7,-46512.9', '55429.4,-48356.9', '53714.5,-50762.8'] result = [[float(f) for f in pair.split(',')] for pair in flat_list] print(result) Output: [[53295.0, -46564.2], [53522.6, -46528.4], [54792.9, -46184.0], [55258.7, -46512.9], [55429.4, -48356.9], [53714.5, -50762.8]] A: Edit after your comment: to get a list of lists you can use list2 = [[float(f) for f in el.split(",")] for el in flat_list] or list2 = [list(map(float,el.split(","))) for el in flat_list] Deprecated: If you are okay with 2 operations instead of a one-liner, go with: list2 = map(lambda el: el.split(","), flat_list) list3 = [float(el) for sublist in list2 for el in sublist] or import itertools list2 = map(lambda el: el.split(","), flat_list) list3 = list(map(float, itertools.chain.from_iterable(list2))) A: I see that the coordinates come in pairs, so in order to convert them to an integer or float first we need to split them so they become single numbers. flat_list =['53295,-46564.2', '53522.6,-46528.4', '54792.9,-46184', '55258.7,-46512.9', '55429.4,-48356.9', '53714.5,-50762.8'] coordinates = [] for pair in flat_list: coordinates.extend(pair.split(',')) result = [float(x) for x in coordinates] This is not the shortest way to do it, but I think it does the job.
coverting list of string coordinates into list of coordinates without string
I have a list flat_list =['53295,-46564.2', '53522.6,-46528.4', '54792.9,-46184', '55258.7,-46512.9', '55429.4,-48356.9', '53714.5,-50762.8'] How can I convert it into [[53295,-46564.2], [53522.6,-46528.4], [54792.9,-46184], [55258.7,-46512.9], [55429.4,-48356.9], [53714.5,-50762.8]] I tried l = [i.strip("'") for i in flat_list] nothing works. l = [i.strip("'") for i in flat_list] coords = [map(float,i.split(",")) for i in flat_list] print(coords) gives me <map object at 0x7f7a7715d2b0>
[ "Why complicate things?\nWithout any builtins such as map and itertools, this approach with a nested list comprehension should be a relatively simple and efficient one.\nflat_list = ['53295,-46564.2', '53522.6,-46528.4', '54792.9,-46184', '55258.7,-46512.9', '55429.4,-48356.9',\n '53714.5,-50762.8']\n\nresult = [float(f) for pair in flat_list for f in pair.split(',')]\n\nprint(result)\n\nOutput:\n[53295.0, -46564.2, 53522.6, -46528.4, 54792.9, -46184.0, 55258.7, -46512.9, 55429.4, -48356.9, 53714.5, -50762.8]\n\nTo instead end up with a list of lists, you can change the order of the for statements and then add braces around the sub-list for each str.split result, as shown below:\nflat_list = ['53295,-46564.2', '53522.6,-46528.4', '54792.9,-46184', '55258.7,-46512.9', '55429.4,-48356.9',\n '53714.5,-50762.8']\n\nresult = [[float(f) for f in pair.split(',')] for pair in flat_list]\n\nprint(result)\n\nOutput:\n[[53295.0, -46564.2], [53522.6, -46528.4], [54792.9, -46184.0], [55258.7, -46512.9], [55429.4, -48356.9], [53714.5, -50762.8]]\n\n", "Edit after your comment: to get a list of lists you can use\nlist2 = [[float(f) for f in el.split(\",\")] for el in flat_list]\n\nor\nlist2 = [list(map(float,el.split(\",\"))) for el in flat_list]\n\nDeprecated: If you are okay with 2 operations instead of a one-liner, go with:\nlist2 = map(lambda el: el.split(\",\"), flat_list)\nlist3 = [float(el) for sublist in list2 for el in sublist]\n\nor\nimport itertools\nlist2 = map(lambda el: el.split(\",\"), flat_list)\nlist3 = list(map(float, itertools.chain.from_iterable(list2)))\n\n", "I see that the coordinates come in pairs, so in order to convert them to an integer or float first we need to split them so they become single numbers.\n flat_list =['53295,-46564.2', '53522.6,-46528.4', '54792.9,-46184', '55258.7,-46512.9', '55429.4,-48356.9', '53714.5,-50762.8']\ncoordinates = []\nfor pair in flat_list:\n coordinates.extend(pair.split(','))\nresult = [float(x) for x in coordinates]\n\nThis is not the shortest way to do it, but I think it does the job.\n" ]
[ 6, 2, 0 ]
[]
[]
[ "arraylist", "list", "python", "python_3.x" ]
stackoverflow_0074467682_arraylist_list_python_python_3.x.txt
Q: Check if shape is already in space (overlapping issue) So I’m doing a tic tac toe using the the CMU course in python and I want to figure out If there is already a shape either circle or rectangle. How can you stop your mouse when you click on a box/space that already has a shape inside. If you look in the picture. I clicked on the square already there and it put a red circle OVER the blue square. I want this from not happening. I want it so that "If I click on a pre existing shape, it doesn't put a new shape". Thanks, any help is appreciated Using CMU Carnegie Melon University CS course P.S (Since this is a different way of python using CMU, they have certain functions) I attached a picture of that as well. Code and what the Tic Tac Toe looks like Python Functions in CMU I've been trying to figure out how and what to use in terms of functions. I just don't understand this overlapping thing. A: From what I can tell by the image you have attached, you are effectively using a dictionary for storing the game state. The keys are tuples with x and y coordinates, and values are string "red", "blue" or "" (empty) So, regarding the question you pose, you may want something like this: def click(x, y, color): if table.get(x,y) != "": return table[x,y] = color You basically check if your dictionary has a non-empty string for certain coordinates before putting a shape to it. Don't hesitate to let me know if this helped, or further clarification is needed
Check if shape is already in space (overlapping issue)
So I’m doing a tic tac toe using the the CMU course in python and I want to figure out If there is already a shape either circle or rectangle. How can you stop your mouse when you click on a box/space that already has a shape inside. If you look in the picture. I clicked on the square already there and it put a red circle OVER the blue square. I want this from not happening. I want it so that "If I click on a pre existing shape, it doesn't put a new shape". Thanks, any help is appreciated Using CMU Carnegie Melon University CS course P.S (Since this is a different way of python using CMU, they have certain functions) I attached a picture of that as well. Code and what the Tic Tac Toe looks like Python Functions in CMU I've been trying to figure out how and what to use in terms of functions. I just don't understand this overlapping thing.
[ "From what I can tell by the image you have attached, you are effectively using a dictionary for storing the game state. The keys are tuples with x and y coordinates, and values are string \"red\", \"blue\" or \"\" (empty)\nSo, regarding the question you pose, you may want something like this:\ndef click(x, y, color):\n if table.get(x,y) != \"\":\n return\n table[x,y] = color\n\nYou basically check if your dictionary has a non-empty string for certain coordinates before putting a shape to it.\nDon't hesitate to let me know if this helped, or further clarification is needed\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074467706_python.txt
Q: How to prevent auto-start of animation created using the Player class that subclasses FuncAnimation? I'm using the Player class as found at Managing dynamic plotting in matplotlib Animation module to create an animation and can't figure out how to modify the initial values to prevent the animation from starting automatically. Below is the code for Player, along with a simple example, where I graph the unit circle and have arrows tracing out the unit circle as the frames advance: import numpy as np import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation from mpl_toolkits.axes_grid1 import make_axes_locatable import mpl_toolkits import matplotlib.widgets from matplotlib.figure import Figure class Player(FuncAnimation): def __init__(self, fig, func, frames=None, init_func=None, fargs=None,save_count=None, mini=0, maxi=100, pos=(0.125, 0.92), **kwargs): self.i = 0 self.min=mini self.max=maxi self.runs = True self.forwards = True self.fig = fig self.func = func self.setup(pos) FuncAnimation.__init__(self,self.fig, self.func, frames=self.play(), init_func=init_func, fargs=fargs, save_count=save_count,interval=500, **kwargs ) def play(self): while self.runs: self.i = self.i+self.forwards-(not self.forwards) if self.i > self.min and self.i < self.max: yield self.i else: self.stop() yield self.i def start(self): if self.i==self.max: self.i=0 self.runs=True self.event_source.start() def stop(self, event=None): self.runs = False self.event_source.stop() def forward(self, event=None): self.forwards = True self.start() def backward(self, event=None): self.forwards = False self.start() def oneforward(self, event=None): if self.i==self.max: self.i=0 self.forwards = True self.onestep() def onebackward(self, event=None): self.forwards = False self.onestep() def onestep(self): if self.i > self.min and self.i < self.max: self.i = self.i+self.forwards-(not self.forwards) elif self.i == self.min and self.forwards: self.i+=1 elif self.i == self.max and not self.forwards: self.i-=1 self.func(self.i) self.fig.canvas.draw_idle() def setup(self, pos): playerax = self.fig.add_axes([0.4, 0.92, 0.22, 0.03]) divider = mpl_toolkits.axes_grid1.make_axes_locatable(playerax) bax = divider.append_axes("right", size="80%", pad=0.05) sax = divider.append_axes("right", size="80%", pad=0.05) fax = divider.append_axes("right", size="80%", pad=0.05) ofax = divider.append_axes("right", size="100%", pad=0.05) self.button_oneback = matplotlib.widgets.Button(playerax , label=u'$\u29CF$') self.button_back = matplotlib.widgets.Button(bax, label=u'$\u25C0$') self.button_stop = matplotlib.widgets.Button(sax, label=u'$\u25A0$') self.button_forward = matplotlib.widgets.Button(fax, label=u'$\u25B6$') self.button_oneforward = matplotlib.widgets.Button(ofax, label=u'$\u29D0$') self.button_oneback.on_clicked(self.onebackward) self.button_back.on_clicked(self.backward) self.button_stop.on_clicked(self.stop) self.button_forward.on_clicked(self.forward) self.button_oneforward.on_clicked(self.oneforward) ######## EXAMPLE ################## ####################################### fig, ax = plt.subplots() t=np.linspace(0,1, num=100) unit_circle_x=np.cos(2*np.pi*t) unit_circle_y=np.sin(2*np.pi*t) ax.plot(unit_circle_x,unit_circle_y) ax.set_xlim(-1.2, 1.2) ax.set_ylim(-1.2,1.2) # arrow tracing out unit circle n=3 unit_vector_x=np.cos(2*np.pi*n*t) unit_vector_y=np.sin(2*np.pi*n*t) unit_circle_vector=ax.quiver(0, 0, unit_vector_x[0], unit_vector_y[0], angles='xy', scale_units='xy', scale=1,headwidth=10) ax.set_aspect("equal") def update(i): unit_circle_vector.set_UVC(unit_vector_x[i],unit_vector_y[i]) ax.set_xlabel(r'$i= $'+str(i),rotation=0,labelpad=10) ani = Player(fig, update, maxi=len(t)-1) plt.show() A: You could initiate your self.runs variable to False and modify the play methods such that it yields the current position when self.runs=False: def play(self): while not self.runs: yield self.i while self.runs: self.i = self.i+self.forwards-(not self.forwards) if self.i > self.min and self.i < self.max: yield self.i else: self.stop() yield self.i See full code below: import numpy as np import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation from mpl_toolkits.axes_grid1 import make_axes_locatable import mpl_toolkits import matplotlib.widgets from matplotlib.figure import Figure class Player(FuncAnimation): def __init__(self, fig, func, frames=None, init_func=None, fargs=None,save_count=None, mini=0, maxi=100, pos=(0.125, 0.92), **kwargs): self.i = 0 self.min=mini self.max=maxi self.runs = False self.forwards = True self.fig = fig self.func = func self.setup(pos) FuncAnimation.__init__(self,self.fig, self.func, frames=self.play(), init_func=init_func, fargs=fargs, save_count=save_count,interval=500, **kwargs ) def play(self): while not self.runs: yield self.i while self.runs: self.i = self.i+self.forwards-(not self.forwards) if self.i > self.min and self.i < self.max: yield self.i else: self.stop() yield self.i def start(self): if self.i==self.max: self.i=0 self.runs=True self.event_source.start() def stop(self, event=None): self.runs = False self.event_source.stop() def forward(self, event=None): self.forwards = True self.start() def backward(self, event=None): self.forwards = False self.start() def oneforward(self, event=None): if self.i==self.max: self.i=0 self.forwards = True self.onestep() def onebackward(self, event=None): self.forwards = False self.onestep() def onestep(self): if self.i > self.min and self.i < self.max: self.i = self.i+self.forwards-(not self.forwards) elif self.i == self.min and self.forwards: self.i+=1 elif self.i == self.max and not self.forwards: self.i-=1 self.func(self.i) self.fig.canvas.draw_idle() def setup(self, pos): playerax = self.fig.add_axes([0.4, 0.92, 0.22, 0.03]) divider = mpl_toolkits.axes_grid1.make_axes_locatable(playerax) bax = divider.append_axes("right", size="80%", pad=0.05) sax = divider.append_axes("right", size="80%", pad=0.05) fax = divider.append_axes("right", size="80%", pad=0.05) ofax = divider.append_axes("right", size="100%", pad=0.05) self.button_oneback = matplotlib.widgets.Button(playerax , label=u'$\u29CF$') self.button_back = matplotlib.widgets.Button(bax, label=u'$\u25C0$') self.button_stop = matplotlib.widgets.Button(sax, label=u'$\u25A0$') self.button_forward = matplotlib.widgets.Button(fax, label=u'$\u25B6$') self.button_oneforward = matplotlib.widgets.Button(ofax, label=u'$\u29D0$') self.button_oneback.on_clicked(self.onebackward) self.button_back.on_clicked(self.backward) self.button_stop.on_clicked(self.stop) self.button_forward.on_clicked(self.forward) self.button_oneforward.on_clicked(self.oneforward) ######## EXAMPLE ################## ####################################### fig, ax = plt.subplots() t=np.linspace(0,1, num=100) unit_circle_x=np.cos(2*np.pi*t) unit_circle_y=np.sin(2*np.pi*t) ax.plot(unit_circle_x,unit_circle_y) ax.set_xlim(-1.2, 1.2) ax.set_ylim(-1.2,1.2) # arrow tracing out unit circle n=3 unit_vector_x=np.cos(2*np.pi*n*t) unit_vector_y=np.sin(2*np.pi*n*t) unit_circle_vector=ax.quiver(0, 0, unit_vector_x[0], unit_vector_y[0], angles='xy', scale_units='xy', scale=1,headwidth=10) ax.set_aspect("equal") def update(i): unit_circle_vector.set_UVC(unit_vector_x[i],unit_vector_y[i]) ax.set_xlabel(r'$i= $'+str(i),rotation=0,labelpad=10) ani = Player(fig, update, maxi=len(t)-1) plt.show() And here is the result:
How to prevent auto-start of animation created using the Player class that subclasses FuncAnimation?
I'm using the Player class as found at Managing dynamic plotting in matplotlib Animation module to create an animation and can't figure out how to modify the initial values to prevent the animation from starting automatically. Below is the code for Player, along with a simple example, where I graph the unit circle and have arrows tracing out the unit circle as the frames advance: import numpy as np import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation from mpl_toolkits.axes_grid1 import make_axes_locatable import mpl_toolkits import matplotlib.widgets from matplotlib.figure import Figure class Player(FuncAnimation): def __init__(self, fig, func, frames=None, init_func=None, fargs=None,save_count=None, mini=0, maxi=100, pos=(0.125, 0.92), **kwargs): self.i = 0 self.min=mini self.max=maxi self.runs = True self.forwards = True self.fig = fig self.func = func self.setup(pos) FuncAnimation.__init__(self,self.fig, self.func, frames=self.play(), init_func=init_func, fargs=fargs, save_count=save_count,interval=500, **kwargs ) def play(self): while self.runs: self.i = self.i+self.forwards-(not self.forwards) if self.i > self.min and self.i < self.max: yield self.i else: self.stop() yield self.i def start(self): if self.i==self.max: self.i=0 self.runs=True self.event_source.start() def stop(self, event=None): self.runs = False self.event_source.stop() def forward(self, event=None): self.forwards = True self.start() def backward(self, event=None): self.forwards = False self.start() def oneforward(self, event=None): if self.i==self.max: self.i=0 self.forwards = True self.onestep() def onebackward(self, event=None): self.forwards = False self.onestep() def onestep(self): if self.i > self.min and self.i < self.max: self.i = self.i+self.forwards-(not self.forwards) elif self.i == self.min and self.forwards: self.i+=1 elif self.i == self.max and not self.forwards: self.i-=1 self.func(self.i) self.fig.canvas.draw_idle() def setup(self, pos): playerax = self.fig.add_axes([0.4, 0.92, 0.22, 0.03]) divider = mpl_toolkits.axes_grid1.make_axes_locatable(playerax) bax = divider.append_axes("right", size="80%", pad=0.05) sax = divider.append_axes("right", size="80%", pad=0.05) fax = divider.append_axes("right", size="80%", pad=0.05) ofax = divider.append_axes("right", size="100%", pad=0.05) self.button_oneback = matplotlib.widgets.Button(playerax , label=u'$\u29CF$') self.button_back = matplotlib.widgets.Button(bax, label=u'$\u25C0$') self.button_stop = matplotlib.widgets.Button(sax, label=u'$\u25A0$') self.button_forward = matplotlib.widgets.Button(fax, label=u'$\u25B6$') self.button_oneforward = matplotlib.widgets.Button(ofax, label=u'$\u29D0$') self.button_oneback.on_clicked(self.onebackward) self.button_back.on_clicked(self.backward) self.button_stop.on_clicked(self.stop) self.button_forward.on_clicked(self.forward) self.button_oneforward.on_clicked(self.oneforward) ######## EXAMPLE ################## ####################################### fig, ax = plt.subplots() t=np.linspace(0,1, num=100) unit_circle_x=np.cos(2*np.pi*t) unit_circle_y=np.sin(2*np.pi*t) ax.plot(unit_circle_x,unit_circle_y) ax.set_xlim(-1.2, 1.2) ax.set_ylim(-1.2,1.2) # arrow tracing out unit circle n=3 unit_vector_x=np.cos(2*np.pi*n*t) unit_vector_y=np.sin(2*np.pi*n*t) unit_circle_vector=ax.quiver(0, 0, unit_vector_x[0], unit_vector_y[0], angles='xy', scale_units='xy', scale=1,headwidth=10) ax.set_aspect("equal") def update(i): unit_circle_vector.set_UVC(unit_vector_x[i],unit_vector_y[i]) ax.set_xlabel(r'$i= $'+str(i),rotation=0,labelpad=10) ani = Player(fig, update, maxi=len(t)-1) plt.show()
[ "You could initiate your self.runs variable to False and modify the play methods such that it yields the current position when self.runs=False:\ndef play(self):\n while not self.runs:\n yield self.i\n while self.runs:\n self.i = self.i+self.forwards-(not self.forwards)\n if self.i > self.min and self.i < self.max:\n yield self.i\n else:\n self.stop()\n yield self.i\n\nSee full code below:\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.animation import FuncAnimation\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\nimport mpl_toolkits\nimport matplotlib.widgets\nfrom matplotlib.figure import Figure\n\nclass Player(FuncAnimation):\n def __init__(self, fig, func, frames=None, init_func=None, \n fargs=None,save_count=None, mini=0, maxi=100, pos=(0.125, 0.92), **kwargs):\n self.i = 0\n self.min=mini\n self.max=maxi\n self.runs = False\n self.forwards = True\n\n self.fig = fig\n self.func = func\n self.setup(pos)\n FuncAnimation.__init__(self,self.fig, self.func, frames=self.play(),\n init_func=init_func, fargs=fargs,\n save_count=save_count,interval=500, **kwargs )\n\n def play(self):\n while not self.runs:\n yield self.i\n while self.runs:\n self.i = self.i+self.forwards-(not self.forwards)\n if self.i > self.min and self.i < self.max:\n yield self.i\n else:\n self.stop()\n yield self.i\n\n def start(self):\n if self.i==self.max:\n self.i=0\n self.runs=True\n self.event_source.start()\n\n def stop(self, event=None):\n self.runs = False\n self.event_source.stop()\n\n def forward(self, event=None):\n self.forwards = True\n self.start()\n\n def backward(self, event=None):\n self.forwards = False\n self.start()\n\n def oneforward(self, event=None):\n if self.i==self.max:\n self.i=0\n self.forwards = True\n self.onestep()\n def onebackward(self, event=None):\n self.forwards = False\n self.onestep()\n\n def onestep(self):\n if self.i > self.min and self.i < self.max:\n self.i = self.i+self.forwards-(not self.forwards)\n elif self.i == self.min and self.forwards:\n self.i+=1\n elif self.i == self.max and not self.forwards:\n self.i-=1\n self.func(self.i)\n self.fig.canvas.draw_idle()\n\n def setup(self, pos):\n playerax = self.fig.add_axes([0.4, 0.92, 0.22, 0.03])\n\n divider = mpl_toolkits.axes_grid1.make_axes_locatable(playerax)\n bax = divider.append_axes(\"right\", size=\"80%\", pad=0.05)\n sax = divider.append_axes(\"right\", size=\"80%\", pad=0.05)\n fax = divider.append_axes(\"right\", size=\"80%\", pad=0.05)\n ofax = divider.append_axes(\"right\", size=\"100%\", pad=0.05)\n self.button_oneback = matplotlib.widgets.Button(playerax , label=u'$\\u29CF$')\n self.button_back = matplotlib.widgets.Button(bax, label=u'$\\u25C0$')\n self.button_stop = matplotlib.widgets.Button(sax, label=u'$\\u25A0$')\n self.button_forward = matplotlib.widgets.Button(fax, label=u'$\\u25B6$')\n self.button_oneforward = matplotlib.widgets.Button(ofax, label=u'$\\u29D0$')\n self.button_oneback.on_clicked(self.onebackward)\n self.button_back.on_clicked(self.backward)\n self.button_stop.on_clicked(self.stop)\n self.button_forward.on_clicked(self.forward)\n self.button_oneforward.on_clicked(self.oneforward)\n\n\n######## EXAMPLE ##################\n#######################################\n\nfig, ax = plt.subplots()\n\nt=np.linspace(0,1, num=100)\nunit_circle_x=np.cos(2*np.pi*t)\nunit_circle_y=np.sin(2*np.pi*t)\nax.plot(unit_circle_x,unit_circle_y)\nax.set_xlim(-1.2, 1.2)\nax.set_ylim(-1.2,1.2)\n\n# arrow tracing out unit circle\nn=3\nunit_vector_x=np.cos(2*np.pi*n*t)\nunit_vector_y=np.sin(2*np.pi*n*t)\n\n\nunit_circle_vector=ax.quiver(0, 0, unit_vector_x[0], unit_vector_y[0], angles='xy', scale_units='xy', scale=1,headwidth=10)\nax.set_aspect(\"equal\")\n\ndef update(i):\n\n unit_circle_vector.set_UVC(unit_vector_x[i],unit_vector_y[i])\n ax.set_xlabel(r'$i= $'+str(i),rotation=0,labelpad=10)\n\nani = Player(fig, update, maxi=len(t)-1)\n\nplt.show()\n\nAnd here is the result:\n\n" ]
[ 1 ]
[]
[]
[ "animation", "matplotlib", "python" ]
stackoverflow_0074462964_animation_matplotlib_python.txt
Q: Can we make the values in pandas.pivot_table() a count of column? pd.pivot_table(df, index = col1, columns = col2, value = ?) I want the values to be the count of values in col 1.The values in col 1 are all strings. I basically want to imitate what is happening in the excel file pictured below Id be open to using other functions than pd.pivot_table() if that would make things easier pd.pandas(index = col1, columns = col2, values = col1)? Im not sure how to engage this. Col1 Col2 A Red A Red A Red A Blue A Blue A Blue A Blue A Blue B Blue B Blue C Blue C Blue C Blue C Blue C Orange C Orange A Orange A Orange A Orange A Orange A Red A Red A Red A Red A Red B Red B Green B Green C Green C Green C Green Would it be possible to do something like this Col1 Col2 Col3 A Red Cheetah A Red Cheetah A Red Cheetah A Blue Cheetah A Blue Cheetah A Blue Cheetah A Blue Cheetah A Blue Cheetah B Blue Cheetah B Blue Cheetah C Blue Cheetah C Blue Cheetah C Blue Lion C Blue Lion C Orange Lion C Orange Lion A Orange Lion A Orange Lion A Orange Lion A Orange Lion A Red Lion A Red Lion A Red Bear A Red Bear A Red Bear B Red Bear B Green Bear B Green Bear C Green Bear C Green Bear C Green Bear A: Try to use pd.crosstab. To make the total columns use .sum() (with proper axis=) x = pd.crosstab(df.Col1, df.Col2) x["Grand Total"] = x.sum(axis=1) x = pd.concat([x, x.sum().to_frame().rename(columns={0: "Grand Total"}).T]) x.columns.name, x.index.name = None, None print(x.to_markdown()) Prints: Blue Green Orange Red Grand Total A 5 0 4 8 17 B 2 2 0 1 5 C 4 3 2 0 9 Grand Total 11 5 6 9 31
Can we make the values in pandas.pivot_table() a count of column?
pd.pivot_table(df, index = col1, columns = col2, value = ?) I want the values to be the count of values in col 1.The values in col 1 are all strings. I basically want to imitate what is happening in the excel file pictured below Id be open to using other functions than pd.pivot_table() if that would make things easier pd.pandas(index = col1, columns = col2, values = col1)? Im not sure how to engage this. Col1 Col2 A Red A Red A Red A Blue A Blue A Blue A Blue A Blue B Blue B Blue C Blue C Blue C Blue C Blue C Orange C Orange A Orange A Orange A Orange A Orange A Red A Red A Red A Red A Red B Red B Green B Green C Green C Green C Green Would it be possible to do something like this Col1 Col2 Col3 A Red Cheetah A Red Cheetah A Red Cheetah A Blue Cheetah A Blue Cheetah A Blue Cheetah A Blue Cheetah A Blue Cheetah B Blue Cheetah B Blue Cheetah C Blue Cheetah C Blue Cheetah C Blue Lion C Blue Lion C Orange Lion C Orange Lion A Orange Lion A Orange Lion A Orange Lion A Orange Lion A Red Lion A Red Lion A Red Bear A Red Bear A Red Bear B Red Bear B Green Bear B Green Bear C Green Bear C Green Bear C Green Bear
[ "Try to use pd.crosstab. To make the total columns use .sum() (with proper axis=)\nx = pd.crosstab(df.Col1, df.Col2)\nx[\"Grand Total\"] = x.sum(axis=1)\nx = pd.concat([x, x.sum().to_frame().rename(columns={0: \"Grand Total\"}).T])\nx.columns.name, x.index.name = None, None\n\nprint(x.to_markdown())\n\nPrints:\n\n\n\n\n\nBlue\nGreen\nOrange\nRed\nGrand Total\n\n\n\n\nA\n5\n0\n4\n8\n17\n\n\nB\n2\n2\n0\n1\n5\n\n\nC\n4\n3\n2\n0\n9\n\n\nGrand Total\n11\n5\n6\n9\n31\n\n\n\n" ]
[ 2 ]
[]
[]
[ "pandas", "pivot", "python" ]
stackoverflow_0074467730_pandas_pivot_python.txt
Q: How to use a for loop to print every other item in a list? I have a list named "ul_children", I need to use a for loop to print every other item in that list starting with the second item or the index[1]. I am new to for loops in python and I am struggling with this. I have tried a few different things I thought would work, but I have been unsuccessful so far. Any help would be appreciated. The closest I have gotten so far is by using: i = 1 for li in ul_children: print(ul_children[i]) but I don't know how to get 'i' to increase each time the loop is performed. A: You can do it like this: ul_children = ["r","a","t","o","n"] for i in ul_children[1:]: print(i)
How to use a for loop to print every other item in a list?
I have a list named "ul_children", I need to use a for loop to print every other item in that list starting with the second item or the index[1]. I am new to for loops in python and I am struggling with this. I have tried a few different things I thought would work, but I have been unsuccessful so far. Any help would be appreciated. The closest I have gotten so far is by using: i = 1 for li in ul_children: print(ul_children[i]) but I don't know how to get 'i' to increase each time the loop is performed.
[ "You can do it like this:\nul_children = [\"r\",\"a\",\"t\",\"o\",\"n\"]\n\nfor i in ul_children[1:]:\n\n print(i)\n\n" ]
[ 1 ]
[]
[]
[ "for_loop", "python" ]
stackoverflow_0074467827_for_loop_python.txt
Q: How to compare different dataframes by column and row? I have two csv files with 200 columns each. The two files have the exact same numbers in rows and columns. I want to compare each columns separately. The idea would be to compare column 1 value of file "a" to column 1 value of file "b" and check the difference and so on for all the numbers in the column (there are 100 rows) and write out a number that in how many cases were the difference more than 3. I would like to repeat the same for all the columns. Thanks in advance! import pandas as pd dk = pd.read_csv('C:/Users/D/1_top_a.csv', sep=',', header=None) dk = dk.dropna(how='all') dk = dk.dropna(how='all', axis=1) print(dk) dl = pd.read_csv('C:/Users/D/1_top_b.csv', sep=',', header=None) dl = dl.dropna(how='all') dl = dl.dropna(how='all', axis=1) print(dl) rows=dk.shape[0] print(rows) for i print(dk._get_value(0,0)) A: I suggest you use the iloc attribute of a DataFrame: import pandas as pd dk = pd.read_csv('C:/Users/D/1_top_a.csv', sep=',', header=None) dk = dk.dropna(how='all') dk = dk.dropna(how='all', axis=1) print(dk.head()) dl = pd.read_csv('C:/Users/D/1_top_b.csv', sep=',', header=None) dl = dl.dropna(how='all') dl = dl.dropna(how='all', axis=1) print(dl.head()) for row in range(len(dl)): for col in range(len(dl.columns)): if dl.iloc[row, col] != dk.iloc[row, col]: print(dk.iloc[row, col])
How to compare different dataframes by column and row?
I have two csv files with 200 columns each. The two files have the exact same numbers in rows and columns. I want to compare each columns separately. The idea would be to compare column 1 value of file "a" to column 1 value of file "b" and check the difference and so on for all the numbers in the column (there are 100 rows) and write out a number that in how many cases were the difference more than 3. I would like to repeat the same for all the columns. Thanks in advance! import pandas as pd dk = pd.read_csv('C:/Users/D/1_top_a.csv', sep=',', header=None) dk = dk.dropna(how='all') dk = dk.dropna(how='all', axis=1) print(dk) dl = pd.read_csv('C:/Users/D/1_top_b.csv', sep=',', header=None) dl = dl.dropna(how='all') dl = dl.dropna(how='all', axis=1) print(dl) rows=dk.shape[0] print(rows) for i print(dk._get_value(0,0))
[ "I suggest you use the iloc attribute of a DataFrame:\nimport pandas as pd\n\ndk = pd.read_csv('C:/Users/D/1_top_a.csv', sep=',', header=None)\ndk = dk.dropna(how='all')\ndk = dk.dropna(how='all', axis=1)\nprint(dk.head())\n\ndl = pd.read_csv('C:/Users/D/1_top_b.csv', sep=',', header=None)\ndl = dl.dropna(how='all')\ndl = dl.dropna(how='all', axis=1)\nprint(dl.head())\n\n\nfor row in range(len(dl)):\n for col in range(len(dl.columns)):\n if dl.iloc[row, col] != dk.iloc[row, col]:\n print(dk.iloc[row, col])\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "for_loop", "pandas", "python" ]
stackoverflow_0074467738_dataframe_for_loop_pandas_python.txt
Q: Manipulate Dataframe Lets say I'm working on a dataset: # dummy dataset import pandas as pd data = pd.DataFrame({"Name_id" : ["John","Deep","Julia","John","Sandy",'Deep'], "Month_id" : ["December","March","May","April","May","July"], "Colour_id" : ["Red",'Purple','Green','Black','Yellow','Orange']}) data How can I convert this data frame into something like this: Where the A_id is unique and forms new columns based on both the value and the existence / non-existence of the other columns in order of appearance? I have tried to use pivot but I noticed it's more used for numerical data instead of categorical. A: Probably you should try pivot data['Rowid'] = data.groupby('Name_id').cumcount()+1 d = data.pivot(index='Name_id', columns='Rowid',values = ['Month_id','Colour_id']) d.reset_index(inplace=True) d.columns = ['Name_id','Month_id1', 'Colour_id1', 'Month_id2', 'Colour_id2'] which gives Name_id Month_id1 Colour_id1 Month_id2 Colour_id2 0 Deep March July Purple Orange 1 John December April Red Black 2 Julia May NaN Green NaN 3 Sandy May NaN Yellow NaN
Manipulate Dataframe
Lets say I'm working on a dataset: # dummy dataset import pandas as pd data = pd.DataFrame({"Name_id" : ["John","Deep","Julia","John","Sandy",'Deep'], "Month_id" : ["December","March","May","April","May","July"], "Colour_id" : ["Red",'Purple','Green','Black','Yellow','Orange']}) data How can I convert this data frame into something like this: Where the A_id is unique and forms new columns based on both the value and the existence / non-existence of the other columns in order of appearance? I have tried to use pivot but I noticed it's more used for numerical data instead of categorical.
[ "Probably you should try pivot\ndata['Rowid'] = data.groupby('Name_id').cumcount()+1\nd = data.pivot(index='Name_id', columns='Rowid',values = ['Month_id','Colour_id'])\nd.reset_index(inplace=True)\nd.columns = ['Name_id','Month_id1', 'Colour_id1', 'Month_id2', 'Colour_id2']\n\nwhich gives\n Name_id Month_id1 Colour_id1 Month_id2 Colour_id2\n0 Deep March July Purple Orange\n1 John December April Red Black\n2 Julia May NaN Green NaN\n3 Sandy May NaN Yellow NaN\n\n" ]
[ 2 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074467693_pandas_python.txt
Q: I can't find a method to prevent my program slowing down as it loads more sprites python I have created a simple simulation to show evolution. It works through a simuple window that contains many squares representing single-celled organisms. The screen looks like this: The single-celled organisms (dubbed amoebae for conciseness) move around randomly. If they collide with another amoebae they produce an offspring. However, to prevent them reproducing infinitely I introduced an age measure. Amoebae must attain a certain age before they reproduce and once they do their age is reset to 1. Now for the evolution part. As you can see, the amoebae are different colours. This represents the 'gene' that is passed down to offspring through reproduction (there is a chance of mutation controlled by a constant called maturingSpeed, which I set very high to increase the speed of evolution). It's called maturingSpeed and it controls the speed at which the amoebae age, which means that amoebae that have a higher maturingSpeed with reproduce faster and pass on their gene. In this way, they should gradually evolve through natural selection so all of the amoebae have a very high maturingSpeed. A high maturingSpeed translates to a brighter colour on the screen. There is one other thing I should mention, which is the life countdown on each amoeba. It starts out at 10000 and ticks down by one each time the amoeba is updated. This is to gradually kill off the old amoebae, also increasing the rate of evolution and making it more lifelike. My problem is that before the amoebae all evolve to get a high maturingSpeed (the highest I've had is around 65%), they become too numerous and the simulation starts slowing down as it struggles to load them all. I need a method to make the amoebae die off faster as more of them are produced. I have tried to cull them if they are above a certain number, or increase their countdown rate based on the number of amoebae however all of these methods cause them to eventually stop reproducing and die off for some reason. I have deleted these sections from my code now because they didn't work but I could add them again if needed. My source code: import pygame import random import time import itertools from pygame.locals import ( QUIT ) pygame.init() SCREEN_WIDTH = 500 SCREEN_HEIGHT = 500 screen = pygame.display.set_mode([500, 500]) amoebas = pygame.sprite.Group() all_sprites = pygame.sprite.Group() idList = [] mutationConstant = 254 class Amoeba(pygame.sprite.Sprite): id_iter = itertools.count() def __init__(self, maturingSpeed, x, y): super(Amoeba, self).__init__() self.id = 'amoeba' + str(next(Amoeba.id_iter)) idList.append(self.id) self.surf = pygame.Surface((10,10)) if maturingSpeed <= 0: maturingSpeed = 1 elif maturingSpeed >= 255: maturingSpeed = 254 print(maturingSpeed) self.surf.fill((maturingSpeed, 0, 0)) self.rect = self.surf.get_rect( center=( x, y, ) ) self.speed = 2 self.age = 1 self.maturingSpeed = int(maturingSpeed) self.life = 9999 def update(self): if self.rect.left <= 0: direction = 1 elif self.rect.right >= SCREEN_WIDTH: direction = 2 elif self.rect.top <= 0: direction = 3 elif self.rect.bottom >= SCREEN_HEIGHT: direction = 4 else: direction = random.randint(1, 4) if direction == 1: self.rect.move_ip(self.speed, 0) elif direction == 2: self.rect.move_ip(-self.speed, 0) elif direction == 3: self.rect.move_ip(0, self.speed) elif direction == 4: self.rect.move_ip(0, -self.speed) self.life = self.life - 1 if self.life <= 0: self.kill() modMaturingSpeed = self.maturingSpeed / 1240 self.age = self.age + (1 * modMaturingSpeed) @classmethod def collide(cls): global collisionSuccess collisionSuccess = False global posList posList = [[amoeba.rect.left, amoeba.rect.bottom] for amoeba in amoebas] length = len(posList) for i in range(length): for amoeba in amoebas: if amoeba.id == str(idList[i]): ageOne = getattr(amoeba, 'age') for h in range(i+1, length): for amoeba in amoebas: if amoeba.id == str(idList[h]): ageTwo = getattr(amoeba, 'age') OneX = int(posList[i][0]) OneY = int(posList[i][1]) TwoX = int(posList[h][0]) TwoY = int(posList[h][1]) if ageOne >= 100 and ageTwo >= 100: if (OneX < TwoX + 10 and OneX + 10 > TwoX and OneY < TwoY + 10 and 10 + OneY > TwoY): for amoeba in amoebas: if amoeba.id == str(idList[i]): setattr(amoeba, 'age', 1) pOMSinitial = int(getattr(amoeba, 'maturingSpeed')) for amoeba in amoebas: if amoeba.id == str(idList[h]): setattr(amoeba, 'age', 1) pTMSinitial = int(getattr(amoeba, 'maturingSpeed')) locationX = OneX + random.randint(-10, 10) locationY = OneY + random.randint(-10, 10) if pOMSinitial >= pTMSinitial: pOMSfinal = pOMSinitial + mutationConstant pTMSfinal = pTMSinitial - mutationConstant newMaturingSpeed = random.randint(pTMSfinal, pOMSfinal) else: pOMSfinal = pOMSinitial - mutationConstant pTMSfinal = pTMSinitial + mutationConstant newMaturingSpeed = random.randint(pOMSfinal, pTMSfinal) collisionSuccess = True return cls(newMaturingSpeed, locationX, locationY) screen.fill((255, 255, 255)) for i in range(15): amoebaname = Amoeba(random.randint(100, 150), random.randint(0, SCREEN_WIDTH), random.randint(0, SCREEN_HEIGHT)) amoebas.add(amoebaname) all_sprites.add(amoebaname) p = 0 while True: ageArray = [amoeba.age for amoeba in amoebas] if p == 1000: print(amoebas) five = 0 four = 0 three = 0 two = 0 one = 0 for amoeba in amoebas: if amoeba.maturingSpeed >= 200: five = five + 1 elif amoeba.maturingSpeed >=150: four = four + 1 elif amoeba.maturingSpeed >= 100: three = three + 1 elif amoeba.maturingSpeed >= 50: two = two + 1 else: one = one + 1 total = one + two + three + four + five DivFive = five / total DivFour = four / total DivThree = three / total DivTwo = two / total DivOne = one / total print(DivFive, DivFour, DivThree, DivTwo, DivOne) p = 0 else: p = p + 1 time.sleep(0.0000001) screen.fill((255, 255, 255)) for event in pygame.event.get(): if event.type == QUIT: break amoebas.update() amoebaname = Amoeba.collide() if collisionSuccess == True: amoebas.add(amoebaname) all_sprites.add(amoebaname) for entity in all_sprites: screen.blit(entity.surf, entity.rect) pygame.display.flip() pygame.quit() A: Too many nested loops and unneeded data structures. I did some cleanup and it's faster now. And it seems that the mutation constant was far to high. I changed the value from 254 to 25. import pygame import random import time import itertools from pygame.locals import ( QUIT ) SCREEN_WIDTH = 500 SCREEN_HEIGHT = 500 MUTATION_CONSTANT = 25 pygame.init() screen = pygame.display.set_mode([SCREEN_WIDTH, SCREEN_HEIGHT]) amoebas = pygame.sprite.Group() class Amoeba(pygame.sprite.Sprite): id_iter = itertools.count() def __init__(self, maturing_speed, x, y): super().__init__() self.id = 'amoeba' + str(next(Amoeba.id_iter)) self.surf = pygame.Surface((10, 10)) self.maturing_speed = min(max(maturing_speed, 1), 254) self.surf.fill((self.maturing_speed, 0, 0)) self.rect = self.surf.get_rect(center=(x, y,)) self.speed = 2 self.age = 1 self.life = 9999 def update(self): if self.rect.left <= 0: direction = 1 elif self.rect.right >= SCREEN_WIDTH: direction = 2 elif self.rect.top <= 0: direction = 3 elif self.rect.bottom >= SCREEN_HEIGHT: direction = 4 else: direction = random.randint(1, 4) if direction == 1: self.rect.move_ip(self.speed, 0) elif direction == 2: self.rect.move_ip(-self.speed, 0) elif direction == 3: self.rect.move_ip(0, self.speed) elif direction == 4: self.rect.move_ip(0, -self.speed) self.life = self.life - 1 if self.life <= 0: self.kill() self.age = self.age + (1 * self.maturing_speed / 1240) @classmethod def collide(cls): for amoeba_1, amoeba_2 in itertools.combinations(amoebas, 2): if amoeba_1.age >= 100 and amoeba_2.age >= 100 and ( pygame.sprite.collide_rect(amoeba_1, amoeba_2) ): amoeba_1.age = 1 amoeba_2.age = 1 location_x = amoeba_1.rect.left + random.randint(-10, 10) location_y = amoeba_1.rect.bottom + random.randint(-10, 10) speed_low = min(amoeba_1.maturing_speed, amoeba_2.maturing_speed) - MUTATION_CONSTANT speed_high = max(amoeba_1.maturing_speed, amoeba_2.maturing_speed) + MUTATION_CONSTANT new_maturing_speed = random.randint(speed_low, speed_high) return cls(new_maturing_speed, location_x, location_y) return None def main(): screen.fill((255, 255, 255)) for i in range(25): amoeba = Amoeba(random.randint(100, 150), random.randint(0, SCREEN_WIDTH), random.randint(0, SCREEN_HEIGHT)) amoebas.add(amoeba) step_counter = 0 while True: step_counter += 1 if step_counter % 100 == 0: print(step_counter, amoebas) five = 0 four = 0 three = 0 two = 0 one = 0 for amoeba in amoebas: if amoeba.maturing_speed >= 200: five = five + 1 elif amoeba.maturing_speed >= 150: four = four + 1 elif amoeba.maturing_speed >= 100: three = three + 1 elif amoeba.maturing_speed >= 50: two = two + 1 else: one = one + 1 total = one + two + three + four + five print(f'{five/total:.4f} {four/total:.4f} {three/total:.4f} {two/total:.4f} {one/total:.4f}') time.sleep(0.0000001) screen.fill((255, 255, 255)) for event in pygame.event.get(): if event.type == QUIT: break amoebas.update() amoeba = Amoeba.collide() if amoeba: amoebas.add(amoeba) for amoeba in amoebas: screen.blit(amoeba.surf, amoeba.rect) pygame.display.flip() pygame.quit() if __name__ == '__main__': main()
I can't find a method to prevent my program slowing down as it loads more sprites python
I have created a simple simulation to show evolution. It works through a simuple window that contains many squares representing single-celled organisms. The screen looks like this: The single-celled organisms (dubbed amoebae for conciseness) move around randomly. If they collide with another amoebae they produce an offspring. However, to prevent them reproducing infinitely I introduced an age measure. Amoebae must attain a certain age before they reproduce and once they do their age is reset to 1. Now for the evolution part. As you can see, the amoebae are different colours. This represents the 'gene' that is passed down to offspring through reproduction (there is a chance of mutation controlled by a constant called maturingSpeed, which I set very high to increase the speed of evolution). It's called maturingSpeed and it controls the speed at which the amoebae age, which means that amoebae that have a higher maturingSpeed with reproduce faster and pass on their gene. In this way, they should gradually evolve through natural selection so all of the amoebae have a very high maturingSpeed. A high maturingSpeed translates to a brighter colour on the screen. There is one other thing I should mention, which is the life countdown on each amoeba. It starts out at 10000 and ticks down by one each time the amoeba is updated. This is to gradually kill off the old amoebae, also increasing the rate of evolution and making it more lifelike. My problem is that before the amoebae all evolve to get a high maturingSpeed (the highest I've had is around 65%), they become too numerous and the simulation starts slowing down as it struggles to load them all. I need a method to make the amoebae die off faster as more of them are produced. I have tried to cull them if they are above a certain number, or increase their countdown rate based on the number of amoebae however all of these methods cause them to eventually stop reproducing and die off for some reason. I have deleted these sections from my code now because they didn't work but I could add them again if needed. My source code: import pygame import random import time import itertools from pygame.locals import ( QUIT ) pygame.init() SCREEN_WIDTH = 500 SCREEN_HEIGHT = 500 screen = pygame.display.set_mode([500, 500]) amoebas = pygame.sprite.Group() all_sprites = pygame.sprite.Group() idList = [] mutationConstant = 254 class Amoeba(pygame.sprite.Sprite): id_iter = itertools.count() def __init__(self, maturingSpeed, x, y): super(Amoeba, self).__init__() self.id = 'amoeba' + str(next(Amoeba.id_iter)) idList.append(self.id) self.surf = pygame.Surface((10,10)) if maturingSpeed <= 0: maturingSpeed = 1 elif maturingSpeed >= 255: maturingSpeed = 254 print(maturingSpeed) self.surf.fill((maturingSpeed, 0, 0)) self.rect = self.surf.get_rect( center=( x, y, ) ) self.speed = 2 self.age = 1 self.maturingSpeed = int(maturingSpeed) self.life = 9999 def update(self): if self.rect.left <= 0: direction = 1 elif self.rect.right >= SCREEN_WIDTH: direction = 2 elif self.rect.top <= 0: direction = 3 elif self.rect.bottom >= SCREEN_HEIGHT: direction = 4 else: direction = random.randint(1, 4) if direction == 1: self.rect.move_ip(self.speed, 0) elif direction == 2: self.rect.move_ip(-self.speed, 0) elif direction == 3: self.rect.move_ip(0, self.speed) elif direction == 4: self.rect.move_ip(0, -self.speed) self.life = self.life - 1 if self.life <= 0: self.kill() modMaturingSpeed = self.maturingSpeed / 1240 self.age = self.age + (1 * modMaturingSpeed) @classmethod def collide(cls): global collisionSuccess collisionSuccess = False global posList posList = [[amoeba.rect.left, amoeba.rect.bottom] for amoeba in amoebas] length = len(posList) for i in range(length): for amoeba in amoebas: if amoeba.id == str(idList[i]): ageOne = getattr(amoeba, 'age') for h in range(i+1, length): for amoeba in amoebas: if amoeba.id == str(idList[h]): ageTwo = getattr(amoeba, 'age') OneX = int(posList[i][0]) OneY = int(posList[i][1]) TwoX = int(posList[h][0]) TwoY = int(posList[h][1]) if ageOne >= 100 and ageTwo >= 100: if (OneX < TwoX + 10 and OneX + 10 > TwoX and OneY < TwoY + 10 and 10 + OneY > TwoY): for amoeba in amoebas: if amoeba.id == str(idList[i]): setattr(amoeba, 'age', 1) pOMSinitial = int(getattr(amoeba, 'maturingSpeed')) for amoeba in amoebas: if amoeba.id == str(idList[h]): setattr(amoeba, 'age', 1) pTMSinitial = int(getattr(amoeba, 'maturingSpeed')) locationX = OneX + random.randint(-10, 10) locationY = OneY + random.randint(-10, 10) if pOMSinitial >= pTMSinitial: pOMSfinal = pOMSinitial + mutationConstant pTMSfinal = pTMSinitial - mutationConstant newMaturingSpeed = random.randint(pTMSfinal, pOMSfinal) else: pOMSfinal = pOMSinitial - mutationConstant pTMSfinal = pTMSinitial + mutationConstant newMaturingSpeed = random.randint(pOMSfinal, pTMSfinal) collisionSuccess = True return cls(newMaturingSpeed, locationX, locationY) screen.fill((255, 255, 255)) for i in range(15): amoebaname = Amoeba(random.randint(100, 150), random.randint(0, SCREEN_WIDTH), random.randint(0, SCREEN_HEIGHT)) amoebas.add(amoebaname) all_sprites.add(amoebaname) p = 0 while True: ageArray = [amoeba.age for amoeba in amoebas] if p == 1000: print(amoebas) five = 0 four = 0 three = 0 two = 0 one = 0 for amoeba in amoebas: if amoeba.maturingSpeed >= 200: five = five + 1 elif amoeba.maturingSpeed >=150: four = four + 1 elif amoeba.maturingSpeed >= 100: three = three + 1 elif amoeba.maturingSpeed >= 50: two = two + 1 else: one = one + 1 total = one + two + three + four + five DivFive = five / total DivFour = four / total DivThree = three / total DivTwo = two / total DivOne = one / total print(DivFive, DivFour, DivThree, DivTwo, DivOne) p = 0 else: p = p + 1 time.sleep(0.0000001) screen.fill((255, 255, 255)) for event in pygame.event.get(): if event.type == QUIT: break amoebas.update() amoebaname = Amoeba.collide() if collisionSuccess == True: amoebas.add(amoebaname) all_sprites.add(amoebaname) for entity in all_sprites: screen.blit(entity.surf, entity.rect) pygame.display.flip() pygame.quit()
[ "Too many nested loops and unneeded data structures. I did some cleanup and it's faster now. And it seems that the mutation constant was far to high. I changed the value from 254 to 25.\nimport pygame\nimport random\nimport time\nimport itertools\n\nfrom pygame.locals import (\n QUIT\n)\n\nSCREEN_WIDTH = 500\nSCREEN_HEIGHT = 500\nMUTATION_CONSTANT = 25\n\npygame.init()\nscreen = pygame.display.set_mode([SCREEN_WIDTH, SCREEN_HEIGHT])\namoebas = pygame.sprite.Group()\n\n\nclass Amoeba(pygame.sprite.Sprite):\n id_iter = itertools.count()\n\n def __init__(self, maturing_speed, x, y):\n super().__init__()\n self.id = 'amoeba' + str(next(Amoeba.id_iter))\n self.surf = pygame.Surface((10, 10))\n self.maturing_speed = min(max(maturing_speed, 1), 254)\n self.surf.fill((self.maturing_speed, 0, 0))\n self.rect = self.surf.get_rect(center=(x, y,))\n self.speed = 2\n self.age = 1\n self.life = 9999\n\n def update(self):\n if self.rect.left <= 0:\n direction = 1\n elif self.rect.right >= SCREEN_WIDTH:\n direction = 2\n elif self.rect.top <= 0:\n direction = 3\n elif self.rect.bottom >= SCREEN_HEIGHT:\n direction = 4\n else:\n direction = random.randint(1, 4)\n\n if direction == 1:\n self.rect.move_ip(self.speed, 0)\n elif direction == 2:\n self.rect.move_ip(-self.speed, 0)\n elif direction == 3:\n self.rect.move_ip(0, self.speed)\n elif direction == 4:\n self.rect.move_ip(0, -self.speed)\n\n self.life = self.life - 1\n if self.life <= 0:\n self.kill()\n\n self.age = self.age + (1 * self.maturing_speed / 1240)\n\n @classmethod\n def collide(cls):\n for amoeba_1, amoeba_2 in itertools.combinations(amoebas, 2):\n if amoeba_1.age >= 100 and amoeba_2.age >= 100 and (\n pygame.sprite.collide_rect(amoeba_1, amoeba_2)\n ):\n amoeba_1.age = 1\n amoeba_2.age = 1\n\n location_x = amoeba_1.rect.left + random.randint(-10, 10)\n location_y = amoeba_1.rect.bottom + random.randint(-10, 10)\n\n speed_low = min(amoeba_1.maturing_speed, amoeba_2.maturing_speed) - MUTATION_CONSTANT\n speed_high = max(amoeba_1.maturing_speed, amoeba_2.maturing_speed) + MUTATION_CONSTANT\n new_maturing_speed = random.randint(speed_low, speed_high)\n \n return cls(new_maturing_speed, location_x, location_y)\n return None\n\n\ndef main():\n screen.fill((255, 255, 255))\n\n for i in range(25):\n amoeba = Amoeba(random.randint(100, 150), random.randint(0, SCREEN_WIDTH), random.randint(0, SCREEN_HEIGHT))\n amoebas.add(amoeba)\n\n step_counter = 0\n while True:\n step_counter += 1\n if step_counter % 100 == 0:\n print(step_counter, amoebas)\n five = 0\n four = 0\n three = 0\n two = 0\n one = 0\n\n for amoeba in amoebas:\n if amoeba.maturing_speed >= 200:\n five = five + 1\n elif amoeba.maturing_speed >= 150:\n four = four + 1\n elif amoeba.maturing_speed >= 100:\n three = three + 1\n elif amoeba.maturing_speed >= 50:\n two = two + 1\n else:\n one = one + 1\n\n total = one + two + three + four + five\n print(f'{five/total:.4f} {four/total:.4f} {three/total:.4f} {two/total:.4f} {one/total:.4f}')\n\n time.sleep(0.0000001)\n screen.fill((255, 255, 255))\n\n for event in pygame.event.get():\n if event.type == QUIT:\n break\n\n amoebas.update()\n amoeba = Amoeba.collide()\n if amoeba:\n amoebas.add(amoeba)\n\n for amoeba in amoebas:\n screen.blit(amoeba.surf, amoeba.rect)\n\n pygame.display.flip()\n\n pygame.quit()\n\n\nif __name__ == '__main__':\n main()\n\n" ]
[ 2 ]
[]
[]
[ "lag", "oop", "pygame", "python", "simulation" ]
stackoverflow_0074466865_lag_oop_pygame_python_simulation.txt
Q: Find all objects of a certain class that do not have any active links with other objects I have a class A which is used as a Foreign Key in many other classes. class A(models.Model): pass class B(models.Model): a: A = ForeignKey(A) class C(models.Model): other_name: A = ForeignKey(A) Now I have a database with a huge table of A objects and many classes like B and C who reference A (say potentially dozens). In this table, there are many objects (100k+) and I want to clean up all objects that are not actively referenced by other objects with a Foreign Key. For example, object 1 of class A is not referenced by class B and C. How would I do this? I already came up with the following code: a_list: list = list() classes: list[tuple] = [(B, "a"), (C, "other_name")] for cl, field in classes: field_object: Field = cl._meta.get_field(field) for obj in cl.objects.all(): a: A = field_object.value_from_object(obj) a_list.append(a) to_remove: list[A] = [a for a in A.objects.all() if a not in a_list] for a in to_remove(): a.remove() This leaves me with a few questions: What if I don't know the full list of classes and fields (the case since it is a large group)? Is this the most efficient way to do this for a large table with many unrelated objects (say 95%)? I guess I can optimize this a lot. A: You can filter with: A.objects.filter(b=None, c=None).delete() This will make proper JOINs and thus determine the items in a single querying, without having to fetch all other model records from the database. But this will be expensive anyway, since the triggers are done by Django that will thus "collect" all A objects. If you do not know what is referencing A, you can work with the meta of the model, so: from django.db.models.fields.reverse_related import OneToOneRel fields = { f.related_query_name: None for f in A._meta.get_fields() if isinstance(f, ManyToOneRel) } A.objects.filter(**fields).delete() This will look for all ForeignKeys and OneToOneFields from other models that target (directly) the A model, then make LEFT OUTER JOINs and filter on NULL, and then delete those. I would advise to first inspect A.objects.filter(**fields) however, and make sure you do not remove any items that are still necessary.
Find all objects of a certain class that do not have any active links with other objects
I have a class A which is used as a Foreign Key in many other classes. class A(models.Model): pass class B(models.Model): a: A = ForeignKey(A) class C(models.Model): other_name: A = ForeignKey(A) Now I have a database with a huge table of A objects and many classes like B and C who reference A (say potentially dozens). In this table, there are many objects (100k+) and I want to clean up all objects that are not actively referenced by other objects with a Foreign Key. For example, object 1 of class A is not referenced by class B and C. How would I do this? I already came up with the following code: a_list: list = list() classes: list[tuple] = [(B, "a"), (C, "other_name")] for cl, field in classes: field_object: Field = cl._meta.get_field(field) for obj in cl.objects.all(): a: A = field_object.value_from_object(obj) a_list.append(a) to_remove: list[A] = [a for a in A.objects.all() if a not in a_list] for a in to_remove(): a.remove() This leaves me with a few questions: What if I don't know the full list of classes and fields (the case since it is a large group)? Is this the most efficient way to do this for a large table with many unrelated objects (say 95%)? I guess I can optimize this a lot.
[ "You can filter with:\nA.objects.filter(b=None, c=None).delete()\nThis will make proper JOINs and thus determine the items in a single querying, without having to fetch all other model records from the database.\nBut this will be expensive anyway, since the triggers are done by Django that will thus \"collect\" all A objects.\nIf you do not know what is referencing A, you can work with the meta of the model, so:\nfrom django.db.models.fields.reverse_related import OneToOneRel\n\nfields = {\n f.related_query_name: None\n for f in A._meta.get_fields()\n if isinstance(f, ManyToOneRel)\n}\n\nA.objects.filter(**fields).delete()\nThis will look for all ForeignKeys and OneToOneFields from other models that target (directly) the A model, then make LEFT OUTER JOINs and filter on NULL, and then delete those.\nI would advise to first inspect A.objects.filter(**fields) however, and make sure you do not remove any items that are still necessary.\n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074467841_django_python.txt
Q: Python Pandas: Joining Dataframes I have table A and Table B. I want to join them to get Table C. I tried the following code. But it is not giving me the result that I want. C = pd.merge(A, B, how = 'inner', left_on = ['ID1', 'ID2', 'ID3'], right_on = ['IDA', 'IDB', 'IDC']) Table A ID1 ID2 ID3 Color Flag A 1 1 White Y B 1 2 Black Y A 1 3 Green N E 2 3 Blue Y D 4 5 Blue N C 6 7 Red N F 9 7 Black Y Table B IDA IDB IDC A 1 1 F 9 7 A 1 3 D 4 5 Table C ID1 ID2 ID3 Color Flag A 1 1 White Y A 1 3 Green N D 4 5 Blue N F 9 7 Black Y A: here is one way to do it # do a left merge and rop the null rows out=(pd.merge(df, df2, how = 'left', left_on = ['ID1', 'ID2', 'ID3'], right_on = ['IDA', 'IDB', 'IDC']) .dropna() .drop(columns=['IDA', 'IDB','IDC'])) ID1 ID2 ID3 Color Flag 0 A 1 1 White Y 2 A 1 3 Green N 4 D 4 5 Blue N 6 F 9 7 Black Y alternately, if these are the only columns in your DF, you can convert these to string to make them of same type. that too is only for join and not affecting the DFs (pd.merge(df.astype(str), df2.astype(str), how = 'left', left_on = ['ID1', 'ID2', 'ID3'], right_on = ['IDA', 'IDB', 'IDC']) .dropna() .drop(columns=['IDA', 'IDB','IDC'])) ID1 ID2 ID3 Color Flag 0 A 1 1 White Y 2 A 1 3 Green N 4 D 4 5 Blue N 6 F 9 7 Black Y
Python Pandas: Joining Dataframes
I have table A and Table B. I want to join them to get Table C. I tried the following code. But it is not giving me the result that I want. C = pd.merge(A, B, how = 'inner', left_on = ['ID1', 'ID2', 'ID3'], right_on = ['IDA', 'IDB', 'IDC']) Table A ID1 ID2 ID3 Color Flag A 1 1 White Y B 1 2 Black Y A 1 3 Green N E 2 3 Blue Y D 4 5 Blue N C 6 7 Red N F 9 7 Black Y Table B IDA IDB IDC A 1 1 F 9 7 A 1 3 D 4 5 Table C ID1 ID2 ID3 Color Flag A 1 1 White Y A 1 3 Green N D 4 5 Blue N F 9 7 Black Y
[ "here is one way to do it\n# do a left merge and rop the null rows\nout=(pd.merge(df, df2, \n how = 'left', \n left_on = ['ID1', 'ID2', 'ID3'], \n right_on = ['IDA', 'IDB', 'IDC'])\n .dropna()\n .drop(columns=['IDA', 'IDB','IDC']))\n\n\nID1 ID2 ID3 Color Flag\n0 A 1 1 White Y\n2 A 1 3 Green N\n4 D 4 5 Blue N\n6 F 9 7 Black Y\n\nalternately, if these are the only columns in your DF, you can convert these to string to make them of same type. that too is only for join and not affecting the DFs\n(pd.merge(df.astype(str), df2.astype(str), \n how = 'left', \n left_on = ['ID1', 'ID2', 'ID3'], \n right_on = ['IDA', 'IDB', 'IDC'])\n .dropna()\n .drop(columns=['IDA', 'IDB','IDC']))\n\n ID1 ID2 ID3 Color Flag\n0 A 1 1 White Y\n2 A 1 3 Green N\n4 D 4 5 Blue N\n6 F 9 7 Black Y\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074467734_dataframe_pandas_python.txt
Q: How to create list of urls from csv file to iterate? I am working on a webscrape code, he work fine, now I want replace the url, with a CSV file who containt thousand of url, it's like this : url1 url2 url3 . . .urlX my first line web scrape code is a basic : from bs4 import BeautifulSoup import requests from csv import writer url= "HERE THE URL FROM EACH LINE OF THE CSV FILE" page = requests.get(url) soup = BeautifulSoup(page.content, 'html.parser') how can i do for tell to python, to use the urls from the CSV, i think to do a dico, but i dont very know how i can do that, anyone have a solution please ? i know it's seams very simple for you, but it will be very usefull for me. A: If this is just a list of urls, you don't really need the csv module. But here is a solution assuming the url is in column 0 of the file. You want a csv reader, not writer, and then its a simple case of iterating the rows and taking action. from bs4 import BeautifulSoup import requests import csv with open("url-collection.csv", newline="") as fileobj: for row in csv.reader(fileobj): # TODO: add try/except to handle errors url = row[0] page = requests.get(url) soup = BeautifulSoup(page.content, 'html.parser')
How to create list of urls from csv file to iterate?
I am working on a webscrape code, he work fine, now I want replace the url, with a CSV file who containt thousand of url, it's like this : url1 url2 url3 . . .urlX my first line web scrape code is a basic : from bs4 import BeautifulSoup import requests from csv import writer url= "HERE THE URL FROM EACH LINE OF THE CSV FILE" page = requests.get(url) soup = BeautifulSoup(page.content, 'html.parser') how can i do for tell to python, to use the urls from the CSV, i think to do a dico, but i dont very know how i can do that, anyone have a solution please ? i know it's seams very simple for you, but it will be very usefull for me.
[ "If this is just a list of urls, you don't really need the csv module. But here is a solution assuming the url is in column 0 of the file. You want a csv reader, not writer, and then its a simple case of iterating the rows and taking action.\nfrom bs4 import BeautifulSoup\nimport requests\nimport csv\n\nwith open(\"url-collection.csv\", newline=\"\") as fileobj:\n for row in csv.reader(fileobj):\n # TODO: add try/except to handle errors\n url = row[0]\n page = requests.get(url)\n soup = BeautifulSoup(page.content, 'html.parser')\n\n" ]
[ 2 ]
[]
[]
[ "list", "loops", "python", "web_scraping" ]
stackoverflow_0074467955_list_loops_python_web_scraping.txt
Q: Setting up env variables in github actions workflow yml So I'm trying to set up an env variable, and print it to see if it actually is set up. Bellow is only an example on how I m trying to set the env vars, in realiy I'm trying to set secrets as env variables but it dosent work. I use the env variables in python scripts, but they are not being set os.getenv("key") returns none whatever I'd do name: test on: push: branches: [ "master" ] pull_request: branches: [ "master" ] permissions: id-token: write contents: read jobs: build: runs-on: ubuntu-latest env: SERVICE_NAME: daily monitoring steps: - name: "Setup env vars" run: echo "Test" env: TEST_VAR: "test variable value" - name: "Var is " run: echo "var is ${{ steps.outputs.TEST_VAR }}" For the above yml Setup env vars step logs: Run echo "Test" echo "Test" shell: /usr/bin/bash -e {0} env: SERVICE_NAME: daily monitoring TEST_VAR: test variable value Test Var is step logs: Run echo "var is " var is A: You're setting two environment variables (one globally, and one specific to the "Setup env vars" task. In both cases, they're working correctly: if you were to modify your "Setup env vars" task like this... steps: - name: "Setup env vars" run: | echo "$SERVICE_NAME" echo "$TEST_VAR" env: TEST_VAR: "test variable value" ...you would see the values you expect. But in your "Var is task"... - name: "Var is " run: echo "var is ${{ steps.outputs.TEST_VAR }}" ...you're not asking for an environment variable. You're asking for ${{ steps.outputs.TEST_VAR }}, and there are a few problems there: The format for that expression is steps.<step_id>.outputs.<name>, and you're not setting the id for any of your steps. You're not setting outputs in any of your tasks If you want to define outputs for a task, you need to follow these docs. That would look something like: jobs: build: runs-on: ubuntu-latest env: SERVICE_NAME: daily monitoring steps: - name: "Setup env vars" id: envvars run: | echo "TEST_VAR=$TEST_VAR" >> $GITHUB_OUTPUT echo "SERVICE_NAME=$SERVICE_NAME" >> $GITHUB_OUTPUT env: TEST_VAR: "test variable value" - name: "Var is " run: | echo "${{ steps.envvars.outputs.TEST_VAR }}" echo "${{ steps.envvars.outputs.SERVICE_NAME }}"
Setting up env variables in github actions workflow yml
So I'm trying to set up an env variable, and print it to see if it actually is set up. Bellow is only an example on how I m trying to set the env vars, in realiy I'm trying to set secrets as env variables but it dosent work. I use the env variables in python scripts, but they are not being set os.getenv("key") returns none whatever I'd do name: test on: push: branches: [ "master" ] pull_request: branches: [ "master" ] permissions: id-token: write contents: read jobs: build: runs-on: ubuntu-latest env: SERVICE_NAME: daily monitoring steps: - name: "Setup env vars" run: echo "Test" env: TEST_VAR: "test variable value" - name: "Var is " run: echo "var is ${{ steps.outputs.TEST_VAR }}" For the above yml Setup env vars step logs: Run echo "Test" echo "Test" shell: /usr/bin/bash -e {0} env: SERVICE_NAME: daily monitoring TEST_VAR: test variable value Test Var is step logs: Run echo "var is " var is
[ "You're setting two environment variables (one globally, and one specific to the \"Setup env vars\" task. In both cases, they're working correctly: if you were to modify your \"Setup env vars\" task like this...\n steps:\n - name: \"Setup env vars\"\n run: |\n echo \"$SERVICE_NAME\"\n echo \"$TEST_VAR\"\n env:\n TEST_VAR: \"test variable value\"\n\n...you would see the values you expect.\nBut in your \"Var is task\"...\n - name: \"Var is \"\n run: echo \"var is ${{ steps.outputs.TEST_VAR }}\"\n\n...you're not asking for an environment variable. You're asking for ${{ steps.outputs.TEST_VAR }}, and there are a few problems there:\n\nThe format for that expression is steps.<step_id>.outputs.<name>, and you're not setting the id for any of your steps.\n\nYou're not setting outputs in any of your tasks\n\n\nIf you want to define outputs for a task, you need to follow these docs. That would look something like:\njobs:\n\n build:\n\n runs-on: ubuntu-latest\n\n env:\n SERVICE_NAME: daily monitoring\n\n steps:\n - name: \"Setup env vars\"\n id: envvars\n run: |\n echo \"TEST_VAR=$TEST_VAR\" >> $GITHUB_OUTPUT\n echo \"SERVICE_NAME=$SERVICE_NAME\" >> $GITHUB_OUTPUT\n env:\n TEST_VAR: \"test variable value\"\n \n - name: \"Var is \"\n run: |\n echo \"${{ steps.envvars.outputs.TEST_VAR }}\"\n echo \"${{ steps.envvars.outputs.SERVICE_NAME }}\"\n\n" ]
[ 0 ]
[]
[]
[ "github_actions", "python" ]
stackoverflow_0074467400_github_actions_python.txt
Q: How to multiply a tensor row-wise by a vector in PyTorch? When I have a tensor m of shape [12, 10] and a vector s of scalars with shape [12], how can I multiply each row of m with the corresponding scalar in s? A: You need to add a corresponding singleton dimension: m * s[:, None] s[:, None] has size of (12, 1) when multiplying a (12, 10) tensor by a (12, 1) tensor pytorch knows to broadcast s along the second singleton dimension and perform the "element-wise" product correctly. A: You can broadcast a vector to a higher dimensional tensor like so: def row_mult(input, vector): extra_dims = (1,)*(input.dim()-1) return t * vector.view(-1, *extra_dims) A: Shai's answer works if you know the number of dimensions in advance and can hardcode the correct number of None's. This can be extended to extra dimentions is required: mask = (torch.rand(12) > 0.5).int() data = (torch.rand(12, 2, 3, 4)) result = data * mask[:,None,None,None] result.shape # torch.Size([12, 2, 3, 4]) mask[:,None,None,None].shape # torch.Size([12, 1, 1, 1]) If you are dealing with data of variable or unknown dimensions, then it may require manually extending mask to the correct shape mask = (torch.rand(12) > 0.5).int() while mask.dim() < data.dim(): mask.unsqueeze_(1) result = data * mask result.shape # torch.Size([12, 2, 3, 4]) mask.shape # torch.Size([12, 1, 1, 1]) This is a bit of an ugly solution, but it does work. There is probably a much more elegant way to correctly reshape the mask tensor inline for a variable number of dimensions A: A slighty hard to understand at first, but very powerful technique is to use Einstein summation: torch.einsum('i,ij->ij', s, m)
How to multiply a tensor row-wise by a vector in PyTorch?
When I have a tensor m of shape [12, 10] and a vector s of scalars with shape [12], how can I multiply each row of m with the corresponding scalar in s?
[ "You need to add a corresponding singleton dimension:\nm * s[:, None]\n\ns[:, None] has size of (12, 1) when multiplying a (12, 10) tensor by a (12, 1) tensor pytorch knows to broadcast s along the second singleton dimension and perform the \"element-wise\" product correctly.\n", "You can broadcast a vector to a higher dimensional tensor like so:\ndef row_mult(input, vector):\n extra_dims = (1,)*(input.dim()-1)\n return t * vector.view(-1, *extra_dims)\n\n", "Shai's answer works if you know the number of dimensions in advance and can hardcode the correct number of None's. This can be extended to extra dimentions is required:\nmask = (torch.rand(12) > 0.5).int() \ndata = (torch.rand(12, 2, 3, 4))\n\nresult = data * mask[:,None,None,None]\n\nresult.shape # torch.Size([12, 2, 3, 4])\nmask[:,None,None,None].shape # torch.Size([12, 1, 1, 1])\n\nIf you are dealing with data of variable or unknown dimensions, then it may require manually extending mask to the correct shape\nmask = (torch.rand(12) > 0.5).int()\nwhile mask.dim() < data.dim(): mask.unsqueeze_(1)\nresult = data * mask\n\nresult.shape # torch.Size([12, 2, 3, 4])\nmask.shape # torch.Size([12, 1, 1, 1])\n\nThis is a bit of an ugly solution, but it does work. There is probably a much more elegant way to correctly reshape the mask tensor inline for a variable number of dimensions\n", "A slighty hard to understand at first, but very powerful technique is to use Einstein summation:\ntorch.einsum('i,ij->ij', s, m)\n\n" ]
[ 32, 3, 0, 0 ]
[]
[]
[ "python", "pytorch", "scalar", "tensor" ]
stackoverflow_0053987906_python_pytorch_scalar_tensor.txt
Q: Return variables on the same line - Python I've got a for loop which iterates through three elements in a list: ["123", "456", "789"]. So, with the first iteration, it will perform a calculation on each digit within the first element, then add the digits back up. This repeats for the other two elements. The outputs are then converted into strings and outputted. for x in digits: if len(x) == 3: result1 = int(x[0]) * 8 ** (3 - 1) result2 = int(x[1]) * 8 ** (2 - 1) result3 = int(x[2]) * 8 ** (1 - 1) result = result1 + result2 + result decimal = [] decimal.append(result) string = " ".join(str(i) for i in decimal) return string Problem is, when outputting the results of the calculations, it outputs them on separate lines, but I need them to be on the same line. For example: 123 456 789 I need them to be like this: 123 456 789 I've tried putting the results of the calculations into a list, which is then converted to a string and outputted, but no dice - it still returns the values on separate lines instead of one. EDIT: I know how to do this using the print function: print(str(result), end=" ") But need to use the return function. Is there any way to do this? A: you can append each variable on a list using a_list.append('variable') and print it using ' '.join(a_list) whenever needed. A: The problem is that you are creating the list and returning the result inside the for instead of outside. You only make a single calculation and return. def foo(digits): results = [] for x in digits: if len(x) == 3: result1 = int(x[0]) * 8 ** (3 - 1) result2 = int(x[1]) * 8 ** (2 - 1) result3 = int(x[2]) * 8 ** (1 - 1) results.append(result1 + result2 + result3) return " ".join(str(i) for i in results)
Return variables on the same line - Python
I've got a for loop which iterates through three elements in a list: ["123", "456", "789"]. So, with the first iteration, it will perform a calculation on each digit within the first element, then add the digits back up. This repeats for the other two elements. The outputs are then converted into strings and outputted. for x in digits: if len(x) == 3: result1 = int(x[0]) * 8 ** (3 - 1) result2 = int(x[1]) * 8 ** (2 - 1) result3 = int(x[2]) * 8 ** (1 - 1) result = result1 + result2 + result decimal = [] decimal.append(result) string = " ".join(str(i) for i in decimal) return string Problem is, when outputting the results of the calculations, it outputs them on separate lines, but I need them to be on the same line. For example: 123 456 789 I need them to be like this: 123 456 789 I've tried putting the results of the calculations into a list, which is then converted to a string and outputted, but no dice - it still returns the values on separate lines instead of one. EDIT: I know how to do this using the print function: print(str(result), end=" ") But need to use the return function. Is there any way to do this?
[ "you can append each variable on a list using a_list.append('variable') and print it using ' '.join(a_list)\nwhenever needed.\n", "The problem is that you are creating the list and returning the result inside the for instead of outside. You only make a single calculation and return.\ndef foo(digits):\n results = []\n for x in digits:\n if len(x) == 3:\n result1 = int(x[0]) * 8 ** (3 - 1)\n result2 = int(x[1]) * 8 ** (2 - 1)\n result3 = int(x[2]) * 8 ** (1 - 1)\n results.append(result1 + result2 + result3)\n return \" \".join(str(i) for i in results)\n\n" ]
[ 0, 0 ]
[]
[]
[ "list", "python", "return" ]
stackoverflow_0074467867_list_python_return.txt
Q: Can i rewrite this code to make it work faster? Is it actually possible to make this run faster? I need to get half of all possible grids (all elements can be either -1 or 1) of size 4*Lx (for counting energies in Ising model). def get_grid(Lx): a = list() count = 0 t = list(product([1,-1], repeat=Lx)) for i in range(len(t)): for j in range(len(t)): for k in range(len(t)): for l in range(len(t)): count += 1 a.append([t[i], t[j], t[k], t[l]]) if count == 2**(Lx*4)/2: return np.array(a) Tried using numba, but that didn't work out. A: First of all, Numba does not like lists. If you want an efficient code, then you need to operate on arrays (except when you really do not know the size at runtime and estimating it is hard/slow). Here the size of the output array is already known so it is better to preallocate it and then fill it. Numba does not like much high-level features like generators, you should prefer using basic loops which are faster (as long as they are executed in a JITed function). The Cartesian product can be replaced by the efficient computation of an array based on the bits of an increasing integer. The whole computation is mainly memory-bound so it is better to use small integer datatypes like uint8 which take 4 times less space in RAM (and thus about 4 times faster to fill). Here is the resulting code: import numpy as np import numba as nb @nb.njit('int8[:,:,:](int64,)') def get_grid_numba(Lx): t = np.empty((2**Lx, Lx), dtype=np.int8) for i in range(2**Lx): for j in range(Lx): t[i, Lx-1-j] = 1 - 2 * ((i >> j) & 1) outSize = 2**(Lx*4 - 1) out = np.empty((outSize, 4, Lx), dtype=np.int8) cur = 0 for i in range(len(t)): for j in range(len(t)): for k in range(len(t)): for l in range(len(t)): out[cur, 0, :] = t[i, :] out[cur, 1, :] = t[j, :] out[cur, 2, :] = t[k, :] out[cur, 3, :] = t[l, :] cur += 1 if cur == outSize: return out return out For Lx=4, the initial code takes 66.8 ms while this Numba code takes 0.36 ms on my i5-9600KF processor. It is thus 185 times faster. Note that the size of the output array exponentially grows very quickly. For Lx=7, the output shape is (134217728, 4, 7) and it takes 3.5 GiB of RAM. The Numba code takes 2.47 s to generate it, that is 1.4 GiB/s. If this is not enough to you, then you can write specific implementation from Lx=1 to Lx=8, use loops for the out slice assignment and even use multiple threads for Lx>=5. For small arrays, you can pre-compute them once. This should be an order of magnitude faster.
Can i rewrite this code to make it work faster?
Is it actually possible to make this run faster? I need to get half of all possible grids (all elements can be either -1 or 1) of size 4*Lx (for counting energies in Ising model). def get_grid(Lx): a = list() count = 0 t = list(product([1,-1], repeat=Lx)) for i in range(len(t)): for j in range(len(t)): for k in range(len(t)): for l in range(len(t)): count += 1 a.append([t[i], t[j], t[k], t[l]]) if count == 2**(Lx*4)/2: return np.array(a) Tried using numba, but that didn't work out.
[ "First of all, Numba does not like lists. If you want an efficient code, then you need to operate on arrays (except when you really do not know the size at runtime and estimating it is hard/slow). Here the size of the output array is already known so it is better to preallocate it and then fill it. Numba does not like much high-level features like generators, you should prefer using basic loops which are faster (as long as they are executed in a JITed function). The Cartesian product can be replaced by the efficient computation of an array based on the bits of an increasing integer. The whole computation is mainly memory-bound so it is better to use small integer datatypes like uint8 which take 4 times less space in RAM (and thus about 4 times faster to fill). Here is the resulting code:\nimport numpy as np\nimport numba as nb\n\n@nb.njit('int8[:,:,:](int64,)')\ndef get_grid_numba(Lx):\n t = np.empty((2**Lx, Lx), dtype=np.int8)\n for i in range(2**Lx):\n for j in range(Lx):\n t[i, Lx-1-j] = 1 - 2 * ((i >> j) & 1)\n outSize = 2**(Lx*4 - 1)\n out = np.empty((outSize, 4, Lx), dtype=np.int8)\n cur = 0\n for i in range(len(t)):\n for j in range(len(t)):\n for k in range(len(t)):\n for l in range(len(t)):\n out[cur, 0, :] = t[i, :]\n out[cur, 1, :] = t[j, :]\n out[cur, 2, :] = t[k, :]\n out[cur, 3, :] = t[l, :]\n cur += 1\n if cur == outSize:\n return out\n return out\n\nFor Lx=4, the initial code takes 66.8 ms while this Numba code takes 0.36 ms on my i5-9600KF processor. It is thus 185 times faster.\n\nNote that the size of the output array exponentially grows very quickly. For Lx=7, the output shape is (134217728, 4, 7) and it takes 3.5 GiB of RAM. The Numba code takes 2.47 s to generate it, that is 1.4 GiB/s. If this is not enough to you, then you can write specific implementation from Lx=1 to Lx=8, use loops for the out slice assignment and even use multiple threads for Lx>=5. For small arrays, you can pre-compute them once. This should be an order of magnitude faster.\n" ]
[ 2 ]
[]
[]
[ "loops", "performance", "python" ]
stackoverflow_0074465691_loops_performance_python.txt
Q: How to extract and store x, z coordinates associated to a specific y coordinate on a UnstructuredGrid in Python? Starting from an image I did some processing (like thresholding) and I obtained its representation as UnstructuredGrid using VTK and PyVista. I would like to create an array of shape (n, 3) filled with x, y, z coordinates associated with a specific y coordinate of which I know the value, but not the position of corresponding cells in the UnstructuredGrid. I didn't understand too well what an UnstructuredGrid is so I don't know how to access and extract specific point values and coordinates. My goal is to create a list of coordinates of the front face of the image, that will be the input for a ray tracing algorithm. A: Most of the tooling you need is the UnstructuredGrid.extract_cells() filter, which lets you select cells based on a boolean mask array or integer indices. Building such a mask is fairly easy if you compare the y coordinates of cell centers with the specific value you are looking for: import pyvista as pv from pyvista.examples import download_puppy # example data mesh = download_puppy().threshold(80) # example point coordinates: use middle cell's center mesh_cell_centers = mesh.cell_centers() x0, y0, z0 = mesh_cell_centers.points[mesh.n_cells // 2, :] # plot example mesh def plot_puppy(): """Helper function to plot puppy twice.""" pl = pv.Plotter() pl.background_color = 'lightblue' pl.add_mesh(mesh, scalars='JPEGImage', rgb=True) pl.add_bounding_box() pl.camera.tight(padding=0.1) return pl plotter = plot_puppy() plotter.show() # extract cells with center y coordinate same as y0 indices_to_keep = mesh_cell_centers.points[:, 1] == y0 # boolean mask # for inexact matching we could use np.isclose(mesh_cell_centers.points[:, 1], y0) subset = mesh.extract_cells(indices_to_keep) # visualize extracted cell centers with points plotter = plot_puppy() plotter.add_points(subset, color='red', render_points_as_spheres=True) plotter.show() The x0, y0, z0 in my example are the coordinates of the "middle" cell left after thresholding, in your actual use case you need something like y0 = mesh_cell_centers.points[:, 1].min() if you want to match the cells with the lowest y coordinate. In any case calling cell_centers() is an important step to obtain the cell coordinates as points of an auxiliary mesh. Here's what the thresholded puppy looks like: The thresholding turns the original UniformGrid image into an UnstructuredGrid of scattered pixels. Starting from a UniformGrid is also useful for matching y coordinates exactly using ==, but even without this we could use np.isclose for approximate matching of float values (as I pointed out in a comment). Here's the second image, where red spheres are superimposed on the puppy at positions that were matched in the mesh subset (another UnstructuredGrid): This agrees with our expectations: we only see cells with a specific y coordinate, and we only see cells where the puppy is not transparent. Since you need the coordinates of the corresponding cells, you can just use subset.cell_centers().points for an (n, 3)-shaped array, or pick out the x and z coordinates with subset.cell_centers().points[:, [0, 2]] with shape (n, 2).
How to extract and store x, z coordinates associated to a specific y coordinate on a UnstructuredGrid in Python?
Starting from an image I did some processing (like thresholding) and I obtained its representation as UnstructuredGrid using VTK and PyVista. I would like to create an array of shape (n, 3) filled with x, y, z coordinates associated with a specific y coordinate of which I know the value, but not the position of corresponding cells in the UnstructuredGrid. I didn't understand too well what an UnstructuredGrid is so I don't know how to access and extract specific point values and coordinates. My goal is to create a list of coordinates of the front face of the image, that will be the input for a ray tracing algorithm.
[ "Most of the tooling you need is the UnstructuredGrid.extract_cells() filter, which lets you select cells based on a boolean mask array or integer indices. Building such a mask is fairly easy if you compare the y coordinates of cell centers with the specific value you are looking for:\nimport pyvista as pv\nfrom pyvista.examples import download_puppy\n\n# example data\nmesh = download_puppy().threshold(80)\n# example point coordinates: use middle cell's center\nmesh_cell_centers = mesh.cell_centers()\nx0, y0, z0 = mesh_cell_centers.points[mesh.n_cells // 2, :]\n\n# plot example mesh\ndef plot_puppy():\n \"\"\"Helper function to plot puppy twice.\"\"\"\n pl = pv.Plotter()\n pl.background_color = 'lightblue'\n pl.add_mesh(mesh, scalars='JPEGImage', rgb=True)\n pl.add_bounding_box()\n pl.camera.tight(padding=0.1)\n return pl\nplotter = plot_puppy()\nplotter.show()\n\n# extract cells with center y coordinate same as y0\nindices_to_keep = mesh_cell_centers.points[:, 1] == y0 # boolean mask\n# for inexact matching we could use np.isclose(mesh_cell_centers.points[:, 1], y0)\nsubset = mesh.extract_cells(indices_to_keep)\n\n# visualize extracted cell centers with points\nplotter = plot_puppy()\nplotter.add_points(subset, color='red', render_points_as_spheres=True)\nplotter.show()\n\nThe x0, y0, z0 in my example are the coordinates of the \"middle\" cell left after thresholding, in your actual use case you need something like y0 = mesh_cell_centers.points[:, 1].min() if you want to match the cells with the lowest y coordinate. In any case calling cell_centers() is an important step to obtain the cell coordinates as points of an auxiliary mesh.\nHere's what the thresholded puppy looks like:\n\nThe thresholding turns the original UniformGrid image into an UnstructuredGrid of scattered pixels. Starting from a UniformGrid is also useful for matching y coordinates exactly using ==, but even without this we could use np.isclose for approximate matching of float values (as I pointed out in a comment).\nHere's the second image, where red spheres are superimposed on the puppy at positions that were matched in the mesh subset (another UnstructuredGrid):\n\nThis agrees with our expectations: we only see cells with a specific y coordinate, and we only see cells where the puppy is not transparent.\nSince you need the coordinates of the corresponding cells, you can just use subset.cell_centers().points for an (n, 3)-shaped array, or pick out the x and z coordinates with subset.cell_centers().points[:, [0, 2]] with shape (n, 2).\n" ]
[ 0 ]
[]
[]
[ "coordinates", "python", "pyvista", "vtk" ]
stackoverflow_0074428715_coordinates_python_pyvista_vtk.txt
Q: RuntimeError: main thread is not in main loop When I call self.client = ThreadedClient() in my Python program, I get the error "RuntimeError: main thread is not in main loop" I have already done some googling, but I am making an error somehow ... Can someone please help me out? Full error: Exception in thread Thread-1: Traceback (most recent call last): File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 530, in __bootstrap_inner File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 483, in run File "/Users/Wim/Bird Swarm/bird_swarm.py", line 156, in workerGuiThread self.root.after(200, self.workerGuiThread) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 501, in after File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 1098, in _register RuntimeError: main thread is not in main loop Classes: class ThreadedClient(object): def __init__(self): self.queue = Queue.Queue( ) self.gui = GuiPart(self.queue, self.endApplication) self.root = self.gui.getRoot() self.running = True self.GuiThread = threading.Thread(target=self.workerGuiThread) self.GuiThread.start() def workerGuiThread(self): while self.running: self.root.after(200, self.workerGuiThread) self.gui.processIncoming( ) def endApplication(self): self.running = False def tc_TekenVogel(self,vogel): self.queue.put(vogel) class GuiPart(object): def __init__(self, queue, endCommand): self.queue = queue self.root = Tkinter.Tk() Tkinter.Canvas(self.root,width=g_groottescherm,height=g_groottescherm).pack() Tkinter.Button(self.root, text="Move 1 tick", command=self.doSomething).pack() self.vogelcords = {} #register of bird and their corresponding coordinates def getRoot(self): return self.root def doSomething(): pass #button action def processIncoming(self): while self.queue.qsize( ): try: msg = self.queue.get(0) try: vogel = msg l = vogel.geeflocatie() if self.vogelcords.has_key(vogel): cirkel = self.vogelcords[vogel] self.gcanvas.coords(cirkel,l.geefx()-g_groottevogel,l.geefy()-g_groottevogel,l.geefx()+g_groottevogel,l.geefy()+g_groottevogel) else: cirkel = self.gcanvas.create_oval(l.geefx()-g_groottevogel,l.geefy()-g_groottevogel,l.geefx()+g_groottevogel,l.geefy()+g_groottevogel,fill='red',outline='black',width=1) self.vogelcords[vogel] = cirkel self.gcanvas.update() except: print('Failed, was van het type %' % type(msg)) except Queue.Empty: pass A: You're running your main GUI loop in a thread besides the main thread. You cannot do this. The docs mention offhandedly in a few places that Tkinter is not quite thread safe, but as far as I know, never quite come out and say that you can only talk to Tk from the main thread. The reason is that the truth is somewhat complicated. Tkinter itself is thread-safe, but it's hard to use in a multithreaded way. The closest to official documentation on this seems to be this page: Q. Is there an alternative to Tkinter that is thread safe? Tkinter? Just run all UI code in the main thread, and let the writers write to a Queue object… (The sample code given isn't great, but it's enough to figure out what they're suggesting and do things properly.) There actually is a thread-safe alternative to Tkinter, mtTkinter. And its docs actually explain the situation pretty well: Although Tkinter is technically thread-safe (assuming Tk is built with --enable-threads), practically speaking there are still problems when used in multithreaded Python applications. The problems stem from the fact that the _tkinter module attempts to gain control of the main thread via a polling technique when processing calls from other threads. I believe this is exactly what you're seeing: your Tkinter code in Thread-1 is trying to peek into the main thread to find the main loop, and it's not there. So, here are some options: Do what the Tkinter docs recommend and use TkInter from the main thread. Possibly by moving your current main thread code into a worker thread. If you're using some other library that wants to take over the main thread (e.g., twisted), it may have a way to integrate with Tkinter, in which case you should use that. Use mkTkinter to solve the problem. Also, while I didn't find any exact duplicates of this question, there are a number of related questions on SO. See this question, this answer, and many more for more information. A: I know this is late, but I set my thread to a Daemon, and no exception was raised: t = threading.Thread(target=your_func) t.setDaemon(True) t.start() A: I found a way to solve it. it might look like a joke but you just should add plt.switch_backend('agg') A: Since all this did help my problem but did not solve it completely here is an additional thing to keep in mind: In my case I started off importing the pyplot library in many threads and using it there. After moving all the library calls to my main thread I still got that error. I did get rid of it by removing all import statements of that library in other files used in other threads. Even if they did not use the library the same error was caused by it. A: from tkinter import * from threading import Thread from time import sleep from random import randint class GUI(): def __init__(self): self.root = Tk() self.root.geometry("200x200") self.btn = Button(self.root,text="lauch") self.btn.pack(expand=True) self.btn.config(command=self.action) def run(self): self.root.mainloop() def add(self,string,buffer): while self.txt: msg = str(randint(1,100))+string+"\n" self.txt.insert(END,msg) sleep(0.5) def reset_lbl(self): self.txt = None self.second.destroy() def action(self): self.second = Toplevel() self.second.geometry("100x100") self.txt = Text(self.second) self.txt.pack(expand=True,fill="both") self.t = Thread(target=self.add,args=("new",None)) self.t.setDaemon(True) self.t.start() self.second.protocol("WM_DELETE_WINDOW",self.reset_lbl) a = GUI() a.run() maybe this example would help someone. A: Write it at the end: root.mainloop() Of course, in place of root should be the name of your Tk object if it is not root. A: You cannot modify your main GIU from another thread you need to send event to the main GUI in order to avoid exceptions Use window.write_event_value instead, this method allows you to send events from your threads you can take a look a this too: window.perform_long_operation A: I know this question was asked a long time ago, but I wanted to tell you how I solved it. In my case, I have a program that sends and receives messages through the serial port and uses the TKinter library. If I do: while (True): #more code here window.update_idletasks() window.update() The code crashes when a thread tries to access a tkinter function. But, if I do this: window.mainloop() All the threads execute normaly. Hope this helps someone.
RuntimeError: main thread is not in main loop
When I call self.client = ThreadedClient() in my Python program, I get the error "RuntimeError: main thread is not in main loop" I have already done some googling, but I am making an error somehow ... Can someone please help me out? Full error: Exception in thread Thread-1: Traceback (most recent call last): File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 530, in __bootstrap_inner File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 483, in run File "/Users/Wim/Bird Swarm/bird_swarm.py", line 156, in workerGuiThread self.root.after(200, self.workerGuiThread) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 501, in after File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 1098, in _register RuntimeError: main thread is not in main loop Classes: class ThreadedClient(object): def __init__(self): self.queue = Queue.Queue( ) self.gui = GuiPart(self.queue, self.endApplication) self.root = self.gui.getRoot() self.running = True self.GuiThread = threading.Thread(target=self.workerGuiThread) self.GuiThread.start() def workerGuiThread(self): while self.running: self.root.after(200, self.workerGuiThread) self.gui.processIncoming( ) def endApplication(self): self.running = False def tc_TekenVogel(self,vogel): self.queue.put(vogel) class GuiPart(object): def __init__(self, queue, endCommand): self.queue = queue self.root = Tkinter.Tk() Tkinter.Canvas(self.root,width=g_groottescherm,height=g_groottescherm).pack() Tkinter.Button(self.root, text="Move 1 tick", command=self.doSomething).pack() self.vogelcords = {} #register of bird and their corresponding coordinates def getRoot(self): return self.root def doSomething(): pass #button action def processIncoming(self): while self.queue.qsize( ): try: msg = self.queue.get(0) try: vogel = msg l = vogel.geeflocatie() if self.vogelcords.has_key(vogel): cirkel = self.vogelcords[vogel] self.gcanvas.coords(cirkel,l.geefx()-g_groottevogel,l.geefy()-g_groottevogel,l.geefx()+g_groottevogel,l.geefy()+g_groottevogel) else: cirkel = self.gcanvas.create_oval(l.geefx()-g_groottevogel,l.geefy()-g_groottevogel,l.geefx()+g_groottevogel,l.geefy()+g_groottevogel,fill='red',outline='black',width=1) self.vogelcords[vogel] = cirkel self.gcanvas.update() except: print('Failed, was van het type %' % type(msg)) except Queue.Empty: pass
[ "You're running your main GUI loop in a thread besides the main thread. You cannot do this.\nThe docs mention offhandedly in a few places that Tkinter is not quite thread safe, but as far as I know, never quite come out and say that you can only talk to Tk from the main thread. The reason is that the truth is somewhat complicated. Tkinter itself is thread-safe, but it's hard to use in a multithreaded way. The closest to official documentation on this seems to be this page:\n\nQ. Is there an alternative to Tkinter that is thread safe?\nTkinter?\nJust run all UI code in the main thread, and let the writers write to a Queue object…\n\n(The sample code given isn't great, but it's enough to figure out what they're suggesting and do things properly.)\nThere actually is a thread-safe alternative to Tkinter, mtTkinter. And its docs actually explain the situation pretty well:\n\nAlthough Tkinter is technically thread-safe (assuming Tk is built with --enable-threads), practically speaking there are still problems when used in multithreaded Python applications. The problems stem from the fact that the _tkinter module attempts to gain control of the main thread via a polling technique when processing calls from other threads.\n\nI believe this is exactly what you're seeing: your Tkinter code in Thread-1 is trying to peek into the main thread to find the main loop, and it's not there.\nSo, here are some options:\n\nDo what the Tkinter docs recommend and use TkInter from the main thread. Possibly by moving your current main thread code into a worker thread.\nIf you're using some other library that wants to take over the main thread (e.g., twisted), it may have a way to integrate with Tkinter, in which case you should use that.\nUse mkTkinter to solve the problem.\n\nAlso, while I didn't find any exact duplicates of this question, there are a number of related questions on SO. See this question, this answer, and many more for more information.\n", "I know this is late, but I set my thread to a Daemon, and no exception was raised:\nt = threading.Thread(target=your_func)\nt.setDaemon(True)\nt.start()\n\n", "I found a way to solve it.\nit might look like a joke but you just should add\nplt.switch_backend('agg')\n\n", "Since all this did help my problem but did not solve it completely here is an additional thing to keep in mind:\nIn my case I started off importing the pyplot library in many threads and using it there. After moving all the library calls to my main thread I still got that error.\nI did get rid of it by removing all import statements of that library in other files used in other threads. Even if they did not use the library the same error was caused by it.\n", "from tkinter import *\nfrom threading import Thread\nfrom time import sleep\nfrom random import randint\n\nclass GUI():\n\n def __init__(self):\n self.root = Tk()\n self.root.geometry(\"200x200\")\n\n self.btn = Button(self.root,text=\"lauch\")\n self.btn.pack(expand=True)\n\n self.btn.config(command=self.action)\n\n def run(self):\n self.root.mainloop()\n\n def add(self,string,buffer):\n while self.txt:\n msg = str(randint(1,100))+string+\"\\n\"\n self.txt.insert(END,msg)\n sleep(0.5)\n\n def reset_lbl(self):\n self.txt = None\n self.second.destroy()\n\n def action(self):\n self.second = Toplevel()\n self.second.geometry(\"100x100\")\n self.txt = Text(self.second)\n self.txt.pack(expand=True,fill=\"both\")\n\n self.t = Thread(target=self.add,args=(\"new\",None))\n self.t.setDaemon(True)\n self.t.start()\n\n self.second.protocol(\"WM_DELETE_WINDOW\",self.reset_lbl)\n\na = GUI()\na.run()\n\nmaybe this example would help someone.\n", "Write it at the end:\nroot.mainloop()\n\nOf course, in place of root should be the name of your Tk object if it is not root.\n", "You cannot modify your main GIU from another thread\nyou need to send event to the main GUI in order to avoid exceptions\nUse window.write_event_value instead, this method allows you to send events from your threads\nyou can take a look a this too: window.perform_long_operation\n", "I know this question was asked a long time ago, but I wanted to tell you how I solved it. In my case, I have a program that sends and receives messages through the serial port and uses the TKinter library.\nIf I do:\nwhile (True):\n #more code here\n window.update_idletasks()\n window.update()\n\nThe code crashes when a thread tries to access a tkinter function.\nBut, if I do this:\nwindow.mainloop()\n\nAll the threads execute normaly.\nHope this helps someone.\n" ]
[ 55, 19, 17, 4, 2, 2, 1, 0 ]
[]
[]
[ "multithreading", "python", "tkinter" ]
stackoverflow_0014694408_multithreading_python_tkinter.txt
Q: how to install Mediapipe when calling Python script with Streamlit? I am trying to call a python script using Streamlit. I have a requirements.txt file that installs the libraries used in the script: ... mediapipe==0.8.10.1 ... All the libraries successfully download but the Mediapipe library does not install no matter what I do and gives me this error: ERROR: No matching distribution found for mediapipe==0.8.10.1 Please help with the installation of the Mediapipe. Thanks! A: This has happened to me as well. I fixed by using a VPN(Try https://1.1.1.1). Ironically that package was getting blocked by my ISP.
how to install Mediapipe when calling Python script with Streamlit?
I am trying to call a python script using Streamlit. I have a requirements.txt file that installs the libraries used in the script: ... mediapipe==0.8.10.1 ... All the libraries successfully download but the Mediapipe library does not install no matter what I do and gives me this error: ERROR: No matching distribution found for mediapipe==0.8.10.1 Please help with the installation of the Mediapipe. Thanks!
[ "This has happened to me as well. I fixed by using a VPN(Try https://1.1.1.1).\nIronically that package was getting blocked by my ISP.\n" ]
[ 0 ]
[]
[]
[ "mediapipe", "python", "streamlit" ]
stackoverflow_0074468009_mediapipe_python_streamlit.txt
Q: Control after changing format date Pandas I want to control a date after changing date format df["Date start"] = pd.to_datetime(df["Date start"]) df["Date start"] = df["Date start"].dt.strftime('%d/%m/%Y') df = df[df["Date start"] > "01/01/2022"] But I do have an error like this UserWarning: Parsing '16/04/2012' in DD/MM/YYYY format. Provide format or specify infer_datetime_format=True for consistent parsing. df["Date start"] = pd.to_datetime(fd["Date start"],infer_datetime_format=True) How can I fix it? Tried all these methods: df["date start"] = pd.to_datetime(df["date start"], format='%d/%m/%Y', dayfirst=True).dt.strftime('%d/%m/%Y') df["date start"] = pd.to_datetime(df["date start"], dayfirst=True) df["date start"] = df["date start"].dt.strftime("%d/%m/%Y") A: for date comparison, make the date as yyyy-mm-dd (without or without hyphens). This ensures the dates being compared are in chronological order when you have day as first in a date when comparing. it will group all months and all year with the day 1, before day 2 and so on # inline conversion of date to yyyy-mm-dd (default) format and then comparing # and filtering the result using LOC df.loc[pd.to_datetime(df["date start"], dayfirst=True) > "2022-01-01"]
Control after changing format date Pandas
I want to control a date after changing date format df["Date start"] = pd.to_datetime(df["Date start"]) df["Date start"] = df["Date start"].dt.strftime('%d/%m/%Y') df = df[df["Date start"] > "01/01/2022"] But I do have an error like this UserWarning: Parsing '16/04/2012' in DD/MM/YYYY format. Provide format or specify infer_datetime_format=True for consistent parsing. df["Date start"] = pd.to_datetime(fd["Date start"],infer_datetime_format=True) How can I fix it? Tried all these methods: df["date start"] = pd.to_datetime(df["date start"], format='%d/%m/%Y', dayfirst=True).dt.strftime('%d/%m/%Y') df["date start"] = pd.to_datetime(df["date start"], dayfirst=True) df["date start"] = df["date start"].dt.strftime("%d/%m/%Y")
[ "for date comparison, make the date as yyyy-mm-dd (without or without hyphens). This ensures the dates being compared are in chronological order\nwhen you have day as first in a date when comparing. it will group all months and all year with the day 1, before day 2 and so on\n# inline conversion of date to yyyy-mm-dd (default) format and then comparing\n# and filtering the result using LOC\ndf.loc[pd.to_datetime(df[\"date start\"], dayfirst=True) > \"2022-01-01\"]\n\n\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074468077_pandas_python.txt
Q: sqlite error: no such column: (and whatever the arguement is) so i am making a discord bot with using sqlite and discord.py thats the command that gives the error: @bot.command() @commands.has_permissions(administrator=True) async def set_ip(ctx, arg=None): if arg == None: await ctx.send("You must type the IP adress next to the command!") elif arg.endswith('.aternos.me') == False: await ctx.send('IP must end with .aternos.me') elif ctx.guild.id == None: await ctx.send("This is a guild-only command!") else: ipas = None id = ctx.guild.id conn.execute(f'''DROP TABLE IF EXISTS guild_{id}''') conn.execute(f'''CREATE TABLE IF NOT EXISTS guild_{id} ( ip TEXT NOT NULL )''') conn.execute(f'''INSERT INTO guild_{id} ("ip") VALUES ({arg})''') cursor = conn.execute(f'''SELECT ip FROM guild_{id}''') for row in cursor: ipas = row[0] if ipas == None: await ctx.send("Failed to set IP!") conn.execute(f'''DROP TABLE IF EXISTS guild_{id}''') else: await ctx.send(f"Your guild ip is now -> {ipas}") print("An ip has been set!") i tried to create a table that if not exist with name of guild_(and the discord server id) and check that it is set or not Error is: OperationalError: no such column: (the arg) sqlite gives this error and i cant figure it out, please help me. A: You are passing the column name as a string: conn.execute(f'''INSERT INTO guild_{id} ("ip") VALUES ({arg})''') should be: conn.execute(f'''INSERT INTO guild_{id} (ip) VALUES ({arg})''')
sqlite error: no such column: (and whatever the arguement is)
so i am making a discord bot with using sqlite and discord.py thats the command that gives the error: @bot.command() @commands.has_permissions(administrator=True) async def set_ip(ctx, arg=None): if arg == None: await ctx.send("You must type the IP adress next to the command!") elif arg.endswith('.aternos.me') == False: await ctx.send('IP must end with .aternos.me') elif ctx.guild.id == None: await ctx.send("This is a guild-only command!") else: ipas = None id = ctx.guild.id conn.execute(f'''DROP TABLE IF EXISTS guild_{id}''') conn.execute(f'''CREATE TABLE IF NOT EXISTS guild_{id} ( ip TEXT NOT NULL )''') conn.execute(f'''INSERT INTO guild_{id} ("ip") VALUES ({arg})''') cursor = conn.execute(f'''SELECT ip FROM guild_{id}''') for row in cursor: ipas = row[0] if ipas == None: await ctx.send("Failed to set IP!") conn.execute(f'''DROP TABLE IF EXISTS guild_{id}''') else: await ctx.send(f"Your guild ip is now -> {ipas}") print("An ip has been set!") i tried to create a table that if not exist with name of guild_(and the discord server id) and check that it is set or not Error is: OperationalError: no such column: (the arg) sqlite gives this error and i cant figure it out, please help me.
[ "You are passing the column name as a string:\nconn.execute(f'''INSERT INTO guild_{id} (\"ip\") VALUES ({arg})''')\n\nshould be:\nconn.execute(f'''INSERT INTO guild_{id} (ip) VALUES ({arg})''')\n\n" ]
[ 0 ]
[]
[]
[ "discord.py", "python", "python_3.x", "sql", "sqlite" ]
stackoverflow_0074467950_discord.py_python_python_3.x_sql_sqlite.txt
Q: Converting json data in dataframe I'm analyzing club participation. Getting data as json through url request. This is the json I get and load with json_loads: df = [{"club_id":"1234", "sum_totalparticipation":227, "level":1, "idsubdatatable":1229, "segment": "club_id==1234;eventName==national%2520participation,eventName==local%2520partipation,eventName==global%2520participation", "subtable":[{"label":"national participation", "sum_events_totalevents":105,"level":2},{"label":"local participation","sum_events_totalevents":100,"level":2},{"label":"global_participation","sum_events_totalevents":22,"level":2}]}] when I use json_normalize, this is how df looks: so, specific participations are aggregated and only sum is available, and I need them flatten, with global/national/local participation in separate rows. Can you help by providing code? A: If you want to see the details of the subtable field (which is another list of dictionaries itself), then you can do the following: ... df = pd.DataFrame(*data) for i in range(len(df)): df.loc[i, 'label'] = df.loc[i, 'subtable']['label'] df.loc[i, 'sum_events_totalevents'] = df.loc[i, 'subtable']['sum_events_totalevents'] df.loc[i, 'sublevel'] = int(df.loc[i, 'subtable']['level']) Note: I purposely renamed the level field inside the subtable as sublevel, the reason is there is already a column named level in the dataframe, and thus avoiding name conflict
Converting json data in dataframe
I'm analyzing club participation. Getting data as json through url request. This is the json I get and load with json_loads: df = [{"club_id":"1234", "sum_totalparticipation":227, "level":1, "idsubdatatable":1229, "segment": "club_id==1234;eventName==national%2520participation,eventName==local%2520partipation,eventName==global%2520participation", "subtable":[{"label":"national participation", "sum_events_totalevents":105,"level":2},{"label":"local participation","sum_events_totalevents":100,"level":2},{"label":"global_participation","sum_events_totalevents":22,"level":2}]}] when I use json_normalize, this is how df looks: so, specific participations are aggregated and only sum is available, and I need them flatten, with global/national/local participation in separate rows. Can you help by providing code?
[ "If you want to see the details of the subtable field (which is another list of dictionaries itself), then you can do the following:\n...\n \ndf = pd.DataFrame(*data)\n\nfor i in range(len(df)):\n df.loc[i, 'label'] = df.loc[i, 'subtable']['label']\n df.loc[i, 'sum_events_totalevents'] = df.loc[i, 'subtable']['sum_events_totalevents']\n df.loc[i, 'sublevel'] = int(df.loc[i, 'subtable']['level'])\n\nNote: I purposely renamed the level field inside the subtable as sublevel, the reason is there is already a column named level in the dataframe, and thus avoiding name conflict\n" ]
[ 0 ]
[ "The data you show us after your json.load looks quite dirty, some quotes look missing, especially after \"segment\":\"club_id==1234\", and the ; separator at the beginning does not fit the keys separator inside a dict.\nNonetheless, let's consider the data you get is supposed to look like this (a list of dictionaries):\nimport pandas as pd\n\ndata = [{\"club_id\":\"1234\", \"sum_totalparticipation\":227, \"level\":1, \"idsubdatatable\":1229, \"segment\": \"club_id==1234;eventName==national%2520participation,eventName==local%2520partipation,eventName==global%2520participation\",\n\"subtable\":[{\"label\":\"national participation\", \"sum_events_totalevents\":105,\"level\":2},{\"label\":\"local participation\",\"sum_events_totalevents\":100,\"level\":2},{\"label\":\"global_participation\",\"sum_events_totalevents\":22,\"level\":2}]}]\n\nYou can see the result with rows separated by unpacking your data inside a DataFrame:\ndf = pd.DataFrame(*data)\n\nThis is the table we get:\n\nHope this helps\n" ]
[ -1 ]
[ "json", "pandas", "python" ]
stackoverflow_0074467178_json_pandas_python.txt
Q: Pythona, create 2D Numpy array and append data vertically I have an SQL query that creates an array with 9 entries, I want to create a table with Numpy and append data as rows The following code gives me an error ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 2 dimension(s) and the array at index 1 has 1 dimension(s) How can initialize the numpy array correctly, and append the array as a row to the table, sql_query = "select top 100 Passed, f.ID, Yld, Line, Location, Type, Name, ErrorID, Site from dw.table1 f join dw.table2 d on f.ID = d.ID where Type like '%test%'" table_array = numpy.empty((0, 9)) cursor.execute(sql_query) row = cursor.fetchone() while row: table_array = numpy.append(table_array, row, axis=0) row = cursor.fetchone()
Pythona, create 2D Numpy array and append data vertically
I have an SQL query that creates an array with 9 entries, I want to create a table with Numpy and append data as rows The following code gives me an error ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 2 dimension(s) and the array at index 1 has 1 dimension(s) How can initialize the numpy array correctly, and append the array as a row to the table, sql_query = "select top 100 Passed, f.ID, Yld, Line, Location, Type, Name, ErrorID, Site from dw.table1 f join dw.table2 d on f.ID = d.ID where Type like '%test%'" table_array = numpy.empty((0, 9)) cursor.execute(sql_query) row = cursor.fetchone() while row: table_array = numpy.append(table_array, row, axis=0) row = cursor.fetchone()
[]
[]
[ "Turns out I cannot simply append SQL row as a numpy array, had to fix it this way:\ntable_array = numpy.append(\n table_array, numpy.array([[row[0], row[1], row[2], row[3], row[4], row[5], row[6], row[7], row[8]]]), axis=0)\n\n", "Don't repeatedly append to a numpy array in a loop. Since numpy arrays are contiguous blocks of memory, that requires the entire array to be copied over to new memory. This operation can slow down your loop significantly. Instead, pre-allocate the amount of memory you will need, and assign values when you get them from your query. When you do this, you can simply set one row of your array to the row you get from your query (assuming the type of all elements of your query is compatible with your numpy array)\ntable_array = numpy.empty((100, 9)) # You know you're getting the top 100\nrow_num = 0\ncursor.execute(sql_query)\nrow = cursor.fetchone()\nwhile row and row_num < table_array.shape[0]:\n table_array[row_num, :] = row\n row = cursor.fetchone()\n row_num += 1\n\n# If you want, you can then slice away the unfilled rows.\ntable_array = table_array[:row_num, :]\n\n" ]
[ -1, -1 ]
[ "python", "sql" ]
stackoverflow_0074467833_python_sql.txt
Q: Strip file names from files and open recursively? Saving previous strings? - PYTHON I have a question about reading in a .txt rile and taking the string from inside to be used later on in the code. If I have a file called 'file0.txt' and it contains: file1.txt file2.txt The rest of the files either contain more string file names or are empty. How can I save both of these strings for later use. What I attempted to do was: infile = open(file, 'r') line = infile.readline() line.split('\n') But that returned the following: ['file1.txt', ''] I understand that readline only reads one line, but I thought that by splitting it by the return key it would also grab the next file string. I am attempting to simulate a file tree or to show which files are connected together, but as it stands now it is only going through the first file string in each .txt file. Currently my output is: File 1 crawled. File 3 crawled. Dead end reached. My hope was that instead of just recursivley crawling the first file it would go through the entire web, but that goes back to my issue of not giving the program the second file name in the first place. I'm not asking for a specific answer, just a push in the right direction on how to better handle the strings from the files and be able to store both of them instead of 1. My current code is pretty ugly, but hopefully it gets the idea across, I will just post it for reference to what I'm trying to accomplish. def crawl(file): infile = open(file, 'r') line = infile.readline() print(line.split('\n')) if 'file1.txt' in line: print('File 1 crawled.') return crawl('file1.txt') if 'file2.txt' in line: print('File 2 crawled.') return crawl('file2.txt') if 'file3.txt' in line: print('File 3 crawled.') return crawl('file3.txt') if 'file4.txt' in line: print('File 4 crawled.') return crawl('file4.txt') if 'file5.txt' in line: print('File 5 crawled.') return crawl('file5.txt') #etc...etc... else: print('Dead end reached.') Outside the function: file = 'file0.txt' crawl(file) A: Using read() or readlines() will help. e.g. infile = open(file, 'r') lines = infile.readlines() print list(lines) gives ['file1.txt\n', 'file2.txt\n'] or infile = open(file, 'r') lines = infile.read() print list(lines.split('\n')) gives ['file1.txt', 'file2.txt'] A: Readline only gets one line from the file so it has a newline at the end. What you want is file.read() which will give you the whole file as a single string. Split that using newline and you should have what you need. Also remember that you need to save the list of lines as a new variable i.e. assign to your line.split('\n') action. You could also just use readlines which will get a list of lines from the file. A: change readline to readlines. and no need to split(\n), its already a list. here is a tutorial you should read A: I prepared file0.txt with two files in it, file1.txt, with one file in it, plus file2.txt and file3.txt, which contained no data. Note, this won't extract values already in the list def get_files(current_file, files=[]): # Initialize file list with previous values, or intial value new_files = [] if not files: new_files = [current_file] else: new_files = files # Read files not already in list, to the list with open(current_file, "r") as f_in: for new_file in f_in.read().splitlines(): if new_file not in new_files: new_files.append(new_file.strip()) # Do we need to recurse? cur_file_index = new_files.index(current_file) if cur_file_index < len(new_files) - 1: next_file = new_files[cur_file_index + 1] # Recurse get_files(next_file, new_files) # We're done return new_files initial_file = "file0.txt" files = get_files(initial_file) print(files) Returns: ['file0.txt', 'file1.txt', 'file2.txt', 'file3.txt'] file0.txt file1.txt file2.txt file1.txt file3.txt file2.txt and file3.txt were blank Edits: Added .strip() for safety, and added the contents of the data files so this can be replicated.
Strip file names from files and open recursively? Saving previous strings? - PYTHON
I have a question about reading in a .txt rile and taking the string from inside to be used later on in the code. If I have a file called 'file0.txt' and it contains: file1.txt file2.txt The rest of the files either contain more string file names or are empty. How can I save both of these strings for later use. What I attempted to do was: infile = open(file, 'r') line = infile.readline() line.split('\n') But that returned the following: ['file1.txt', ''] I understand that readline only reads one line, but I thought that by splitting it by the return key it would also grab the next file string. I am attempting to simulate a file tree or to show which files are connected together, but as it stands now it is only going through the first file string in each .txt file. Currently my output is: File 1 crawled. File 3 crawled. Dead end reached. My hope was that instead of just recursivley crawling the first file it would go through the entire web, but that goes back to my issue of not giving the program the second file name in the first place. I'm not asking for a specific answer, just a push in the right direction on how to better handle the strings from the files and be able to store both of them instead of 1. My current code is pretty ugly, but hopefully it gets the idea across, I will just post it for reference to what I'm trying to accomplish. def crawl(file): infile = open(file, 'r') line = infile.readline() print(line.split('\n')) if 'file1.txt' in line: print('File 1 crawled.') return crawl('file1.txt') if 'file2.txt' in line: print('File 2 crawled.') return crawl('file2.txt') if 'file3.txt' in line: print('File 3 crawled.') return crawl('file3.txt') if 'file4.txt' in line: print('File 4 crawled.') return crawl('file4.txt') if 'file5.txt' in line: print('File 5 crawled.') return crawl('file5.txt') #etc...etc... else: print('Dead end reached.') Outside the function: file = 'file0.txt' crawl(file)
[ "Using read() or readlines() will help. e.g.\ninfile = open(file, 'r')\nlines = infile.readlines()\nprint list(lines)\n\ngives\n['file1.txt\\n', 'file2.txt\\n']\n\nor \ninfile = open(file, 'r')\nlines = infile.read()\nprint list(lines.split('\\n'))\n\ngives\n['file1.txt', 'file2.txt']\n\n", "Readline only gets one line from the file so it has a newline at the end. What you want is file.read() which will give you the whole file as a single string. Split that using newline and you should have what you need. Also remember that you need to save the list of lines as a new variable i.e. assign to your line.split('\\n') action. You could also just use readlines which will get a list of lines from the file.\n", "change readline to readlines. and no need to split(\\n), its already a list.\nhere is a tutorial you should read\n", "I prepared file0.txt with two files in it, file1.txt, with one file in it, plus file2.txt and file3.txt, which contained no data. Note, this won't extract values already in the list\ndef get_files(current_file, files=[]):\n # Initialize file list with previous values, or intial value\n new_files = []\n if not files:\n new_files = [current_file]\n else:\n new_files = files\n # Read files not already in list, to the list\n with open(current_file, \"r\") as f_in:\n for new_file in f_in.read().splitlines():\n if new_file not in new_files:\n new_files.append(new_file.strip())\n # Do we need to recurse?\n cur_file_index = new_files.index(current_file)\n if cur_file_index < len(new_files) - 1:\n next_file = new_files[cur_file_index + 1]\n # Recurse\n get_files(next_file, new_files)\n # We're done\n return new_files\n \n\ninitial_file = \"file0.txt\"\nfiles = get_files(initial_file)\nprint(files)\n\nReturns: ['file0.txt', 'file1.txt', 'file2.txt', 'file3.txt']\n\nfile0.txt\nfile1.txt\nfile2.txt\n\nfile1.txt\nfile3.txt\n\nfile2.txt and file3.txt were blank\nEdits: Added .strip() for safety, and added the contents of the data files so this can be replicated.\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "function", "python", "recursion" ]
stackoverflow_0022009254_function_python_recursion.txt
Q: How to run Python commands in VS Code Terminal I have installed latest Python Latest Python 3 (python-3.11.0-amd64) and latest VS Code (VSCodeUserSetup-x64-1.73.1). I also installed the Python Extension for Visual Studio Code. I have selected the interpreter as: But I am not able to run any Python Command in the terminal even as an administrator. No error and no complain but just empty line: Why is this happening? A: Has Python been added to your path? There's a checkbox for this in the dialogue when you install it, but if you didn't check that box, then its possible that Python hasn't been added to your path. system properties edit path A: Have you checked python path? system properties--->environment variables--->system variables--->path
How to run Python commands in VS Code Terminal
I have installed latest Python Latest Python 3 (python-3.11.0-amd64) and latest VS Code (VSCodeUserSetup-x64-1.73.1). I also installed the Python Extension for Visual Studio Code. I have selected the interpreter as: But I am not able to run any Python Command in the terminal even as an administrator. No error and no complain but just empty line: Why is this happening?
[ "Has Python been added to your path? There's a checkbox for this in the dialogue when you install it, but if you didn't check that box, then its possible that Python hasn't been added to your path.\nsystem properties\n\nedit path\n", "Have you checked python path?\n\nsystem properties--->environment variables--->system variables--->path\n\n" ]
[ 2, 1 ]
[]
[]
[ "python", "visual_studio_code" ]
stackoverflow_0074468107_python_visual_studio_code.txt
Q: xarray mask outside list of coordinates I have an Xarray DataArray with values over rectangular 2D grid, and a list of points (pairs of coordinate values) from an arbitrary subset of that grid contained in a pandas Dataframe. How do I mask out values (i.e. set equal to NaN) in the DataArray whose grid coordinates do not appear in the list points? e.g. consider the DataArray In [35]: da = xr.DataArray(data=np.random.randint(10, size=(5, 6)), coords={"x": np.linspace(0, 10, 5), "y": np.linspace(0, 12, 6)}) In [36]: da Out[36]: <xarray.DataArray (x: 5, y: 6)> array([[6, 0, 2, 3, 9, 8], [7, 6, 4, 8, 5, 8], [7, 4, 4, 5, 4, 7], [9, 8, 8, 1, 8, 0], [8, 9, 4, 3, 3, 6]]) Coordinates: * x (x) float64 0.0 2.5 5.0 7.5 10.0 * y (y) float64 0.0 2.4 4.8 7.2 9.6 12.0 and dataframe In [44]: coords = pd.DataFrame([[2.5, 4.8], [2.5, 7.2], [5.0, 12.0], [7.5, 7.2], [10.0, 2.4]], columns=["x_coord", "y_coord"]) In [45]: coords Out[45]: x_coord y_coord 0 2.5 4.8 1 2.5 7.2 2 5.0 12.0 3 7.5 7.2 4 10.0 2.4 then I expect the output to be: Out[84]: <xarray.DataArray (x: 5, y: 6)> array([[nan, nan, nan, nan, nan, nan], [nan, nan, 4., 8., nan, nan], [nan, nan, nan, nan, nan, 7.], [nan, nan, nan, 1., nan, nan], [ 8., nan, nan, nan, nan, nan]]) Coordinates: * x (x) float64 0.0 2.5 5.0 7.5 10.0 * y (y) float64 0.0 2.4 4.8 7.2 9.6 12.0 A: You can convert the dataframe to an xarray object by setting the x and y coordinates as the index, then using to_xarray. since you don't have any data left, I'll just assign a "flag" variable: In [20]: flag = ( ...: coords.assign(flag=1) ...: .set_index(["x_coord", "y_coord"]) ...: .flag ...: .to_xarray() ...: .fillna(0) ...: .rename({"x_coord": "x", "y_coord": "y"}) ...: ) In [21]: flag Out[21]: <xarray.DataArray 'flag' (x: 4, y: 4)> array([[0., 1., 1., 0.], [0., 0., 0., 1.], [0., 0., 1., 0.], [1., 0., 0., 0.]]) Coordinates: * x (x) float64 2.5 5.0 7.5 10.0 * y (y) float64 2.4 4.8 7.2 12.0 To deal with floating point issues, I'll reindex the array to ensure the dims are consistent with the arrays: In [22]: flag = flag.reindex(x=da.x, y=da.y, method="nearest", tolerance=1e-9, fill_value=0) In [23]: flag Out[23]: <xarray.DataArray 'flag' (x: 5, y: 6)> array([[0., 0., 0., 0., 0., 0.], [0., 0., 1., 1., 0., 0.], [0., 0., 0., 0., 0., 1.], [0., 0., 0., 1., 0., 0.], [0., 1., 0., 0., 0., 0.]]) Coordinates: * x (x) float64 0.0 2.5 5.0 7.5 10.0 * y (y) float64 0.0 2.4 4.8 7.2 9.6 12.0 This is now the same shape as your array and can can be used as a mask: In [24]: da.where(flag) Out[24]: <xarray.DataArray (x: 5, y: 6)> array([[nan, nan, nan, nan, nan, nan], [nan, nan, 7., 0., nan, nan], [nan, nan, nan, nan, nan, 5.], [nan, nan, nan, 8., nan, nan], [nan, 8., nan, nan, nan, nan]]) Coordinates: * x (x) float64 0.0 2.5 5.0 7.5 10.0 * y (y) float64 0.0 2.4 4.8 7.2 9.6 12.0 Just in case it's useful, if you wanted to do the opposite; that is, extract the values from the DataArray at the points given in your dataframe, you could use xarray's advanced indexing rules to pull specific points out of the array using DataArray indexers: In [28]: da.sel( ...: x=coords.x_coord.to_xarray(), ...: y=coords.y_coord.to_xarray(), ...: method="nearest", ...: tolerance=1e-9, # use a (low) tolerance to handle floating-point error ...: ) Out[28]: <xarray.DataArray (index: 5)> array([7, 0, 5, 8, 8]) Coordinates: x (index) float64 2.5 2.5 5.0 7.5 10.0 y (index) float64 4.8 7.2 12.0 7.2 2.4 * index (index) int64 0 1 2 3 4
xarray mask outside list of coordinates
I have an Xarray DataArray with values over rectangular 2D grid, and a list of points (pairs of coordinate values) from an arbitrary subset of that grid contained in a pandas Dataframe. How do I mask out values (i.e. set equal to NaN) in the DataArray whose grid coordinates do not appear in the list points? e.g. consider the DataArray In [35]: da = xr.DataArray(data=np.random.randint(10, size=(5, 6)), coords={"x": np.linspace(0, 10, 5), "y": np.linspace(0, 12, 6)}) In [36]: da Out[36]: <xarray.DataArray (x: 5, y: 6)> array([[6, 0, 2, 3, 9, 8], [7, 6, 4, 8, 5, 8], [7, 4, 4, 5, 4, 7], [9, 8, 8, 1, 8, 0], [8, 9, 4, 3, 3, 6]]) Coordinates: * x (x) float64 0.0 2.5 5.0 7.5 10.0 * y (y) float64 0.0 2.4 4.8 7.2 9.6 12.0 and dataframe In [44]: coords = pd.DataFrame([[2.5, 4.8], [2.5, 7.2], [5.0, 12.0], [7.5, 7.2], [10.0, 2.4]], columns=["x_coord", "y_coord"]) In [45]: coords Out[45]: x_coord y_coord 0 2.5 4.8 1 2.5 7.2 2 5.0 12.0 3 7.5 7.2 4 10.0 2.4 then I expect the output to be: Out[84]: <xarray.DataArray (x: 5, y: 6)> array([[nan, nan, nan, nan, nan, nan], [nan, nan, 4., 8., nan, nan], [nan, nan, nan, nan, nan, 7.], [nan, nan, nan, 1., nan, nan], [ 8., nan, nan, nan, nan, nan]]) Coordinates: * x (x) float64 0.0 2.5 5.0 7.5 10.0 * y (y) float64 0.0 2.4 4.8 7.2 9.6 12.0
[ "You can convert the dataframe to an xarray object by setting the x and y coordinates as the index, then using to_xarray. since you don't have any data left, I'll just assign a \"flag\" variable:\nIn [20]: flag = (\n ...: coords.assign(flag=1)\n ...: .set_index([\"x_coord\", \"y_coord\"])\n ...: .flag\n ...: .to_xarray()\n ...: .fillna(0)\n ...: .rename({\"x_coord\": \"x\", \"y_coord\": \"y\"})\n ...: )\n\nIn [21]: flag\nOut[21]:\n<xarray.DataArray 'flag' (x: 4, y: 4)>\narray([[0., 1., 1., 0.],\n [0., 0., 0., 1.],\n [0., 0., 1., 0.],\n [1., 0., 0., 0.]])\nCoordinates:\n * x (x) float64 2.5 5.0 7.5 10.0\n * y (y) float64 2.4 4.8 7.2 12.0\n\nTo deal with floating point issues, I'll reindex the array to ensure the dims are consistent with the arrays:\nIn [22]: flag = flag.reindex(x=da.x, y=da.y, method=\"nearest\", tolerance=1e-9, fill_value=0)\n\nIn [23]: flag\nOut[23]:\n<xarray.DataArray 'flag' (x: 5, y: 6)>\narray([[0., 0., 0., 0., 0., 0.],\n [0., 0., 1., 1., 0., 0.],\n [0., 0., 0., 0., 0., 1.],\n [0., 0., 0., 1., 0., 0.],\n [0., 1., 0., 0., 0., 0.]])\nCoordinates:\n * x (x) float64 0.0 2.5 5.0 7.5 10.0\n * y (y) float64 0.0 2.4 4.8 7.2 9.6 12.0\n\nThis is now the same shape as your array and can can be used as a mask:\nIn [24]: da.where(flag)\nOut[24]:\n<xarray.DataArray (x: 5, y: 6)>\narray([[nan, nan, nan, nan, nan, nan],\n [nan, nan, 7., 0., nan, nan],\n [nan, nan, nan, nan, nan, 5.],\n [nan, nan, nan, 8., nan, nan],\n [nan, 8., nan, nan, nan, nan]])\nCoordinates:\n * x (x) float64 0.0 2.5 5.0 7.5 10.0\n * y (y) float64 0.0 2.4 4.8 7.2 9.6 12.0\n\nJust in case it's useful, if you wanted to do the opposite; that is, extract the values from the DataArray at the points given in your dataframe, you could use xarray's advanced indexing rules to pull specific points out of the array using DataArray indexers:\nIn [28]: da.sel(\n ...: x=coords.x_coord.to_xarray(),\n ...: y=coords.y_coord.to_xarray(),\n ...: method=\"nearest\",\n ...: tolerance=1e-9, # use a (low) tolerance to handle floating-point error\n ...: )\n\nOut[28]:\n<xarray.DataArray (index: 5)>\narray([7, 0, 5, 8, 8])\nCoordinates:\n x (index) float64 2.5 2.5 5.0 7.5 10.0\n y (index) float64 4.8 7.2 12.0 7.2 2.4\n * index (index) int64 0 1 2 3 4\n\n" ]
[ 1 ]
[]
[]
[ "netcdf", "python", "python_xarray" ]
stackoverflow_0074467216_netcdf_python_python_xarray.txt
Q: Bar plot not appearing normally using df.plot.bar() I have the following code. I am trying to loop through variables (dataframe columns) and create bar plots. I have attached below an example of a graph for the column newerdf['age']. I believe this should produce 3 bars (one for each option - male (value = 1), female (value = 2), other(value = 3)). However, the graph below does not seem to show this. I would be so grateful for a helping hand as to where I am going wrong! listedvariables = ['age','gender-quantised','hours_of_sleep','frequency_of_alarm_usage','nap_duration_mins','frequency_of_naps','takes_naps_yes/no','highest_education_level_acheived','hours_exercise_per_week_in_last_6_months','drink_alcohol_yes/no','drink_caffeine_yes/no','hours_exercise_per_week','hours_of_phone_use_per_week','video_game_phone/tablet_hours_per_week','video_game_all_devices_hours_per_week'] for i in range(0,len(listedvariables)): fig = newerdf[[listedvariables[i]]].plot.bar(figsize=(30,20)) fig.tick_params(axis='x',labelsize=40) fig.tick_params(axis='y',labelsize=40) plt.tight_layout() newerdf['age'] age 0 2 1 2 2 4 3 3 5 2 ... ... 911 2 912 1 913 2 914 3 915 2 A: The data are not grouped into categories yet, so a value count is needed before calling the plotting method: for var in listedvariables: ax = newerdf[var].value_counts().plot.bar(figsize=(30,20)) ax.tick_params(axis='x', labelsize=40) ax.tick_params(axis='y', labelsize=40) plt.tight_layout() plt.show()
Bar plot not appearing normally using df.plot.bar()
I have the following code. I am trying to loop through variables (dataframe columns) and create bar plots. I have attached below an example of a graph for the column newerdf['age']. I believe this should produce 3 bars (one for each option - male (value = 1), female (value = 2), other(value = 3)). However, the graph below does not seem to show this. I would be so grateful for a helping hand as to where I am going wrong! listedvariables = ['age','gender-quantised','hours_of_sleep','frequency_of_alarm_usage','nap_duration_mins','frequency_of_naps','takes_naps_yes/no','highest_education_level_acheived','hours_exercise_per_week_in_last_6_months','drink_alcohol_yes/no','drink_caffeine_yes/no','hours_exercise_per_week','hours_of_phone_use_per_week','video_game_phone/tablet_hours_per_week','video_game_all_devices_hours_per_week'] for i in range(0,len(listedvariables)): fig = newerdf[[listedvariables[i]]].plot.bar(figsize=(30,20)) fig.tick_params(axis='x',labelsize=40) fig.tick_params(axis='y',labelsize=40) plt.tight_layout() newerdf['age'] age 0 2 1 2 2 4 3 3 5 2 ... ... 911 2 912 1 913 2 914 3 915 2
[ "The data are not grouped into categories yet, so a value count is needed before calling the plotting method:\nfor var in listedvariables: \n ax = newerdf[var].value_counts().plot.bar(figsize=(30,20))\n ax.tick_params(axis='x', labelsize=40)\n ax.tick_params(axis='y', labelsize=40)\n plt.tight_layout()\n plt.show()\n\n" ]
[ 1 ]
[]
[]
[ "bar_chart", "dataframe", "jupyter_notebook", "pandas", "python" ]
stackoverflow_0074466260_bar_chart_dataframe_jupyter_notebook_pandas_python.txt
Q: If statement in for loop, index out of range with one additional condition? I'm trying to create an if statement in a for loop to look at an element in a list and compare it to the next element with enumerate(). arr = ["NORTH", "SOUTH", "SOUTH", "EAST", "WEST", "NORTH", "WEST"] liste = [] for idx,i in enumerate(arr): if (i == 'NORTH' and arr[idx+1] == 'SOUTH') or (i == 'SOUTH' and arr[idx+1] == 'NORTH') or (i == 'EAST' and arr[idx+1] == 'WEST') or (i == 'WEST' and arr[idx+1] == 'EAST'): liste.append(idx) liste.append(idx+1) print(liste) expected [0, 1, 3, 4] got --------------------------------------------------------------------------- IndexError Traceback (most recent call last) Input In [44], in <cell line: 2>() 1 liste = [] 2 for idx,i in enumerate(arr): ----> 3 if (i == 'NORTH' and arr[idx+1] == 'SOUTH') or (i == 'SOUTH' and arr[idx+1] == 'NORTH') or (i == 'EAST' and arr[idx+1] == 'WEST') or (i == 'WEST' and arr[idx+1] == 'EAST'): 4 liste.append(idx) 5 liste.append(idx+1) IndexError: list index out of range but if the original if is (without the last "or") for idx,i in enumerate(arr): if (i == 'NORTH' and arr[idx+1] == 'SOUTH') or (i == 'SOUTH' and arr[idx+1] == 'NORTH') or (i == 'EAST' and arr[idx+1] == 'WEST'): it goes through fine and gives the expected outcome (this case has no reversed west/east anyway, but I of course want it to work for random lists). What's up with that? It's a codewars problem and I can come up with a workaround myself, so I don't want the solution to the whole problem, I'm just trying to understand why it's behaving this way. EDIT: I just realized it's because the last element in the list is actually "WEST" so then it's checking idx+1 which for the last element is not in the list. In that case I would be interested in how to avoid that! A: When you reach your last element, idx+1 is out of bounds for the array. You would want to keep that in mind in your logic. One way to resolve is to enumerate over the length-1 of the array so it is never trying to access an index out of bounds. For example: arr = ["NORTH", "SOUTH", "SOUTH", "EAST", "WEST", "NORTH", "WEST"] liste = [] for idx,i in enumerate(arr[:len(arr)-1]): if (i == 'NORTH' and arr[idx+1] == 'SOUTH') or (i == 'SOUTH' and arr[idx+1] == 'NORTH') or (i == 'EAST' and arr[idx+1] == 'WEST') or (i == 'WEST' and arr[idx+1] == 'EAST'): liste.append(idx) liste.append(idx+1) print(liste) outputs [0,1,3,4] A: When your loop arrives at the last element, it checks for 'WEST' and then tries to check the next element, but there is none. This gives the index error. A quick fix may be to append an element like 'END', for which none of the checks succeeds. Thus, no element past the end gets checked. A: The behavior you're describing is due to short-circuit evaluation. This condition: i == 'WEST' and arr[idx+1] == 'EAST' will check if i is equal to "WEST". Only when this check passes will it compare if arr[idx + 1] is equal to "EAST" -- at which point an IndexError occurs. The other three conditions in your if statement behave analogously, which is why they don't cause an IndexError to occur. You should create a slice so that you don't check past the last element: from itertools import islice for idx, i in enumerate(islice(arr, len(arr) - 1)): ...
If statement in for loop, index out of range with one additional condition?
I'm trying to create an if statement in a for loop to look at an element in a list and compare it to the next element with enumerate(). arr = ["NORTH", "SOUTH", "SOUTH", "EAST", "WEST", "NORTH", "WEST"] liste = [] for idx,i in enumerate(arr): if (i == 'NORTH' and arr[idx+1] == 'SOUTH') or (i == 'SOUTH' and arr[idx+1] == 'NORTH') or (i == 'EAST' and arr[idx+1] == 'WEST') or (i == 'WEST' and arr[idx+1] == 'EAST'): liste.append(idx) liste.append(idx+1) print(liste) expected [0, 1, 3, 4] got --------------------------------------------------------------------------- IndexError Traceback (most recent call last) Input In [44], in <cell line: 2>() 1 liste = [] 2 for idx,i in enumerate(arr): ----> 3 if (i == 'NORTH' and arr[idx+1] == 'SOUTH') or (i == 'SOUTH' and arr[idx+1] == 'NORTH') or (i == 'EAST' and arr[idx+1] == 'WEST') or (i == 'WEST' and arr[idx+1] == 'EAST'): 4 liste.append(idx) 5 liste.append(idx+1) IndexError: list index out of range but if the original if is (without the last "or") for idx,i in enumerate(arr): if (i == 'NORTH' and arr[idx+1] == 'SOUTH') or (i == 'SOUTH' and arr[idx+1] == 'NORTH') or (i == 'EAST' and arr[idx+1] == 'WEST'): it goes through fine and gives the expected outcome (this case has no reversed west/east anyway, but I of course want it to work for random lists). What's up with that? It's a codewars problem and I can come up with a workaround myself, so I don't want the solution to the whole problem, I'm just trying to understand why it's behaving this way. EDIT: I just realized it's because the last element in the list is actually "WEST" so then it's checking idx+1 which for the last element is not in the list. In that case I would be interested in how to avoid that!
[ "When you reach your last element, idx+1 is out of bounds for the array. You would want to keep that in mind in your logic. One way to resolve is to enumerate over the length-1 of the array so it is never trying to access an index out of bounds.\nFor example:\narr = [\"NORTH\", \"SOUTH\", \"SOUTH\", \"EAST\", \"WEST\", \"NORTH\", \"WEST\"]\nliste = []\nfor idx,i in enumerate(arr[:len(arr)-1]):\n if (i == 'NORTH' and arr[idx+1] == 'SOUTH') or (i == 'SOUTH' and arr[idx+1] == 'NORTH') or (i == 'EAST' and arr[idx+1] == 'WEST') or (i == 'WEST' and arr[idx+1] == 'EAST'):\n liste.append(idx)\n liste.append(idx+1)\nprint(liste)\n\noutputs [0,1,3,4]\n", "When your loop arrives at the last element, it checks for 'WEST' and then tries to check the next element, but there is none. This gives the index error.\nA quick fix may be to append an element like 'END', for which none of the checks succeeds. Thus, no element past the end gets checked.\n", "The behavior you're describing is due to short-circuit evaluation. This condition:\ni == 'WEST' and arr[idx+1] == 'EAST'\n\nwill check if i is equal to \"WEST\". Only when this check passes will it compare if arr[idx + 1] is equal to \"EAST\" -- at which point an IndexError occurs. The other three conditions in your if statement behave analogously, which is why they don't cause an IndexError to occur.\nYou should create a slice so that you don't check past the last element:\nfrom itertools import islice\nfor idx, i in enumerate(islice(arr, len(arr) - 1)):\n ...\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "enumerate", "for_loop", "list", "python" ]
stackoverflow_0074468140_enumerate_for_loop_list_python.txt
Q: Convert a number to Excel’s base 26 OK, I'm stuck on something seemingly simple. I am trying to convert a number to base 26 (ie. 3 = C, 27 = AA, ect.). I am guessing my problem has to do with not having a 0 in the model? Not sure. But if you run the code, you will see that numbers 52, 104 and especially numbers around 676 are really weird. Can anyone give me a hint as to what I am not seeing? I will appreciate it. (just in case to avoid wasting your time, @ is ascii char 64, A is ascii char 65) def toBase26(x): x = int(x) if x == 0: return '0' if x < 0: negative = True x = abs(x) else: negative = False def digit_value (val): return str(chr(int(val)+64)) digits = 1 base26 = "" while 26**digits < x: digits += 1 while digits != 0: remainder = x%(26**(digits-1)) base26 += digit_value((x-remainder)/(26**(digits-1))) x = remainder digits -= 1 if negative: return '-'+base26 else: return base26 import io with io.open('numbers.txt','w') as f: for i in range(1000): f.write('{} is {}\n'.format(i,toBase26(i))) So, I found a temporary workaround by making a couple of changes to my function (the 2 if statements in the while loop). My columns are limited to 500 anyways, and the following change to the function seems to do the trick up to x = 676, so I am satisfied. However if any of you find a general solution for any x (may be my code may help), would be pretty cool! def toBase26(x): x = int(x) if x == 0: return '0' if x < 0: negative = True x = abs(x) else: negative = False def digit_value (val): return str(chr(int(val)+64)) digits = 1 base26 = "" while 26**digits < x: digits += 1 while digits != 0: remainder = x%(26**(digits-1)) if remainder == 0: remainder += 26**(digits-1) if digits == 1: remainder -= 1 base26 += digit_value((x-remainder)/(26**(digits-1))) x = remainder digits -= 1 if negative: return '-'+base26 else: return base26 A: The problem when converting to Excel’s “base 26” is that for Excel, a number ZZ is actually 26 * 26**1 + 26 * 26**0 = 702 while normal base 26 number systems would make a 1 * 26**2 + 1 * 26**1 + 0 * 26**0 = 702 (BBA) out of that. So we cannot use the usual ways here to convert these numbers. Instead, we have to roll our own divmod_excel function: def divmod_excel(n): a, b = divmod(n, 26) if b == 0: return a - 1, b + 26 return a, b With that, we can create a to_excel function: import string def to_excel(num): chars = [] while num > 0: num, d = divmod_excel(num) chars.append(string.ascii_uppercase[d - 1]) return ''.join(reversed(chars)) For the other direction, this is a bit simpler from functools import reduce def from_excel(chars): return reduce(lambda r, x: r * 26 + x + 1, map(string.ascii_uppercase.index, chars), 0) This set of functions does the right thing: >>> to_excel(26) 'Z' >>> to_excel(27) 'AA' >>> to_excel(702) 'ZZ' >>> to_excel(703) 'AAA' >>> from_excel('Z') 26 >>> from_excel('AA') 27 >>> from_excel('ZZ') 702 >>> from_excel('AAA') 703 And we can actually confirm that they work correctly opposite of each other by simply checking whether we can chain them to reproduce the original number: for i in range(100000): if from_excel(to_excel(i)) != i: print(i) # (prints nothing) A: Sorry, I wrote this in Pascal and know no Python function NumeralBase26Excel(numero: Integer): string; var algarismo: Integer; begin Result := ''; numero := numero - 1; if numero >= 0 then begin algarismo := numero mod 26; if numero < 26 then Result := Chr(Ord('A') + algarismo) else Result := NumeralBase26Excel(numero div 26) + Chr(Ord('A') + algarismo); end; end; A: Simplest way, if you do not want to do it yourself: from openpyxl.utils import get_column_letter proper_excel_column_letter = get_column_letter(5) # will equal "E" A: You can do it in one line (with line continuations for easier reading). Written here in VBA: Function sColumn(nColumn As Integer) As String ' Return Excel column letter for a given column number. ' 703 = 26^2 + 26^1 + 26^0 ' 64 = Asc("A") - 1 sColumn = _ IIf(nColumn < 703, "", Chr(Int((Int((nColumn - 1) / 26) - 1) / 26) + 64)) & _ IIf(nColumn < 27, "", Chr( ((Int((nColumn - 1) / 26) - 1) Mod 26) + 1 + 64)) & _ Chr( ( (nColumn - 1) Mod 26) + 1 + 64) End Function Or you can do it in the the worksheet: =if(<col num> < 703, "", char(floor((floor((<col num> - 1) / 26, 1) - 1) / 26, 1) + 64)) & if(<col num> < 27, "", char(mod( floor((<col num> - 1) / 26, 1) - 1, 26) + 1 + 64)) & char(mod( <col num> - 1 , 26) + 1 + 64) I've also posted the inverse operation done similarly.
Convert a number to Excel’s base 26
OK, I'm stuck on something seemingly simple. I am trying to convert a number to base 26 (ie. 3 = C, 27 = AA, ect.). I am guessing my problem has to do with not having a 0 in the model? Not sure. But if you run the code, you will see that numbers 52, 104 and especially numbers around 676 are really weird. Can anyone give me a hint as to what I am not seeing? I will appreciate it. (just in case to avoid wasting your time, @ is ascii char 64, A is ascii char 65) def toBase26(x): x = int(x) if x == 0: return '0' if x < 0: negative = True x = abs(x) else: negative = False def digit_value (val): return str(chr(int(val)+64)) digits = 1 base26 = "" while 26**digits < x: digits += 1 while digits != 0: remainder = x%(26**(digits-1)) base26 += digit_value((x-remainder)/(26**(digits-1))) x = remainder digits -= 1 if negative: return '-'+base26 else: return base26 import io with io.open('numbers.txt','w') as f: for i in range(1000): f.write('{} is {}\n'.format(i,toBase26(i))) So, I found a temporary workaround by making a couple of changes to my function (the 2 if statements in the while loop). My columns are limited to 500 anyways, and the following change to the function seems to do the trick up to x = 676, so I am satisfied. However if any of you find a general solution for any x (may be my code may help), would be pretty cool! def toBase26(x): x = int(x) if x == 0: return '0' if x < 0: negative = True x = abs(x) else: negative = False def digit_value (val): return str(chr(int(val)+64)) digits = 1 base26 = "" while 26**digits < x: digits += 1 while digits != 0: remainder = x%(26**(digits-1)) if remainder == 0: remainder += 26**(digits-1) if digits == 1: remainder -= 1 base26 += digit_value((x-remainder)/(26**(digits-1))) x = remainder digits -= 1 if negative: return '-'+base26 else: return base26
[ "The problem when converting to Excel’s “base 26” is that for Excel, a number ZZ is actually 26 * 26**1 + 26 * 26**0 = 702 while normal base 26 number systems would make a 1 * 26**2 + 1 * 26**1 + 0 * 26**0 = 702 (BBA) out of that. So we cannot use the usual ways here to convert these numbers.\nInstead, we have to roll our own divmod_excel function:\ndef divmod_excel(n):\n a, b = divmod(n, 26)\n if b == 0:\n return a - 1, b + 26\n return a, b\n\nWith that, we can create a to_excel function:\nimport string\ndef to_excel(num):\n chars = []\n while num > 0:\n num, d = divmod_excel(num)\n chars.append(string.ascii_uppercase[d - 1])\n return ''.join(reversed(chars))\n\nFor the other direction, this is a bit simpler\nfrom functools import reduce\ndef from_excel(chars):\n return reduce(lambda r, x: r * 26 + x + 1, map(string.ascii_uppercase.index, chars), 0)\n\n\nThis set of functions does the right thing:\n>>> to_excel(26)\n'Z'\n>>> to_excel(27)\n'AA'\n>>> to_excel(702)\n'ZZ'\n>>> to_excel(703)\n'AAA'\n>>> from_excel('Z')\n26\n>>> from_excel('AA')\n27\n>>> from_excel('ZZ')\n702\n>>> from_excel('AAA')\n703\n\nAnd we can actually confirm that they work correctly opposite of each other by simply checking whether we can chain them to reproduce the original number:\nfor i in range(100000):\n if from_excel(to_excel(i)) != i:\n print(i)\n# (prints nothing)\n\n", "Sorry, I wrote this in Pascal and know no Python\nfunction NumeralBase26Excel(numero: Integer): string;\nvar\n algarismo: Integer;\nbegin\n Result := '';\n numero := numero - 1;\n if numero >= 0 then\n begin\n algarismo := numero mod 26;\n if numero < 26 then\n Result := Chr(Ord('A') + algarismo)\n else\n Result := NumeralBase26Excel(numero div 26) + Chr(Ord('A') + algarismo);\n end;\nend;\n\n", "Simplest way, if you do not want to do it yourself:\nfrom openpyxl.utils import get_column_letter\n\nproper_excel_column_letter = get_column_letter(5)\n# will equal \"E\"\n\n", "You can do it in one line (with line continuations for easier reading). Written here in VBA:\nFunction sColumn(nColumn As Integer) As String\n\n' Return Excel column letter for a given column number.\n\n' 703 = 26^2 + 26^1 + 26^0\n' 64 = Asc(\"A\") - 1\n\nsColumn = _\n IIf(nColumn < 703, \"\", Chr(Int((Int((nColumn - 1) / 26) - 1) / 26) + 64)) & _\n IIf(nColumn < 27, \"\", Chr( ((Int((nColumn - 1) / 26) - 1) Mod 26) + 1 + 64)) & _\n Chr( ( (nColumn - 1) Mod 26) + 1 + 64)\n\nEnd Function\n\nOr you can do it in the the worksheet:\n=if(<col num> < 703, \"\", char(floor((floor((<col num> - 1) / 26, 1) - 1) / 26, 1) + 64)) & \n if(<col num> < 27, \"\", char(mod( floor((<col num> - 1) / 26, 1) - 1, 26) + 1 + 64)) & \n char(mod( <col num> - 1 , 26) + 1 + 64)\n\n\nI've also posted the inverse operation done similarly.\n" ]
[ 14, 0, 0, 0 ]
[]
[]
[ "base", "numbers", "python" ]
stackoverflow_0048983939_base_numbers_python.txt
Q: In a bash script, parsing arguments as variables does not work when calling a python script I am trying to parse arguments as bash variables when calling a python script. #!/bin/bash var="--circular True" python python_script.py --input input_file "$var" I got this error: python_script.py: error: unrecognized arguments: --circular True However, if I don't use a variable for the --circular flag, the script runs well without errors. #!/bin/bash python python_script.py --input input_file --circular True P.S. I am using the python module argparse A: "$var" expands the value of variable var to a single shell word. The result is not subject to word splitting in the context in which the epxansion takes place. This is why with ... var="--circular True" python python_script.py --input input_file "$var" ... Python sees a single argument --circular True instead of multiple arguments. In this case, you could just leave out the quotes around the expansion of $var on the python command line, but this is a poor idea in general. The best way to store multiple command-line arguments in a variable in Bash is to use an array: var=(--circular True) python python_script.py --input input_file "${var[@]}" In particular, the form "${var[@]}" expands to all the elements of array var, each element as a separate word. This handles cases that are not handled by using a scalar var and expanding it unquoted does not. For example, with ... var=(--label 'plausible example') python python_script.py --input input_file "${var[@]}" ... two arguments are derived from the variable, --label and plausible example. However, with this alternative ... var="--label 'plausible example'" python python_script.py --input input_file $var ... the arguments derived from var are --label and 'plausible example' (note the extra quotes), and with this ... var="--label plausible example" python python_script.py --input input_file $var ... three arguments are derived from var: --label, plausible, and example. This becomes even more important when the arguments are derived from user input, so that you do not have the opportunity to tune your usage to the specific data.
In a bash script, parsing arguments as variables does not work when calling a python script
I am trying to parse arguments as bash variables when calling a python script. #!/bin/bash var="--circular True" python python_script.py --input input_file "$var" I got this error: python_script.py: error: unrecognized arguments: --circular True However, if I don't use a variable for the --circular flag, the script runs well without errors. #!/bin/bash python python_script.py --input input_file --circular True P.S. I am using the python module argparse
[ "\"$var\" expands the value of variable var to a single shell word. The result is not subject to word splitting in the context in which the epxansion takes place. This is why with ...\n\nvar=\"--circular True\"\npython python_script.py --input input_file \"$var\"\n\n\n... Python sees a single argument --circular True instead of multiple arguments.\nIn this case, you could just leave out the quotes around the expansion of $var on the python command line, but this is a poor idea in general. The best way to store multiple command-line arguments in a variable in Bash is to use an array:\nvar=(--circular True)\npython python_script.py --input input_file \"${var[@]}\"\n\nIn particular, the form \"${var[@]}\" expands to all the elements of array var, each element as a separate word. This handles cases that are not handled by using a scalar var and expanding it unquoted does not. For example, with ...\nvar=(--label 'plausible example')\npython python_script.py --input input_file \"${var[@]}\"\n\n... two arguments are derived from the variable, --label and plausible example. However, with this alternative ...\nvar=\"--label 'plausible example'\"\npython python_script.py --input input_file $var\n\n... the arguments derived from var are --label and 'plausible example' (note the extra quotes), and with this ...\nvar=\"--label plausible example\"\npython python_script.py --input input_file $var\n\n... three arguments are derived from var: --label, plausible, and example.\nThis becomes even more important when the arguments are derived from user input, so that you do not have the opportunity to tune your usage to the specific data.\n" ]
[ 2 ]
[]
[]
[ "bash", "parsing", "python", "variables" ]
stackoverflow_0074467160_bash_parsing_python_variables.txt
Q: How to use relative import to import a function from a script in the parent folder How can I import a function in a script, where the function is defined in the parent's child folder? In the following folder structure, I would like to use root_folder/ utils_folder: __init__.py helper_functions.py (where Function_A is defined) module_A_folder: Script_A.py (Function_A will be imported and used here) Script_A.py needs to use Function_A. The __init__.py of utils_folder is defined: from .helper_functions import Function_A When I tried to import Function_A in Script_A.py like this: from ..utils import Function_A I received the following error: ImportError: attempted relative import with no known parent package How can I make this work? I am with Python 3.9 x64. A: Try: from utils_folder.helper_functions import Function_A
How to use relative import to import a function from a script in the parent folder
How can I import a function in a script, where the function is defined in the parent's child folder? In the following folder structure, I would like to use root_folder/ utils_folder: __init__.py helper_functions.py (where Function_A is defined) module_A_folder: Script_A.py (Function_A will be imported and used here) Script_A.py needs to use Function_A. The __init__.py of utils_folder is defined: from .helper_functions import Function_A When I tried to import Function_A in Script_A.py like this: from ..utils import Function_A I received the following error: ImportError: attempted relative import with no known parent package How can I make this work? I am with Python 3.9 x64.
[ "Try:\nfrom utils_folder.helper_functions import Function_A\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074468241_python_python_3.x.txt
Q: Search in multiple models in Django I have many different models in Django and I want to search for a keyword in all of them. For example, if you searched "blah", I want to show all of the products with "blah", all of the invoices with "blah", and finally all of the other models with "blah". I can develop a view and search in all of the models separately, but it's not a good idea. What is the best practice for this situation? A: I've run into this situation a few times and one solution is to use model managers, and create distinct search methods for single and multi-word queries. Take the following example models below: Each has its own custom Model Manager, with two separate query methods. search will query single-word searches against all fields, while search_and will query each word in a list of search words, using the reduce function. Both use Q objects to accomplish multi-field lookups. from functools import reduce from django.db.models import Q class ProductQuerySet(models.QuerySet): def search(self, query=None): qs = self if query is not None: or_lookup = (Q(product_name__icontains=query) | Q(description__icontains=query) | Q(category__icontains=query)) qs = qs.filter(or_lookup, active=True, released=True).distinct() return qs def search_and(self, query=None): qs = self if query is not None: or_lookup = reduce(lambda x, y: x & y, [(Q(product_name__icontains=word) | Q(description__icontains=word) | Q(category__icontains=word)) for word in query]) qs = qs.filter(or_lookup, active=True, released=True).distinct() return qs class ProductManager(models.Manager): def get_queryset(self): return ProductQuerySet(self.model, using=self._db) def search(self, query=None): return self.get_queryset().search(query=query) def search_and(self, query=None): return self.get_queryset().search_and(query=query) class Product(models.Model): product_name = models.CharField(max_length=200) description = models.CharField(max_length=240, blank=True, null=True) category = models.CharField(max_length=100, choices=CATEGORY) objects = ProductManager() def __str__(self): return self.product_name class ProfileQuerySet(models.QuerySet): def search(self, query=None): qs = self if query is not None: or_lookup = (Q(full_name__icontains=query) | Q(job_title__icontains=query) | Q(function__icontains=query)) qs = qs.filter(or_lookup, active=True, released=True).distinct() return qs def search_and(self, query=None): qs = self if query is not None: or_lookup = reduce(lambda x, y: x & y, [(Q(full_name__icontains=word) | Q(job_title__icontains=word) | Q(function__icontains=word)) for word in query]) qs = qs.filter(or_lookup, active=True, released=True).distinct() return qs class ProfileManager(models.Manager): def get_queryset(self): return ProfileQuerySet(self.model, using=self._db) def search(self, query=None): return self.get_queryset().search(query=query) def search_and(self, query=None): return self.get_queryset().search_and(query=query) class Profile(models.Model): full_name = models.CharField(max_length=200) job_title = models.CharField(max_length=240, blank=True, null=True) function = models.CharField(max_length=100, choices=FUNCTION) objects = ProfileManager() def __str__(self): return self.full_name Now to use these in a search view, which you can point at as many model managers as you like, which can query as many fields as you like. Using the example models above, here's a sample view, below. It passes the search term or terms to the appropriate model manager method, based on the count of terms (either 1 or more than 1). def application_search(request): data = dict() if 'query' in request.GET: query_list = request.GET.get("query", None).split() if query_list: try: if len(query_list) > 1: products = Product.objects.search_and(query=query_list) profiles = Profile.objects.search_and(query=query_list) else: products = Product.objects.search(query=query_list[0]) profiles = Profile.objects.search(query=query_list[0]) except: # Throw exception or log error here try: queryset_chain = chain(products, profiles) # combines querysets into one results = sorted(queryset_chain, key=lambda instance: instance.id, reverse=True) #sorts results by ID except: results = None data['results'] = render_to_string('pages/my_search_result_page.html', {'results': results}) return JsonResponse(data) The query in this view is actually being passed to the backend via AJAX, but you may do it differently based on your needs and template design.
Search in multiple models in Django
I have many different models in Django and I want to search for a keyword in all of them. For example, if you searched "blah", I want to show all of the products with "blah", all of the invoices with "blah", and finally all of the other models with "blah". I can develop a view and search in all of the models separately, but it's not a good idea. What is the best practice for this situation?
[ "I've run into this situation a few times and one solution is to use model managers, and create distinct search methods for single and multi-word queries. Take the following example models below: Each has its own custom Model Manager, with two separate query methods. search will query single-word searches against all fields, while search_and will query each word in a list of search words, using the reduce function. Both use Q objects to accomplish multi-field lookups.\nfrom functools import reduce\nfrom django.db.models import Q\n\nclass ProductQuerySet(models.QuerySet):\n \n def search(self, query=None):\n qs = self\n if query is not None:\n or_lookup = (Q(product_name__icontains=query) | \n Q(description__icontains=query) | \n Q(category__icontains=query))\n\n qs = qs.filter(or_lookup, active=True, released=True).distinct()\n return qs\n\n def search_and(self, query=None):\n qs = self\n if query is not None:\n or_lookup = reduce(lambda x, y: x & y, [(Q(product_name__icontains=word) |\n Q(description__icontains=word) | \n Q(category__icontains=word)) for word in query])\n qs = qs.filter(or_lookup, active=True, released=True).distinct()\n \n return qs\n \n\nclass ProductManager(models.Manager):\n def get_queryset(self):\n return ProductQuerySet(self.model, using=self._db)\n\n def search(self, query=None):\n return self.get_queryset().search(query=query)\n\n def search_and(self, query=None):\n return self.get_queryset().search_and(query=query)\n\n\nclass Product(models.Model):\n product_name = models.CharField(max_length=200)\n description = models.CharField(max_length=240, blank=True, null=True) \n category = models.CharField(max_length=100, choices=CATEGORY)\n \n objects = ProductManager()\n\n def __str__(self):\n return self.product_name\n\n\nclass ProfileQuerySet(models.QuerySet):\n \n def search(self, query=None):\n qs = self\n if query is not None:\n or_lookup = (Q(full_name__icontains=query) | \n Q(job_title__icontains=query) | \n Q(function__icontains=query))\n\n qs = qs.filter(or_lookup, active=True, released=True).distinct()\n return qs\n\n def search_and(self, query=None):\n qs = self\n if query is not None:\n or_lookup = reduce(lambda x, y: x & y, [(Q(full_name__icontains=word) |\n Q(job_title__icontains=word) | \n Q(function__icontains=word)) for word in query])\n qs = qs.filter(or_lookup, active=True, released=True).distinct()\n \n return qs\n \n\nclass ProfileManager(models.Manager):\n def get_queryset(self):\n return ProfileQuerySet(self.model, using=self._db)\n\n def search(self, query=None):\n return self.get_queryset().search(query=query)\n\n def search_and(self, query=None):\n return self.get_queryset().search_and(query=query)\n\n\nclass Profile(models.Model):\n full_name = models.CharField(max_length=200)\n job_title = models.CharField(max_length=240, blank=True, null=True) \n function = models.CharField(max_length=100, choices=FUNCTION)\n \n objects = ProfileManager()\n\n def __str__(self):\n return self.full_name\n\n\nNow to use these in a search view, which you can point at as many model managers as you like, which can query as many fields as you like. Using the example models above, here's a sample view, below. It passes the search term or terms to the appropriate model manager method, based on the count of terms (either 1 or more than 1).\n\n\ndef application_search(request):\n data = dict()\n \n if 'query' in request.GET: \n query_list = request.GET.get(\"query\", None).split()\n if query_list:\n try:\n if len(query_list) > 1:\n products = Product.objects.search_and(query=query_list)\n profiles = Profile.objects.search_and(query=query_list)\n else:\n products = Product.objects.search(query=query_list[0])\n profiles = Profile.objects.search(query=query_list[0])\n except:\n # Throw exception or log error here \n try:\n queryset_chain = chain(products, profiles) # combines querysets into one\n results = sorted(queryset_chain, key=lambda instance: instance.id, reverse=True) #sorts results by ID\n except:\n results = None \n\n data['results'] = render_to_string('pages/my_search_result_page.html', {'results': results})\n\n return JsonResponse(data)\n\n\nThe query in this view is actually being passed to the backend via AJAX, but you may do it differently based on your needs and template design.\n" ]
[ 0 ]
[]
[]
[ "django", "django_rest_framework", "python" ]
stackoverflow_0074419771_django_django_rest_framework_python.txt
Q: numpy.linalg.det returns very small numbers instead of 0 I calculated the determinant of matrix using np.linalg.det(matrix) but it returns weird values. For example, it gives 1.1012323e-16 instead of 0. Of course, I can round the result with numpy.around, but is there any option to set some "default" rounding for results of all numpy methods, including numpy.linalg.det? A: The value of the determinant looking "weird" is due to the floating point arithmetic, you can look it up. Regarding your question, I believe numpy.set_printoptions is what you are looking for. Please, see Docs
numpy.linalg.det returns very small numbers instead of 0
I calculated the determinant of matrix using np.linalg.det(matrix) but it returns weird values. For example, it gives 1.1012323e-16 instead of 0. Of course, I can round the result with numpy.around, but is there any option to set some "default" rounding for results of all numpy methods, including numpy.linalg.det?
[ "The value of the determinant looking \"weird\" is due to the floating point arithmetic, you can look it up.\nRegarding your question, I believe numpy.set_printoptions is what you are looking for. Please, see Docs\n" ]
[ 0 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074468282_numpy_python.txt
Q: Regex to find some special match of characters Hi guys i have this text US Championships ---------------- [Event "ch-USA sf"] [Site "Denver USA"] [Date "1998.11.10"] [Round "01"] [White "Shaked,T"] [Black "DeFirmian,N"] [Result "1/2-1/2"] [ECO "A30"] [WhiteElo "2490"] [BlackElo "2605"] 1. c4 c5 2. Nf3 Nc6 3. Nc3 Nf6 4. g3 b6 5. Bg2 Bb7 6. O-O e6 7. e4 d6 8. d4 cxd4 9. Nxd4 Nxd4 10. Qxd4 Be7 11. b3 O-O 12. Ba3 Qc7 13. Rfd1 Rfd8 14. Nb5 Qc6 15. Qe3 Ne8 16. Bb4 a6 17. Nd4 Qc7 18. a4 Nf6 19. a5 Re8 20. Ne2 bxa5 21. Bxa5 Qc6 22. Nc3 Rab8 23. h3 Ba8 24. Rab1 Bd8 25. Bxd8 Rexd8 26. f4 h6 27. Rb2 Qc7 28. Kh2 Bc6 29. Rd4 a5 30. Qd2 Kf8 31. Qd1 e5 32. fxe5 dxe5 33. Rxd8+ Rxd8 34. Rd2 Rb8 35. Nd5 Bxd5 36. exd5 Qc5 37. Qc2 h5 38. Qc3 e4 39. Re2 a4 40. bxa4 Nxd5 41. Qe5 Rb1 42. h4 Qg1+ 43. Kh3 Nf6 44. Bxe4 Nxe4 1/2-1/2 7th Monarch Assurance --------------------- [Event "7th Monarch Assurance"] [Site "Port Erin IOM"] [Date "1998.11.09"] [Round "03"] [White "Plaskett,J"] [Black "Sutovsky,E"] [Result "0-1"] [ECO "B50"] [WhiteElo "2455"] [BlackElo "2575"] 1. e4 c5 2. Nf3 d6 3. Bc4 Nf6 4. Nc3 a6 5. d3 Nc6 6. a3 g6 7. O-O Bg7 8. h3 b5 9. Ba2 O-O 10. Bg5 h6 11. Be3 e5 12. Qd2 Nd4 13. Nh2 Kh7 14. f4 Be6 15. Bxe6 fxe6 16. fxe5 dxe5 17. g4 Nd7 18. Rxf8 Qxf8 19. Rf1 Qe7 20. g5 h5 21. Ne2 b4 22. axb4 cxb4 23. Nf3 Qc5 24. Nexd4 exd4 25. Bf4 a5 26. b3 a4 27. bxa4 Rxa4 28. Qd1 Ra2 29. Rf2 Qc3 30. Kg2 Nc5 31. Rd2 b3 32. cxb3 Qxd3 33. Qe1 Qxb3 34. e5 d3 35. Rxa2 Qxa2+ 36. Qf2 Qd5 37. Qd4 Qxd4 38. Nxd4 Bxe5 0-1 the text above is a sample! I'm trying hard to find a regex that will find all of Some Text ---------------- Lines and remove them using python! I'm trying hard to find regex match of these both lines so I can remove them The amount of the -- is random but its absolutely more than 1 and I only want to match two lines not any more things So from the example text I want these matches US Championships ---------------- 7th Monarch Assurance --------------------- Can you guys please help me I tried many ways but no success A: Try (text is your text from the question) regex demo: import re pat = re.compile(r"^.*\n-+$", flags=re.M) for m in pat.findall(text): print(m) Prints: US Championships ---------------- 7th Monarch Assurance ---------------------
Regex to find some special match of characters
Hi guys i have this text US Championships ---------------- [Event "ch-USA sf"] [Site "Denver USA"] [Date "1998.11.10"] [Round "01"] [White "Shaked,T"] [Black "DeFirmian,N"] [Result "1/2-1/2"] [ECO "A30"] [WhiteElo "2490"] [BlackElo "2605"] 1. c4 c5 2. Nf3 Nc6 3. Nc3 Nf6 4. g3 b6 5. Bg2 Bb7 6. O-O e6 7. e4 d6 8. d4 cxd4 9. Nxd4 Nxd4 10. Qxd4 Be7 11. b3 O-O 12. Ba3 Qc7 13. Rfd1 Rfd8 14. Nb5 Qc6 15. Qe3 Ne8 16. Bb4 a6 17. Nd4 Qc7 18. a4 Nf6 19. a5 Re8 20. Ne2 bxa5 21. Bxa5 Qc6 22. Nc3 Rab8 23. h3 Ba8 24. Rab1 Bd8 25. Bxd8 Rexd8 26. f4 h6 27. Rb2 Qc7 28. Kh2 Bc6 29. Rd4 a5 30. Qd2 Kf8 31. Qd1 e5 32. fxe5 dxe5 33. Rxd8+ Rxd8 34. Rd2 Rb8 35. Nd5 Bxd5 36. exd5 Qc5 37. Qc2 h5 38. Qc3 e4 39. Re2 a4 40. bxa4 Nxd5 41. Qe5 Rb1 42. h4 Qg1+ 43. Kh3 Nf6 44. Bxe4 Nxe4 1/2-1/2 7th Monarch Assurance --------------------- [Event "7th Monarch Assurance"] [Site "Port Erin IOM"] [Date "1998.11.09"] [Round "03"] [White "Plaskett,J"] [Black "Sutovsky,E"] [Result "0-1"] [ECO "B50"] [WhiteElo "2455"] [BlackElo "2575"] 1. e4 c5 2. Nf3 d6 3. Bc4 Nf6 4. Nc3 a6 5. d3 Nc6 6. a3 g6 7. O-O Bg7 8. h3 b5 9. Ba2 O-O 10. Bg5 h6 11. Be3 e5 12. Qd2 Nd4 13. Nh2 Kh7 14. f4 Be6 15. Bxe6 fxe6 16. fxe5 dxe5 17. g4 Nd7 18. Rxf8 Qxf8 19. Rf1 Qe7 20. g5 h5 21. Ne2 b4 22. axb4 cxb4 23. Nf3 Qc5 24. Nexd4 exd4 25. Bf4 a5 26. b3 a4 27. bxa4 Rxa4 28. Qd1 Ra2 29. Rf2 Qc3 30. Kg2 Nc5 31. Rd2 b3 32. cxb3 Qxd3 33. Qe1 Qxb3 34. e5 d3 35. Rxa2 Qxa2+ 36. Qf2 Qd5 37. Qd4 Qxd4 38. Nxd4 Bxe5 0-1 the text above is a sample! I'm trying hard to find a regex that will find all of Some Text ---------------- Lines and remove them using python! I'm trying hard to find regex match of these both lines so I can remove them The amount of the -- is random but its absolutely more than 1 and I only want to match two lines not any more things So from the example text I want these matches US Championships ---------------- 7th Monarch Assurance --------------------- Can you guys please help me I tried many ways but no success
[ "Try (text is your text from the question) regex demo:\nimport re\n\npat = re.compile(r\"^.*\\n-+$\", flags=re.M)\n\nfor m in pat.findall(text):\n print(m)\n\nPrints:\nUS Championships\n----------------\n7th Monarch Assurance\n---------------------\n\n" ]
[ 1 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0074468306_python_regex.txt
Q: Is there a way to speed up the Save method with PIL? I have an API that saves an the image to S3 bucket and returns the S3 URL but the saving part of the PIL image is slow. Here is a snippet of code: from PIL import Image import io import boto3 BUCKET = '' s3 = boto3.resource('s3') def convert_fn(args): pil_image = Image.open(args['path']).convert('RGBA') . . . in_mem_file = io.BytesIO() pil_image.save(in_mem_file, format='PNG') #<--- This takes too long in_mem_file.seek(0) s3.meta.client.upload_fileobj( in_mem_file, BUCKET, 'outputs/{}.png'.format(args['save_name']), ExtraArgs={ 'ACL': 'public-read', 'ContentType':'image/png' } ) return json.dumps({"Image saved in": "https://{}.s3.amazonaws.com/outputs/{}.png".format(BUCKET, args['save_name'])}) How can I speed up the upload?, Would it be easier to return the bytes? The Image.save method is the most time consuming part of my script. I want to increase the performance of my app and I'm thinking that returning as a stream of bytes may be the fastest way to return the image. A: Compressing image data to PNG takes time - CPU time. There might be a better performant lib to that than PIL, but you'd have to interface it with Python, and it still would take sometime. "Returning bytes" make no sense - you either want to have image files saved on S3 or don't. And the "bytes" will only represent an image as long as they are properly encoded into an image file, unless you have code to compose back an image from raw bytes. For speeding this up, you could either create an AWS lambda project that will take the unencoded array, generate the png file and save it to S3 in an async mode, or, easier, you might try saving the image in an uncompressed format, that will spare you from the CPU time to compress PNG: try saving it as a .tga or .bmp file instead of a .png, but expect final files to be 10 to 30 times larger than the equivalent .PNGs. Also, it is not clear from the code if this is in a web-api view, and you'd like to speedup the API return, and it would be ok if the image would be generated and uploaded in background after the API returns. In that case, there are ways to improve the responsivity of your app, but we need to have the "web code": i.e. which framework you are using, the view function itself, and the calling to the function presented here. A: In PIL.Image.save when saving PNG there is an argument called compression_level with a compression_level=0 we can create faster savings at the cost of no compression. Docs
Is there a way to speed up the Save method with PIL?
I have an API that saves an the image to S3 bucket and returns the S3 URL but the saving part of the PIL image is slow. Here is a snippet of code: from PIL import Image import io import boto3 BUCKET = '' s3 = boto3.resource('s3') def convert_fn(args): pil_image = Image.open(args['path']).convert('RGBA') . . . in_mem_file = io.BytesIO() pil_image.save(in_mem_file, format='PNG') #<--- This takes too long in_mem_file.seek(0) s3.meta.client.upload_fileobj( in_mem_file, BUCKET, 'outputs/{}.png'.format(args['save_name']), ExtraArgs={ 'ACL': 'public-read', 'ContentType':'image/png' } ) return json.dumps({"Image saved in": "https://{}.s3.amazonaws.com/outputs/{}.png".format(BUCKET, args['save_name'])}) How can I speed up the upload?, Would it be easier to return the bytes? The Image.save method is the most time consuming part of my script. I want to increase the performance of my app and I'm thinking that returning as a stream of bytes may be the fastest way to return the image.
[ "Compressing image data to PNG takes time - CPU time. There might be a better performant lib to that than PIL, but you'd have to interface it with Python, and it still would take sometime.\n\"Returning bytes\" make no sense - you either want to have image files saved on S3 or don't. And the \"bytes\" will only represent an image as long as they are properly encoded into an image file, unless you have code to compose back an image from raw bytes.\nFor speeding this up, you could either create an AWS lambda project that will take the unencoded array, generate the png file and save it to S3 in an async mode, or, easier, you might try saving the image in an uncompressed format, that will spare you from the CPU time to compress PNG: try saving it as a .tga or .bmp file instead of a .png, but expect final files to be 10 to 30 times larger than the equivalent .PNGs.\nAlso, it is not clear from the code if this is in a web-api view, and you'd like to speedup the API return, and it would be ok if the image would be generated and uploaded in background after the API returns.\nIn that case, there are ways to improve the responsivity of your app, but we need to have the \"web code\": i.e. which framework you are using, the view function itself, and the calling to the function presented here.\n", "In PIL.Image.save when saving PNG there is an argument called compression_level with a compression_level=0 we can create faster savings at the cost of no compression. Docs\n" ]
[ 2, 0 ]
[]
[]
[ "amazon_sagemaker", "computer_vision", "image", "python", "python_imaging_library" ]
stackoverflow_0074464037_amazon_sagemaker_computer_vision_image_python_python_imaging_library.txt
Q: Python dictionary inside function not getting updated with new values I'm trying to replace an inputed dictionary with new values. I don't understand why the value of the dictionary doesn't change outside of the function. It's weird because I remember this working earlier... ` def multiply_by_term(poly, term): new_values = [] for key in poly: new_values.append(poly[key] * term[1]) new_key_assign = list(poly.keys()) for i in range(len(new_key_assign)): new_key_assign[i] += term[0] poly = dict(zip(new_key_assign, new_values)) ` I tried changing the value of poly with the dict() and zip() functions, but when I check the value of poly after the function is called, poly doesn't change. A: By writing poly = dict(zip(new_key_assign, new_values)), you will make it a different object from what it was when entering the function. So to keep the same id of your dict you just need to clear it before amending it: def multiply_by_term(poly: dict, term: list): new_values = [poly[key] * term[1] for key in poly] new_key_assign = list(poly.keys()) for i in range(len(new_key_assign)): new_key_assign[i] += term[0] poly.clear() for i, key in enumerate(new_key_assign): poly[key] = new_values[i] poly = {1: 2, 3: 4, 5: 6} multiply_by_term(poly, [10, 20])
Python dictionary inside function not getting updated with new values
I'm trying to replace an inputed dictionary with new values. I don't understand why the value of the dictionary doesn't change outside of the function. It's weird because I remember this working earlier... ` def multiply_by_term(poly, term): new_values = [] for key in poly: new_values.append(poly[key] * term[1]) new_key_assign = list(poly.keys()) for i in range(len(new_key_assign)): new_key_assign[i] += term[0] poly = dict(zip(new_key_assign, new_values)) ` I tried changing the value of poly with the dict() and zip() functions, but when I check the value of poly after the function is called, poly doesn't change.
[ "By writing poly = dict(zip(new_key_assign, new_values)), you will make it a different object from what it was when entering the function. So to keep the same id of your dict you just need to clear it before amending it:\ndef multiply_by_term(poly: dict, term: list):\n new_values = [poly[key] * term[1] for key in poly]\n new_key_assign = list(poly.keys())\n for i in range(len(new_key_assign)):\n new_key_assign[i] += term[0]\n\n poly.clear()\n for i, key in enumerate(new_key_assign):\n poly[key] = new_values[i]\n\npoly = {1: 2, 3: 4, 5: 6}\nmultiply_by_term(poly, [10, 20])\n\n" ]
[ 1 ]
[]
[]
[ "dictionary", "function", "python" ]
stackoverflow_0074468000_dictionary_function_python.txt
Q: Tricky update values across a dataset if the sum of the row equals a certain threshold I have a dataset where if the numerical columns sum is less than 1.0, these fields will update to 0. Data ID type Q1 24 Q2 24 Q3 24 Q4 24 AA hey 2.0 1.2 0.5 0.6 AA hello 0.7 2.0 0.6 0.6 AA hi 0.1 0.1 0.1 0.1 AA good 0.3 0.4 0.2 0.2 Desired The only row where the values sum is less than 1 is the third row, so now this is updated to where all numerical fields in that row = 0 ID type Q1 24 Q2 24 Q3 24 Q4 24 AA hey 2.0 1.2 0.5 0.6 AA hello 0.7 2.0 0.6 0.6 AA hi 0.0 0.0 0.0 0.0 AA good 0.3 0.4 0.2 0.2 Doing I have an idea of where to start, but not sure how to replace the current values with 0, if the sum of the row is less than 1. filter columns that has 'Q' in their name and sum along the rows (across columns) df['sum']=df.filter(like='Q').sum(axis=1) Any suggestion is appreciated. A: # Use loc update the columns where their sum is less than zero df.loc[df.iloc[:,2:].sum(axis=1)<1, ['Q124','Q224','Q324','Q424']]=0 df ID type Q124 Q224 Q324 Q424 0 AA hey 2.0 1.2 0.5 0.6 1 AA hello 0.7 2.0 0.6 0.6 2 AA hi 0.0 0.0 0.0 0.0 3 AA good 0.3 0.4 0.2 0.2
Tricky update values across a dataset if the sum of the row equals a certain threshold
I have a dataset where if the numerical columns sum is less than 1.0, these fields will update to 0. Data ID type Q1 24 Q2 24 Q3 24 Q4 24 AA hey 2.0 1.2 0.5 0.6 AA hello 0.7 2.0 0.6 0.6 AA hi 0.1 0.1 0.1 0.1 AA good 0.3 0.4 0.2 0.2 Desired The only row where the values sum is less than 1 is the third row, so now this is updated to where all numerical fields in that row = 0 ID type Q1 24 Q2 24 Q3 24 Q4 24 AA hey 2.0 1.2 0.5 0.6 AA hello 0.7 2.0 0.6 0.6 AA hi 0.0 0.0 0.0 0.0 AA good 0.3 0.4 0.2 0.2 Doing I have an idea of where to start, but not sure how to replace the current values with 0, if the sum of the row is less than 1. filter columns that has 'Q' in their name and sum along the rows (across columns) df['sum']=df.filter(like='Q').sum(axis=1) Any suggestion is appreciated.
[ "# Use loc update the columns where their sum is less than zero\ndf.loc[df.iloc[:,2:].sum(axis=1)<1, ['Q124','Q224','Q324','Q424']]=0\ndf\n\nID type Q124 Q224 Q324 Q424\n0 AA hey 2.0 1.2 0.5 0.6\n1 AA hello 0.7 2.0 0.6 0.6\n2 AA hi 0.0 0.0 0.0 0.0\n3 AA good 0.3 0.4 0.2 0.2\n\n" ]
[ 1 ]
[]
[]
[ "numpy", "pandas", "python" ]
stackoverflow_0074468347_numpy_pandas_python.txt
Q: changing format of dataframe saying we have a data frame looking like this : with x,y,z the value we are interested in. Year1 year2 year3 canada. x1 x2 x3 shape we have can we transform it to a data frame 2 looking like the following : Country Year Value Canada Year1 x1 Canada Year2 x2 Canada Year3 x3 shape wanted and so on for the other countries. Is there a way to code this transformation ? Original data look like this: Thank you so much A: here is one way to do it Assuming, in your given DF, country is an index field # stack and reindex, then rename the columns df.stack().reset_index().rename(columns={'level_0': 'Country', 'level_1': 'Year', 0:'Value'}) Country Year Value 0 canada. Year1 x1 1 canada. year2 x2 2 canada. year3 x3 if country is not an index then and is instead a column then df.set_index('country').stack().reset_index().rename(columns={'level_0': 'Country', 'level_1': 'Year', 0:'Value'})
changing format of dataframe
saying we have a data frame looking like this : with x,y,z the value we are interested in. Year1 year2 year3 canada. x1 x2 x3 shape we have can we transform it to a data frame 2 looking like the following : Country Year Value Canada Year1 x1 Canada Year2 x2 Canada Year3 x3 shape wanted and so on for the other countries. Is there a way to code this transformation ? Original data look like this: Thank you so much
[ "here is one way to do it\nAssuming, in your given DF, country is an index field\n# stack and reindex, then rename the columns\ndf.stack().reset_index().rename(columns={'level_0': 'Country', 'level_1': 'Year', 0:'Value'})\n\nCountry Year Value\n0 canada. Year1 x1\n1 canada. year2 x2\n2 canada. year3 x3\n\nif country is not an index then and is instead a column then\ndf.set_index('country').stack().reset_index().rename(columns={'level_0': 'Country', 'level_1': 'Year', 0:'Value'})\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074468190_dataframe_pandas_python.txt
Q: SQLAchemy scoped_session issue After long time work with I still have questions about sqlalchemy scoped session that I cannot figure out. For instance, I have decorator for functions that provides it with session def db_session_provider(commit=True, rollback=True, reraise=True): def decorator(func: typing.Callable): @functools.wraps(func) def wrapper(*args, **kwargs): with Session() as session: try: result = func(*args, **kwargs, session=session) if commit: session.commit() return result except: # noqa if rollback: session.rollback() if reraise: raise return wrapper return decorator Where Session is builders defined like: session_factory = sessionmaker( autocommit=config.SQLALCHEMY_AUTOCOMMIT, autoflush=config.SQLALCHEMY_AUTOFLUSH, bind=engine, expire_on_commit=False ) Session = scoped_session(sessionmaker()) Now, I have code that fails with error sqlalchemy.orm.exc.DetachedInstanceError: Instance <Client at 0x10daae430> is not bound to a Session; attribute refresh operation cannot proceed (Background on this error at: https://sqlalche.me/e/14/bhk3). Documentation by link doesn't make things more clear as looks irrelevant. Here is code that triggers such error: def fn_with_ext_session(client: Client, session: Session) -> None: # do something with client, it is legit and works print(f"Client {client.id} fetched") @db_session_provider() def fn_with_int_session(client_id: int, session: Session) -> None: # doing stuff unrelated to model Client but involves some other linked tables: # here `session` passed by decorator trades = session.query(Trade).filter(Trade.client_id == client_id).all() # after exiting from this function outer object `Client` becomes detached! @db_session_provider() def fn1(session: Session): client = session.query(Client).get(1) # here Client attached to the session fn_with_ext_session(client, session) # here Client attached to the session fn_with_int_session(client.id) # here Client DETACHED from locally defined session!!! print(f"Client {client.id}") # <--- here exception raised Could you please clarify how sqlalchemy session lives and why it overlaps here? A: Base = declarative_base() engine = create_engine(f"postgresql+psycopg2://{username}:{password}@/{db}", echo=False) class Client(Base): __tablename__ = "clients" id = Column( Integer, nullable=False, primary_key=True ) name = Column(Text) Base.metadata.create_all(engine) session_factory = sessionmaker( autocommit=False, autoflush=False, bind=engine, expire_on_commit=False ) Session = scoped_session(session_factory) def db_session_provider(commit=True, rollback=True, reraise=True): def decorator(func: typing.Callable): @functools.wraps(func) def wrapper(*args, **kwargs): with Session() as session: try: result = func(*args, **kwargs, session=session) if commit: session.commit() return result except: # noqa if rollback: session.rollback() if reraise: raise return wrapper return decorator @db_session_provider() def create_clients(session: Session): c1 = Client(name="first client") session.add(c1) c2 = Client(name="second client") session.add(c2) @db_session_provider() def read_clients(session: Session): print ([c.name for c in session.query(Client).all()]) create_clients() read_clients() Outputs ['first client', 'second client']
SQLAchemy scoped_session issue
After long time work with I still have questions about sqlalchemy scoped session that I cannot figure out. For instance, I have decorator for functions that provides it with session def db_session_provider(commit=True, rollback=True, reraise=True): def decorator(func: typing.Callable): @functools.wraps(func) def wrapper(*args, **kwargs): with Session() as session: try: result = func(*args, **kwargs, session=session) if commit: session.commit() return result except: # noqa if rollback: session.rollback() if reraise: raise return wrapper return decorator Where Session is builders defined like: session_factory = sessionmaker( autocommit=config.SQLALCHEMY_AUTOCOMMIT, autoflush=config.SQLALCHEMY_AUTOFLUSH, bind=engine, expire_on_commit=False ) Session = scoped_session(sessionmaker()) Now, I have code that fails with error sqlalchemy.orm.exc.DetachedInstanceError: Instance <Client at 0x10daae430> is not bound to a Session; attribute refresh operation cannot proceed (Background on this error at: https://sqlalche.me/e/14/bhk3). Documentation by link doesn't make things more clear as looks irrelevant. Here is code that triggers such error: def fn_with_ext_session(client: Client, session: Session) -> None: # do something with client, it is legit and works print(f"Client {client.id} fetched") @db_session_provider() def fn_with_int_session(client_id: int, session: Session) -> None: # doing stuff unrelated to model Client but involves some other linked tables: # here `session` passed by decorator trades = session.query(Trade).filter(Trade.client_id == client_id).all() # after exiting from this function outer object `Client` becomes detached! @db_session_provider() def fn1(session: Session): client = session.query(Client).get(1) # here Client attached to the session fn_with_ext_session(client, session) # here Client attached to the session fn_with_int_session(client.id) # here Client DETACHED from locally defined session!!! print(f"Client {client.id}") # <--- here exception raised Could you please clarify how sqlalchemy session lives and why it overlaps here?
[ "Base = declarative_base()\n\nengine = create_engine(f\"postgresql+psycopg2://{username}:{password}@/{db}\", echo=False)\n\nclass Client(Base):\n __tablename__ = \"clients\"\n id = Column(\n Integer, nullable=False, primary_key=True\n )\n name = Column(Text)\n\nBase.metadata.create_all(engine)\n\n\nsession_factory = sessionmaker(\n autocommit=False, autoflush=False, bind=engine, expire_on_commit=False\n)\n\nSession = scoped_session(session_factory)\n\n\ndef db_session_provider(commit=True, rollback=True, reraise=True):\n def decorator(func: typing.Callable):\n @functools.wraps(func)\n def wrapper(*args, **kwargs):\n with Session() as session:\n try:\n result = func(*args, **kwargs, session=session)\n\n if commit:\n session.commit()\n\n return result\n except: # noqa\n if rollback:\n session.rollback()\n\n if reraise:\n raise\n\n return wrapper\n\n return decorator\n\n\n@db_session_provider()\ndef create_clients(session: Session):\n c1 = Client(name=\"first client\")\n session.add(c1)\n c2 = Client(name=\"second client\")\n session.add(c2)\n\n@db_session_provider()\ndef read_clients(session: Session):\n print ([c.name for c in session.query(Client).all()])\n\n\n\ncreate_clients()\n\nread_clients()\n\nOutputs\n['first client', 'second client']\n\n" ]
[ 0 ]
[]
[]
[ "python", "session", "sqlalchemy" ]
stackoverflow_0074462772_python_session_sqlalchemy.txt
Q: Pygame freezes on startup I'm using pygame to try and get better with python but it just doesn't respond. I don't know why, as I have similar code that works just fine. import pygame import random import time width = 500 height = 500 snake = [[width / 2,height / 2]] direction = "right" pygame.init() move_increment = 0.1 screen = pygame.display.set_mode((width,height)) running = True pygame.display.set_caption(("Snake for Python")) icon = pygame.image.load(("download.png")) pygame.display.set_icon(icon) def Keys(): keys = pygame.key.get_pressed() if keys[pygame.K_w]: direction = "up" print("w pressed") if keys[pygame.K_s]: direction = "down" print("s pressed") if keys[pygame.K_d]: direction = "right" print("d pressed") if keys[pygame.K_a]: direction = "left" print("a pressed") while running: for x in snake: pygame.draw.rect(screen, (255,255,255), [x[0], x[1], 15, 15]) if direction == "up": x[1] -= move_increment if direction == "down": x[1] += move_increment if direction == "left": x[0] -= move_increment if direction == "right": x[0] += move_increment pygame.draw.rect(screen,(0,0,0),[0,0,width,height]) pygame.display.flip() Keys() for event in pygame.event.get(): if event.type == pygame.QUIT: running = False No errors, no prompts stopping execution, this just makes NO SENSE. A: pygame.draw.rect(screen,(0,0,0),[0,0,width,height]) pygame.display.flip() Keys() for event in pygame.event.get(): if event.type == pygame.QUIT: running = False This code needs to be indented more. It's currently outside the while loop, and therefore never being run
Pygame freezes on startup
I'm using pygame to try and get better with python but it just doesn't respond. I don't know why, as I have similar code that works just fine. import pygame import random import time width = 500 height = 500 snake = [[width / 2,height / 2]] direction = "right" pygame.init() move_increment = 0.1 screen = pygame.display.set_mode((width,height)) running = True pygame.display.set_caption(("Snake for Python")) icon = pygame.image.load(("download.png")) pygame.display.set_icon(icon) def Keys(): keys = pygame.key.get_pressed() if keys[pygame.K_w]: direction = "up" print("w pressed") if keys[pygame.K_s]: direction = "down" print("s pressed") if keys[pygame.K_d]: direction = "right" print("d pressed") if keys[pygame.K_a]: direction = "left" print("a pressed") while running: for x in snake: pygame.draw.rect(screen, (255,255,255), [x[0], x[1], 15, 15]) if direction == "up": x[1] -= move_increment if direction == "down": x[1] += move_increment if direction == "left": x[0] -= move_increment if direction == "right": x[0] += move_increment pygame.draw.rect(screen,(0,0,0),[0,0,width,height]) pygame.display.flip() Keys() for event in pygame.event.get(): if event.type == pygame.QUIT: running = False No errors, no prompts stopping execution, this just makes NO SENSE.
[ "pygame.draw.rect(screen,(0,0,0),[0,0,width,height])\npygame.display.flip()\nKeys()\n \nfor event in pygame.event.get():\n if event.type == pygame.QUIT:\n running = False\n\nThis code needs to be indented more. It's currently outside the while loop, and therefore never being run\n" ]
[ 1 ]
[]
[]
[ "freeze", "pygame", "python", "python_3.x" ]
stackoverflow_0074468452_freeze_pygame_python_python_3.x.txt
Q: Django form is never valid and hence doesnt save to database I am creating a registration model which has date,time(charfield with choices),customer and restaurant .I need some help on why my instance is not saved even when I fill out my model form models.py class reservation(models.Model): TIMESLOTS = [ ('11:00-1:00', '11:00-1:00'), ('01:00-3:00', '01:00-03:00'), ('03:00-05:00', '03:00-05:00'), ('05:00-07:00', '05:00-07:00'), ('07:00-09:00', '07:00-09:00') ] date=models.DateField(null=True) time=models.CharField(null=True,max_length=200,choices=TIMESLOTS) customer=models.OneToOneField(User,null=True,on_delete=models.CASCADE) restaurant=models.OneToOneField(Restaurantdetails,on_delete=models.CASCADE,null=True) def __str__(self): return self.restaurant.name forms.py class Reservationform(ModelForm): class Meta: model=reservation fields=['date','time','restaurant'] views.py def reservationcreator(request): form=Reservationform() if form.is_valid(): form = Reservationform(request.POST) res=form.save() res.customer=request.user res.save() messages.success(request, 'reservation created') return redirect('menu') else: print('BS') context = {'form': form} return render(request,'userrestaurant/reservation.html',context) A: Your form will never be valid because you are supplying an empty form. You need to add the request.POST data to that form before you validate it: def reservationcreator(request): form=Reservationform(request.POST or None) if form.is_valid(): res.customer=request.user res=form.save() messages.success(request, 'reservation created') return redirect('menu') else: print('BS') context = {'form': form} return render(request,'userrestaurant/reservation.html',context) A: A form that is not bounded, is never valid. A bounded form is a form that received data, for example through request.POST or request.GET; and request.FILES. You thus check the HTTP method, and depending on that initialize the form, so: def reservationcreator(request): if request.method == 'POST': form = Reservationform(request.POST, request.FILES) if form.is_valid(): form.instance.customer = request.user form.save() messages.success(request, 'reservation created') return redirect('menu') else: form = ReservationForm() context = {'form': form} return render(request, 'userrestaurant/reservation.html', context) That being said, your view is a simple CreateView [Django-doc] and can be implemented with: from django.contrib.auth.mixins import LoginRequiredMixin from django.contrib.messages.views import SuccessMessageMixin from django.urls import reverse from django.views.generic import CreateView class ReservationCreateView(LoginRequiredMixin, SuccessMessageMixin, CreateView): form_class = ReservationForm template_name = 'userrestaurant/reservation.html' success_url = reverse('menu') success_message = 'reservation created' def form_valid(self, form): form.instance.customer = request.user return super().form_valid(form) Note: It is normally better to make use of the settings.AUTH_USER_MODEL [Django-doc] to refer to the user model, than to use the User model [Django-doc] directly. For more information you can see the referencing the User model section of the documentation. Note: You can limit views to a view to authenticated users with the @login_required decorator [Django-doc].
Django form is never valid and hence doesnt save to database
I am creating a registration model which has date,time(charfield with choices),customer and restaurant .I need some help on why my instance is not saved even when I fill out my model form models.py class reservation(models.Model): TIMESLOTS = [ ('11:00-1:00', '11:00-1:00'), ('01:00-3:00', '01:00-03:00'), ('03:00-05:00', '03:00-05:00'), ('05:00-07:00', '05:00-07:00'), ('07:00-09:00', '07:00-09:00') ] date=models.DateField(null=True) time=models.CharField(null=True,max_length=200,choices=TIMESLOTS) customer=models.OneToOneField(User,null=True,on_delete=models.CASCADE) restaurant=models.OneToOneField(Restaurantdetails,on_delete=models.CASCADE,null=True) def __str__(self): return self.restaurant.name forms.py class Reservationform(ModelForm): class Meta: model=reservation fields=['date','time','restaurant'] views.py def reservationcreator(request): form=Reservationform() if form.is_valid(): form = Reservationform(request.POST) res=form.save() res.customer=request.user res.save() messages.success(request, 'reservation created') return redirect('menu') else: print('BS') context = {'form': form} return render(request,'userrestaurant/reservation.html',context)
[ "Your form will never be valid because you are supplying an empty form. You need to add the request.POST data to that form before you validate it:\ndef reservationcreator(request):\n form=Reservationform(request.POST or None)\n\n if form.is_valid():\n res.customer=request.user\n res=form.save()\n messages.success(request, 'reservation created')\n return redirect('menu')\n else:\n print('BS')\n context = {'form': form}\n\n return render(request,'userrestaurant/reservation.html',context)\n\n", "A form that is not bounded, is never valid. A bounded form is a form that received data, for example through request.POST or request.GET; and request.FILES. You thus check the HTTP method, and depending on that initialize the form, so:\ndef reservationcreator(request):\n if request.method == 'POST':\n form = Reservationform(request.POST, request.FILES)\n if form.is_valid():\n form.instance.customer = request.user\n form.save()\n messages.success(request, 'reservation created')\n return redirect('menu')\n else:\n form = ReservationForm()\n context = {'form': form}\n return render(request, 'userrestaurant/reservation.html', context)\nThat being said, your view is a simple CreateView [Django-doc] and can be implemented with:\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.urls import reverse\nfrom django.views.generic import CreateView\n\nclass ReservationCreateView(LoginRequiredMixin, SuccessMessageMixin, CreateView):\n form_class = ReservationForm\n template_name = 'userrestaurant/reservation.html'\n success_url = reverse('menu')\n success_message = 'reservation created'\n\n def form_valid(self, form):\n form.instance.customer = request.user\n return super().form_valid(form)\n\n\nNote: It is normally better to make use of the settings.AUTH_USER_MODEL [Django-doc] to refer to the user model, than to use the User model [Django-doc] directly. For more information you can see the referencing the User model section of the documentation.\n\n\n\nNote: You can limit views to a view to authenticated users with the\n@login_required decorator [Django-doc].\n\n" ]
[ 3, 3 ]
[]
[]
[ "django", "django_models", "python", "web" ]
stackoverflow_0074468354_django_django_models_python_web.txt
Q: How can I get the list of selected values in an excel ListBox object using xlwings? I'm trying to know which items of my excel ListBox is being selected using xlwings. What works: sheet = xw.sheets.active sheet.api.Shapes('ListBox11').ControlFormat.ListCount #returns 17 because 17 items in my ListBox What I tried and fails with error (com_error: (-2147352570, 'Unknown name.', None, None)) : sheet.api.Shapes('ListBox11').ControlFormat.SelectedValue #or sheet.api.Shapes('ListBox11').ControlFormat.List #or sheet.api.Shapes('ListBox11').ControlFormat.Selected(0) Can't manage to figure it out. Help would be much appreciated. Thanks A: You can try the following; This code is using an Active X Listbox ... sheet = xw.sheets.active lb_obj = sheet.api.OLEObjects("ListBox11").Object list_count = lb_obj.ListCount for x in range(list_count): if lb_obj.Selected(x) == True: print(lb_obj.List[x])
How can I get the list of selected values in an excel ListBox object using xlwings?
I'm trying to know which items of my excel ListBox is being selected using xlwings. What works: sheet = xw.sheets.active sheet.api.Shapes('ListBox11').ControlFormat.ListCount #returns 17 because 17 items in my ListBox What I tried and fails with error (com_error: (-2147352570, 'Unknown name.', None, None)) : sheet.api.Shapes('ListBox11').ControlFormat.SelectedValue #or sheet.api.Shapes('ListBox11').ControlFormat.List #or sheet.api.Shapes('ListBox11').ControlFormat.Selected(0) Can't manage to figure it out. Help would be much appreciated. Thanks
[ "You can try the following;\nThis code is using an Active X Listbox\n...\nsheet = xw.sheets.active\n\nlb_obj = sheet.api.OLEObjects(\"ListBox11\").Object\nlist_count = lb_obj.ListCount\n\nfor x in range(list_count):\n if lb_obj.Selected(x) == True:\n print(lb_obj.List[x])\n\n" ]
[ 1 ]
[]
[]
[ "excel", "python", "xlwings" ]
stackoverflow_0074461842_excel_python_xlwings.txt
Q: Why does my while loop continue to call my menu function even after receiving different instruction I have a simple Python program with multiple functions that displays a menu, takes an input, loops through and formats a CSV file then outputs information from that CSV file based on the user's input. The Menu options look like 1: Call menu again 2: Create a default Report 3: More specified report 4: More specified Report 5: Exit program I am using a while loop to loop over these menu options so I can call the menu repeatedly while the user continues to input a 1. here is a look at the while loop def main(): banner() while True: choice = menu() #if the choice = 1, we call the menu function again if choice == 1: menu() elif choice == 2: defaultReport() break elif choice == 3: #elif statement for a function not yet created in part 1 pass elif choice == 4: #elif statement for a function not yet created in part 1 pass elif choice ==5: print('\nExiting Program') break The goal is to be able to call the menu function while the input (choice) = 1, then as soon as the input equals something else the program executes the code corresponding to the input without calling/displaying the menu again Examp. of current problem: 1 - calls/displays menu again 1st input 1 - calls/displays menuagain 2nd input 2 - should show a default report, but calls the menu/displays it once more 3rd input 2- shows default output 4th input Goal: 1 - calls/displays menu again 1st input 1 - calls/displays menu again 2nd input 2- shows default output 3rd input Menu function for those interested: def menu(): print('''\nMortality Rate Comparison Menu 1. Show This Menu Again 2. Full Mortality Report by State 3. Mortality for a Single State, by Date Range 4. Mortality Summary for all States 5. Exit \n ''') choice = input('Make your selection from the menu: ') while True: try: int(choice) break except: choice = input('Make your selection from the menu: ') while int(choice) > 5 or int(choice) < 1: choice = input('Make your selection from the menu: ') return int(choice) A: while True: ... is an infinite loop that will run until it hits a break statement. All your break statements are inside an if/elif branch, but not all the branches have a break. The logical reasons for not exiting the loop is then either (i) none of the if-tests match, or (ii) none of the branches with a break matches. You can change your program to figure out what is going on, e.g. use asserts to verify that values are sensible and cover all exits to the loop so you know where it is going: def main(): banner() while True: choice = menu() assert isinstance(choice, int), "choice is not an int" assert 1 <= choice <= 5, "choice is not in [1..5]" if choice == 1: menu() #... etc elif choice == 5: print('\nExiting Program') break else: print("Nothing matched..? Choice is:", choice) print("Debug...: end of while")
Why does my while loop continue to call my menu function even after receiving different instruction
I have a simple Python program with multiple functions that displays a menu, takes an input, loops through and formats a CSV file then outputs information from that CSV file based on the user's input. The Menu options look like 1: Call menu again 2: Create a default Report 3: More specified report 4: More specified Report 5: Exit program I am using a while loop to loop over these menu options so I can call the menu repeatedly while the user continues to input a 1. here is a look at the while loop def main(): banner() while True: choice = menu() #if the choice = 1, we call the menu function again if choice == 1: menu() elif choice == 2: defaultReport() break elif choice == 3: #elif statement for a function not yet created in part 1 pass elif choice == 4: #elif statement for a function not yet created in part 1 pass elif choice ==5: print('\nExiting Program') break The goal is to be able to call the menu function while the input (choice) = 1, then as soon as the input equals something else the program executes the code corresponding to the input without calling/displaying the menu again Examp. of current problem: 1 - calls/displays menu again 1st input 1 - calls/displays menuagain 2nd input 2 - should show a default report, but calls the menu/displays it once more 3rd input 2- shows default output 4th input Goal: 1 - calls/displays menu again 1st input 1 - calls/displays menu again 2nd input 2- shows default output 3rd input Menu function for those interested: def menu(): print('''\nMortality Rate Comparison Menu 1. Show This Menu Again 2. Full Mortality Report by State 3. Mortality for a Single State, by Date Range 4. Mortality Summary for all States 5. Exit \n ''') choice = input('Make your selection from the menu: ') while True: try: int(choice) break except: choice = input('Make your selection from the menu: ') while int(choice) > 5 or int(choice) < 1: choice = input('Make your selection from the menu: ') return int(choice)
[ "while True:\n ...\n\nis an infinite loop that will run until it hits a break statement.\nAll your break statements are inside an if/elif branch, but not all the branches have a break. The logical reasons for not exiting the loop is then either (i) none of the if-tests match, or (ii) none of the branches with a break matches.\nYou can change your program to figure out what is going on, e.g. use asserts to verify that values are sensible and cover all exits to the loop so you know where it is going:\ndef main():\n banner()\n while True:\n choice = menu()\n assert isinstance(choice, int), \"choice is not an int\"\n assert 1 <= choice <= 5, \"choice is not in [1..5]\"\n\n if choice == 1:\n menu()\n #... etc\n elif choice == 5:\n print('\\nExiting Program')\n break\n else:\n print(\"Nothing matched..? Choice is:\", choice)\n\n print(\"Debug...: end of while\")\n\n" ]
[ 0 ]
[ "This is likely because you are not resetting the value of choice. At the start of the loop you set it to menu() and then continued to do this each time, I'm not sure what menu() does but I'm guessing that it will be returning a 1 every time which is not what you would like.\nEither make sure the value received from calling menu() changes by possibly giving it some parameter or ask for input each time, possibly through the python shell. Hope this helps\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0074468369_python.txt
Q: Python Categorize Dataframe Column Conditionally Using Regular Expression I have a dataframe: group id A 009x A 010x B 009x B 002x C 002x C 003x How do I make a new column new that categorizes conditionally under the following three conditions by group: If all id values consist of ONLY 009x and 010x, then categorize as g1 If the id value is one of 009x or 010x AND another id value is not one of 009x or 010x, then categorize as g2 Otherwise, just print the id value Desired result: group id new A 009x g1 A 010x g1 B 009x g2 B 002x g2 C 002x 002x C 003x 003x data = { 'group': ['A', 'A', 'B', 'B', 'C', 'C'], 'id': ['009x', '010x', '009x', '002x', '002x', '003x'], } df = pd.DataFrame(data) df A: I hope I've understood your question right. You can use .groupby() + custom function: def categorize_fn(x): tmp = x["id"].isin(["009x", "010x"]) if tmp.all(): x["new"] = "g1" elif tmp.any(): x["new"] = "g2" else: x["new"] = x["id"] return x df = df.groupby("group", group_keys=False).apply(categorize_fn) print(df) Prints: group id new 0 A 009x g1 1 A 010x g1 2 B 009x g2 3 B 002x g2 4 C 002x 002x 5 C 003x 003x
Python Categorize Dataframe Column Conditionally Using Regular Expression
I have a dataframe: group id A 009x A 010x B 009x B 002x C 002x C 003x How do I make a new column new that categorizes conditionally under the following three conditions by group: If all id values consist of ONLY 009x and 010x, then categorize as g1 If the id value is one of 009x or 010x AND another id value is not one of 009x or 010x, then categorize as g2 Otherwise, just print the id value Desired result: group id new A 009x g1 A 010x g1 B 009x g2 B 002x g2 C 002x 002x C 003x 003x data = { 'group': ['A', 'A', 'B', 'B', 'C', 'C'], 'id': ['009x', '010x', '009x', '002x', '002x', '003x'], } df = pd.DataFrame(data) df
[ "I hope I've understood your question right. You can use .groupby() + custom function:\ndef categorize_fn(x):\n tmp = x[\"id\"].isin([\"009x\", \"010x\"])\n\n if tmp.all():\n x[\"new\"] = \"g1\"\n elif tmp.any():\n x[\"new\"] = \"g2\"\n else:\n x[\"new\"] = x[\"id\"]\n\n return x\n\n\ndf = df.groupby(\"group\", group_keys=False).apply(categorize_fn)\nprint(df)\n\nPrints:\n group id new\n0 A 009x g1\n1 A 010x g1\n2 B 009x g2\n3 B 002x g2\n4 C 002x 002x\n5 C 003x 003x\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python", "regex" ]
stackoverflow_0074468470_pandas_python_regex.txt
Q: How to retrieve the status of a Stripe Subscription I am trying to get the status of customers subscriptions using the Stripe API for example: print(subscriptionStatusFunctionAPI(subscriptionID)) *"returns subscription status (e.g. active, past_due, canceled, unpaid, etc)"* below is the current pseudocode import stripe stripe.api_key = 'rk_test_XXX' ### retrieve and return customer specific Subscription object ### retrieve_sub = stripe.Subscription.retrieve( "sub_1M3iv0LwptWcL8DfHusz7LZ1", ) print(dir(retrieve_sub)) ['OBJECT_NAME', 'ReprJSONEncoder', '__ class__', '__ contains__', '__ copy__', '__ deepcopy__', '__ delattr__', '__ delitem__', '__ dict__', '__ dir__', '__ doc__', '__ eq__', '__ format__', '__ ge__', '__ getattr__', '__ getattribute__', '__ getitem__', '__ gt__', '__ hash__', '__ init__', '__ init_subclass__', '__ iter__', '__ le__', '__ len__', '__ lt__', '__ module__', '__ ne__', '__ new__', '__ reduce__', '__ reduce_ex__', '__ repr__', '__ reversed__', '__ setattr__', '__ setitem__', '__ setstate__', '__ sizeof__', '__ str__', '__ subclasshook__', '__ weakref__', '_cls_cancel', '_cls_delete', '_cls_delete_discount', '_last_response', '_previous', '_request', '_request_and_refresh', '_retrieve_params', '_search', '_static_request', '_static_request_stream', '_transient_values', '_unsaved_values', 'api_base', 'api_key', 'auto_paging_iter', 'cancel', 'class_url', 'clear', 'construct_from', 'copy', 'create', 'delete', 'delete_discount', 'fromkeys', 'get', 'instance_url', 'items', 'keys', 'last_response', 'list', 'modify', 'pop', 'popitem', 'refresh', 'refresh_from', 'request', 'request_stream', 'retrieve', 'save', 'search', 'search_auto_paging_iter', 'serialize', 'setdefault', 'stripe_account', 'stripe_id', 'stripe_version', 'to_dict', 'to_dict_recursive', 'update', 'values'] retrieve_sub <Subscription subscription id=sub_1M3je8LwptWcL8DfJCObejxI at 0x25183cb5c70> JSON: { "application": null, "application_fee_percent": null, "automatic_tax": { "enabled": false }, "billing": "charge_automatically", "billing_cycle_anchor": 1672333500, "billing_thresholds": null, "cancel_at": null, "cancel_at_period_end": false, "canceled_at": null, "collection_method": "charge_automatically", "created": 1672333500, "currency": "usd", "current_period_end": 1675011900, "current_period_start": 1672333500, "customer": "cus_MnKIJmGPujdOVd", "days_until_due": null, "default_payment_method": null, "default_source": null, "description": null, "discount": null, "ended_at": null, "id": "sub_1M3je8LwptWcL8DfJCObejxI", "invoice_customer_balance_settings": { "consume_applied_balance_on_void": true }, "items": { "data": [ { "billing_thresholds": null, "created": 1672333500, "id": "si_MnKIat6U3EN0O9", "metadata": {}, "object": "subscription_item", "plan": { "active": true, "aggregate_usage": null, "amount": 10000, "amount_decimal": "10000", "billing_scheme": "per_unit", "created": 1668352687, "currency": "usd", "id": "price_1M3hwhLwptWcL8DfUOZfUBnH", "interval": "month", "interval_count": 1, "livemode": false, "metadata": {}, "name": "Test Sub", "nickname": null, "object": "plan", "product": "prod_MnIXRF3ji6h4Nq", "statement_descriptor": null, "tiers": null, "tiers_mode": null, "transform_usage": null, "trial_period_days": null, "usage_type": "licensed" }, "price": { "active": true, "billing_scheme": "per_unit", "created": 1668352687, "currency": "usd", "custom_unit_amount": null, "id": "price_1M3hwhLwptWcL8DfUOZfUBnH", "livemode": false, "lookup_key": null, "metadata": {}, "nickname": null, "object": "price", "product": "prod_MnIXRF3ji6h4Nq", "recurring": { "aggregate_usage": null, "interval": "month", "interval_count": 1, "trial_period_days": null, "usage_type": "licensed" }, "tax_behavior": "unspecified", "tiers_mode": null, "transform_quantity": null, "type": "recurring", "unit_amount": 10000, "unit_amount_decimal": "10000" }, "quantity": 1, "subscription": "sub_1M3je8LwptWcL8DfJCObejxI", "tax_rates": [] } ], "has_more": false, "object": "list", "total_count": 1, "url": "/v1/subscription_items?subscription=sub_1M3je8LwptWcL8DfJCObejxI" }, "latest_invoice": "in_1M3je8LwptWcL8DfNbeYfB6C", "livemode": false, "metadata": {}, "next_pending_invoice_item_invoice": null, "object": "subscription", "on_behalf_of": null, "pause_collection": null, "payment_settings": { "payment_method_options": null, "payment_method_types": null, "save_default_payment_method": "off" }, "pending_invoice_item_interval": null, "pending_setup_intent": null, "pending_update": null, "plan": { "active": true, "aggregate_usage": null, "amount": 10000, "amount_decimal": "10000", "billing_scheme": "per_unit", "created": 1668352687, "currency": "usd", "id": "price_1M3hwhLwptWcL8DfUOZfUBnH", "interval": "month", "interval_count": 1, "livemode": false, "metadata": {}, "name": "Test Sub", "nickname": null, "object": "plan", "product": "prod_MnIXRF3ji6h4Nq", "statement_descriptor": null, "tiers": null, "tiers_mode": null, "transform_usage": null, "trial_period_days": null, "usage_type": "licensed" }, "quantity": 1, "schedule": null, "start": 1672333500, "start_date": 1672333500, "status": "active", "tax_percent": null, "test_clock": "clock_1M3jckLwptWcL8Dff89Wpff4", "transfer_data": null, "trial_end": null, "trial_start": null } A: The Subscription object has a status property. When you call the Retrieve Subscription API, you get back that Subscription object as a class in stripe-python. At that point you have access to all the properties of that object directly. You can access the status like this: retrieve_sub = stripe.Subscription.retrieve('sub_123') status = retrieve_sub.status
How to retrieve the status of a Stripe Subscription
I am trying to get the status of customers subscriptions using the Stripe API for example: print(subscriptionStatusFunctionAPI(subscriptionID)) *"returns subscription status (e.g. active, past_due, canceled, unpaid, etc)"* below is the current pseudocode import stripe stripe.api_key = 'rk_test_XXX' ### retrieve and return customer specific Subscription object ### retrieve_sub = stripe.Subscription.retrieve( "sub_1M3iv0LwptWcL8DfHusz7LZ1", ) print(dir(retrieve_sub)) ['OBJECT_NAME', 'ReprJSONEncoder', '__ class__', '__ contains__', '__ copy__', '__ deepcopy__', '__ delattr__', '__ delitem__', '__ dict__', '__ dir__', '__ doc__', '__ eq__', '__ format__', '__ ge__', '__ getattr__', '__ getattribute__', '__ getitem__', '__ gt__', '__ hash__', '__ init__', '__ init_subclass__', '__ iter__', '__ le__', '__ len__', '__ lt__', '__ module__', '__ ne__', '__ new__', '__ reduce__', '__ reduce_ex__', '__ repr__', '__ reversed__', '__ setattr__', '__ setitem__', '__ setstate__', '__ sizeof__', '__ str__', '__ subclasshook__', '__ weakref__', '_cls_cancel', '_cls_delete', '_cls_delete_discount', '_last_response', '_previous', '_request', '_request_and_refresh', '_retrieve_params', '_search', '_static_request', '_static_request_stream', '_transient_values', '_unsaved_values', 'api_base', 'api_key', 'auto_paging_iter', 'cancel', 'class_url', 'clear', 'construct_from', 'copy', 'create', 'delete', 'delete_discount', 'fromkeys', 'get', 'instance_url', 'items', 'keys', 'last_response', 'list', 'modify', 'pop', 'popitem', 'refresh', 'refresh_from', 'request', 'request_stream', 'retrieve', 'save', 'search', 'search_auto_paging_iter', 'serialize', 'setdefault', 'stripe_account', 'stripe_id', 'stripe_version', 'to_dict', 'to_dict_recursive', 'update', 'values'] retrieve_sub <Subscription subscription id=sub_1M3je8LwptWcL8DfJCObejxI at 0x25183cb5c70> JSON: { "application": null, "application_fee_percent": null, "automatic_tax": { "enabled": false }, "billing": "charge_automatically", "billing_cycle_anchor": 1672333500, "billing_thresholds": null, "cancel_at": null, "cancel_at_period_end": false, "canceled_at": null, "collection_method": "charge_automatically", "created": 1672333500, "currency": "usd", "current_period_end": 1675011900, "current_period_start": 1672333500, "customer": "cus_MnKIJmGPujdOVd", "days_until_due": null, "default_payment_method": null, "default_source": null, "description": null, "discount": null, "ended_at": null, "id": "sub_1M3je8LwptWcL8DfJCObejxI", "invoice_customer_balance_settings": { "consume_applied_balance_on_void": true }, "items": { "data": [ { "billing_thresholds": null, "created": 1672333500, "id": "si_MnKIat6U3EN0O9", "metadata": {}, "object": "subscription_item", "plan": { "active": true, "aggregate_usage": null, "amount": 10000, "amount_decimal": "10000", "billing_scheme": "per_unit", "created": 1668352687, "currency": "usd", "id": "price_1M3hwhLwptWcL8DfUOZfUBnH", "interval": "month", "interval_count": 1, "livemode": false, "metadata": {}, "name": "Test Sub", "nickname": null, "object": "plan", "product": "prod_MnIXRF3ji6h4Nq", "statement_descriptor": null, "tiers": null, "tiers_mode": null, "transform_usage": null, "trial_period_days": null, "usage_type": "licensed" }, "price": { "active": true, "billing_scheme": "per_unit", "created": 1668352687, "currency": "usd", "custom_unit_amount": null, "id": "price_1M3hwhLwptWcL8DfUOZfUBnH", "livemode": false, "lookup_key": null, "metadata": {}, "nickname": null, "object": "price", "product": "prod_MnIXRF3ji6h4Nq", "recurring": { "aggregate_usage": null, "interval": "month", "interval_count": 1, "trial_period_days": null, "usage_type": "licensed" }, "tax_behavior": "unspecified", "tiers_mode": null, "transform_quantity": null, "type": "recurring", "unit_amount": 10000, "unit_amount_decimal": "10000" }, "quantity": 1, "subscription": "sub_1M3je8LwptWcL8DfJCObejxI", "tax_rates": [] } ], "has_more": false, "object": "list", "total_count": 1, "url": "/v1/subscription_items?subscription=sub_1M3je8LwptWcL8DfJCObejxI" }, "latest_invoice": "in_1M3je8LwptWcL8DfNbeYfB6C", "livemode": false, "metadata": {}, "next_pending_invoice_item_invoice": null, "object": "subscription", "on_behalf_of": null, "pause_collection": null, "payment_settings": { "payment_method_options": null, "payment_method_types": null, "save_default_payment_method": "off" }, "pending_invoice_item_interval": null, "pending_setup_intent": null, "pending_update": null, "plan": { "active": true, "aggregate_usage": null, "amount": 10000, "amount_decimal": "10000", "billing_scheme": "per_unit", "created": 1668352687, "currency": "usd", "id": "price_1M3hwhLwptWcL8DfUOZfUBnH", "interval": "month", "interval_count": 1, "livemode": false, "metadata": {}, "name": "Test Sub", "nickname": null, "object": "plan", "product": "prod_MnIXRF3ji6h4Nq", "statement_descriptor": null, "tiers": null, "tiers_mode": null, "transform_usage": null, "trial_period_days": null, "usage_type": "licensed" }, "quantity": 1, "schedule": null, "start": 1672333500, "start_date": 1672333500, "status": "active", "tax_percent": null, "test_clock": "clock_1M3jckLwptWcL8Dff89Wpff4", "transfer_data": null, "trial_end": null, "trial_start": null }
[ "The Subscription object has a status property. When you call the Retrieve Subscription API, you get back that Subscription object as a class in stripe-python. At that point you have access to all the properties of that object directly.\nYou can access the status like this:\nretrieve_sub = stripe.Subscription.retrieve('sub_123')\nstatus = retrieve_sub.status\n\n" ]
[ 1 ]
[]
[]
[ "python", "stripe_payments" ]
stackoverflow_0074468494_python_stripe_payments.txt
Q: coverting list of string coordinates into list of lists coordinates without string I have a list of list of strings, but each string a coordinate separated by commas, I want to convert into list of lists of coordinates without string my_list =['44324,-34244', '44885.1,-33445.6', '45373.1,-32849.8', '45380.1,-32625.6', '44635.7,-32285.6', '44635.7,-32285.6'] I want to convert into [[44324,-34244], [44885.1,-33445.6], [45373.1,-32849.8], [45380.1,-32625.6], [44635.7,-32285.6], [44635.7,-32285.6]]``` I tried the following but it doesn't work ```coords = [map(float,i.split(",")) for i in my_list]``` print(coords) gives me gives me <map object at 0x7f7a7715d2b0> A: Wrap map(...) in list(...) like so: coords = [list(map(float,i.split(","))) for i in my_list]
coverting list of string coordinates into list of lists coordinates without string
I have a list of list of strings, but each string a coordinate separated by commas, I want to convert into list of lists of coordinates without string my_list =['44324,-34244', '44885.1,-33445.6', '45373.1,-32849.8', '45380.1,-32625.6', '44635.7,-32285.6', '44635.7,-32285.6'] I want to convert into [[44324,-34244], [44885.1,-33445.6], [45373.1,-32849.8], [45380.1,-32625.6], [44635.7,-32285.6], [44635.7,-32285.6]]``` I tried the following but it doesn't work ```coords = [map(float,i.split(",")) for i in my_list]``` print(coords) gives me gives me <map object at 0x7f7a7715d2b0>
[ "Wrap map(...) in list(...) like so:\ncoords = [list(map(float,i.split(\",\"))) for i in my_list]\n\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074468590_python_python_3.x.txt
Q: python selenium, find out when a download has completed? I've used selenium to initiate a download. After the download is complete, certain actions need to be taken, is there any simple method to find out when a download has complete? (I am using the FireFox driver) A: I came across this problem recently. I was downloading multiple files at once and had to build in a way to timeout if the downloads failed. The code checks the filenames in some download directory every second and exits once they are complete or if it takes longer than 20 seconds to finish. The returned download time was used to check if the downloads were successful or if it timed out. import time import os def download_wait(path_to_downloads): seconds = 0 dl_wait = True while dl_wait and seconds < 20: time.sleep(1) dl_wait = False for fname in os.listdir(path_to_downloads): if fname.endswith('.crdownload'): dl_wait = True seconds += 1 return seconds I believe that this only works with chrome files as they end with the .crdownload extension. There may be a similar way to check in other browsers. Edit: I recently changed the way that I use this function for times that .crdownload does not appear as the extension. Essentially this just waits for the correct number of files as well. def download_wait(directory, timeout, nfiles=None): """ Wait for downloads to finish with a specified timeout. Args ---- directory : str The path to the folder where the files will be downloaded. timeout : int How many seconds to wait until timing out. nfiles : int, defaults to None If provided, also wait for the expected number of files. """ seconds = 0 dl_wait = True while dl_wait and seconds < timeout: time.sleep(1) dl_wait = False files = os.listdir(directory) if nfiles and len(files) != nfiles: dl_wait = True for fname in files: if fname.endswith('.crdownload'): dl_wait = True seconds += 1 return seconds A: There is no built-in to selenium way to wait for the download to be completed. The general idea here would be to wait until a file would appear in your "Downloads" directory. This might either be achieved by looping over and over again checking for file existence: Check and wait until a file exists to read it Or, by using things like watchdog to monitor a directory: How to watch a directory for changes? Monitoring contents of files/directories? A: import os import time def latest_download_file(): path = r'Downloads folder file path' os.chdir(path) files = sorted(os.listdir(os.getcwd()), key=os.path.getmtime) newest = files[-1] return newest fileends = "crdownload" while "crdownload" == fileends: time.sleep(1) newest_file = latest_download_file() if "crdownload" in newest_file: fileends = "crdownload" else: fileends = "none" This is a combination of a few solutions. I didn't like that I had to scan the entire downloads folder for a file ending in "crdownload". This code implements a function that pulls the newest file in downloads folder. Then it simply checks if that file is still being downloaded. Used it for a Selenium tool I am building worked very well. A: I know its too late for the answer, though would like to share a hack for future readers. You can create a thread say thread1 from main thread and initiate your download here. Now, create some another thread, say thread2 and in there ,let it wait till thread1 completes using join() method.Now here,you can continue your flow of execution after download completes. Still make sure you dont initiate your download using selenium, instead extract the link using selenium and use requests module to download. Download using requests module For eg: def downloadit(): #download code here def after_dwn(): dwn_thread.join() #waits till thread1 has completed executing #next chunk of code after download, goes here dwn_thread = threading.Thread(target=downloadit) dwn_thread.start() metadata_thread = threading.Thread(target=after_dwn) metadata_thread.start() A: As answered before, there is no native way to check if download is finished. So here is a helper function that does the job for Firefox and Chrome. One trick is to clear the temp download folder before start a new download. Also, use native pathlib for cross-platform usage. from pathlib import Path def is_download_finished(temp_folder): firefox_temp_file = sorted(Path(temp_folder).glob('*.part')) chrome_temp_file = sorted(Path(temp_folder).glob('*.crdownload')) downloaded_files = sorted(Path(temp_folder).glob('*.*')) if (len(firefox_temp_file) == 0) and \ (len(chrome_temp_file) == 0) and \ (len(downloaded_files) >= 1): return True else: return False A: Check for "Unconfirmed" key word in file name in download directory: # wait for download complete wait = True while(wait==True): for fname in os.listdir('\path\to\download directory'): if ('Unconfirmed') in fname: print('downloading files ...') time.sleep(10) else: wait=False print('finished downloading all files ...') As soon as the the filed download is completed it exits the while loop. A: x1=0 while x1==0: count=0 li = os.listdir("directorypath") for x1 in li: if x1.endswith(".crdownload"): count = count+1 if count==0: x1=1 else: x1=0 This works if you are trying to check if a set of files(more than one) have finished downloading. A: If using Selenium and Chrome, you can write a custom wait condition such as: class file_has_been_downloaded(object): def __init__(self, dir, number): self.dir = dir self.number = number def __call__(self, driver): print(count_files(dir), '->', self.number) return count_files(dir) > self.number The function count_files just verifies that the file has been added to the folder def count_files(direct): for root, dirs, files in os.walk(direct): return len(list(f for f in files if f.startswith('MyPrefix') and ( not f.endswith('.crdownload')) )) Then to implement this in your code: files = count_files(dir) << copy the file. Possibly use shutil >> WebDriverWait(driver, 30).until(file_has_been_downloaded(dir, files)) A: this worked for me: fileends = "crdownload" while "crdownload" in fileends: sleep(1) for fname in os.listdir(some_path): print(fname) if "crdownload" in fname: fileends = "crdownload" else: fileends = "None" A: I got a better one though: So redirect the function that starts the download. e.g. download_function= lambda: element.click() than check number of files in directory and wait for a new file that doesnt have the download extension. After that rename it. (can be change to move the file instead of renaming it in the same directory) def save_download(self, directory, download_function, new_name, timeout=30): """ Download a file and rename it :param directory: download location that is set :param download_function: function to start download :param new_name: the name that the new download gets :param timeout: number of seconds to wait for download :return: path to downloaded file """ self.logger.info("Downloading " + new_name) files_start = os.listdir(directory) download_function() wait = True i = 0 while (wait or len(os.listdir(directory)) == len(files_start)) and i < timeout * 2: sleep(0.5) wait = False for file_name in os.listdir(directory): if file_name.endswith('.crdownload'): wait = True if i == timeout * 2: self.logger.warning("Documents not downloaded") raise TimeoutError("File not downloaded") else: self.logger.info("Downloading done") new_file = [name for name in os.listdir(directory) if name not in files_start][0] self.logger.info("New file found renaming " + new_file + " to " + new_name) while not os.access(directory + r"\\" + new_file, os.W_OK): sleep(0.5) self.logger.info("Waiting for write permission") os.rename(directory + "\\" + new_file, directory + "\\" + new_name) return directory + "\\" + new_file A: With Chrome, files which have not finished downloading have the extension .crdownload. If you set your download directory properly, then you can wait until the file that you want no longer has this extension. In principle, this is not much different to waiting for file to exist (as suggested by alecxe) - but at least you can monitor progress in this way. A: create a function that uses "requests" to get the file content and call that one, your program will not move forward unless the file is downloaded import requests from selenium import webdriver driver = webdriver.Chrome() # Open the website driver.get(website_url_) x = driver.find_element_by_partial_link_text('download') y = x.get_attribute("href") fc = requests.get(y) fname = x.text with open(fname, 'wb') as f: f.write(fc.content) A: This is VERY SIMPLE and worked for me (and works fot any extention) import os, glob and time (not truly needed) # count how many files you have in Downloads folder before download user = os.getlogin() downloads_folder = (r"C:/Users/" + user + "/Downloads/") files_path = os.path.join(downloads_folder, '*') files = sorted(glob.iglob(files_path), key=os.path.getctime, reverse=True) files_before_download = files print(f'files before download: {len(files)}') finished = False # ... # code # to # download # file # ... # just for extra safety time.sleep(0.5) # wait for the download to finish if there is +1 file in Downloads folder while not finished: files = sorted(glob.iglob(files_path), key=os.path.getctime, reverse=True) print(len(files)) if (len(files) == len(files_before_download)) or (len(files) == (len(files_before_download)+2)): print('not finished') finished = False else: print('finished') finished = True last_downloaded_file = files[0] A: well, what if you check the size of the file until it has x size? There must be an average (Too bored to buid code, build it. Ideas helps too) A: TL;DR poll for the existence of the file poll for non-zero filesize of that file Observed Behaviour I noticed that there can be a lag between a downloaded file appearing in the filesystem and the contents of that file being fully written, especially noticeable with large files. I did some experimenting, using stat_result from os.stat() on Linux, and found the following, when a file is first opened for writing st_size == 0 st_atime == st_mtime == st_ctime while data is being written to the file st_size == 0 st_atime == st_mtime == st_ctime once the writing is complete and the file is closed st_size > 0 st_atime < st_mtime == st_ctime Implementation Poll for a file using glob with a configurable timeout This is useful when you don't know exactly what the name of the downloaded file will be Poll for the filesize of a specific file to be above a threshold import glob import polling2 import os def poll_for_file_glob(file_glob: str, step: int=1, timeout: int=20): try: polling2.poll(lambda: len(glob.glob(file_glob)), step=step, timeout=timeout) except polling2.TimeoutException: raise RuntimeError(f"Unable to find file matching glob '{file_glob}'") return glob.glob(file_glob)[0] def poll_for_file_size(file_path: str, size_threshold: int=0, step: int=1, timeout: int=20): try: polling2.poll(lambda: os.stat(file_path).st_size > size_threshold, step=step, timeout=timeout) except polling2.TimeoutException: file_size = os.stat(file_path).st_size raise RuntimeError(f"File '{file_path}' has size {file_size}, which is not larger than threshold {size_threshold}") return os.stat(file_path).st_size You might use these functions like this, try: file_glob = "file_*.csv" file_path = poll_for_file_glob(file_glob=file_glob) file_size = poll_for_file_size(file_path=file_path) except: print(f"Problem polling for file matching '{file_glob}'") else: print(f"File '{file_path}' ({file_size}B) is ready")
python selenium, find out when a download has completed?
I've used selenium to initiate a download. After the download is complete, certain actions need to be taken, is there any simple method to find out when a download has complete? (I am using the FireFox driver)
[ "I came across this problem recently. I was downloading multiple files at once and had to build in a way to timeout if the downloads failed. \nThe code checks the filenames in some download directory every second and exits once they are complete or if it takes longer than 20 seconds to finish. The returned download time was used to check if the downloads were successful or if it timed out.\nimport time\nimport os\n\ndef download_wait(path_to_downloads):\n seconds = 0\n dl_wait = True\n while dl_wait and seconds < 20:\n time.sleep(1)\n dl_wait = False\n for fname in os.listdir(path_to_downloads):\n if fname.endswith('.crdownload'):\n dl_wait = True\n seconds += 1\n return seconds\n\nI believe that this only works with chrome files as they end with the .crdownload extension. There may be a similar way to check in other browsers.\nEdit: I recently changed the way that I use this function for times that .crdownload does not appear as the extension. Essentially this just waits for the correct number of files as well.\ndef download_wait(directory, timeout, nfiles=None):\n \"\"\"\n Wait for downloads to finish with a specified timeout.\n\n Args\n ----\n directory : str\n The path to the folder where the files will be downloaded.\n timeout : int\n How many seconds to wait until timing out.\n nfiles : int, defaults to None\n If provided, also wait for the expected number of files.\n\n \"\"\"\n seconds = 0\n dl_wait = True\n while dl_wait and seconds < timeout:\n time.sleep(1)\n dl_wait = False\n files = os.listdir(directory)\n if nfiles and len(files) != nfiles:\n dl_wait = True\n\n for fname in files:\n if fname.endswith('.crdownload'):\n dl_wait = True\n\n seconds += 1\n return seconds\n\n", "There is no built-in to selenium way to wait for the download to be completed.\n\nThe general idea here would be to wait until a file would appear in your \"Downloads\" directory.\nThis might either be achieved by looping over and over again checking for file existence:\n\nCheck and wait until a file exists to read it\n\nOr, by using things like watchdog to monitor a directory:\n\nHow to watch a directory for changes? \nMonitoring contents of files/directories?\n\n", "import os\nimport time\n\ndef latest_download_file():\n path = r'Downloads folder file path'\n os.chdir(path)\n files = sorted(os.listdir(os.getcwd()), key=os.path.getmtime)\n newest = files[-1]\n\n return newest\n\nfileends = \"crdownload\"\nwhile \"crdownload\" == fileends:\n time.sleep(1)\n newest_file = latest_download_file()\n if \"crdownload\" in newest_file:\n fileends = \"crdownload\"\n else:\n fileends = \"none\"\n\nThis is a combination of a few solutions. I didn't like that I had to scan the entire downloads folder for a file ending in \"crdownload\". This code implements a function that pulls the newest file in downloads folder. Then it simply checks if that file is still being downloaded. Used it for a Selenium tool I am building worked very well.\n", "I know its too late for the answer, though would like to share a hack for future readers.\nYou can create a thread say thread1 from main thread and initiate your download here.\nNow, create some another thread, say thread2 and in there ,let it wait till thread1 completes using join() method.Now here,you can continue your flow of execution after download completes.\nStill make sure you dont initiate your download using selenium, instead extract the link using selenium and use requests module to download.\n Download using requests module\nFor eg:\ndef downloadit():\n #download code here \n\ndef after_dwn():\n dwn_thread.join() #waits till thread1 has completed executing\n #next chunk of code after download, goes here\n\ndwn_thread = threading.Thread(target=downloadit)\ndwn_thread.start()\n\nmetadata_thread = threading.Thread(target=after_dwn)\nmetadata_thread.start()\n\n", "As answered before, there is no native way to check if download is finished. So here is a helper function that does the job for Firefox and Chrome. One trick is to clear the temp download folder before start a new download. Also, use native pathlib for cross-platform usage.\nfrom pathlib import Path\n\ndef is_download_finished(temp_folder):\n firefox_temp_file = sorted(Path(temp_folder).glob('*.part'))\n chrome_temp_file = sorted(Path(temp_folder).glob('*.crdownload'))\n downloaded_files = sorted(Path(temp_folder).glob('*.*'))\n if (len(firefox_temp_file) == 0) and \\\n (len(chrome_temp_file) == 0) and \\\n (len(downloaded_files) >= 1):\n return True\n else:\n return False\n\n", "Check for \"Unconfirmed\" key word in file name in download directory:\n# wait for download complete\nwait = True\nwhile(wait==True):\n for fname in os.listdir('\\path\\to\\download directory'):\n if ('Unconfirmed') in fname:\n print('downloading files ...')\n time.sleep(10)\n else:\n wait=False\nprint('finished downloading all files ...')\n\nAs soon as the the filed download is completed it exits the while loop.\n", "x1=0\nwhile x1==0:\n count=0\n li = os.listdir(\"directorypath\")\n for x1 in li:\n if x1.endswith(\".crdownload\"):\n count = count+1 \n if count==0:\n x1=1\n else:\n x1=0\n\nThis works if you are trying to check if a set of files(more than one) have finished downloading. \n", "If using Selenium and Chrome, you can write a custom wait condition such as:\nclass file_has_been_downloaded(object):\ndef __init__(self, dir, number):\n self.dir = dir\n self.number = number\n\ndef __call__(self, driver):\n print(count_files(dir), '->', self.number)\n return count_files(dir) > self.number\n\nThe function count_files just verifies that the file has been added to the folder\ndef count_files(direct):\nfor root, dirs, files in os.walk(direct):\n return len(list(f for f in files if f.startswith('MyPrefix') and (\n not f.endswith('.crdownload')) ))\n\nThen to implement this in your code:\nfiles = count_files(dir)\n<< copy the file. Possibly use shutil >>\nWebDriverWait(driver, 30).until(file_has_been_downloaded(dir, files))\n\n", "this worked for me:\nfileends = \"crdownload\"\nwhile \"crdownload\" in fileends:\n sleep(1)\n for fname in os.listdir(some_path): \n print(fname)\n if \"crdownload\" in fname:\n fileends = \"crdownload\"\n else:\n fileends = \"None\"\n\n", "I got a better one though:\nSo redirect the function that starts the download. e.g. download_function= lambda: element.click()\nthan check number of files in directory and wait for a new file that doesnt have the download extension. After that rename it. (can be change to move the file instead of renaming it in the same directory)\ndef save_download(self, directory, download_function, new_name, timeout=30):\n \"\"\"\n Download a file and rename it\n :param directory: download location that is set\n :param download_function: function to start download\n :param new_name: the name that the new download gets\n :param timeout: number of seconds to wait for download\n :return: path to downloaded file\n \"\"\"\n self.logger.info(\"Downloading \" + new_name)\n files_start = os.listdir(directory)\n download_function()\n wait = True\n i = 0\n while (wait or len(os.listdir(directory)) == len(files_start)) and i < timeout * 2:\n sleep(0.5)\n wait = False\n for file_name in os.listdir(directory):\n if file_name.endswith('.crdownload'):\n wait = True\n if i == timeout * 2:\n self.logger.warning(\"Documents not downloaded\")\n raise TimeoutError(\"File not downloaded\")\n else:\n self.logger.info(\"Downloading done\")\n new_file = [name for name in os.listdir(directory) if name not in files_start][0]\n self.logger.info(\"New file found renaming \" + new_file + \" to \" + new_name)\n while not os.access(directory + r\"\\\\\" + new_file, os.W_OK):\n sleep(0.5)\n self.logger.info(\"Waiting for write permission\")\n os.rename(directory + \"\\\\\" + new_file, directory + \"\\\\\" + new_name)\n return directory + \"\\\\\" + new_file\n\n", "With Chrome, files which have not finished downloading have the extension .crdownload. If you set your download directory properly, then you can wait until the file that you want no longer has this extension. In principle, this is not much different to waiting for file to exist (as suggested by alecxe) - but at least you can monitor progress in this way.\n", "create a function that uses \"requests\" to get the file content and call that one, your program will not move forward unless the file is downloaded\nimport requests\nfrom selenium import webdriver\ndriver = webdriver.Chrome()\n# Open the website\ndriver.get(website_url_)\n\nx = driver.find_element_by_partial_link_text('download')\ny = x.get_attribute(\"href\")\nfc = requests.get(y)\nfname = x.text\nwith open(fname, 'wb') as f:\n f.write(fc.content)\n\n", "This is VERY SIMPLE and worked for me (and works fot any extention)\nimport os, glob and time (not truly needed)\n\n# count how many files you have in Downloads folder before download\n\nuser = os.getlogin()\ndownloads_folder = (r\"C:/Users/\" + user + \"/Downloads/\")\nfiles_path = os.path.join(downloads_folder, '*')\nfiles = sorted(glob.iglob(files_path), key=os.path.getctime, reverse=True)\nfiles_before_download = files\n\nprint(f'files before download: {len(files)}')\nfinished = False\n\n# ...\n# code\n# to\n# download\n# file\n# ...\n\n# just for extra safety\ntime.sleep(0.5)\n\n# wait for the download to finish if there is +1 file in Downloads folder\nwhile not finished:\n files = sorted(glob.iglob(files_path), key=os.path.getctime, reverse=True)\n print(len(files))\n if (len(files) == len(files_before_download)) or (len(files) == (len(files_before_download)+2)):\n print('not finished')\n finished = False\n else:\n print('finished')\n finished = True\n\nlast_downloaded_file = files[0]\n\n", "well, what if you check the size of the file until it has x size? There must be an average (Too bored to buid code, build it. Ideas helps too)\n", "TL;DR\n\npoll for the existence of the file\npoll for non-zero filesize of that file\n\nObserved Behaviour\nI noticed that there can be a lag between a downloaded file appearing in the filesystem and the contents of that file being fully written, especially noticeable with large files.\nI did some experimenting, using stat_result from os.stat() on Linux, and found the following,\n\nwhen a file is first opened for writing\n\nst_size == 0\nst_atime == st_mtime == st_ctime\n\n\nwhile data is being written to the file\n\nst_size == 0\nst_atime == st_mtime == st_ctime\n\n\nonce the writing is complete and the file is closed\n\nst_size > 0\nst_atime < st_mtime == st_ctime\n\n\n\nImplementation\n\nPoll for a file using glob with a configurable timeout\n\nThis is useful when you don't know exactly what the name of the downloaded file will be\n\n\nPoll for the filesize of a specific file to be above a threshold\n\nimport glob\nimport polling2\nimport os\n\ndef poll_for_file_glob(file_glob: str, step: int=1, timeout: int=20):\n try:\n polling2.poll(lambda: len(glob.glob(file_glob)), step=step, timeout=timeout)\n except polling2.TimeoutException:\n raise RuntimeError(f\"Unable to find file matching glob '{file_glob}'\")\n return glob.glob(file_glob)[0]\n\ndef poll_for_file_size(file_path: str, size_threshold: int=0, step: int=1, timeout: int=20):\n try:\n polling2.poll(lambda: os.stat(file_path).st_size > size_threshold, step=step, timeout=timeout)\n except polling2.TimeoutException:\n file_size = os.stat(file_path).st_size\n raise RuntimeError(f\"File '{file_path}' has size {file_size}, which is not larger than threshold {size_threshold}\")\n return os.stat(file_path).st_size\n\nYou might use these functions like this,\ntry:\n file_glob = \"file_*.csv\"\n file_path = poll_for_file_glob(file_glob=file_glob)\n file_size = poll_for_file_size(file_path=file_path)\nexcept:\n print(f\"Problem polling for file matching '{file_glob}'\")\nelse:\n print(f\"File '{file_path}' ({file_size}B) is ready\")\n\n" ]
[ 51, 29, 13, 7, 5, 3, 2, 2, 1, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "python", "selenium" ]
stackoverflow_0034338897_python_selenium.txt
Q: Select specific columns with cast using SQLAlchemy I'm using SQLAlchemy (Version: 1.4.44) and I'm having some unexpected results when trying to select columns and using cast on those columns. First, most of the examples and even current documentation suggests column selection should work by passing an array to the select function like this: s = select([table.c.col1]) However, I get the following error if I try this: s = my_table.select([my_table.columns.user_id]) sqlalchemy.exc.ArgumentError: SQL expression for WHERE/HAVING role expected, got [Column('user_id', String(), table=<my_table>)]. Some examples suggest just placing the field directly in the select query. s = select(table.c.col1) But this seems to do nothing more than create an idle where-clause out of the field. I eventually was able to achieve column selection with this approach: s = my_table.select().with_only_columns(my_table.columns.created_at) But I am not able to use cast for some reason with this approach. s = my_table.select().with_only_columns(cast(my_table.columns.created_at, Date)) ValueError: Couldn't parse date string '2022' - value is not a string. All help appreciated! A: I don't think table.select() is common usage. SQLAlchemy is in a big transition right now on its way to 2.0. In 1.4 (and in 2) the following syntax should work, use whatever session handling you already have working I just mean the select(...): from sqlalchemy.sql import select, cast from sqlalchemy.dialects.postgresql import INTEGER class User(Base): __tablename__ = "users" id = Column( Integer, nullable=False, primary_key=True ) name = Column(Text) with Session(engine) as session: u1 = User(name="1") session.add(u1) session.commit() with Session(engine) as session: my_table = User.__table__ # Cast user name into integer. print (session.execute(select(cast(my_table.c.name, INTEGER))).all())
Select specific columns with cast using SQLAlchemy
I'm using SQLAlchemy (Version: 1.4.44) and I'm having some unexpected results when trying to select columns and using cast on those columns. First, most of the examples and even current documentation suggests column selection should work by passing an array to the select function like this: s = select([table.c.col1]) However, I get the following error if I try this: s = my_table.select([my_table.columns.user_id]) sqlalchemy.exc.ArgumentError: SQL expression for WHERE/HAVING role expected, got [Column('user_id', String(), table=<my_table>)]. Some examples suggest just placing the field directly in the select query. s = select(table.c.col1) But this seems to do nothing more than create an idle where-clause out of the field. I eventually was able to achieve column selection with this approach: s = my_table.select().with_only_columns(my_table.columns.created_at) But I am not able to use cast for some reason with this approach. s = my_table.select().with_only_columns(cast(my_table.columns.created_at, Date)) ValueError: Couldn't parse date string '2022' - value is not a string. All help appreciated!
[ "I don't think table.select() is common usage. SQLAlchemy is in a big transition right now on its way to 2.0. In 1.4 (and in 2) the following syntax should work, use whatever session handling you already have working I just mean the select(...):\nfrom sqlalchemy.sql import select, cast\nfrom sqlalchemy.dialects.postgresql import INTEGER\n\nclass User(Base):\n __tablename__ = \"users\"\n id = Column(\n Integer, nullable=False, primary_key=True\n )\n name = Column(Text)\n\nwith Session(engine) as session:\n u1 = User(name=\"1\")\n session.add(u1)\n session.commit()\n\nwith Session(engine) as session:\n my_table = User.__table__\n # Cast user name into integer.\n print (session.execute(select(cast(my_table.c.name, INTEGER))).all())\n\n" ]
[ 1 ]
[]
[]
[ "casting", "python", "select", "sqlalchemy" ]
stackoverflow_0074461385_casting_python_select_sqlalchemy.txt
Q: Scraping: run for loop n number of times I am using instaloader to scrape instagram posts as part of a study project. To avoid getting shut down by instagram, I use sleep function to sleep between 1-20 sec between each round. This works well. I don't want to have to go through all posts each time I scrape, and therefore i want the loop to run 5 times. Which will give me 5 posts. But I don't seem to manage to get it to do it. I had written the following function to try to scrape the profile and return the first 5 posts: ## importing and creating instance from instaloader import Instaloader from instaloader import Profile import instaloader import time from random import randint L = instaloader.Instaloader() #random time for sleep vent = randint(1,20) # function: def get2posts(profile_name): profile = Profile.from_username(L.context, profile_name) POSTS = profile.get_posts() for post in POSTS: for i in range(2): L.download_post(post, profile_name) time.sleep(vent) break print('scrape done') This code returns 5 of the same posts though, and I simply can't figure out a way to get it to return the first 5 posts of an account. The working function, which harvests all posts of a profile is: # the original function (without range) def get_posts(profile_name): profile = Profile.from_username(L.context, profile_name) POSTS = profile.get_posts() for post in POSTS: L.download_post(post, profile_name) time.sleep(vent) print('I am done') Hope you can help :) A: The problem is that the inner for loop runs download_post twice (range(2)) on the same post, and then the outer loop breaks. If POSTS is a list, you can use slicing to loop only over the first 5 items like so: for post in POSTS[:5]:. A safer method though would be to count the posts as you go, which should work for most types of iterables (not just lists), like so: def get2posts(profile_name): profile = Profile.from_username(L.context, profile_name) POSTS = profile.get_posts() for i, post in enumerate(POSTS): L.download_post(post, profile_name) if i == 4: break time.sleep(vent) print('scrape done')
Scraping: run for loop n number of times
I am using instaloader to scrape instagram posts as part of a study project. To avoid getting shut down by instagram, I use sleep function to sleep between 1-20 sec between each round. This works well. I don't want to have to go through all posts each time I scrape, and therefore i want the loop to run 5 times. Which will give me 5 posts. But I don't seem to manage to get it to do it. I had written the following function to try to scrape the profile and return the first 5 posts: ## importing and creating instance from instaloader import Instaloader from instaloader import Profile import instaloader import time from random import randint L = instaloader.Instaloader() #random time for sleep vent = randint(1,20) # function: def get2posts(profile_name): profile = Profile.from_username(L.context, profile_name) POSTS = profile.get_posts() for post in POSTS: for i in range(2): L.download_post(post, profile_name) time.sleep(vent) break print('scrape done') This code returns 5 of the same posts though, and I simply can't figure out a way to get it to return the first 5 posts of an account. The working function, which harvests all posts of a profile is: # the original function (without range) def get_posts(profile_name): profile = Profile.from_username(L.context, profile_name) POSTS = profile.get_posts() for post in POSTS: L.download_post(post, profile_name) time.sleep(vent) print('I am done') Hope you can help :)
[ "The problem is that the inner for loop runs download_post twice (range(2)) on the same post, and then the outer loop breaks. If POSTS is a list, you can use slicing to loop only over the first 5 items like so: for post in POSTS[:5]:. A safer method though would be to count the posts as you go, which should work for most types of iterables (not just lists), like so:\ndef get2posts(profile_name):\n profile = Profile.from_username(L.context, profile_name)\n POSTS = profile.get_posts()\n\n for i, post in enumerate(POSTS):\n L.download_post(post, profile_name)\n if i == 4:\n break\n time.sleep(vent)\n\n print('scrape done')\n\n" ]
[ 0 ]
[]
[]
[ "for_loop", "instagram", "instaloader", "python" ]
stackoverflow_0074467018_for_loop_instagram_instaloader_python.txt
Q: How to autoincrement values checkbox with jinja2 (Django) with reset I need to autoincrement value in my checkbox and reset value when I generated new massive of checkbox forloop.count dont reset {% for ans in Answ %} {% if ans.question_id_id == Questions.id %} <input type="hidden" value="{{ Questions.id }}" name="id"> <div class="form-check" ><label><input type="checkbox" value="{{ ans.id }}" name="answer"> {{ ans.answer }} </label></div> {% endif %} {% endfor %} views.py class AnswerQuestionView (LoginRequiredMixin, DetailView): login_url = '/login' redirect_field_name = 'redirect_to' model = Question template_name = 'index.html' context_object_name = 'Questions' slug_field = 'pk' def get_context_data(self, **kwargs): context = super(AnswerQuestionView, self).get_context_data(**kwargs) context['user_group'] = self.request.user.groups.values_list()[0][1] context['Answ'] = QuestAnswer.objects.all() return context A: This is one of the many reasons why you should not do filtering in the template. Another very important one is performance: as the number of answers will grow, eventually the template rendering will take a lot of time. You can filter in the view with: class AnswerQuestionView(LoginRequiredMixin, DetailView): login_url = '/login' redirect_field_name = 'redirect_to' model = Question template_name = 'index.html' context_object_name = 'question' def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) context['user_group'] = self.request.user.groups.values_list()[0][1] context['answers'] = QuestAnswer.objects.filter(question_id=self.object) return context You probably can even use the related name: class AnswerQuestionView(LoginRequiredMixin, DetailView): login_url = '/login' redirect_field_name = 'redirect_to' model = Question template_name = 'index.html' context_object_name = 'question' def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) context['user_group'] = self.request.user.groups.values_list()[0][1] context['answers'] = self.object.questionanswer_set.all() return context Note: Normally one does not add a suffix …_id to a ForeignKey field, since Django will automatically add a "twin" field with an …_id suffix. Therefore it should be question, instead of question_id.
How to autoincrement values checkbox with jinja2 (Django) with reset
I need to autoincrement value in my checkbox and reset value when I generated new massive of checkbox forloop.count dont reset {% for ans in Answ %} {% if ans.question_id_id == Questions.id %} <input type="hidden" value="{{ Questions.id }}" name="id"> <div class="form-check" ><label><input type="checkbox" value="{{ ans.id }}" name="answer"> {{ ans.answer }} </label></div> {% endif %} {% endfor %} views.py class AnswerQuestionView (LoginRequiredMixin, DetailView): login_url = '/login' redirect_field_name = 'redirect_to' model = Question template_name = 'index.html' context_object_name = 'Questions' slug_field = 'pk' def get_context_data(self, **kwargs): context = super(AnswerQuestionView, self).get_context_data(**kwargs) context['user_group'] = self.request.user.groups.values_list()[0][1] context['Answ'] = QuestAnswer.objects.all() return context
[ "This is one of the many reasons why you should not do filtering in the template. Another very important one is performance: as the number of answers will grow, eventually the template rendering will take a lot of time.\nYou can filter in the view with:\nclass AnswerQuestionView(LoginRequiredMixin, DetailView):\n login_url = '/login'\n redirect_field_name = 'redirect_to'\n model = Question\n template_name = 'index.html'\n context_object_name = 'question'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['user_group'] = self.request.user.groups.values_list()[0][1]\n context['answers'] = QuestAnswer.objects.filter(question_id=self.object)\n return context\nYou probably can even use the related name:\nclass AnswerQuestionView(LoginRequiredMixin, DetailView):\n login_url = '/login'\n redirect_field_name = 'redirect_to'\n model = Question\n template_name = 'index.html'\n context_object_name = 'question'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['user_group'] = self.request.user.groups.values_list()[0][1]\n context['answers'] = self.object.questionanswer_set.all()\n return context\n\n\nNote: Normally one does not add a suffix …_id to a ForeignKey field, since Django\nwill automatically add a \"twin\" field with an …_id suffix. Therefore it should\nbe question, instead of question_id.\n\n" ]
[ 1 ]
[]
[]
[ "django", "jinja2", "python" ]
stackoverflow_0074468595_django_jinja2_python.txt
Q: NLP: pre-processing dataset into a new dataset I need help with processing an unsorted dataset. Sry, if I am a complete noob. I never did anything like that before. So as you can see, each conversation is identified by a dialogueID which consists of multiple rows of "from" & "to", as well as text messages. I would like to concatenate the text messages from the same sender of a dialogueID to one column and from the receiver to another column. This way, I could have a new csv-file with just [dialogueID, sender, receiver]. the new dataset should look like this I watched multiple tutorials and really struggle to figure out how to do it. I read in this 9-year-old post that iterating through data frames are not a good idea. Could someone help me out with a code snippet or give me a hint on how to properly do it without overcomplicating things? I thought something like this pseudo code below, but the performance with 1 million rows is not great, right? while !endOfFile for dialogueID in range (0, 1038324) if dialogueID+1 == dialogueID and toValue.isnull() concatenate textFromPrevRow + " " + textFromCurrentRow add new string to table column sender else add text to column receiver A: Edit 1 According to your clarification, this is what I believe you're looking for. Create an aggregation function which basically concats your string values with a line-break character. Then group by dialogueID and apply your aggregation. d = {} d['from'] = '\n'.join d['to'] = '\n'.join new_df = dialogue_dataframe.groupby('dialogueID', as_index=False).agg(d) After that rename the columns as you'd like: df.rename(columns={"from": "sender", "to": "receiver"}) Original answer Not quite sure I understood what you try to achieve, but maybe this will give some insights. Maybe write a couple of rows of the table you expect to get, for better clarification A: While the exact structure of the data (and thus your task) is not completely clear, maybe DataFrame.apply or rather DataFrame.aggregate can help you speed things up. Also, I would aggregate into either a dictionary or dataframe indexed by dialogue id. This way you can easily check if a given dialogue / sender already exists.
NLP: pre-processing dataset into a new dataset
I need help with processing an unsorted dataset. Sry, if I am a complete noob. I never did anything like that before. So as you can see, each conversation is identified by a dialogueID which consists of multiple rows of "from" & "to", as well as text messages. I would like to concatenate the text messages from the same sender of a dialogueID to one column and from the receiver to another column. This way, I could have a new csv-file with just [dialogueID, sender, receiver]. the new dataset should look like this I watched multiple tutorials and really struggle to figure out how to do it. I read in this 9-year-old post that iterating through data frames are not a good idea. Could someone help me out with a code snippet or give me a hint on how to properly do it without overcomplicating things? I thought something like this pseudo code below, but the performance with 1 million rows is not great, right? while !endOfFile for dialogueID in range (0, 1038324) if dialogueID+1 == dialogueID and toValue.isnull() concatenate textFromPrevRow + " " + textFromCurrentRow add new string to table column sender else add text to column receiver
[ "Edit 1\nAccording to your clarification, this is what I believe you're looking for.\nCreate an aggregation function which basically concats your string values with a line-break character. Then group by dialogueID and apply your aggregation.\nd = {}\nd['from'] = '\\n'.join\nd['to'] = '\\n'.join\nnew_df = dialogue_dataframe.groupby('dialogueID', as_index=False).agg(d)\n\nAfter that rename the columns as you'd like:\ndf.rename(columns={\"from\": \"sender\", \"to\": \"receiver\"})\n\nOriginal answer\nNot quite sure I understood what you try to achieve, but maybe this will give some insights. Maybe write a couple of rows of the table you expect to get, for better clarification\n", "While the exact structure of the data (and thus your task) is not completely clear, maybe DataFrame.apply or rather DataFrame.aggregate can help you speed things up. Also, I would aggregate into either a dictionary or dataframe indexed by dialogue id. This way you can easily check if a given dialogue / sender already exists.\n" ]
[ 1, 0 ]
[]
[]
[ "data_preprocessing", "data_science", "dataframe", "nlp", "python" ]
stackoverflow_0074468471_data_preprocessing_data_science_dataframe_nlp_python.txt
Q: regex to get value and it's proper unit i use the following regex to extract values that appear before certain units: ([.\d]+)\s*(?:kg|gr|g) What i want, is to include the unit of that specific value for example from this string : "some text 5kg another text 3 g more text 11.5gr end" i should be getting : ["5kg", "3 g", "11.5gr"] can't wrap my head on how to modify the above expression to get the wanted result. Thank you. A: import re p = re.compile('(?<!\d|\.)\d+(?:\.\d+)?\s*?(?:gr|kg|g)(?!\w)') print(p.findall("some text 5kg another text 3 g more text 11.5gr end")) A: Other solution (regex demo): (?i)\b\d+\.?\d*\s*(?:kg|gr?)\b (?i) - case insensitive \b - word boundary \d+\.?\d* - match the amount \s* - any number of spaces (?:kg|gr?) - match kg, g or gr \b - word boundary import re p = re.compile(r"(?i)\b\d+\.?\d*\s*(?:kg|gr?)\b") print(p.findall("some text 5kg another text 3 g more text 11.5gr end")) Prints: ['5kg', '3 g', '11.5gr']
regex to get value and it's proper unit
i use the following regex to extract values that appear before certain units: ([.\d]+)\s*(?:kg|gr|g) What i want, is to include the unit of that specific value for example from this string : "some text 5kg another text 3 g more text 11.5gr end" i should be getting : ["5kg", "3 g", "11.5gr"] can't wrap my head on how to modify the above expression to get the wanted result. Thank you.
[ "import re\n\np = re.compile('(?<!\\d|\\.)\\d+(?:\\.\\d+)?\\s*?(?:gr|kg|g)(?!\\w)')\nprint(p.findall(\"some text 5kg another text 3 g more text 11.5gr end\"))\n\n", "Other solution (regex demo):\n(?i)\\b\\d+\\.?\\d*\\s*(?:kg|gr?)\\b\n\n\n(?i) - case insensitive\n\\b - word boundary\n\n\\d+\\.?\\d* - match the amount\n\\s* - any number of spaces\n(?:kg|gr?) - match kg, g or gr\n\n\n\\b - word boundary\n\n\nimport re\n\np = re.compile(r\"(?i)\\b\\d+\\.?\\d*\\s*(?:kg|gr?)\\b\")\nprint(p.findall(\"some text 5kg another text 3 g more text 11.5gr end\"))\n\nPrints:\n['5kg', '3 g', '11.5gr']\n\n" ]
[ 2, 1 ]
[]
[]
[ "python", "regex", "string" ]
stackoverflow_0074468594_python_regex_string.txt
Q: Is there a way to use an array as an index in Python? I'm trying to speed up my code and right now I have a "for" loop to sum numbers in an array. It's set up like this: a1=np.zeros(5) a2=[1,2,3,4,5,6,7,8,9,10] And what I want to do is sum the values of a2[:5] + a2[5:], to end up with a1=[7,9,11,13,15] So I've made a loop that goes: for ii in range(2): a1+=a2[5*ii:5*(ii+1)] However, this is taking really long. Does anyone have any ideas on how to get around this or how to restructure my code? I want to do: i=np.range(2) a1+=a2[5*i:5*(i+1)] But can't, since you can't use arrays as indices in Python. That's the only other idea I've had besides the loop. Edit: the 2 here is just an example, in my code I'm planning on having it do this like 50-100 times. A: It looks like you would like to add the first half of the list to its second half. This can be accomplished by reshaping the 1D list into a 2D array (2x5) and summing it along the horizontal axis. np.array(a2).reshape(2,5).sum(axis=0) # array([ 7, 9, 11, 13, 15]) A: Depends on what you really want to achieve. but the np.roll function might get the speed you are looking for: a2 = np.array([1,2,3,4,5,6,7,8,9,10]) a1 = a2[:5] + np.roll(a2, -5)[:5]
Is there a way to use an array as an index in Python?
I'm trying to speed up my code and right now I have a "for" loop to sum numbers in an array. It's set up like this: a1=np.zeros(5) a2=[1,2,3,4,5,6,7,8,9,10] And what I want to do is sum the values of a2[:5] + a2[5:], to end up with a1=[7,9,11,13,15] So I've made a loop that goes: for ii in range(2): a1+=a2[5*ii:5*(ii+1)] However, this is taking really long. Does anyone have any ideas on how to get around this or how to restructure my code? I want to do: i=np.range(2) a1+=a2[5*i:5*(i+1)] But can't, since you can't use arrays as indices in Python. That's the only other idea I've had besides the loop. Edit: the 2 here is just an example, in my code I'm planning on having it do this like 50-100 times.
[ "It looks like you would like to add the first half of the list to its second half. This can be accomplished by reshaping the 1D list into a 2D array (2x5) and summing it along the horizontal axis.\nnp.array(a2).reshape(2,5).sum(axis=0)\n# array([ 7, 9, 11, 13, 15])\n\n", "Depends on what you really want to achieve.\nbut the np.roll function might get the speed you are looking for:\na2 = np.array([1,2,3,4,5,6,7,8,9,10])\na1 = a2[:5] + np.roll(a2, -5)[:5]\n\n" ]
[ 2, 0 ]
[]
[]
[ "arrays", "indexing", "numpy", "performance", "python" ]
stackoverflow_0074468124_arrays_indexing_numpy_performance_python.txt
Q: Notify us if a QlineEdit is clicked while being in ReadOnly State and change a button Color depending if QlineEdit is in ReadOnly state or not I have a Pyqt Widget containing 3 buttons, 1 QlineEdit and 1 statusbar. One of the buttons makes the qlineedit in Read Only state, another one to disable the qlineedit Readonly state and the last one to show the values of the Qlineedit in the status bar message. I would like to build an event that is triggered when the qlineedit is in read only state and is clicked, this should show us a message in a status bar that the field is protected. Lastly, I would like to build another event that changes the color of the button "Edit" When the Each Time the Qlineedit Read only State is disabled. ``` import sys from PyQt5 import QtCore, QtWidgets from PyQt5.QtWidgets import QMainWindow, QWidget, QLabel, QLineEdit from PyQt5.QtWidgets import QPushButton, QStatusBar from PyQt5.QtCore import QSize class MainWindow(QMainWindow): def __init__(self): QMainWindow.__init__(self) self.setMinimumSize(QSize(750, 300)) self.setWindowTitle("PyQt Line Edit example (textfield) - pythonprogramminglanguage.com") self.nameLabel = QLabel(self) self.nameLabel.setText('Name:') self.line = QLineEdit(self) self.line.move(80, 20) self.line.resize(200, 32) self.nameLabel.move(20, 20) # Push buttons self.btn_add = QPushButton('View', self) self.btn_add.clicked.connect(self.view) self.btn_add.resize(200,32) self.btn_add.move(80, 60) self.btn_edit = QPushButton('Edit', self) self.btn_edit.clicked.connect(self.edit) if self.edit(): self.btn_edit.clicked.connect(self.change_edit_button_color) #<<<<<<<<<<< This is my attempt ## The problem is It doesn´t get disabled when I press other button else: self.btn_edit.clicked.connect(self.normal_edit_button_color) self.btn_edit.resize(200,32) self.btn_edit.move(80, 120 ) self.btn_Validate = QPushButton('Validate', self) self.btn_Validate.clicked.connect(self.Validate) self.btn_Validate.resize(200,32) self.btn_Validate.move(80, 180) #Creating the statusbar self.statusBar = QStatusBar() self.setStatusBar(self.statusBar) self.line.setReadOnly(True) #<<<< BY DEFAULT THE QLINE FIELD IS IN READONLY STATE #>>>>>>>>>>>>>>>If the Qlineedit object is in readonly state notify us>>> This is my attempt <<<<<<<<<<<<<<<<<<<<<<<<< #The problem is that it doesn´t take into account the if statetment if self.line.isReadOnly(): self.line.selectionChanged.connect(self.Protected_field) def view(self): self.line.setReadOnly(True) #<<<< FUCTION TO PROTECT THE QLINE FIELD BY READONLY STATE self.edit= False def edit(self): self.line.setReadOnly(False) #<<<< FUCTION TO DISABLE READONLY STATE THE QLINE FIELD return True def Validate(self): self.statusBar.showMessage("Qline Value is {}".format(self.line.text()), 8000) self.edit= False def Protected_field(self): #<<< show in statusbar a message when the Qline is in Readonly state and is clicked self.statusBar.showMessage("This field is protected", 2000) def change_edit_button_color(self): #<<< Function to change edit button color when the QLINE is not in Readonly State self.btn_edit.setStyleSheet('background-color :rgb(119, 120, 121)') def normal_edit_button_color(self): #<<< Function to change edit button color when the QLINE is not in Readonly State self.btn_edit.setStyleSheet('background-color :white') if __name__ == "__main__": app = QtWidgets.QApplication(sys.argv) mainWin = MainWindow() mainWin.show() sys.exit( app.exec_() ) ´´´ I created the signal "selectionChanged" to notify us each time the Qlineedit object is clicked with an if statement but it doesn´t take that if statement into account, it triggers always the signal. For the color change, I made the edit method return a boolean True value when it's clicked and the other buttons return False and created an if statement with the signal to change the button color each time the Edit method returns True, but the problem is the button doesn´t comeback to the normal color once the the boolean value is False. A: There is no need to create additional slots for the same signal, you can simply change the color of the button inside the methods that toggle the readOnly setting. In fact you actually don't even need two separate buttons for the view and edit methods... you could just have a single button that toggles readOnly both off and on. Also your if statement for the statusbar message isn't working because the code needs to be executed every time the button is pressed, so it needs to moved inside of the slot method. For example: class MainWindow(QMainWindow): def __init__(self): QMainWindow.__init__(self) self.setMinimumSize(QSize(750, 300)) self.setWindowTitle("PyQt Line Edit example (textfield) - pythonprogramminglanguage.com") self.nameLabel = QLabel(self) self.nameLabel.setText('Name:') self.line = QLineEdit(self) self.line.move(80, 20) self.line.resize(200, 32) self.nameLabel.move(20, 20) self.btn_add = QPushButton('View', self) self.btn_add.clicked.connect(self.view) self.btn_add.resize(200,32) self.btn_add.move(80, 60) self.btn_edit = QPushButton('Edit', self) self.btn_edit.clicked.connect(self.edit) self.btn_edit.resize(200,32) self.btn_edit.move(80, 120 ) self.btn_Validate = QPushButton('Validate', self) self.btn_Validate.clicked.connect(self.Validate) self.btn_Validate.resize(200,32) self.btn_Validate.move(80, 180) self.statusBar = QStatusBar() self.setStatusBar(self.statusBar) self.line.setReadOnly(True) self.line.selectionChanged.connect(self.Protected_field) def view(self): self.line.setReadOnly(True) self.btn_edit.setStyleSheet('background-color: rgb(119, 120, 121);') def edit(self): self.line.setReadOnly(False) self.btn_edit.setStyleSheet('background-color :white') def validate(self): self.statusBar.showMessage("Qline Value is {}".format(self.line.text()), 8000) def protected_field(self): if self.line.isReadOnly(): self.statusBar.showMessage("This field is protected", 2000) I also suggest using a vertical layout instead of calling move and resize on each widget. Check out QVBoxLayout.
Notify us if a QlineEdit is clicked while being in ReadOnly State and change a button Color depending if QlineEdit is in ReadOnly state or not
I have a Pyqt Widget containing 3 buttons, 1 QlineEdit and 1 statusbar. One of the buttons makes the qlineedit in Read Only state, another one to disable the qlineedit Readonly state and the last one to show the values of the Qlineedit in the status bar message. I would like to build an event that is triggered when the qlineedit is in read only state and is clicked, this should show us a message in a status bar that the field is protected. Lastly, I would like to build another event that changes the color of the button "Edit" When the Each Time the Qlineedit Read only State is disabled. ``` import sys from PyQt5 import QtCore, QtWidgets from PyQt5.QtWidgets import QMainWindow, QWidget, QLabel, QLineEdit from PyQt5.QtWidgets import QPushButton, QStatusBar from PyQt5.QtCore import QSize class MainWindow(QMainWindow): def __init__(self): QMainWindow.__init__(self) self.setMinimumSize(QSize(750, 300)) self.setWindowTitle("PyQt Line Edit example (textfield) - pythonprogramminglanguage.com") self.nameLabel = QLabel(self) self.nameLabel.setText('Name:') self.line = QLineEdit(self) self.line.move(80, 20) self.line.resize(200, 32) self.nameLabel.move(20, 20) # Push buttons self.btn_add = QPushButton('View', self) self.btn_add.clicked.connect(self.view) self.btn_add.resize(200,32) self.btn_add.move(80, 60) self.btn_edit = QPushButton('Edit', self) self.btn_edit.clicked.connect(self.edit) if self.edit(): self.btn_edit.clicked.connect(self.change_edit_button_color) #<<<<<<<<<<< This is my attempt ## The problem is It doesn´t get disabled when I press other button else: self.btn_edit.clicked.connect(self.normal_edit_button_color) self.btn_edit.resize(200,32) self.btn_edit.move(80, 120 ) self.btn_Validate = QPushButton('Validate', self) self.btn_Validate.clicked.connect(self.Validate) self.btn_Validate.resize(200,32) self.btn_Validate.move(80, 180) #Creating the statusbar self.statusBar = QStatusBar() self.setStatusBar(self.statusBar) self.line.setReadOnly(True) #<<<< BY DEFAULT THE QLINE FIELD IS IN READONLY STATE #>>>>>>>>>>>>>>>If the Qlineedit object is in readonly state notify us>>> This is my attempt <<<<<<<<<<<<<<<<<<<<<<<<< #The problem is that it doesn´t take into account the if statetment if self.line.isReadOnly(): self.line.selectionChanged.connect(self.Protected_field) def view(self): self.line.setReadOnly(True) #<<<< FUCTION TO PROTECT THE QLINE FIELD BY READONLY STATE self.edit= False def edit(self): self.line.setReadOnly(False) #<<<< FUCTION TO DISABLE READONLY STATE THE QLINE FIELD return True def Validate(self): self.statusBar.showMessage("Qline Value is {}".format(self.line.text()), 8000) self.edit= False def Protected_field(self): #<<< show in statusbar a message when the Qline is in Readonly state and is clicked self.statusBar.showMessage("This field is protected", 2000) def change_edit_button_color(self): #<<< Function to change edit button color when the QLINE is not in Readonly State self.btn_edit.setStyleSheet('background-color :rgb(119, 120, 121)') def normal_edit_button_color(self): #<<< Function to change edit button color when the QLINE is not in Readonly State self.btn_edit.setStyleSheet('background-color :white') if __name__ == "__main__": app = QtWidgets.QApplication(sys.argv) mainWin = MainWindow() mainWin.show() sys.exit( app.exec_() ) ´´´ I created the signal "selectionChanged" to notify us each time the Qlineedit object is clicked with an if statement but it doesn´t take that if statement into account, it triggers always the signal. For the color change, I made the edit method return a boolean True value when it's clicked and the other buttons return False and created an if statement with the signal to change the button color each time the Edit method returns True, but the problem is the button doesn´t comeback to the normal color once the the boolean value is False.
[ "There is no need to create additional slots for the same signal, you can simply change the color of the button inside the methods that toggle the readOnly setting. In fact you actually don't even need two separate buttons for the view and edit methods... you could just have a single button that toggles readOnly both off and on.\nAlso your if statement for the statusbar message isn't working because the code needs to be executed every time the button is pressed, so it needs to moved inside of the slot method.\nFor example:\nclass MainWindow(QMainWindow):\n def __init__(self):\n QMainWindow.__init__(self)\n self.setMinimumSize(QSize(750, 300))\n self.setWindowTitle(\"PyQt Line Edit example (textfield) - pythonprogramminglanguage.com\")\n self.nameLabel = QLabel(self)\n self.nameLabel.setText('Name:')\n self.line = QLineEdit(self)\n self.line.move(80, 20)\n self.line.resize(200, 32)\n self.nameLabel.move(20, 20)\n self.btn_add = QPushButton('View', self)\n self.btn_add.clicked.connect(self.view)\n self.btn_add.resize(200,32)\n self.btn_add.move(80, 60)\n self.btn_edit = QPushButton('Edit', self)\n self.btn_edit.clicked.connect(self.edit)\n self.btn_edit.resize(200,32)\n self.btn_edit.move(80, 120 )\n self.btn_Validate = QPushButton('Validate', self)\n self.btn_Validate.clicked.connect(self.Validate)\n self.btn_Validate.resize(200,32)\n self.btn_Validate.move(80, 180)\n self.statusBar = QStatusBar()\n self.setStatusBar(self.statusBar)\n self.line.setReadOnly(True)\n self.line.selectionChanged.connect(self.Protected_field)\n\n def view(self):\n self.line.setReadOnly(True)\n self.btn_edit.setStyleSheet('background-color: rgb(119, 120, 121);')\n\n def edit(self):\n self.line.setReadOnly(False)\n self.btn_edit.setStyleSheet('background-color :white')\n\n def validate(self):\n self.statusBar.showMessage(\"Qline Value is {}\".format(self.line.text()), 8000)\n\n def protected_field(self):\n if self.line.isReadOnly():\n self.statusBar.showMessage(\"This field is protected\", 2000)\n\n\nI also suggest using a vertical layout instead of calling move and resize on each widget. Check out QVBoxLayout.\n" ]
[ 0 ]
[]
[]
[ "pyqt", "pyqt5", "python", "signals", "user_interface" ]
stackoverflow_0074463359_pyqt_pyqt5_python_signals_user_interface.txt
Q: Normalize the espicific rows of an array I have an array with size ( 61000) I want to normalize it based on this rule: Normalize the rows 0, 6, 12, 18, 24, ... (6i for i in range(1000)) based on the formulation which I provide. Dont change the values of the other rows. Here is an example: def normalize(array): minimum = np.expand_dims(np.min(array, axis=1), axis=1) maximum = np.expand_dims(np.max(array, axis=1), axis=1) return (array - minimum) / (maximum - minimum + 0.00001) Calling with the following input doesn't work: A = array([[15, 14, 3], [11, 9, 9], [16, 6, 1], [14, 6, 9], [ 1, 12, 2], [ 5, 1, 2], [13, 11, 2], [11, 4, 1], [11, 7, 10], [10, 11, 16], [ 2, 13, 4], [12, 14, 14]]) normalize(A) I expect the following output: array([[0.99999917, 0.9166659 , 0. ], [11, 9, 9], [16, 6, 1], [14, 6, 9], [ 1, 12, 2], [ 5, 1, 2], [0.99999909, 0.81818107, 0. ]], [11, 4, 1], [11, 7, 10], [10, 11, 16], [ 2, 13, 4], [12, 14, 14]]) A: You have to set up a second function having the step argument: def normalize_with_step(array, step): b = normalize(array[::step]) a, b = list(array), list(b) for i in range(0, len(a), step): a[i] = b[int(i/step)] a = np.array(a) return a Let's try it with a step = 6: a = np.random.randint(17, size = (12, 3)) a = normalize_with_step(a, 6) a Output array([[ 0.83333264, 0.99999917, 0. ], [ 9. , 14. , 6. ], [14. , 15. , 12. ], [12. , 7. , 10. ], [ 8. , 13. , 9. ], [12. , 0. , 3. ], [ 0.53333298, 0.99999933, 0. ], [15. , 14. , 12. ], [14. , 6. , 16. ], [ 9. , 14. , 3. ], [ 8. , 9. , 0. ], [10. , 13. , 0. ]])
Normalize the espicific rows of an array
I have an array with size ( 61000) I want to normalize it based on this rule: Normalize the rows 0, 6, 12, 18, 24, ... (6i for i in range(1000)) based on the formulation which I provide. Dont change the values of the other rows. Here is an example: def normalize(array): minimum = np.expand_dims(np.min(array, axis=1), axis=1) maximum = np.expand_dims(np.max(array, axis=1), axis=1) return (array - minimum) / (maximum - minimum + 0.00001) Calling with the following input doesn't work: A = array([[15, 14, 3], [11, 9, 9], [16, 6, 1], [14, 6, 9], [ 1, 12, 2], [ 5, 1, 2], [13, 11, 2], [11, 4, 1], [11, 7, 10], [10, 11, 16], [ 2, 13, 4], [12, 14, 14]]) normalize(A) I expect the following output: array([[0.99999917, 0.9166659 , 0. ], [11, 9, 9], [16, 6, 1], [14, 6, 9], [ 1, 12, 2], [ 5, 1, 2], [0.99999909, 0.81818107, 0. ]], [11, 4, 1], [11, 7, 10], [10, 11, 16], [ 2, 13, 4], [12, 14, 14]])
[ "You have to set up a second function having the step argument:\ndef normalize_with_step(array, step):\n \n b = normalize(array[::step])\n a, b = list(array), list(b)\n \n for i in range(0, len(a), step):\n a[i] = b[int(i/step)]\n \n a = np.array(a)\n return a\n\nLet's try it with a step = 6:\na = np.random.randint(17, size = (12, 3))\na = normalize_with_step(a, 6)\na\n\nOutput\narray([[ 0.83333264, 0.99999917, 0. ],\n [ 9. , 14. , 6. ],\n [14. , 15. , 12. ],\n [12. , 7. , 10. ],\n [ 8. , 13. , 9. ],\n [12. , 0. , 3. ],\n [ 0.53333298, 0.99999933, 0. ],\n [15. , 14. , 12. ],\n [14. , 6. , 16. ],\n [ 9. , 14. , 3. ],\n [ 8. , 9. , 0. ],\n [10. , 13. , 0. ]])\n\n" ]
[ 1 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074468506_numpy_python.txt
Q: Generate json schema from argparse CLI I have a CLI written with argparse and I was wondering if there was a way to produce a JSON schema from the ArgumentParser? The thought behind this being to distribute the JSON schema to extensions interfacing with the application, thus removing the need for each extension to write and maintain their own schema. My idea was to Convert the argparse.ArgumentParser to Python dictionary or JSON file and then pass that into a JSON schema generator Example import argparse from genson import SchemaBuilder parser = argparse.ArgumentParser( description="Some description", prog="myprog", usage="myprog [options]" ) parser.add_argument( "-v", "--version", action="store_true", help="Print server version number and exit", ) parser.add_argument( "-c", "--config", type=str, default=".fortls", help="Configuration options file (default file name: %(default)s)", ) args = vars(parser.parse_args("")) # Generate schema builder = SchemaBuilder() builder.add_schema({"type": "object", "properties": {}}) for k, v in args.items(): builder.add_object({k: v}) print(builder.to_json(indent=2)) Output { "$schema": "http://json-schema.org/schema#", "type": "object", "properties": { "version": { "type": "boolean" }, "config": { "type": "string" } } } However, I quickly realised that calling vars(parser().parse_args("")) to convert the CLI into a dictionary resulted into a lot of information being lost, like descriptions and required. Is there another way of doing this? I am open to swappingargparse with some other CLI if it would make generating a schema easier. Additional resources Tool to generate JSON schema from JSON data A: The solution that I came up with was to access the private variable _actions from ArgumentParser and convert that to a schema using pydantic. In my specific case it was quite easy to do since all the arguments in argparse were optional. If not, a bit more thought has to be put when creating the model with pydantic from __future__ import annotations from pydantic import Field, create_model import argparse parser = argparse.ArgumentParser( description="Some description", prog="myprog", usage="myprog [options]" ) parser.add_argument( "-v", "--version", action="store_true", help="Print server version number and exit", ) parser.add_argument( "-c", "--config", type=str, default=".fortls", help="Configuration options file (default file name: %(default)s)", ) schema_vals = {} for arg in parser._actions: # if condition for arguments to exclude: # continue val = arg.default desc: str = arg.help.replace("%(default)s", str(val)) # type: ignore schema_vals[arg.dest] = Field(val, description=desc) # type: ignore m = create_model("MySchema", **schema_vals) m.__doc__ = "Some description" with open("schema.json", "w") as f: print(m.schema_json(indent=2), file=f) Output { "title": "MySchema", "description": "Some description", "type": "object", "properties": { "help": { "title": "Help", "description": "show this help message and exit", "default": "==SUPPRESS==", "type": "string" }, "version": { "title": "Version", "description": "Print server version number and exit", "default": false, "type": "boolean" }, "config": { "title": "Config", "description": "Configuration options file (default file name: .fortls)", "default": ".fortls", "type": "string" } } }
Generate json schema from argparse CLI
I have a CLI written with argparse and I was wondering if there was a way to produce a JSON schema from the ArgumentParser? The thought behind this being to distribute the JSON schema to extensions interfacing with the application, thus removing the need for each extension to write and maintain their own schema. My idea was to Convert the argparse.ArgumentParser to Python dictionary or JSON file and then pass that into a JSON schema generator Example import argparse from genson import SchemaBuilder parser = argparse.ArgumentParser( description="Some description", prog="myprog", usage="myprog [options]" ) parser.add_argument( "-v", "--version", action="store_true", help="Print server version number and exit", ) parser.add_argument( "-c", "--config", type=str, default=".fortls", help="Configuration options file (default file name: %(default)s)", ) args = vars(parser.parse_args("")) # Generate schema builder = SchemaBuilder() builder.add_schema({"type": "object", "properties": {}}) for k, v in args.items(): builder.add_object({k: v}) print(builder.to_json(indent=2)) Output { "$schema": "http://json-schema.org/schema#", "type": "object", "properties": { "version": { "type": "boolean" }, "config": { "type": "string" } } } However, I quickly realised that calling vars(parser().parse_args("")) to convert the CLI into a dictionary resulted into a lot of information being lost, like descriptions and required. Is there another way of doing this? I am open to swappingargparse with some other CLI if it would make generating a schema easier. Additional resources Tool to generate JSON schema from JSON data
[ "The solution that I came up with was to access the private variable _actions from ArgumentParser and convert that to a schema using pydantic. In my specific case it was quite easy to do since all the arguments in argparse were optional. If not, a bit more thought has to be put when creating the model with pydantic\nfrom __future__ import annotations\nfrom pydantic import Field, create_model\nimport argparse\n\nparser = argparse.ArgumentParser(\n description=\"Some description\", prog=\"myprog\", usage=\"myprog [options]\"\n)\nparser.add_argument(\n \"-v\",\n \"--version\",\n action=\"store_true\",\n help=\"Print server version number and exit\",\n)\nparser.add_argument(\n \"-c\",\n \"--config\",\n type=str,\n default=\".fortls\",\n help=\"Configuration options file (default file name: %(default)s)\",\n)\n\nschema_vals = {}\nfor arg in parser._actions:\n # if condition for arguments to exclude:\n # continue\n val = arg.default\n desc: str = arg.help.replace(\"%(default)s\", str(val)) # type: ignore\n schema_vals[arg.dest] = Field(val, description=desc) # type: ignore\n\nm = create_model(\"MySchema\", **schema_vals)\nm.__doc__ = \"Some description\"\n\nwith open(\"schema.json\", \"w\") as f:\n print(m.schema_json(indent=2), file=f)\n\nOutput\n{\n \"title\": \"MySchema\",\n \"description\": \"Some description\",\n \"type\": \"object\",\n \"properties\": {\n \"help\": {\n \"title\": \"Help\",\n \"description\": \"show this help message and exit\",\n \"default\": \"==SUPPRESS==\",\n \"type\": \"string\"\n },\n \"version\": {\n \"title\": \"Version\",\n \"description\": \"Print server version number and exit\",\n \"default\": false,\n \"type\": \"boolean\"\n },\n \"config\": {\n \"title\": \"Config\",\n \"description\": \"Configuration options file (default file name: .fortls)\",\n \"default\": \".fortls\",\n \"type\": \"string\"\n }\n }\n}\n\n" ]
[ 1 ]
[]
[]
[ "argparse", "json", "python" ]
stackoverflow_0072718138_argparse_json_python.txt
Q: Rotate point about another point in degrees python If you had a point (in 2d), how could you rotate that point by degrees around the other point (the origin) in python? You might, for example, tilt the first point around the origin by 10 degrees. Basically you have one point PointA and origin that it rotates around. The code could look something like this: PointA=(200,300) origin=(100,100) NewPointA=rotate(origin,PointA,10) #The rotate function rotates it by 10 degrees A: The following rotate function performs a rotation of the point point by the angle angle (counterclockwise, in radians) around origin, in the Cartesian plane, with the usual axis conventions: x increasing from left to right, y increasing vertically upwards. All points are represented as length-2 tuples of the form (x_coord, y_coord). import math def rotate(origin, point, angle): """ Rotate a point counterclockwise by a given angle around a given origin. The angle should be given in radians. """ ox, oy = origin px, py = point qx = ox + math.cos(angle) * (px - ox) - math.sin(angle) * (py - oy) qy = oy + math.sin(angle) * (px - ox) + math.cos(angle) * (py - oy) return qx, qy If your angle is specified in degrees, you can convert it to radians first using math.radians. For a clockwise rotation, negate the angle. Example: rotating the point (3, 4) around an origin of (2, 2) counterclockwise by an angle of 10 degrees: >>> point = (3, 4) >>> origin = (2, 2) >>> rotate(origin, point, math.radians(10)) (2.6375113976783475, 4.143263683691346) Note that there's some obvious repeated calculation in the rotate function: math.cos(angle) and math.sin(angle) are each computed twice, as are px - ox and py - oy. I leave it to you to factor that out if necessary. A: An option to rotate a point by some degrees about another point is to use numpy instead of math. This allows to easily generalize the function to take any number of points as input, which might e.g. be useful when rotating a polygon. import numpy as np def rotate(p, origin=(0, 0), degrees=0): angle = np.deg2rad(degrees) R = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]]) o = np.atleast_2d(origin) p = np.atleast_2d(p) return np.squeeze((R @ (p.T-o.T) + o.T).T) points=[(200, 300), (100, 300)] origin=(100,100) new_points = rotate(points, origin=origin, degrees=10) print(new_points) A: import math def rotate(x,y,xo,yo,theta): #rotate x,y around xo,yo by theta (rad) xr=math.cos(theta)*(x-xo)-math.sin(theta)*(y-yo) + xo yr=math.sin(theta)*(x-xo)+math.cos(theta)*(y-yo) + yo return [xr,yr] A: After going through a lot of code and repositories. This function worked best for me. Also it is efficient as it calculates sine and cosine values only once. import numpy as np def rotate(point, origin, degrees): radians = np.deg2rad(degrees) x,y = point offset_x, offset_y = origin adjusted_x = (x - offset_x) adjusted_y = (y - offset_y) cos_rad = np.cos(radians) sin_rad = np.sin(radians) qx = offset_x + cos_rad * adjusted_x + sin_rad * adjusted_y qy = offset_y + -sin_rad * adjusted_x + cos_rad * adjusted_y return qx, qy A: This is easy if you represent your points as complex numbers and use the exp function with an imaginary argument (which is equivalent to the cos/sin operations shown in the other answers, but is easier to write and remember). Here's a function that rotates any number of points about the chosen origin: import numpy as np def rotate(points, origin, angle): return (points - origin) * np.exp(complex(0, angle)) + origin To rotate a single point (x1,y1) about the origin (x0,y0) with an angle in degrees, you could call the function with these arguments: points = complex(x1,y1) origin = complex(x0,y0) angle = np.deg2rad(degrees) To rotate multiple points (x1,y1), (x2,y2), ..., use: points = np.array([complex(x1,y1), complex(x2,y2), ...]) An example with a single point (200,300) rotated 10 degrees about (100,100): >>> new_point = rotate(complex(200,300), complex(100,100), np.deg2rad(10)) >>> new_point (163.75113976783473+314.3263683691346j) >>> (new_point.real, new_point.imag) (163.75113976783473, 314.3263683691346) A: The below script was best for me. from math import radians, sin, cos def rotate_point_wrt_center(point_to_be_rotated, angle, center_point = (0,0)): angle = radians(angle) xnew = cos(angle)*(point_to_be_rotated[0] - center_point[0]) - sin(angle)*(point_to_be_rotated[1] - center_point[1]) + center_point[0] ynew = sin(angle)*(point_to_be_rotated[0] - center_point[0]) + cos(angle)*(point_to_be_rotated[1] - center_point[1]) + center_point[1] return (round(xnew,2),round(ynew,2)) For example: if you want to rotate the point (1,1) (blue dot) by 45° about the point (-1,-1) (red cross), the function gives (1.83, -1.0) (red circle). >>> rotate_point_wrt_center(point_to_be_rotated = (1,1), angle = -45, center_point = (-1,-1)) # angle is negative to indicate clock-wise rotation. >>> (1.83, -1.0)
Rotate point about another point in degrees python
If you had a point (in 2d), how could you rotate that point by degrees around the other point (the origin) in python? You might, for example, tilt the first point around the origin by 10 degrees. Basically you have one point PointA and origin that it rotates around. The code could look something like this: PointA=(200,300) origin=(100,100) NewPointA=rotate(origin,PointA,10) #The rotate function rotates it by 10 degrees
[ "The following rotate function performs a rotation of the point point by the angle angle (counterclockwise, in radians) around origin, in the Cartesian plane, with the usual axis conventions: x increasing from left to right, y increasing vertically upwards. All points are represented as length-2 tuples of the form (x_coord, y_coord).\nimport math\n\ndef rotate(origin, point, angle):\n \"\"\"\n Rotate a point counterclockwise by a given angle around a given origin.\n\n The angle should be given in radians.\n \"\"\"\n ox, oy = origin\n px, py = point\n\n qx = ox + math.cos(angle) * (px - ox) - math.sin(angle) * (py - oy)\n qy = oy + math.sin(angle) * (px - ox) + math.cos(angle) * (py - oy)\n return qx, qy\n\nIf your angle is specified in degrees, you can convert it to radians first using math.radians. For a clockwise rotation, negate the angle.\nExample: rotating the point (3, 4) around an origin of (2, 2) counterclockwise by an angle of 10 degrees:\n>>> point = (3, 4)\n>>> origin = (2, 2)\n>>> rotate(origin, point, math.radians(10))\n(2.6375113976783475, 4.143263683691346)\n\nNote that there's some obvious repeated calculation in the rotate function: math.cos(angle) and math.sin(angle) are each computed twice, as are px - ox and py - oy. I leave it to you to factor that out if necessary.\n", "An option to rotate a point by some degrees about another point is to use numpy instead of math. This allows to easily generalize the function to take any number of points as input, which might e.g. be useful when rotating a polygon.\nimport numpy as np\n\ndef rotate(p, origin=(0, 0), degrees=0):\n angle = np.deg2rad(degrees)\n R = np.array([[np.cos(angle), -np.sin(angle)],\n [np.sin(angle), np.cos(angle)]])\n o = np.atleast_2d(origin)\n p = np.atleast_2d(p)\n return np.squeeze((R @ (p.T-o.T) + o.T).T)\n\n\npoints=[(200, 300), (100, 300)]\norigin=(100,100)\n\nnew_points = rotate(points, origin=origin, degrees=10)\nprint(new_points)\n\n", "import math\n\ndef rotate(x,y,xo,yo,theta): #rotate x,y around xo,yo by theta (rad)\n xr=math.cos(theta)*(x-xo)-math.sin(theta)*(y-yo) + xo\n yr=math.sin(theta)*(x-xo)+math.cos(theta)*(y-yo) + yo\n return [xr,yr]\n\n", "After going through a lot of code and repositories. This function worked best for me. Also it is efficient as it calculates sine and cosine values only once.\nimport numpy as np\ndef rotate(point, origin, degrees):\n radians = np.deg2rad(degrees)\n x,y = point\n offset_x, offset_y = origin\n adjusted_x = (x - offset_x)\n adjusted_y = (y - offset_y)\n cos_rad = np.cos(radians)\n sin_rad = np.sin(radians)\n qx = offset_x + cos_rad * adjusted_x + sin_rad * adjusted_y\n qy = offset_y + -sin_rad * adjusted_x + cos_rad * adjusted_y\n return qx, qy\n\n", "This is easy if you represent your points as complex numbers and use the exp function with an imaginary argument (which is equivalent to the cos/sin operations shown in the other answers, but is easier to write and remember). Here's a function that rotates any number of points about the chosen origin:\nimport numpy as np\n\ndef rotate(points, origin, angle):\n return (points - origin) * np.exp(complex(0, angle)) + origin\n\nTo rotate a single point (x1,y1) about the origin (x0,y0) with an angle in degrees, you could call the function with these arguments:\npoints = complex(x1,y1)\norigin = complex(x0,y0)\nangle = np.deg2rad(degrees)\n\nTo rotate multiple points (x1,y1), (x2,y2), ..., use:\npoints = np.array([complex(x1,y1), complex(x2,y2), ...])\n\nAn example with a single point (200,300) rotated 10 degrees about (100,100):\n>>> new_point = rotate(complex(200,300), complex(100,100), np.deg2rad(10))\n>>> new_point\n(163.75113976783473+314.3263683691346j)\n>>> (new_point.real, new_point.imag)\n(163.75113976783473, 314.3263683691346)\n\n", "The below script was best for me.\nfrom math import radians, sin, cos\n\ndef rotate_point_wrt_center(point_to_be_rotated, angle, center_point = (0,0)):\n \n angle = radians(angle)\n \n xnew = cos(angle)*(point_to_be_rotated[0] - center_point[0]) - sin(angle)*(point_to_be_rotated[1] - center_point[1]) + center_point[0]\n ynew = sin(angle)*(point_to_be_rotated[0] - center_point[0]) + cos(angle)*(point_to_be_rotated[1] - center_point[1]) + center_point[1]\n \n return (round(xnew,2),round(ynew,2))\n\nFor example: if you want to rotate the point (1,1) (blue dot) by 45° about the point (-1,-1) (red cross), the function gives (1.83, -1.0) (red circle).\n>>> rotate_point_wrt_center(point_to_be_rotated = (1,1), angle = -45, center_point = (-1,-1)) # angle is negative to indicate clock-wise rotation.\n>>> (1.83, -1.0)\n\n\n" ]
[ 120, 40, 9, 5, 4, 0 ]
[]
[]
[ "degrees", "math", "python" ]
stackoverflow_0034372480_degrees_math_python.txt
Q: How to pad given number of spaces between words in a string? Details: Given a string s that contains words. I am also given spaces which specifies the number of extra spaces to add between words. The number of spots will be len(words)-1. If spaces/spots is an odd number then the left slot gets more spaces. Example1: s = "This is an" spaces = 6 Ans = "This is an" #Explanation - 3 spaces added after "this" and 3 spaces added after "is" Example2: s = "This is an" spaces = 7 Ans = "This is an" #Explanation - 4 spaces added after "this" and 3 spaces added after "is" Solution: def solution(s, spaces): spots = len(s.split())-1 space_for_every_spot = spaces/spots ... A: Calculate the number of spaces needed between the words and the remainder, split the words, join with the even spacing, then replace the first spaces from the left with the larger spacing if needed. s = 'The quick brown fox jumped over the lazy dog.' spaces = 20 words = s.split() space_count, extra_count = divmod(spaces, len(words) - 1) spacing = ' ' * space_count extra_spacing = ' ' * (space_count + 1) result = spacing.join(words) result = result.replace(spacing, extra_spacing, extra_count) print(result) print(result.replace(' ', '.')) # for easier counting Output: The quick brown fox jumped over the lazy dog. The...quick...brown...fox...jumped..over..the..lazy..dog. A: The solution would be splitting your string, then based on some simple divisions calculate the amount of extra spaces. I can propose the following solution: def solution(s, spaces): words = s.split(" ") spots = len(words) - 1 n_spaces = spaces // spots n_extra_spaces = spaces - n_spaces * spots result = words[0] + " " * (n_spaces + n_extra_spaces) for word in words[1:]: result += word + " " * n_spaces return result
How to pad given number of spaces between words in a string?
Details: Given a string s that contains words. I am also given spaces which specifies the number of extra spaces to add between words. The number of spots will be len(words)-1. If spaces/spots is an odd number then the left slot gets more spaces. Example1: s = "This is an" spaces = 6 Ans = "This is an" #Explanation - 3 spaces added after "this" and 3 spaces added after "is" Example2: s = "This is an" spaces = 7 Ans = "This is an" #Explanation - 4 spaces added after "this" and 3 spaces added after "is" Solution: def solution(s, spaces): spots = len(s.split())-1 space_for_every_spot = spaces/spots ...
[ "Calculate the number of spaces needed between the words and the remainder, split the words, join with the even spacing, then replace the first spaces from the left with the larger spacing if needed.\ns = 'The quick brown fox jumped over the lazy dog.'\nspaces = 20\n\nwords = s.split()\nspace_count, extra_count = divmod(spaces, len(words) - 1)\nspacing = ' ' * space_count\nextra_spacing = ' ' * (space_count + 1)\nresult = spacing.join(words)\nresult = result.replace(spacing, extra_spacing, extra_count)\nprint(result)\nprint(result.replace(' ', '.')) # for easier counting\n\nOutput:\nThe quick brown fox jumped over the lazy dog.\nThe...quick...brown...fox...jumped..over..the..lazy..dog.\n\n", "The solution would be splitting your string, then based on some simple divisions calculate the amount of extra spaces. I can propose the following solution:\ndef solution(s, spaces):\n words = s.split(\" \")\n spots = len(words) - 1\n n_spaces = spaces // spots\n n_extra_spaces = spaces - n_spaces * spots\n result = words[0] + \" \" * (n_spaces + n_extra_spaces)\n for word in words[1:]:\n result += word + \" \" * n_spaces\n return result\n\n" ]
[ 2, 1 ]
[]
[]
[ "algorithm", "data_structures", "list", "python", "python_3.x" ]
stackoverflow_0074468709_algorithm_data_structures_list_python_python_3.x.txt
Q: Unable to hide Chromedriver console with CREATE_NO_WINDOW Python 3.11 ChromeDriver 107.0.5304.62 Chrome 107.0.5304.107 Selenium 4.6.0 Chromedriver console always shows when I try to build exe with pyinstaller. from selenium import webdriver from selenium.webdriver.chrome.service import Service as ChromeService from subprocess import CREATE_NO_WINDOW chrome_options = webdriver.ChromeOptions() chrome_options.binary_location = r'D:\Test\bin\chrome.exe' chrome_service = ChromeService(r'D:\Test\bin\chromedriver.exe') chrome_service.creationflags = CREATE_NO_WINDOW driver = webdriver.Chrome(service=chrome_service, options=chrome_options) driver.get('http://google.com') I have tried to build exe with pyinstaller in different ways: pyinstaller Test.py pyinstaller Test.pyw pyinstaller Test.py --windowed or --noconsole pyinstaller Test.pyw --windowed or --noconsole I also tried to change in venv\Lib\site-packages\selenium\webdriver\common\service.py at line 67 self.creation_flags = 0 to self.creation_flags = 1 I also tried different chrome/chromedriver combinations A: It doesn't work with selenium 4.6.0 version. It work with selenium 4.5.0
Unable to hide Chromedriver console with CREATE_NO_WINDOW
Python 3.11 ChromeDriver 107.0.5304.62 Chrome 107.0.5304.107 Selenium 4.6.0 Chromedriver console always shows when I try to build exe with pyinstaller. from selenium import webdriver from selenium.webdriver.chrome.service import Service as ChromeService from subprocess import CREATE_NO_WINDOW chrome_options = webdriver.ChromeOptions() chrome_options.binary_location = r'D:\Test\bin\chrome.exe' chrome_service = ChromeService(r'D:\Test\bin\chromedriver.exe') chrome_service.creationflags = CREATE_NO_WINDOW driver = webdriver.Chrome(service=chrome_service, options=chrome_options) driver.get('http://google.com') I have tried to build exe with pyinstaller in different ways: pyinstaller Test.py pyinstaller Test.pyw pyinstaller Test.py --windowed or --noconsole pyinstaller Test.pyw --windowed or --noconsole I also tried to change in venv\Lib\site-packages\selenium\webdriver\common\service.py at line 67 self.creation_flags = 0 to self.creation_flags = 1 I also tried different chrome/chromedriver combinations
[ "It doesn't work with selenium 4.6.0 version. It work with selenium 4.5.0\n" ]
[ 1 ]
[]
[]
[ "python", "selenium_chromedriver", "subprocess" ]
stackoverflow_0074461847_python_selenium_chromedriver_subprocess.txt
Q: Optimize python pattern matching in nucleotide sequences I'm currently working on a bioinformatic and modelling project where I need to do some pattern matching. Let's say I have a DNA fragment as follow 'atggcgtatagagc' and I split that fragment in micro-sequences of 8 nucleotides so that I have : 'atggcgta' 'tggcgtat' 'ggcgtata' 'gcgtatag' 'cgtataga' 'gtatagag' 'tatagagc' And for each of these fragment I want to search in a whole genome and per chromosome the number of time they appear and the positions (starting positions) of the matches. Here is how my code looks like : you can download the genome fasta file here : drive to the fasta file import re from Bio.SeqIO.FastaIO import FastaIterator from Bio.Seq import Seq def reverse_complement(sequence: str) -> str: my_sequence = Seq(sequence) return str(my_sequence.reverse_complement()) # you will need to unzip the file ant change the path below according to your working directory path = '../data/Genome_S288c.fa' genome = open(path, "r") chr_sequences = {} for record in FastaIterator(genome): chr_id = record.id seq = str(record.seq).lower() rc_seq = reverse_complement(seq) chr_sequences[chr_id] = {'5to3': seq, '3to5': rc_seq} genome.close() sequences = 'ATGACTAACGAAAAGGTCTGGATAGAGAAGTTGGATAATCCAACTCTTTCAGTGTTACCACATGACTTTTTACGCCCACAATCTTTAT'.lower() micro_size = 8 micro_sequences = [] start = micro_size - 1 for i in range(start, len(sequences), 1): current_micro_seq = sequences[i - start:i + 1] micro_sequences.append(current_micro_seq) genome_count = 0 chr_count = {} chr_locations = {} micro_fragment_stats = {} for ii_micro, micro_seq in enumerate(micro_sequences): for chr_idx in list(chr_sequences.keys()): chr_counter = 0 seq = chr_sequences[chr_idx]['5to3'] pos = [m.start() for m in re.finditer(pattern=r'(?=(' + micro_seq + '))', string=seq)] rc_seq = chr_sequences[chr_idx]['3to5'] rc_pos = [m.start() for m in re.finditer(pattern=r'(?=(' + micro_seq + '))', string=rc_seq)] chr_locations[chr] = {'5to3': pos, '3to5': rc_pos} chr_counter += len(pos) + len(rc_pos) chr_count[chr_idx] = chr_counter genome_count += chr_counter micro_fragment_stats[ii_micro] = {'occurrences genome': genome_count, 'occurrences chromosomes': chr_count, 'locations chromosomes': chr_locations} Actually my fragment is something like 2000bp long, so I took about 1 hour to compute all the micro-sequences. \ By the way, I use the r'(?=('+self.sequence+'))' to avoid the case of pattern that overlaps itself in the sequence, for instance : pattern = 'aaggaaaaa' string = 'aaggaaaaaggaaaaa' expected output : (0, 7) I am looking for a more efficient regex method that I can use for my case (in python if possible). Thanks in advance A: I would not recommend using regex for repetitive simple pattern matching. Outright comparison is expected to perform better. I did some basic testing and came up with the demo below. import time import re import random def compare(r1, r2, microseq_len, test_condition=1): # condition 1: make microseqs/indexes from longer sequence and search against shorter # condition 2: use regex to find position of microseq in reference sequence # condition 3: use regex to find position of microseq in reference sequence after verifying if microseq in reference strain start_time = time.time() if test_condition == 1: r1, r2 = r2, r1 # assemble dictionary containing microsequences and index positions microseq_di = {} for i in range(len(r1)-microseq_len): microseq = r1[i:i+microseq_len] if microseq not in microseq_di: microseq_di[microseq] = [] microseq_di[microseq].append([i, i+microseq_len]) # mark for deletion for microseq in microseq_di: # condition 2 if test_condition == 2: microseq_di[microseq] = [m.start() for m in re.finditer(pattern=r'(?=('+microseq+'))', string=r2)] elif microseq not in r2: microseq_di[microseq] = [] # condition 3 elif test_condition == 3: microseq_di[microseq] = [m.start() for m in re.finditer(pattern=r'(?=('+microseq+'))', string=r2)] print(time.time() - start_time) # run time # delete and return return({x:y for x, y in microseq_di.items() if y != []}) Input and Output: r_short = "".join([random.choices(["A", "T", "G", "C"])[0] for x in range(500)]) r_long = "".join([random.choices(["A", "T", "G", "C"])[0] for x in range(100000)]) len(compare(r_short, r_long, 8, test_condition=1).keys()) 0.19868111610412598 Out[1]: 400 len(compare(r_short, r_long, 8, test_condition=2).keys()) 0.8831210136413574 Out[2]: 399 len(compare(r_short, r_long, 8, test_condition=3).keys()) 0.7925639152526855 Out[3]: 399 Test condition 1 (microseqs from longer sequence) performed a lot better than the other two conditions using regex. Relative performance should improve with longer strings. r_short = "".join([random.choices(["A", "T", "G", "C"])[0] for x in range(2000)]) r_long = "".join([random.choices(["A", "T", "G", "C"])[0] for x in range(1000000)]) len(compare3(r_short, r_long, 8, test_condition=1).keys()) 2.2517480850219727 Out[4]: 1970 len(compare3(r_short, r_long, 8, test_condition=2).keys()) 35.65084385871887 Out[5]: 1969 len(compare3(r_short, r_long, 8, test_condition=3).keys()) 34.994577169418335 Out[6]: 1969 Note that condition 1 is not fully accommodating to your use-case since it doesn't exclude overlapping microseqs.
Optimize python pattern matching in nucleotide sequences
I'm currently working on a bioinformatic and modelling project where I need to do some pattern matching. Let's say I have a DNA fragment as follow 'atggcgtatagagc' and I split that fragment in micro-sequences of 8 nucleotides so that I have : 'atggcgta' 'tggcgtat' 'ggcgtata' 'gcgtatag' 'cgtataga' 'gtatagag' 'tatagagc' And for each of these fragment I want to search in a whole genome and per chromosome the number of time they appear and the positions (starting positions) of the matches. Here is how my code looks like : you can download the genome fasta file here : drive to the fasta file import re from Bio.SeqIO.FastaIO import FastaIterator from Bio.Seq import Seq def reverse_complement(sequence: str) -> str: my_sequence = Seq(sequence) return str(my_sequence.reverse_complement()) # you will need to unzip the file ant change the path below according to your working directory path = '../data/Genome_S288c.fa' genome = open(path, "r") chr_sequences = {} for record in FastaIterator(genome): chr_id = record.id seq = str(record.seq).lower() rc_seq = reverse_complement(seq) chr_sequences[chr_id] = {'5to3': seq, '3to5': rc_seq} genome.close() sequences = 'ATGACTAACGAAAAGGTCTGGATAGAGAAGTTGGATAATCCAACTCTTTCAGTGTTACCACATGACTTTTTACGCCCACAATCTTTAT'.lower() micro_size = 8 micro_sequences = [] start = micro_size - 1 for i in range(start, len(sequences), 1): current_micro_seq = sequences[i - start:i + 1] micro_sequences.append(current_micro_seq) genome_count = 0 chr_count = {} chr_locations = {} micro_fragment_stats = {} for ii_micro, micro_seq in enumerate(micro_sequences): for chr_idx in list(chr_sequences.keys()): chr_counter = 0 seq = chr_sequences[chr_idx]['5to3'] pos = [m.start() for m in re.finditer(pattern=r'(?=(' + micro_seq + '))', string=seq)] rc_seq = chr_sequences[chr_idx]['3to5'] rc_pos = [m.start() for m in re.finditer(pattern=r'(?=(' + micro_seq + '))', string=rc_seq)] chr_locations[chr] = {'5to3': pos, '3to5': rc_pos} chr_counter += len(pos) + len(rc_pos) chr_count[chr_idx] = chr_counter genome_count += chr_counter micro_fragment_stats[ii_micro] = {'occurrences genome': genome_count, 'occurrences chromosomes': chr_count, 'locations chromosomes': chr_locations} Actually my fragment is something like 2000bp long, so I took about 1 hour to compute all the micro-sequences. \ By the way, I use the r'(?=('+self.sequence+'))' to avoid the case of pattern that overlaps itself in the sequence, for instance : pattern = 'aaggaaaaa' string = 'aaggaaaaaggaaaaa' expected output : (0, 7) I am looking for a more efficient regex method that I can use for my case (in python if possible). Thanks in advance
[ "I would not recommend using regex for repetitive simple pattern matching. Outright comparison is expected to perform better. I did some basic testing and came up with the demo below.\nimport time\nimport re\nimport random\n\n\ndef compare(r1, r2, microseq_len, test_condition=1):\n # condition 1: make microseqs/indexes from longer sequence and search against shorter \n # condition 2: use regex to find position of microseq in reference sequence\n # condition 3: use regex to find position of microseq in reference sequence after verifying if microseq in reference strain\n start_time = time.time()\n if test_condition == 1:\n r1, r2 = r2, r1\n # assemble dictionary containing microsequences and index positions\n microseq_di = {}\n for i in range(len(r1)-microseq_len):\n microseq = r1[i:i+microseq_len]\n if microseq not in microseq_di:\n microseq_di[microseq] = []\n microseq_di[microseq].append([i, i+microseq_len])\n # mark for deletion\n for microseq in microseq_di:\n # condition 2\n if test_condition == 2:\n microseq_di[microseq] = [m.start() for m in re.finditer(pattern=r'(?=('+microseq+'))', string=r2)]\n elif microseq not in r2:\n microseq_di[microseq] = []\n # condition 3\n elif test_condition == 3:\n microseq_di[microseq] = [m.start() for m in re.finditer(pattern=r'(?=('+microseq+'))', string=r2)]\n print(time.time() - start_time) # run time\n # delete and return\n return({x:y for x, y in microseq_di.items() if y != []})\n\nInput and Output:\nr_short = \"\".join([random.choices([\"A\", \"T\", \"G\", \"C\"])[0] for x in range(500)])\n\nr_long = \"\".join([random.choices([\"A\", \"T\", \"G\", \"C\"])[0] for x in range(100000)])\n\nlen(compare(r_short, r_long, 8, test_condition=1).keys())\n0.19868111610412598\nOut[1]: 400\n\nlen(compare(r_short, r_long, 8, test_condition=2).keys())\n0.8831210136413574\nOut[2]: 399\n\nlen(compare(r_short, r_long, 8, test_condition=3).keys())\n0.7925639152526855\nOut[3]: 399\n\nTest condition 1 (microseqs from longer sequence) performed a lot better than the other two conditions using regex. Relative performance should improve with longer strings.\nr_short = \"\".join([random.choices([\"A\", \"T\", \"G\", \"C\"])[0] for x in range(2000)])\n\nr_long = \"\".join([random.choices([\"A\", \"T\", \"G\", \"C\"])[0] for x in range(1000000)])\n\nlen(compare3(r_short, r_long, 8, test_condition=1).keys())\n2.2517480850219727\nOut[4]: 1970\n\nlen(compare3(r_short, r_long, 8, test_condition=2).keys())\n35.65084385871887\nOut[5]: 1969\n\nlen(compare3(r_short, r_long, 8, test_condition=3).keys())\n34.994577169418335\nOut[6]: 1969\n\nNote that condition 1 is not fully accommodating to your use-case since it doesn't exclude overlapping microseqs.\n" ]
[ 0 ]
[]
[]
[ "bioinformatics", "dna_sequence", "python", "regex" ]
stackoverflow_0074467529_bioinformatics_dna_sequence_python_regex.txt
Q: Pandas Dataframe Remove all Rows with Letters in Certain Column I have a pandas dataframe in python that I want to remove rows that contain letters in a certain column. I have tried a few things, but nothing has worked. Input: A B C 0 9 1 a 1 8 2 b 2 7 cat c 3 6 4 d I would then remove rows that contained letters in column 'B'... Expected Output: A B C 0 9 1 a 1 8 2 b 3 6 4 d Update: After seeing the replies, I still haven't been able to get this to work. I'm going to just place my entire code here. Maybe I'm not understanding something... import pandas as pd #takes file path from user and removes quotation marks if necessary sysco1file = input("Input path of FS1 file: ").replace("\"","") sysco2file = input("Input path of FS2 file: ").replace("\"","") sysco3file = input("Input path of FS3 file: ").replace("\"","") #tab separated files, all values string sysco_1 = pd.read_csv(sysco1file, sep='\t', dtype=str) sysco_2 = pd.read_csv(sysco2file, sep='\t', dtype=str) sysco_3 = pd.read_csv(sysco3file, sep='\t', dtype=str) #combine all rows from the 3 files into one dataframe sysco_all = pd.concat([sysco_1,sysco_2,sysco_3]) #Also dropping nulls from CompAcctNum column sysco_all.dropna(subset=['CompAcctNum'], inplace=True) #ensure all values are string sysco_all = sysco_all.astype(str) #implemented solution from stackoverflow #I also tried putting "sysco_all = " in front of this sysco_all.loc[~sysco_all['CompanyNumber'].str.isalpha()] #writing dataframe to new csv file sysco_all.to_csv(r"C:\Users\user\Desktop\testcsvfile.csv") I do not get an error. However, the csv still has rows with letters in this column. A: Assuming the B column be string type, we can use str.contains here: df[~df["B"].str.contains(r'^[A-Za-z]+$', regex=True)] A: here is another way to do it # use isalpha to check if value is alphabetic # use negation to pick where value is not alphabetic df=df.loc[~df['B'].str.isalpha()] df A B C 0 9 1 a 1 8 2 b 3 6 4 d OR # output the filtered result to csv, preserving the original DF df.loc[~df['B'].str.isalpha()].to_csv('out.csv')
Pandas Dataframe Remove all Rows with Letters in Certain Column
I have a pandas dataframe in python that I want to remove rows that contain letters in a certain column. I have tried a few things, but nothing has worked. Input: A B C 0 9 1 a 1 8 2 b 2 7 cat c 3 6 4 d I would then remove rows that contained letters in column 'B'... Expected Output: A B C 0 9 1 a 1 8 2 b 3 6 4 d Update: After seeing the replies, I still haven't been able to get this to work. I'm going to just place my entire code here. Maybe I'm not understanding something... import pandas as pd #takes file path from user and removes quotation marks if necessary sysco1file = input("Input path of FS1 file: ").replace("\"","") sysco2file = input("Input path of FS2 file: ").replace("\"","") sysco3file = input("Input path of FS3 file: ").replace("\"","") #tab separated files, all values string sysco_1 = pd.read_csv(sysco1file, sep='\t', dtype=str) sysco_2 = pd.read_csv(sysco2file, sep='\t', dtype=str) sysco_3 = pd.read_csv(sysco3file, sep='\t', dtype=str) #combine all rows from the 3 files into one dataframe sysco_all = pd.concat([sysco_1,sysco_2,sysco_3]) #Also dropping nulls from CompAcctNum column sysco_all.dropna(subset=['CompAcctNum'], inplace=True) #ensure all values are string sysco_all = sysco_all.astype(str) #implemented solution from stackoverflow #I also tried putting "sysco_all = " in front of this sysco_all.loc[~sysco_all['CompanyNumber'].str.isalpha()] #writing dataframe to new csv file sysco_all.to_csv(r"C:\Users\user\Desktop\testcsvfile.csv") I do not get an error. However, the csv still has rows with letters in this column.
[ "Assuming the B column be string type, we can use str.contains here:\ndf[~df[\"B\"].str.contains(r'^[A-Za-z]+$', regex=True)]\n\n", "here is another way to do it\n# use isalpha to check if value is alphabetic\n# use negation to pick where value is not alphabetic\n\ndf=df.loc[~df['B'].str.isalpha()]\n\ndf\n\n A B C\n0 9 1 a\n1 8 2 b\n3 6 4 d\n\nOR\n# output the filtered result to csv, preserving the original DF\ndf.loc[~df['B'].str.isalpha()].to_csv('out.csv')\n\n" ]
[ 2, 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074468646_dataframe_pandas_python.txt
Q: send data from html field(Not Form) using AJAX to python cgi I am trying to solve a problem, where I am suppose to send data using programmatic form which is not to use the form field itself to a backend python cgi script. However, I have no idea how to receive that text using python. With form I could use "form = cgi.FieldStorage()". However, for now, I am trying to send the data using "XMLHttpRequest.send()" but again i don't know how to catch this data from the python cgi script. So basically in here, I am having two issues. So far, in the following code, I am trying to get input value using JS and trying to create HTTPRequest to send over to a python script. But the output results in an error which is caught in the exception "Request Failed" #Update: I was able to fix it. If anyone ever needs it. I will keep the post. //This is the HTML file <!DOCTYPE html> <html> <head> <meta charset='utf-8'> <title>Login(Async)</title> </head> <body> <h1> Please Login </h1> <label for="userName"> User Name </label><br> <input type="text" id="username" name="username" placeholder="User"><br> <label for="userName"> Password </label><br> <input type="password" id="pwd" name="pwd" placeholder="Password"><br><br> <button type="button" onclick="callCheckPass()"> Login </button> <p id="contentArea"> </p> </body> <script> function callCheckPass(){ asyncRequest = new XMLHttpRequest(); try{ asyncRequest.addEventListener("readystatechange", stateChange, false); asyncRequest.open("POST", "checkpass.py", true); asyncRequest.setRequestHeader("Content-type", "application/x-www-form-urlencoded"); asyncRequest.send("username=" + document.getElementById("username").value + "&" + "pwd="+ + document.getElementById("pwd").value); }catch(exception){ alert("Request Failed !!!"); } } function stateChange(){ if(asyncRequest.readyState == 4 && asyncRequest.status == 200){ document.getElementById("contentArea").innerHTML = asyncRequest.responseText; } } </script> </html> //This is the python script // I am not sure how to catch HTTPRequest in python. #!C:\Program Files\Python311\python.exe import cgi, cgitb cgitb.enable() #instance of Field Storage data = cgi.FieldStorage() #get data from fields. username = data.getvalue('username') print("Content-type: text/html\r\n\r\n") print("<html>") print("<head><title> Test </title> </head>") print("<body> <h1> Input: %s </h1> </body>"%(username)) print("</html>") A: So basically in python program you would receive the data from asyncRequest.send() which is combination of your input field creating a query param which is essentially sent via asyncRequest.send("Query Param"); Then using the variable name used in JS you would get value within your python script. <!DOCTYPE html> <html> <head> <meta charset='utf-8'> <title>Login(Async)</title> </head> <script> function callCheckPass(){ var username = document.querySelector("#user").value; var pwd = document.querySelector("#pass").value; asyncRequest = new XMLHttpRequest(); asyncRequest.addEventListener("readystatechange", eve=>{ if(asyncRequest.readyState == 4){ document.getElementById("contentArea").innerHTML = asyncRequest.responseText; } }); asyncRequest.open("POST", "../../cgi-bin/checkpass.cgi", true); asyncRequest.setRequestHeader("Content-type", "application/x-www-form-urlencoded"); asyncRequest.send("username=" + username + "&" + "pwd=" + pwd); } </script> <body> <h1> Please Login </h1> User Name<br> <input type="text" id="user" ><br> Password<br> <input type="password" id="pass" ><br><br> <button type="button" onclick="callCheckPass()"> Login </button> <p id="contentArea"> </p> </body> </html> //Python script #!/usr/bin/python //path to your python.exe import cgi import cgitb import csv cgitb.enable() #instance of Field Storage form = cgi.FieldStorage() #get data from fields. user = form.getvalue('username') // username -> the variable used from JS, which is essential to get value. For some reason this worked for me pwd = form.getvalue("pwd") print("Content-Type: text/html\n") print("<html>") print("<head><title> Test </title> </head>") print("<body> <h1>") print(form) // data printed in your web application. print("</h1> </body>") print("</html>")
send data from html field(Not Form) using AJAX to python cgi
I am trying to solve a problem, where I am suppose to send data using programmatic form which is not to use the form field itself to a backend python cgi script. However, I have no idea how to receive that text using python. With form I could use "form = cgi.FieldStorage()". However, for now, I am trying to send the data using "XMLHttpRequest.send()" but again i don't know how to catch this data from the python cgi script. So basically in here, I am having two issues. So far, in the following code, I am trying to get input value using JS and trying to create HTTPRequest to send over to a python script. But the output results in an error which is caught in the exception "Request Failed" #Update: I was able to fix it. If anyone ever needs it. I will keep the post. //This is the HTML file <!DOCTYPE html> <html> <head> <meta charset='utf-8'> <title>Login(Async)</title> </head> <body> <h1> Please Login </h1> <label for="userName"> User Name </label><br> <input type="text" id="username" name="username" placeholder="User"><br> <label for="userName"> Password </label><br> <input type="password" id="pwd" name="pwd" placeholder="Password"><br><br> <button type="button" onclick="callCheckPass()"> Login </button> <p id="contentArea"> </p> </body> <script> function callCheckPass(){ asyncRequest = new XMLHttpRequest(); try{ asyncRequest.addEventListener("readystatechange", stateChange, false); asyncRequest.open("POST", "checkpass.py", true); asyncRequest.setRequestHeader("Content-type", "application/x-www-form-urlencoded"); asyncRequest.send("username=" + document.getElementById("username").value + "&" + "pwd="+ + document.getElementById("pwd").value); }catch(exception){ alert("Request Failed !!!"); } } function stateChange(){ if(asyncRequest.readyState == 4 && asyncRequest.status == 200){ document.getElementById("contentArea").innerHTML = asyncRequest.responseText; } } </script> </html> //This is the python script // I am not sure how to catch HTTPRequest in python. #!C:\Program Files\Python311\python.exe import cgi, cgitb cgitb.enable() #instance of Field Storage data = cgi.FieldStorage() #get data from fields. username = data.getvalue('username') print("Content-type: text/html\r\n\r\n") print("<html>") print("<head><title> Test </title> </head>") print("<body> <h1> Input: %s </h1> </body>"%(username)) print("</html>")
[ "So basically in python program you would receive the data from asyncRequest.send() which is combination of your input field creating a query param which is essentially sent via asyncRequest.send(\"Query Param\"); Then using the variable name used in JS you would get value within your python script.\n <!DOCTYPE html>\n <html>\n <head>\n <meta charset='utf-8'>\n <title>Login(Async)</title>\n </head>\n <script>\n function callCheckPass(){\n var username = document.querySelector(\"#user\").value;\n var pwd = document.querySelector(\"#pass\").value;\n \n asyncRequest = new XMLHttpRequest();\n asyncRequest.addEventListener(\"readystatechange\", eve=>{\n if(asyncRequest.readyState == 4){\n document.getElementById(\"contentArea\").innerHTML = asyncRequest.responseText;\n }\n });\n asyncRequest.open(\"POST\", \"../../cgi-bin/checkpass.cgi\", true);\n asyncRequest.setRequestHeader(\"Content-type\", \"application/x-www-form-urlencoded\");\n asyncRequest.send(\"username=\" + username + \"&\" + \"pwd=\" + pwd);\n \n }\n </script>\n <body>\n <h1> Please Login </h1>\n User Name<br>\n <input type=\"text\" id=\"user\" ><br>\n Password<br>\n <input type=\"password\" id=\"pass\" ><br><br>\n <button type=\"button\" onclick=\"callCheckPass()\"> Login </button>\n <p id=\"contentArea\"> </p>\n \n </body>\n \n </html>\n\n//Python script\n#!/usr/bin/python //path to your python.exe\n\nimport cgi\nimport cgitb\nimport csv\n\ncgitb.enable()\n\n\n#instance of Field Storage\nform = cgi.FieldStorage()\n\n#get data from fields.\nuser = form.getvalue('username') // username -> the variable used from JS, which is essential to get value. For some reason this worked for me\npwd = form.getvalue(\"pwd\")\n\nprint(\"Content-Type: text/html\\n\")\nprint(\"<html>\")\nprint(\"<head><title> Test </title> </head>\")\nprint(\"<body> <h1>\")\nprint(form) // data printed in your web application.\nprint(\"</h1> </body>\")\nprint(\"</html>\")\n\n" ]
[ 0 ]
[]
[]
[ "ajax", "javascript", "python" ]
stackoverflow_0074453105_ajax_javascript_python.txt
Q: Difficulties adding a new blank line every 10 lines of text in Python I am working on a python script that will format books that I input from the internet for school. Currently both section one and section three are functional. The book is able to have all blank lines removed, and it is outputted into a plain text file. The issue I'm having is with section two. After all of the blank lines have been removed, every 10 lines there should be a new blank line re inserted into the text file. This is the code I have so far: import sys #setting finalBook as a string finalBook = "" i = int(0) #section one #removing all original blank lines from book with open("dangerousGame.txt") as f: for line in f: if not line.isspace(): finalBook = finalBook + line #section two #add in a blank line every 10 lines for i in finalBook: if i % 10 == 0 and i != 0: finalBook = finalBook + "\n" #section three #output in a plain text with open("test.txt", "w") as x: x.write(finalBook) So far I have tried searching for '\n' but it seems as though Python thinks that every line has one which is not the case. I also attempted tried splitting the book into a list and formatting it that way but this also did not work. Any help appreciated. A: Instead of: for i in finalBook: if i % 10 == 0 and i != 0: finalBook = finalBook + "\n" You will want something like: n = 10 finalBook = [ line for block in ( finalBook[i:i + n] + ['\n'] for i in range(0, len(finalBook), n) ) for line in block ] For example: finalBook = [str(i) + '\n' for i in range(10)] n = 3 finalBook = [ line for block in ( finalBook[i:i + n] + ['\n'] for i in range(0, len(finalBook), n) ) for line in block ] print(finalBook) Output: ['0\n', '1\n', '2\n', '\n', '3\n', '4\n', '5\n', '\n', '6\n', '7\n', '8\n', '\n', '9\n', '\n'] By the way, for your code to be more Pythonic, consider renaming finalBook to final_book, since 'snake-case' is the recommended naming convention for Python.
Difficulties adding a new blank line every 10 lines of text in Python
I am working on a python script that will format books that I input from the internet for school. Currently both section one and section three are functional. The book is able to have all blank lines removed, and it is outputted into a plain text file. The issue I'm having is with section two. After all of the blank lines have been removed, every 10 lines there should be a new blank line re inserted into the text file. This is the code I have so far: import sys #setting finalBook as a string finalBook = "" i = int(0) #section one #removing all original blank lines from book with open("dangerousGame.txt") as f: for line in f: if not line.isspace(): finalBook = finalBook + line #section two #add in a blank line every 10 lines for i in finalBook: if i % 10 == 0 and i != 0: finalBook = finalBook + "\n" #section three #output in a plain text with open("test.txt", "w") as x: x.write(finalBook) So far I have tried searching for '\n' but it seems as though Python thinks that every line has one which is not the case. I also attempted tried splitting the book into a list and formatting it that way but this also did not work. Any help appreciated.
[ "Instead of:\nfor i in finalBook:\n if i % 10 == 0 and i != 0: \n finalBook = finalBook + \"\\n\"\n\nYou will want something like:\nn = 10\nfinalBook = [\n line for block in (\n finalBook[i:i + n] + ['\\n'] for i in range(0, len(finalBook), n)\n ) for line in block\n]\n\nFor example:\nfinalBook = [str(i) + '\\n' for i in range(10)]\n\nn = 3\nfinalBook = [\n line for block in (\n finalBook[i:i + n] + ['\\n'] for i in range(0, len(finalBook), n)\n ) for line in block\n]\n\nprint(finalBook)\n\nOutput:\n['0\\n', '1\\n', '2\\n', '\\n', '3\\n', '4\\n', '5\\n', '\\n', '6\\n', '7\\n', '8\\n', '\\n', '9\\n', '\\n']\n\nBy the way, for your code to be more Pythonic, consider renaming finalBook to final_book, since 'snake-case' is the recommended naming convention for Python.\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074468802_python_python_3.x.txt
Q: Find all occurrences of a character in a String I'm really new to python and trying to build a Hangman Game for practice. I'm using Python 3.6.1 The User can enter a letter and I want to tell him if there is any occurrence of that letter in the word and where it is. I get the total number of occurrences by using occurrences = currentWord.count(guess) I have firstLetterIndex = (currentWord.find(guess)), to get the index. Now I have the index of the first Letter, but what if the word has this letter multiple times? I tried secondLetterIndex = (currentWord.find(guess[firstLetterIndex, currentWordlength])), but that doesn't work. Is there a better way to do this? Maybe a build in function i can't find? A: One way to do this is to find the indices using list comprehension: currentWord = "hello" guess = "l" occurrences = currentWord.count(guess) indices = [i for i, a in enumerate(currentWord) if a == guess] print indices output: [2, 3] A: I would maintain a second list of Booleans indicating which letters have been correctly matched. >>> word_to_guess = "thicket" >>> matched = [False for c in word_to_guess] >>> for guess in "te": ... matched = [m or (guess == c) for m, c in zip(matched, word_to_guess)] ... print(list(zip(matched, word_to_guess))) ... [(True, 't'), (False, 'h'), (False, 'i'), (False, 'c'), (False, 'k'), (False, 'e'), (True, 't')] [(True, 't'), (False, 'h'), (False, 'i'), (False, 'c'), (False, 'k'), (True, 'e'), (True, 't')] A: def findall(sub, s) : pos = -1 hits=[] while (pos := s.find(sub,pos+1)) > -1 : hits.append(pos) return hits
Find all occurrences of a character in a String
I'm really new to python and trying to build a Hangman Game for practice. I'm using Python 3.6.1 The User can enter a letter and I want to tell him if there is any occurrence of that letter in the word and where it is. I get the total number of occurrences by using occurrences = currentWord.count(guess) I have firstLetterIndex = (currentWord.find(guess)), to get the index. Now I have the index of the first Letter, but what if the word has this letter multiple times? I tried secondLetterIndex = (currentWord.find(guess[firstLetterIndex, currentWordlength])), but that doesn't work. Is there a better way to do this? Maybe a build in function i can't find?
[ "One way to do this is to find the indices using list comprehension:\ncurrentWord = \"hello\"\n\nguess = \"l\"\n\noccurrences = currentWord.count(guess)\n\nindices = [i for i, a in enumerate(currentWord) if a == guess]\n\nprint indices\n\noutput:\n[2, 3]\n\n", "I would maintain a second list of Booleans indicating which letters have been correctly matched.\n>>> word_to_guess = \"thicket\"\n>>> matched = [False for c in word_to_guess]\n>>> for guess in \"te\":\n... matched = [m or (guess == c) for m, c in zip(matched, word_to_guess)]\n... print(list(zip(matched, word_to_guess)))\n...\n[(True, 't'), (False, 'h'), (False, 'i'), (False, 'c'), (False, 'k'), (False, 'e'), (True, 't')]\n[(True, 't'), (False, 'h'), (False, 'i'), (False, 'c'), (False, 'k'), (True, 'e'), (True, 't')] \n\n", "def findall(sub, s) :\n pos = -1\n hits=[]\n while (pos := s.find(sub,pos+1)) > -1 :\n hits.append(pos)\n return hits\n\n" ]
[ 8, 0, 0 ]
[]
[]
[ "python", "python_3.6", "string" ]
stackoverflow_0044307988_python_python_3.6_string.txt
Q: Ragged list to dataframe I have a non-uniform list as follows: [['E', 'A', 'P'], ['E', 'A', 'X', 'P'], ['E', 'A', 'P'], ['P'], ['E', 'A', 'X', 'P'], ['E', 'A', 'P'], ['A', 'X', 'P'], ['E', 'A', 'P'], ['E', 'A', 'P'], ['E', 'A', 'X', 'P'], ['E', 'A', 'P'], ['E', 'A', 'P'], ['A', 'X', 'P'], I would like to create a data frame from this, where each column represents the four possible letters "E", "A", "X" and "p" in a one-hot encoded manner - what is the most efficient way to go about this? A: I would recommend MultiLabelBinarizer from sklearn from sklearn.preprocessing import MultiLabelBinarizer mlb = MultiLabelBinarizer() df = pd.DataFrame(mlb.fit_transform(l),columns=mlb.classes_) Out[170]: A E P X 0 1 1 1 0 1 1 1 1 1 2 1 1 1 0 3 0 0 1 0 4 1 1 1 1 5 1 1 1 0 6 1 0 1 1 7 1 1 1 0 8 1 1 1 0 9 1 1 1 1 10 1 1 1 0 11 1 1 1 0 12 1 0 1 1 Or we try pandas way with explode and str.get_dummies df = pd.Series(l).explode().str.get_dummies().groupby(level=0).sum() Out[176]: A E P X 0 1 1 1 0 1 1 1 1 1 2 1 1 1 0 3 0 0 1 0 4 1 1 1 1 5 1 1 1 0 6 1 0 1 1 7 1 1 1 0 8 1 1 1 0 9 1 1 1 1 10 1 1 1 0 11 1 1 1 0 12 1 0 1 1 Notice l is your list here A: Try: lst = [ ["E", "A", "P"], ["E", "A", "X", "P"], ["E", "A", "P"], ["P"], ["E", "A", "X", "P"], ["E", "A", "P"], ["A", "X", "P"], ["E", "A", "P"], ["E", "A", "P"], ["E", "A", "X", "P"], ["E", "A", "P"], ["E", "A", "P"], ["A", "X", "P"], ] df = pd.DataFrame({v: 1 for v in l} for l in lst).notna().astype(int) print(df) Prints: E A P X 0 1 1 1 0 1 1 1 1 1 2 1 1 1 0 3 0 0 1 0 4 1 1 1 1 5 1 1 1 0 6 0 1 1 1 7 1 1 1 0 8 1 1 1 0 9 1 1 1 1 10 1 1 1 0 11 1 1 1 0 12 0 1 1 1
Ragged list to dataframe
I have a non-uniform list as follows: [['E', 'A', 'P'], ['E', 'A', 'X', 'P'], ['E', 'A', 'P'], ['P'], ['E', 'A', 'X', 'P'], ['E', 'A', 'P'], ['A', 'X', 'P'], ['E', 'A', 'P'], ['E', 'A', 'P'], ['E', 'A', 'X', 'P'], ['E', 'A', 'P'], ['E', 'A', 'P'], ['A', 'X', 'P'], I would like to create a data frame from this, where each column represents the four possible letters "E", "A", "X" and "p" in a one-hot encoded manner - what is the most efficient way to go about this?
[ "I would recommend MultiLabelBinarizer from sklearn\nfrom sklearn.preprocessing import MultiLabelBinarizer\n \nmlb = MultiLabelBinarizer()\ndf = pd.DataFrame(mlb.fit_transform(l),columns=mlb.classes_)\nOut[170]: \n A E P X\n0 1 1 1 0\n1 1 1 1 1\n2 1 1 1 0\n3 0 0 1 0\n4 1 1 1 1\n5 1 1 1 0\n6 1 0 1 1\n7 1 1 1 0\n8 1 1 1 0\n9 1 1 1 1\n10 1 1 1 0\n11 1 1 1 0\n12 1 0 1 1\n\nOr we try pandas way with explode and str.get_dummies\ndf = pd.Series(l).explode().str.get_dummies().groupby(level=0).sum()\nOut[176]: \n A E P X\n0 1 1 1 0\n1 1 1 1 1\n2 1 1 1 0\n3 0 0 1 0\n4 1 1 1 1\n5 1 1 1 0\n6 1 0 1 1\n7 1 1 1 0\n8 1 1 1 0\n9 1 1 1 1\n10 1 1 1 0\n11 1 1 1 0\n12 1 0 1 1\n\nNotice l is your list here\n", "Try:\nlst = [\n [\"E\", \"A\", \"P\"],\n [\"E\", \"A\", \"X\", \"P\"],\n [\"E\", \"A\", \"P\"],\n [\"P\"],\n [\"E\", \"A\", \"X\", \"P\"],\n [\"E\", \"A\", \"P\"],\n [\"A\", \"X\", \"P\"],\n [\"E\", \"A\", \"P\"],\n [\"E\", \"A\", \"P\"],\n [\"E\", \"A\", \"X\", \"P\"],\n [\"E\", \"A\", \"P\"],\n [\"E\", \"A\", \"P\"],\n [\"A\", \"X\", \"P\"],\n]\n\ndf = pd.DataFrame({v: 1 for v in l} for l in lst).notna().astype(int)\nprint(df)\n\nPrints:\n E A P X\n0 1 1 1 0\n1 1 1 1 1\n2 1 1 1 0\n3 0 0 1 0\n4 1 1 1 1\n5 1 1 1 0\n6 0 1 1 1\n7 1 1 1 0\n8 1 1 1 0\n9 1 1 1 1\n10 1 1 1 0\n11 1 1 1 0\n12 0 1 1 1\n\n" ]
[ 2, 1 ]
[]
[]
[ "dataframe", "list", "pandas", "python", "ragged" ]
stackoverflow_0074468869_dataframe_list_pandas_python_ragged.txt
Q: Dealing with 2 arrays in python, how would I return values for grades based on student name? Array 1 is called 'students', with 'Alex', 'Rich', 'Anthony', 'Len', 'Mark' as values. Array 2 is called 'grades' with [85, 44], [63, 19], [47, 95], [30, 67], [33, 16] as values. I need to select all rows from 'grades' where 'students' is either 'Alex' or 'Mark' Do I need to combine the arrays? I am new to python and struggling to figure out how to index this correctly. so far I have created the two arrays and have tried concatenating them together, but when I then try to index off the concatenated array I get errors students = np.array(['Alex', 'Rich', 'Anthony', 'Len', 'Mark']) grades = np.array([[85, 44], [63, 19], [47, 95], [30, 67], [33, 16]]) studentgrades = np.concatenate((students, grades), axis=1)` studentgrades['Alex'] A: students = ['Alex', 'Rich', 'Anthony', 'Len', 'Mark'] grades = [[85, 44], [63, 19], [47, 95], [30, 67], [33, 16]] studentgrades = dict(zip(students, grades)) print(studentgrades) {'Alex': [85, 44],'Rich': [63, 19],'Anthony': [47, 95],'Len': [30, 67],'Mark': [33, 16]} print(studentgrades['Alex']) [85, 44] A: here is one way to do it using list comprehension assumption: grade and student indexes are aligned # iterate through the students # when student is among the list, return their grades [grades[i] for i in range(len(students)) if students[i] in ['Alex','Mark']] [array([85, 44]), array([33, 16])]
Dealing with 2 arrays in python, how would I return values for grades based on student name?
Array 1 is called 'students', with 'Alex', 'Rich', 'Anthony', 'Len', 'Mark' as values. Array 2 is called 'grades' with [85, 44], [63, 19], [47, 95], [30, 67], [33, 16] as values. I need to select all rows from 'grades' where 'students' is either 'Alex' or 'Mark' Do I need to combine the arrays? I am new to python and struggling to figure out how to index this correctly. so far I have created the two arrays and have tried concatenating them together, but when I then try to index off the concatenated array I get errors students = np.array(['Alex', 'Rich', 'Anthony', 'Len', 'Mark']) grades = np.array([[85, 44], [63, 19], [47, 95], [30, 67], [33, 16]]) studentgrades = np.concatenate((students, grades), axis=1)` studentgrades['Alex']
[ "students = ['Alex', 'Rich', 'Anthony', 'Len', 'Mark']\ngrades = [[85, 44], [63, 19], [47, 95], [30, 67], [33, 16]]\nstudentgrades = dict(zip(students, grades))\nprint(studentgrades)\n\n\n{'Alex': [85, 44],'Rich': [63, 19],'Anthony': [47, 95],'Len': [30, 67],'Mark': [33, 16]}\n\n\nprint(studentgrades['Alex'])\n\n\n[85, 44]\n\n\n", "here is one way to do it using list comprehension\nassumption: grade and student indexes are aligned\n# iterate through the students\n# when student is among the list, return their grades\n[grades[i] for i in range(len(students)) if students[i] in ['Alex','Mark']]\n\n[array([85, 44]), array([33, 16])]\n\n" ]
[ 0, 0 ]
[]
[]
[ "arrays", "numpy", "python" ]
stackoverflow_0074466039_arrays_numpy_python.txt
Q: convert PSUTIL Process output to object/dict? I am currently running the following script to get the system_md status of each host. Its working but the output I am getting is a Process Class and I am not sure how to parse the following params to a usable dict. I do not use python much so any help would be great. convert: psutil.Process(pid=1153, name='sssd', status='sleeping', started='2022-09-22 19:36:12')> to - obj = { name:"somename", pid: 1010, status: "sleeping" started:'2022-09-22 #!/usr/bin/env python import re import psutil def log_running_services(): known_cgroups = set() for pid in psutil.pids(): try: cgroups = open('/proc/%d/cgroup' % pid, 'r').read() except IOError: continue # may have exited since we read the listing, or may not have permissions systemd_name_match = re.search( '^1:name=systemd:(/.+)$', cgroups, re.MULTILINE) if systemd_name_match is None: continue # not in a systemd-maintained cgroup systemd_name = systemd_name_match.group(1) if systemd_name in known_cgroups: continue # we already printed this one if not systemd_name.endswith('.service'): continue # this isn't actually a service known_cgroups.add(systemd_name) print(systemd_name) process = psutil.Process(pid) # Attempting to get dict with dict.name = name etc. if __name__ == '__main__': log_running_services() A: And I resolved it with the following code outputs pretty pretty json with the service and status name: #!/usr/bin/env python import re import psutil import json def log_running_services(): known_cgroups = set() result = [] # print(psutil.pids) for pid in psutil.pids(): try: cgroups = open('/proc/%d/cgroup' % pid, 'r').read() except IOError: continue # may have exited since we read the listing, or may not have permissions systemd_name_match = re.search( '^1:name=systemd:(/.+)$', cgroups, re.MULTILINE) if systemd_name_match is None: continue # not in a systemd-maintained cgroup systemd_name = systemd_name_match.group(1) if systemd_name in known_cgroups: continue # we already printed this one if not systemd_name.endswith('.service'): continue # this isn't actually a service known_cgroups.add(systemd_name) s = "{0}".format(psutil.Process(pid)) mydict = dict((k.strip(), v.strip()) for k, v in (item.split('=') for item in s.split(','))) obj = { "name": eval(mydict.get("name")), "status": eval(mydict.get('status')) } result.append(obj) print(json.dumps(result)) if __name__ == '__main__': log_running_services()
convert PSUTIL Process output to object/dict?
I am currently running the following script to get the system_md status of each host. Its working but the output I am getting is a Process Class and I am not sure how to parse the following params to a usable dict. I do not use python much so any help would be great. convert: psutil.Process(pid=1153, name='sssd', status='sleeping', started='2022-09-22 19:36:12')> to - obj = { name:"somename", pid: 1010, status: "sleeping" started:'2022-09-22 #!/usr/bin/env python import re import psutil def log_running_services(): known_cgroups = set() for pid in psutil.pids(): try: cgroups = open('/proc/%d/cgroup' % pid, 'r').read() except IOError: continue # may have exited since we read the listing, or may not have permissions systemd_name_match = re.search( '^1:name=systemd:(/.+)$', cgroups, re.MULTILINE) if systemd_name_match is None: continue # not in a systemd-maintained cgroup systemd_name = systemd_name_match.group(1) if systemd_name in known_cgroups: continue # we already printed this one if not systemd_name.endswith('.service'): continue # this isn't actually a service known_cgroups.add(systemd_name) print(systemd_name) process = psutil.Process(pid) # Attempting to get dict with dict.name = name etc. if __name__ == '__main__': log_running_services()
[ "And I resolved it with the following code outputs pretty pretty json with the service and status name:\n#!/usr/bin/env python\n\nimport re\nimport psutil\nimport json\n\n\ndef log_running_services():\n known_cgroups = set()\n result = []\n # print(psutil.pids)\n for pid in psutil.pids():\n try:\n cgroups = open('/proc/%d/cgroup' % pid, 'r').read()\n except IOError:\n continue # may have exited since we read the listing, or may not have permissions\n systemd_name_match = re.search(\n '^1:name=systemd:(/.+)$', cgroups, re.MULTILINE)\n if systemd_name_match is None:\n continue # not in a systemd-maintained cgroup\n systemd_name = systemd_name_match.group(1)\n if systemd_name in known_cgroups:\n continue # we already printed this one\n if not systemd_name.endswith('.service'):\n continue # this isn't actually a service\n known_cgroups.add(systemd_name)\n s = \"{0}\".format(psutil.Process(pid))\n mydict = dict((k.strip(), v.strip()) for k, v in\n (item.split('=') for item in s.split(',')))\n\n obj = {\n \"name\": eval(mydict.get(\"name\")),\n \"status\": eval(mydict.get('status'))\n }\n result.append(obj)\n print(json.dumps(result))\n\n\nif __name__ == '__main__':\n log_running_services()\n\n\n" ]
[ 0 ]
[]
[]
[ "psutil", "python", "python_2.x", "scripting", "systemctl" ]
stackoverflow_0074468431_psutil_python_python_2.x_scripting_systemctl.txt
Q: What is the best way to query a pytable column with many values? I have a 11 columns x 13,470,621 rows pytable. The first column of the table contains a unique identifier to each row (this identifier is always only present once in the table). This is how I select rows from the table at the moment: my_annotations_table = h5r.root.annotations # Loop through table and get rows that match gene identifiers (column labeled gene_id). for record in my_annotations_table.where("(gene_id == b'gene_id_36624' ) | (gene_id == b'gene_id_14701' ) | (gene_id == b'gene_id_14702')"): # Do something with the data. Now this works fine with small datasets, but I will need to routinely perform queries in which I can have many thousand of unique identifiers to match for in the table's gene_id column. For these larger queries, the query string can quickly get very large and I get an exception: File "/path/to/my/software/python/python-3.9.0/lib/python3.9/site-packages/tables/table.py", line 1189, in _required_expr_vars cexpr = compile(expression, '<string>', 'eval') RecursionError: maximum recursion depth exceeded during compilation I've looked at this question (What is the PyTables counterpart of a SQL query "SELECT col2 FROM table WHERE col1 IN (val1, val2, val3...)"?), which is somehow similar to mine, but was not satisfactory. I come from an R background where we often do these kinds of queries (i.e. my_data_frame[my_data_frame$gene_id %in% c("gene_id_1234", "gene_id_1235"),] and was wondering if there was comparable solution that I could use with pytables. Thanks very much, A: Another approach to consider is combining 2 functions: Table.get_where_list() with Table.read_coordinates() Table.get_where_list(): gets the row coordinates fulfilling the given condition. Table.read_coordinates(): Gets a set of rows given their coordinates (in a list), and returns as a (record) array. The code would look something like this: my_annotations_table = h5r.root.annotations gene_name_list = ['gene_id_36624', 'gene_id_14701', 'gene_id_14702'] # Loop through gene names and get rows that match gene identifiers (column labeled gene_id) gene_row_list = [] for gene_name in gene_name_list: gene_rows = my_annotations_table.get_where_list("gene_id == gene_name")) gene_row_list.extend(gene_rows) # Retieve all of the data in one call gene_data_arr = my_annotations_table.read_coordinates(gene_row_list) A: Okay, I managed to do some satisfactory improvements on this. 1st: optimize the table (with the help of the documentation - https://www.pytables.org/usersguide/optimization.html) Create table. Make sure to specify the expectedrows=<int> arg as it has the potential to increase the query speed. table = h5w.create_table("/", 'annotations', DataDescr, "Annotation table unindexed", expectedrows=self._number_of_genes, filters=tb.Filters(complevel=9, complib='blosc') #tb comes from import tables as tb ... I also modified the input data so that the gene_id_12345 fields are simple integers (gene_id_12345 becomes 12345). Once the table is populated with its 13,470,621 entries (i.e. rows), I created a complete sorted index based on the gene_id column (Column.create_csindex()) and sorted it. table.cols.gene_id.create_csindex() table.copy(overwrite=True, sortby='gene_id', newname="Annotation table", checkCSI=True) # Just make sure that the index is usable. Will print an empty list if not. print(table.will_query_use_indexing('(gene_id == 57403)')) 2nd - The table is optimized, but I still can't query thousands of gene_ids at a time. So I simply separated them in chunks of 31 gene_ids (yes 31 was the absolute maximum, 32 was too much apparently). I did not perform benchmarks, but querying ~8000 gene_ids now takes approximately 10 seconds which is acceptable for my needs.
What is the best way to query a pytable column with many values?
I have a 11 columns x 13,470,621 rows pytable. The first column of the table contains a unique identifier to each row (this identifier is always only present once in the table). This is how I select rows from the table at the moment: my_annotations_table = h5r.root.annotations # Loop through table and get rows that match gene identifiers (column labeled gene_id). for record in my_annotations_table.where("(gene_id == b'gene_id_36624' ) | (gene_id == b'gene_id_14701' ) | (gene_id == b'gene_id_14702')"): # Do something with the data. Now this works fine with small datasets, but I will need to routinely perform queries in which I can have many thousand of unique identifiers to match for in the table's gene_id column. For these larger queries, the query string can quickly get very large and I get an exception: File "/path/to/my/software/python/python-3.9.0/lib/python3.9/site-packages/tables/table.py", line 1189, in _required_expr_vars cexpr = compile(expression, '<string>', 'eval') RecursionError: maximum recursion depth exceeded during compilation I've looked at this question (What is the PyTables counterpart of a SQL query "SELECT col2 FROM table WHERE col1 IN (val1, val2, val3...)"?), which is somehow similar to mine, but was not satisfactory. I come from an R background where we often do these kinds of queries (i.e. my_data_frame[my_data_frame$gene_id %in% c("gene_id_1234", "gene_id_1235"),] and was wondering if there was comparable solution that I could use with pytables. Thanks very much,
[ "Another approach to consider is combining 2 functions: Table.get_where_list() with Table.read_coordinates()\n\nTable.get_where_list(): gets the row coordinates fulfilling the given condition.\nTable.read_coordinates(): Gets a set of rows given their coordinates (in a list), and returns as a (record) array.\n\nThe code would look something like this:\nmy_annotations_table = h5r.root.annotations \ngene_name_list = ['gene_id_36624', 'gene_id_14701', 'gene_id_14702']\n# Loop through gene names and get rows that match gene identifiers (column labeled gene_id)\ngene_row_list = []\nfor gene_name in gene_name_list:\n gene_rows = my_annotations_table.get_where_list(\"gene_id == gene_name\")) \n gene_row_list.extend(gene_rows)\n\n# Retieve all of the data in one call\ngene_data_arr = my_annotations_table.read_coordinates(gene_row_list)\n\n", "Okay, I managed to do some satisfactory improvements on this.\n1st: optimize the table (with the help of the documentation - https://www.pytables.org/usersguide/optimization.html)\nCreate table. Make sure to specify the expectedrows=<int> arg as it has the potential to increase the query speed.\ntable = h5w.create_table(\"/\", 'annotations', \n DataDescr, \"Annotation table unindexed\", \n expectedrows=self._number_of_genes, \n filters=tb.Filters(complevel=9, complib='blosc')\n #tb comes from import tables as tb ...\n\nI also modified the input data so that the gene_id_12345 fields are simple integers (gene_id_12345 becomes 12345).\nOnce the table is populated with its 13,470,621 entries (i.e. rows),\nI created a complete sorted index based on the gene_id column (Column.create_csindex()) and sorted it.\ntable.cols.gene_id.create_csindex()\ntable.copy(overwrite=True, sortby='gene_id', newname=\"Annotation table\", checkCSI=True)\n# Just make sure that the index is usable. Will print an empty list if not.\nprint(table.will_query_use_indexing('(gene_id == 57403)'))\n\n2nd - The table is optimized, but I still can't query thousands of gene_ids at a time. So I simply separated them in chunks of 31 gene_ids (yes 31 was the absolute maximum, 32 was too much apparently).\nI did not perform benchmarks, but querying ~8000 gene_ids now takes approximately 10 seconds which is acceptable for my needs.\n" ]
[ 1, 0 ]
[]
[]
[ "pytables", "python" ]
stackoverflow_0074451862_pytables_python.txt
Q: ValueError: too many values to unpack (expected 2) on a simple Python function I'm coding this password manager program and keep getting this error message when I use the view function: File "c:\Users\user\Desktop\password_manager.py", line 7, in view user, passw = data.split("|") ValueError: too many values to unpack (expected 2) This is the program so far: master_pwd = input("What is the master password?") def view(): with open("passwords.txt", "r") as f: for line in f.readlines(): data = line.rstrip() user, passw = data.split("|") print("User:", user, "Password:", passw) def add(): name = input("Account name: ") pwd = input("Password: ") with open("passwords.txt", "a") as f: f.write(name + "|" + pwd + "\n") while True: mode = input("Would you like to add a new password or view existing ones (view, add)? Press q to quit. ").lower() if mode == "q": break if mode == "view": view() elif mode == "add": add() else: print("Invalid mode.") continue I tried using the .split() method to one variable at a time but it also resulted in the error. I thought the problem could be caused by the comma in user, passw = data.split("|") being deprecated, but I failed to find an alternative. A: The .split() function is returning more than 2 values in a list and therefore cannot be unpacked into only 2 variables. Maybe you have a password or username with a | in it which would cause that. I suggest to simply print(data.split('|')) for a visual of what is happening. It will probably print out a list with more than two values. A: Check your password file to be sure there aren't "|" characters in a username or password that are creating additional splits. If your data is good, you could catch the remaining elements in a list: user, passw, *other = data.split("|")
ValueError: too many values to unpack (expected 2) on a simple Python function
I'm coding this password manager program and keep getting this error message when I use the view function: File "c:\Users\user\Desktop\password_manager.py", line 7, in view user, passw = data.split("|") ValueError: too many values to unpack (expected 2) This is the program so far: master_pwd = input("What is the master password?") def view(): with open("passwords.txt", "r") as f: for line in f.readlines(): data = line.rstrip() user, passw = data.split("|") print("User:", user, "Password:", passw) def add(): name = input("Account name: ") pwd = input("Password: ") with open("passwords.txt", "a") as f: f.write(name + "|" + pwd + "\n") while True: mode = input("Would you like to add a new password or view existing ones (view, add)? Press q to quit. ").lower() if mode == "q": break if mode == "view": view() elif mode == "add": add() else: print("Invalid mode.") continue I tried using the .split() method to one variable at a time but it also resulted in the error. I thought the problem could be caused by the comma in user, passw = data.split("|") being deprecated, but I failed to find an alternative.
[ "The .split() function is returning more than 2 values in a list and therefore cannot be unpacked into only 2 variables. Maybe you have a password or username with a | in it which would cause that.\nI suggest to simply print(data.split('|')) for a visual of what is happening. It will probably print out a list with more than two values.\n", "Check your password file to be sure there aren't \"|\" characters in a username or password that are creating additional splits.\nIf your data is good, you could catch the remaining elements in a list:\nuser, passw, *other = data.split(\"|\")\n\n" ]
[ 2, 2 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074468338_python_python_3.x.txt
Q: Navigating between tkinter frames in multiple modules I'm new to Python. Trying to create two modules (.py files) which can be navigated to & fro, without having to create two windows. So, 1st module will have window & frame 1, 2nd module will have just a frame 2. On button click, the frame shown should be switched. Not sure if the below is the right way to do it, but I'm almost there. If there's a better way to do this, please suggest. testnew.py - from tkinter import * from functools import partial BLACK = "#000000" WHITE = "#FFFFFF" class main(): def __init__(self, master): self.frame1 = Frame(master) self.frame1.pack() self.label = Label(self.frame1, bg=WHITE, text="This is 1st frame.") self.label.pack() self.btn_1 = Button(self.frame1, bg=WHITE, text="Switch to 2nd frame", command=partial(self.switch_to_second, master)) self.btn_1.pack() def switch_to_second(self, master): self.btn_1.pack_forget() self.label.pack_forget() self.frame1.pack_forget() from testnew_2 import second_frame self.secondframe = second_frame(master) root = Tk() root.title("Hello world") root.geometry("500x500") main_frame = main(root) root.mainloop() testnew_2.py - from tkinter import * from functools import partial BLACK = "#000000" WHITE = "#FFFFFF" class second_frame(): def __init__(self, master): self.frame2 = Frame(master) self.frame2.pack() self.label2 = Label(self.frame2, bg=BLACK, fg=WHITE, text="This is second frame.") self.label2.pack() self.btn_2 = Button(self.frame2, bg=BLACK, fg=WHITE, text="Switch to 1st frame", command=partial(self.switch_to_first, master)) self.btn_2.pack() def switch_to_first(self, master): self.btn_2.pack_forget() self.label2.pack_forget() self.frame2.pack_forget() from testnew import main self.mainframe = main(master) Running into a problem of another window being created + this error - Exception in Tkinter callback Traceback (most recent call last): File "C:\Users\Admin\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 1948, in __call__ return self.func(*args) ^^^^^^^^^^^^^^^^ File "C:\Users\Admin\PycharmProjects\helloworld\testnew_2.py", line 25, in switch_to_first self.mainframe = main(master) ^^^^^^^^^^^^ File "C:\Users\Admin\PycharmProjects\helloworld\testnew.py", line 10, in __init__ self.frame1 = Frame(master) ^^^^^^^^^^^^^ File "C:\Users\Admin\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 3180, in __init__ Widget.__init__(self, master, 'frame', cnf, {}, extra) File "C:\Users\Admin\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 2628, in __init__ self.tk.call( _tkinter.TclError: can't invoke "frame" command: application has been destroyed A: When testnew is imported inside switch_to_first(), the following code inside testnew.py will be executed again to create another instance of Tk(): root = Tk() root.title("Hello world") root.geometry("500x500") main_frame = main(root) root.mainloop() So there will be two windows shown. The mentioned exception will be raised when the two windows are closed. You need to change the above code block as below: if __name__ == "__main__": root = Tk() root.title("Hello world") root.geometry("500x500") main_frame = main(root) root.mainloop() Then when testnew is imported again, no new window will be created because code inside the if block will not be executed.
Navigating between tkinter frames in multiple modules
I'm new to Python. Trying to create two modules (.py files) which can be navigated to & fro, without having to create two windows. So, 1st module will have window & frame 1, 2nd module will have just a frame 2. On button click, the frame shown should be switched. Not sure if the below is the right way to do it, but I'm almost there. If there's a better way to do this, please suggest. testnew.py - from tkinter import * from functools import partial BLACK = "#000000" WHITE = "#FFFFFF" class main(): def __init__(self, master): self.frame1 = Frame(master) self.frame1.pack() self.label = Label(self.frame1, bg=WHITE, text="This is 1st frame.") self.label.pack() self.btn_1 = Button(self.frame1, bg=WHITE, text="Switch to 2nd frame", command=partial(self.switch_to_second, master)) self.btn_1.pack() def switch_to_second(self, master): self.btn_1.pack_forget() self.label.pack_forget() self.frame1.pack_forget() from testnew_2 import second_frame self.secondframe = second_frame(master) root = Tk() root.title("Hello world") root.geometry("500x500") main_frame = main(root) root.mainloop() testnew_2.py - from tkinter import * from functools import partial BLACK = "#000000" WHITE = "#FFFFFF" class second_frame(): def __init__(self, master): self.frame2 = Frame(master) self.frame2.pack() self.label2 = Label(self.frame2, bg=BLACK, fg=WHITE, text="This is second frame.") self.label2.pack() self.btn_2 = Button(self.frame2, bg=BLACK, fg=WHITE, text="Switch to 1st frame", command=partial(self.switch_to_first, master)) self.btn_2.pack() def switch_to_first(self, master): self.btn_2.pack_forget() self.label2.pack_forget() self.frame2.pack_forget() from testnew import main self.mainframe = main(master) Running into a problem of another window being created + this error - Exception in Tkinter callback Traceback (most recent call last): File "C:\Users\Admin\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 1948, in __call__ return self.func(*args) ^^^^^^^^^^^^^^^^ File "C:\Users\Admin\PycharmProjects\helloworld\testnew_2.py", line 25, in switch_to_first self.mainframe = main(master) ^^^^^^^^^^^^ File "C:\Users\Admin\PycharmProjects\helloworld\testnew.py", line 10, in __init__ self.frame1 = Frame(master) ^^^^^^^^^^^^^ File "C:\Users\Admin\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 3180, in __init__ Widget.__init__(self, master, 'frame', cnf, {}, extra) File "C:\Users\Admin\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 2628, in __init__ self.tk.call( _tkinter.TclError: can't invoke "frame" command: application has been destroyed
[ "When testnew is imported inside switch_to_first(), the following code inside testnew.py will be executed again to create another instance of Tk():\nroot = Tk()\nroot.title(\"Hello world\")\nroot.geometry(\"500x500\")\nmain_frame = main(root)\nroot.mainloop()\n\nSo there will be two windows shown. The mentioned exception will be raised when the two windows are closed.\nYou need to change the above code block as below:\nif __name__ == \"__main__\":\n root = Tk()\n root.title(\"Hello world\")\n root.geometry(\"500x500\")\n main_frame = main(root)\n root.mainloop()\n\nThen when testnew is imported again, no new window will be created because code inside the if block will not be executed.\n" ]
[ 1 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074468656_python_tkinter.txt
Q: Python3 dictionary: remove duplicate values in alphabetical order Let's say I have the following dictionary: full_dic = { 'aa': 1, 'ac': 1, 'ab': 1, 'ba': 2, ... } I normally use standard dictionary comprehension to remove dupes like: t = {val : key for (key, val) in full_dic.items()} cleaned_dic = {val : key for (key, val) in t.items()} Calling print(cleaned_dic) outputs {'ab': 1,'ba': 2, ...} With this code, the key that remains seems to always be the final one in the list, but I'm not sure that's even guaranteed as dictionaries are unordered. Instead, I'd like to find a way to ensure that the key I keep is the first alphabetically. So, regardless of the 'order' the dictionary is in, I want the output to be: >> {'aa': 1,'ba': 2, ...} Where 'aa' comes first alphabetically. I ran some timer tests on 3 answers below and got the following (dictionary was created with random key/value pairs): dict length: 10 # of loops: 100000 HoliSimo (OrderedDict): 0.0000098405 seconds Ricardo: 0.0000115448 seconds Mark (itertools.groupby): 0.0000111745 seconds dict length: 1000000 # of loops: 10 HoliSimo (OrderedDict): 6.1724137300 seconds Ricardo: 3.3102091300 seconds Mark (itertools.groupby): 6.1338266200 seconds We can see that for smaller dictionary sizes using OrderedDict is fastest but for large dictionary sizes it's slightly better to use Ricardo's answer below. A: You should use the OrderectDict class. import collections full_dic = { 'aa': 1, 'ac': 1, 'ab': 1 } od = collections.OrderedDict(sorted(full_dic.items())) In this way you will be sure to have sorted dictionary (Original code: StackOverflow). And then: result = {} for k, vin od.items(): if value not in result.values(): result[key] = value I'm not sure if it will speed up the computation but you can try: inverted_dict = {} for k, v in od.items(): if inverted_dict.get(v) is None: inverted_dict[v] = k res = {v: k for k, v in inverted_dict.items()} A: t = {val : key for (key, val) in dict(sorted(full_dic.items(), key=lambda x: x[0].lower(), reverse=True)).items()} cleaned_dic = {val : key for (key, val) in t.items()} dict(sorted(cleaned_dic.items(), key=lambda x: x[0].lower())) >>> {'aa': 1, 'ba': 2} A: Seems like you can do this with a single sort and itertools.groupby. First sort the items by value, then key. Pass this to groupby and take the first item of each group to pass to the dict constructor: from itertools import groupby full_dic = { 'aa': 1, 'ac': 1, 'xx': 2, 'ab': 1, 'ba': 2, } groups = groupby(sorted(full_dic.items(), key=lambda p: (p[1], p[0])), key=lambda x: x[1]) dict(next(g) for k, g in groups) # {'aa': 1, 'ba': 2}
Python3 dictionary: remove duplicate values in alphabetical order
Let's say I have the following dictionary: full_dic = { 'aa': 1, 'ac': 1, 'ab': 1, 'ba': 2, ... } I normally use standard dictionary comprehension to remove dupes like: t = {val : key for (key, val) in full_dic.items()} cleaned_dic = {val : key for (key, val) in t.items()} Calling print(cleaned_dic) outputs {'ab': 1,'ba': 2, ...} With this code, the key that remains seems to always be the final one in the list, but I'm not sure that's even guaranteed as dictionaries are unordered. Instead, I'd like to find a way to ensure that the key I keep is the first alphabetically. So, regardless of the 'order' the dictionary is in, I want the output to be: >> {'aa': 1,'ba': 2, ...} Where 'aa' comes first alphabetically. I ran some timer tests on 3 answers below and got the following (dictionary was created with random key/value pairs): dict length: 10 # of loops: 100000 HoliSimo (OrderedDict): 0.0000098405 seconds Ricardo: 0.0000115448 seconds Mark (itertools.groupby): 0.0000111745 seconds dict length: 1000000 # of loops: 10 HoliSimo (OrderedDict): 6.1724137300 seconds Ricardo: 3.3102091300 seconds Mark (itertools.groupby): 6.1338266200 seconds We can see that for smaller dictionary sizes using OrderedDict is fastest but for large dictionary sizes it's slightly better to use Ricardo's answer below.
[ "You should use the OrderectDict class.\nimport collections\nfull_dic = {\n 'aa': 1,\n 'ac': 1,\n 'ab': 1\n}\nod = collections.OrderedDict(sorted(full_dic.items()))\n\nIn this way you will be sure to have sorted dictionary (Original code: StackOverflow).\nAnd then:\nresult = {}\nfor k, vin od.items():\n if value not in result.values():\n result[key] = value\n\nI'm not sure if it will speed up the computation but you can try:\ninverted_dict = {}\nfor k, v in od.items():\n if inverted_dict.get(v) is None:\n inverted_dict[v] = k\n\nres = {v: k for k, v in inverted_dict.items()}\n\n", "t = {val : key for (key, val) in dict(sorted(full_dic.items(), key=lambda x: x[0].lower(), reverse=True)).items()}\ncleaned_dic = {val : key for (key, val) in t.items()}\ndict(sorted(cleaned_dic.items(), key=lambda x: x[0].lower()))\n>>> {'aa': 1, 'ba': 2}\n\n", "Seems like you can do this with a single sort and itertools.groupby. First sort the items by value, then key. Pass this to groupby and take the first item of each group to pass to the dict constructor:\nfrom itertools import groupby\n\nfull_dic = {\n 'aa': 1,\n 'ac': 1,\n 'xx': 2,\n 'ab': 1,\n 'ba': 2,\n}\ngroups = groupby(sorted(full_dic.items(), key=lambda p: (p[1], p[0])), key=lambda x: x[1])\ndict(next(g) for k, g in groups)\n# {'aa': 1, 'ba': 2}\n\n" ]
[ 1, 1, 1 ]
[]
[]
[ "dictionary", "python", "python_3.x" ]
stackoverflow_0074468789_dictionary_python_python_3.x.txt
Q: Python flask not working with url containing "?" I am new to flask and I was trying to make GET request for url containing "?" symbol but it look like my program is just skipping work with it. I am working with flask-sql alchemy, flask and flask-restful. Some simplified look of my program looks like this: fields_list = ['id'] db = SQLAlchemy(app) class User(db.Model): id = db.Column(db.Integer, primary_key=True) class Get(Resource): @staticmethod def get(): users = User.query.all() usr_list = Collection.user_to_json(users) return {"Users": usr_list}, 200 class GetSorted(Resource): @staticmethod def get(field, type): if field not in fields_list or type not in ['acs', 'desc']: return {'Error': 'Wrong field or sort type'}, 400 users = db.session.execute(f"SELECT * FROM USER ORDER BY {field} {type}") usr_list = Collection.user_to_json(users) return {"Users": usr_list}, 200 api.add_resource(GetSorted, '/api/customers?sort=<field>&sort_type=<type>') api.add_resource(Get, '/api/customers') Output with url "http://127.0.0.1:5000/api/customers?sort=id&sort_type=desc" looks like this { "Users": [ { "Id": 1 }, { "Id": 2 }, { "Id": 3 }, ] } But I expect it to look like this { "Users": [ { "Id": 3 }, { "Id": 2 }, { "Id": 1 }, ] } Somehow if I replace "?" with "/" in url everything worked fine, but I want it to work with "?" A: In order to get the information after ?, you have to use request.args. This information is Query Parameters, which are part of the Query String: a section of the URL that contains key-value parameters. If your route is: api.add_resource(GetSorted, '/api/customers?sort=<field>&sort_type=<type>') Your key-values would be: sort=<field> sort_type=<type> And you could get the values of the field and type keys like this: sort = request.args.get('field', 'field_defaul_value') sort_type = request.args.get('type', 'type_defaul_value') More info: 1 2 A: With Flask you can define path variables like you did, but they must be part of the path. For example, defining a path of /api/customers/<id> can be used to get a specific customer by id, defining the function as def get(id):. Query parameters cannot be defined in such a way, and as you mentioned in your comment, you need to somehow "overload" the get function. Here is one way to do it: from flask import Flask, request from flask_restful import Resource, Api app = Flask(__name__) api = Api(app) USERS = [ {"id": 1}, {"id": 3}, {"id": 2}, ] class Get(Resource): @classmethod def get(cls): if request.args: return cls._sorted_get() return {"Users": USERS, "args":request.args}, 200 @classmethod def _sorted_get(cls): field = request.args.get("sort") type = request.args.get("sort_type") if field not in ("id",) or type not in ['acs', 'desc']: return {'Error': 'Wrong field or sort type'}, 400 sorted_users = sorted(USERS, key=lambda x: x[field], reverse=type=="desc") return {"Users": sorted_users}, 200 api.add_resource(Get, '/api/customers') if __name__ == '__main__': app.run(debug=True) Here is Flask's documentation regarding accessing request data, and Flask-Restful's quickstart guide.
Python flask not working with url containing "?"
I am new to flask and I was trying to make GET request for url containing "?" symbol but it look like my program is just skipping work with it. I am working with flask-sql alchemy, flask and flask-restful. Some simplified look of my program looks like this: fields_list = ['id'] db = SQLAlchemy(app) class User(db.Model): id = db.Column(db.Integer, primary_key=True) class Get(Resource): @staticmethod def get(): users = User.query.all() usr_list = Collection.user_to_json(users) return {"Users": usr_list}, 200 class GetSorted(Resource): @staticmethod def get(field, type): if field not in fields_list or type not in ['acs', 'desc']: return {'Error': 'Wrong field or sort type'}, 400 users = db.session.execute(f"SELECT * FROM USER ORDER BY {field} {type}") usr_list = Collection.user_to_json(users) return {"Users": usr_list}, 200 api.add_resource(GetSorted, '/api/customers?sort=<field>&sort_type=<type>') api.add_resource(Get, '/api/customers') Output with url "http://127.0.0.1:5000/api/customers?sort=id&sort_type=desc" looks like this { "Users": [ { "Id": 1 }, { "Id": 2 }, { "Id": 3 }, ] } But I expect it to look like this { "Users": [ { "Id": 3 }, { "Id": 2 }, { "Id": 1 }, ] } Somehow if I replace "?" with "/" in url everything worked fine, but I want it to work with "?"
[ "In order to get the information after ?, you have to use request.args. This information is Query Parameters, which are part of the Query String: a section of the URL that contains key-value parameters.\nIf your route is:\napi.add_resource(GetSorted, '/api/customers?sort=<field>&sort_type=<type>')\n\nYour key-values would be:\nsort=<field>\nsort_type=<type>\n\nAnd you could get the values of the field and type keys like this:\nsort = request.args.get('field', 'field_defaul_value')\nsort_type = request.args.get('type', 'type_defaul_value')\n\nMore info:\n\n1\n2\n\n", "With Flask you can define path variables like you did, but they must be part of the path. For example, defining a path of /api/customers/<id> can be used to get a specific customer by id, defining the function as def get(id):. Query parameters cannot be defined in such a way, and as you mentioned in your comment, you need to somehow \"overload\" the get function. Here is one way to do it:\nfrom flask import Flask, request\nfrom flask_restful import Resource, Api\n\napp = Flask(__name__)\napi = Api(app)\n\nUSERS = [\n {\"id\": 1},\n {\"id\": 3},\n {\"id\": 2},\n]\n\nclass Get(Resource):\n @classmethod\n def get(cls):\n if request.args:\n return cls._sorted_get()\n return {\"Users\": USERS, \"args\":request.args}, 200\n\n @classmethod\n def _sorted_get(cls):\n field = request.args.get(\"sort\")\n type = request.args.get(\"sort_type\")\n if field not in (\"id\",) or type not in ['acs', 'desc']:\n return {'Error': 'Wrong field or sort type'}, 400\n sorted_users = sorted(USERS, key=lambda x: x[field], reverse=type==\"desc\")\n return {\"Users\": sorted_users}, 200\n\napi.add_resource(Get, '/api/customers')\n\n\nif __name__ == '__main__':\n app.run(debug=True)\n\nHere is Flask's documentation regarding accessing request data, and Flask-Restful's quickstart guide.\n" ]
[ 0, 0 ]
[]
[]
[ "flask", "flask_restful", "python" ]
stackoverflow_0074468668_flask_flask_restful_python.txt
Q: How do I check if a user has a role in nextcord? I am using nextcord and I am trying to check if a user has a role when they run a command. I have no idea how to do this so I cannot provide an MRE. I imagine that the code will be something like this: @client.slash_command(name="test") async def test(interaction:nextcord.Interaction): if interaction.user.has_role("Cool"): await interaction.send("You are cool!") else: await interaction.send("You are not cool.") A: To do this, you will need to get the role first, and then check if a specific member is in that role. Here is an example, however I believe this is not the only way to do this: from nextcord.utils import get role = get(ctx.guild.roles, name='search for role by name') if interaction.user in role: do something else: do a different thing Hope this helps A: I found the answer, It is sort of like this answer but backwards: @client.slash_command(name="check_role", description="Check if a user has a role", guild_ids=GUILD_IDS) async def check_role(interaction:nextcord.Interaction, user:nextcord.Member): if nextcord.utils.get(interaction.guild.roles, name="Role Name") in user.roles: await interaction.send("True!") else: await interaction.send("False.")
How do I check if a user has a role in nextcord?
I am using nextcord and I am trying to check if a user has a role when they run a command. I have no idea how to do this so I cannot provide an MRE. I imagine that the code will be something like this: @client.slash_command(name="test") async def test(interaction:nextcord.Interaction): if interaction.user.has_role("Cool"): await interaction.send("You are cool!") else: await interaction.send("You are not cool.")
[ "To do this, you will need to get the role first, and then check if a specific member is in that role. Here is an example, however I believe this is not the only way to do this:\nfrom nextcord.utils import get\n\nrole = get(ctx.guild.roles, name='search for role by name')\n\nif interaction.user in role:\n do something\nelse:\n do a different thing\n\nHope this helps\n", "I found the answer, It is sort of like this answer but backwards:\n@client.slash_command(name=\"check_role\", description=\"Check if a user has a role\", guild_ids=GUILD_IDS)\nasync def check_role(interaction:nextcord.Interaction, user:nextcord.Member):\n if nextcord.utils.get(interaction.guild.roles, name=\"Role Name\") in user.roles:\n await interaction.send(\"True!\")\n else:\n await interaction.send(\"False.\")\n\n" ]
[ 1, 0 ]
[]
[]
[ "discord", "nextcord", "python" ]
stackoverflow_0074439793_discord_nextcord_python.txt
Q: Python Unittest: No tests discovered in Visual Studio Code I'm trying to make the self-running feature of Visual Studio Code unit tests work. I recently made a change in the directory structure of my Python project that was previously like this: myproje\ domain\ __init__.py repositories\ tests\ __init__.py guardstest.py utils\ __init__.py guards.py web\ And my setup for unittest was like this: "python.unitTest.unittestArgs": [ "-v", "-s", "tests", "-p", "*test*.py" ] After the changes, the structure of the project was as follows: myprojet\ app\ controllers\ __init__.py models\ __init__.py entities.py enums.py tests\ res\ image1.png image2.png __init__.py guardstest.py utils\ __init__.py guards.py views\ static\ templnates\ __init__.py uml\ After that the extension does not discover my tests anymore. I've tried to change the '-s' parameter to "./app/tests", ".tests", "./tests", "app/tests", "/app/tests", "app.tests", unsuccessfully . A: The problem was that I was using relative imports in the test module (from ..utils import guards). I just changed it to absolute import (from app.utils import guards) and it all worked again. A: There are 2 reasons that this might not work: There is an error in the tests The python Testing plugin won't find your tests if there is an error in the test script. To check for a potential error, click the Show Test Output, then run the tests using Run All Tests (both buttons are located in the top left, just above where the tests should appear). If there is an error, it will appear in the OUTPUT tab. The tests aren't properly configured Check your .vscode/settings.json, and take the python.testing.unittestArgs list. You can debug the discovery of tests in the command line by adding args to the python3 -m unittest discover command in the command line. So with this config: { "python.testing.unittestArgs": [ "-v", "-s", ".", "-p", "*test*.py" ] } You would launch the command: python3 -m unittest discover -v -s . -p "*test*.py" You can play with the args unitl you discover your tests, and modify the args in .vscode/settings.json accordingly . Here are the docs for unittest Note A common reason for this is that you're trying to run tests with dependencies. If this is the case, you can select your interpreter by running ctrl + shift + p and searching Python: Select Interpreter, then selecting the correct interpreter. A: It is because some of the imports in the test is not discoverable. when running python -m unittest -h, the last line of the output is For test discovery all test modules must be importable from the top level directory of the project. it is likely that VSCode is running the command without the right PYTHONPATH and other environment variables. I created __init__.py and put the following code into it. import sys import os import unittest # set module path for testing sys.path.insert(0, "path_in_PYTHONPATH") # repead to include all paths class TestBase(unittest.TestCase): def __init__(self, methodName: str) -> None: super().__init__(methodName=methodName) then in the test file, instead of extending unittest.TestCase, do from test import TestBase class Test_A(TestBase): ... A: In my case, I had same identical issue on Visual Studio Code, but my resolution was slightly different. I looked at .vscode/settings.json file. I noticed unittestEnabled and pytestEnabled, but unittestEnabled=true and pytestEnabled=false. Didn't know what the difference was, but I used pytest in my CLI test. So, I turned pytestEnabled = true. Then I went to Testing icon on the left. I clicked a button to discover test again and had to choose couple of options like setting test folder. Now it discovered all my tests and working as expected. Hope this helps someone.
Python Unittest: No tests discovered in Visual Studio Code
I'm trying to make the self-running feature of Visual Studio Code unit tests work. I recently made a change in the directory structure of my Python project that was previously like this: myproje\ domain\ __init__.py repositories\ tests\ __init__.py guardstest.py utils\ __init__.py guards.py web\ And my setup for unittest was like this: "python.unitTest.unittestArgs": [ "-v", "-s", "tests", "-p", "*test*.py" ] After the changes, the structure of the project was as follows: myprojet\ app\ controllers\ __init__.py models\ __init__.py entities.py enums.py tests\ res\ image1.png image2.png __init__.py guardstest.py utils\ __init__.py guards.py views\ static\ templnates\ __init__.py uml\ After that the extension does not discover my tests anymore. I've tried to change the '-s' parameter to "./app/tests", ".tests", "./tests", "app/tests", "/app/tests", "app.tests", unsuccessfully .
[ "The problem was that I was using relative imports in the test module (from ..utils import guards). \nI just changed it to absolute import (from app.utils import guards) and it all worked again.\n", "There are 2 reasons that this might not work:\nThere is an error in the tests\nThe python Testing plugin won't find your tests if there is an error in the test script.\nTo check for a potential error, click the Show Test Output, then run the tests using Run All Tests (both buttons are located in the top left, just above where the tests should appear).\nIf there is an error, it will appear in the OUTPUT tab.\nThe tests aren't properly configured\nCheck your .vscode/settings.json, and take the python.testing.unittestArgs list.\nYou can debug the discovery of tests in the command line by adding args to the python3 -m unittest discover command in the command line.\nSo with this config:\n{\n \"python.testing.unittestArgs\": [\n \"-v\",\n \"-s\",\n \".\",\n \"-p\",\n \"*test*.py\"\n ]\n}\n\nYou would launch the command:\npython3 -m unittest discover -v -s . -p \"*test*.py\"\n\nYou can play with the args unitl you discover your tests, and modify the args in .vscode/settings.json accordingly .\nHere are the docs for unittest\nNote\nA common reason for this is that you're trying to run tests with dependencies. If this is the case, you can select your interpreter by running ctrl + shift + p and searching Python: Select Interpreter, then selecting the correct interpreter.\n", "It is because some of the imports in the test is not discoverable. when running python -m unittest -h, the last line of the output is\n\nFor test discovery all test modules must be importable from the top level directory of the project.\n\nit is likely that VSCode is running the command without the right PYTHONPATH and other environment variables.\nI created __init__.py and put the following code into it.\nimport sys\nimport os\nimport unittest\n\n# set module path for testing\nsys.path.insert(0, \"path_in_PYTHONPATH\")\n# repead to include all paths\n\nclass TestBase(unittest.TestCase):\n def __init__(self, methodName: str) -> None:\n super().__init__(methodName=methodName)\n\nthen in the test file, instead of extending unittest.TestCase, do\nfrom test import TestBase \n\nclass Test_A(TestBase):\n ...\n\n\n", "In my case, I had same identical issue on Visual Studio Code, but my resolution was slightly different.\nI looked at .vscode/settings.json file.\nI noticed unittestEnabled and pytestEnabled, but unittestEnabled=true and pytestEnabled=false. Didn't know what the difference was, but I used pytest in my CLI test. So, I turned pytestEnabled = true. Then I went to Testing icon on the left. I clicked a button to discover test again and had to choose couple of options like setting test folder. Now it discovered all my tests and working as expected. Hope this helps someone.\n" ]
[ 8, 5, 4, 0 ]
[]
[]
[ "python", "visual_studio_code", "vscode_settings" ]
stackoverflow_0051198860_python_visual_studio_code_vscode_settings.txt
Q: A code to decide whether, or not, a given number of different coins possible to form an exact given amount of dollars (Using loops or recursion), I'm trying to write a python function where the user enters an amount of dollars (say:1.25) and number of coins (say:6), then the function decides whether, or not, it is possible to form the exact amount of dollars using the exact given number of coins, assuming that the coins are quarters (0.25), dimes (0.10), nickels (0.05) and pennies (0.010). the function can use any one of the coins multiple times, but the total number of coins used must be equal to the exact number passed to the function. e.g: if we pass 1.00 dollar and number of 6 coins: should return True because we can use (3 quarters + 2 dimes + 1 nickel) 1.25 dollars using 5 coins: True >> (5 quarters) 1.25 dollars using 8 coins: True >> (3 quarters + 5 dimes) 1.25 dollars using 7 coins: False. I have the idea of the solution in my mind but couldn't transform it to a python code: the function has to start iterating through the group of coins we have (starting from the highest coin: 0.25) and multiply it by the number passed. While the result is higher than the given amount of dollars, the number of coins passed should be decremented by 1. When we get to a point where the result of (number * coin) is less than the given amount of dollars, the amount should be (the given amount - (number * coin)) and the number of coins should be (the given number - the number used so far). I have been trying for few days to make a python code out of this. This is what I've done so far. ` def total(dollars, num): dollars = float(dollars) sofar = 0 num = round(num, 2) coins = [0.25, 0.10, 0.05, 0.01] possible = False if not possible: for x in range(len(coins)): if num * coins[x] == dollars: possible = True elif num * coins[x] > dollars: num -= 1 sofar += 1 else: dollars -= num * coins[x] num = sofar return possible ` When I pass (1.25, 5) to the function >> True (1.25, 6) >> False (1.25, 7) >> False (1.25, 8) >> False (which is a wrong returned value) Thanks in advance A: Here's a working solution without recursion, but does use list comprehension. Not sure how large your coin set is expected to grow to and since this calculates the sum for all combinations it won't scale nicely. from itertools import combinations_with_replacement list_of_coins = [0.1, 0.05, 0.25] # dimes, nickles, quarters number_of_coins = 6 # as your example gives sum_value = 1.0 # as your example gives def will_it_sum(list_of_coins, number_of_coins, sum_value): list_of_combos = list(combinations_with_replacement(iter(list_of_coins), number_of_coins)) summed_list = [sum(item) for item in list_of_combos] summed_to_desired_value = [i for i in summed_list if i == sum_value] if len(summed_to_desired_value) > 0: return True else: return False number_of_coins = 7 sum_value = 1.25 will_it_sum(list_of_coins, number_of_coins, sum_value) A: Below the solution that uses what python has already built-in. This will probably not go well as an exercise solution. from itertools import combinations_with_replacement def total(dollars, num): cents = int(dollars*100) coins = [25, 10, 5, 1] return any(sum(comb) == cents for comb in combinations_with_replacement(coins, num)) Alternatively, if the combination should be returned: from itertools import combinations_with_replacement def total(dollars, num): cents = int(dollars*100) coins = [25, 10, 5, 1] try: return next(filter(lambda comb: sum(comb) == cents, combinations_with_replacement(coins, num))) except StopIteration: return None A: This solves the problem without using combinations. This is the algorithm I described above. You start out using as many of the largest coin as you can, then you recursively see if you can fill in with the smaller coins. As soon as you get a match, you win. Note that I use a helper so I can convert the dollars to integer pennies, to avoid floating point issues. coins = [25, 10, 5, 1] def totalhelp(cents, num, which=0): if which >= len(coins): return False cts = coins[which] # What's the most of this coin we can use? maxn = min(cents // cts, num) for i in range(maxn,-1,-1): amt = i * cts if amt == cents: return True if totalhelp(cents-amt, num-i, which+1): return True return False def total(dollars, num): cents = int(dollars*100+0.5) return totalhelp( cents, num ) print( total(1.25, 4 ) ) print( total(1.25, 5 ) ) print( total(1.25, 6 ) ) print( total(1.25, 7 ) ) print( total(1.25, 8 ) ) Output: False True True True True
A code to decide whether, or not, a given number of different coins possible to form an exact given amount of dollars
(Using loops or recursion), I'm trying to write a python function where the user enters an amount of dollars (say:1.25) and number of coins (say:6), then the function decides whether, or not, it is possible to form the exact amount of dollars using the exact given number of coins, assuming that the coins are quarters (0.25), dimes (0.10), nickels (0.05) and pennies (0.010). the function can use any one of the coins multiple times, but the total number of coins used must be equal to the exact number passed to the function. e.g: if we pass 1.00 dollar and number of 6 coins: should return True because we can use (3 quarters + 2 dimes + 1 nickel) 1.25 dollars using 5 coins: True >> (5 quarters) 1.25 dollars using 8 coins: True >> (3 quarters + 5 dimes) 1.25 dollars using 7 coins: False. I have the idea of the solution in my mind but couldn't transform it to a python code: the function has to start iterating through the group of coins we have (starting from the highest coin: 0.25) and multiply it by the number passed. While the result is higher than the given amount of dollars, the number of coins passed should be decremented by 1. When we get to a point where the result of (number * coin) is less than the given amount of dollars, the amount should be (the given amount - (number * coin)) and the number of coins should be (the given number - the number used so far). I have been trying for few days to make a python code out of this. This is what I've done so far. ` def total(dollars, num): dollars = float(dollars) sofar = 0 num = round(num, 2) coins = [0.25, 0.10, 0.05, 0.01] possible = False if not possible: for x in range(len(coins)): if num * coins[x] == dollars: possible = True elif num * coins[x] > dollars: num -= 1 sofar += 1 else: dollars -= num * coins[x] num = sofar return possible ` When I pass (1.25, 5) to the function >> True (1.25, 6) >> False (1.25, 7) >> False (1.25, 8) >> False (which is a wrong returned value) Thanks in advance
[ "Here's a working solution without recursion, but does use list comprehension. Not sure how large your coin set is expected to grow to and since this calculates the sum for all combinations it won't scale nicely.\nfrom itertools import combinations_with_replacement\n\nlist_of_coins = [0.1, 0.05, 0.25] # dimes, nickles, quarters\nnumber_of_coins = 6 # as your example gives\nsum_value = 1.0 # as your example gives\n\ndef will_it_sum(list_of_coins, number_of_coins, sum_value):\n list_of_combos = list(combinations_with_replacement(iter(list_of_coins), number_of_coins))\n summed_list = [sum(item) for item in list_of_combos]\n summed_to_desired_value = [i for i in summed_list if i == sum_value]\n if len(summed_to_desired_value) > 0:\n return True\n else:\n return False\n\nnumber_of_coins = 7\nsum_value = 1.25\n\nwill_it_sum(list_of_coins, number_of_coins, sum_value)\n\n", "Below the solution that uses what python has already built-in. This will probably not go well as an exercise solution.\nfrom itertools import combinations_with_replacement\n\ndef total(dollars, num):\n cents = int(dollars*100)\n coins = [25, 10, 5, 1]\n return any(sum(comb) == cents for comb in combinations_with_replacement(coins, num))\n\nAlternatively, if the combination should be returned:\nfrom itertools import combinations_with_replacement\n\ndef total(dollars, num):\n cents = int(dollars*100)\n coins = [25, 10, 5, 1]\n try:\n return next(filter(lambda comb: sum(comb) == cents, combinations_with_replacement(coins, num)))\n except StopIteration:\n return None\n\n", "This solves the problem without using combinations. This is the algorithm I described above. You start out using as many of the largest coin as you can, then you recursively see if you can fill in with the smaller coins. As soon as you get a match, you win.\nNote that I use a helper so I can convert the dollars to integer pennies, to avoid floating point issues.\ncoins = [25, 10, 5, 1]\n\ndef totalhelp(cents, num, which=0):\n if which >= len(coins):\n return False\n cts = coins[which]\n # What's the most of this coin we can use?\n maxn = min(cents // cts, num)\n for i in range(maxn,-1,-1):\n amt = i * cts\n if amt == cents:\n return True\n if totalhelp(cents-amt, num-i, which+1):\n return True\n return False\n\ndef total(dollars, num):\n cents = int(dollars*100+0.5)\n return totalhelp( cents, num )\n\nprint( total(1.25, 4 ) )\nprint( total(1.25, 5 ) )\nprint( total(1.25, 6 ) )\nprint( total(1.25, 7 ) )\nprint( total(1.25, 8 ) )\n\nOutput:\nFalse\nTrue\nTrue\nTrue\nTrue\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "currency", "python", "recursion" ]
stackoverflow_0074465732_currency_python_recursion.txt
Q: for loops within dictionaries vs dictionaries within for loops? Hi I have a question about iterating through a list and adding items and their frequency within the list to a dictionary. i = ['apple','pear','red','apple','red','red','pear','pear','pear'] d = {x:i.count(x) for x in i} print (d) outputs {'pear': 4, 'apple': 2, 'red': 3} However i = ['apple','pear','red','apple','red','red','pear', 'pear', 'pear'] d = {} for x in i: d={x:i.count(x)} print(d) outputs {'pear': 4} I need to iterate through the list while adding each iteration within the dictionary to a new list. However I can't understand why the two different codes are giving different results. It's encouraging to seee that the count function works on the second one. But I am confused as to where apple and red dissapeared to. Sorry for bad wording etcetera been working on this hours and is driving me crazy. Thanks so much for taking time to help I am confused as to why the two results are different A: The problem is that you must add key:value pairs in the second loop instead of overwriting d with every loop. i = ['apple','pear','red','apple','red','red','pear','pear','pear'] d = {} for x in i: d[x] = i.count(x) print(d) will output the same as your first function. {'pear': 4, 'apple': 2, 'red': 3} Basically in your second example when you do d={x:i.count(x)} you have a one element dictionary and for every loop you overwrite that. Then it only shows pear: 4 because pear is the last element in your i list. A: i = ['apple','pear','red','apple','red','red','pear', 'pear', 'pear'] d = {} log = [] for x in i: log.append({x:i.count(x)}) log is [{'apple': 2}, {'pear': 4}, {'red': 3}, {'apple': 2}, {'red': 3}, {'red': 3}, {'pear': 4}, {'pear': 4}, {'pear': 4}] A: I just ran your first bit of code and it gave me {'apple': 2, 'pear': 4, 'red': 3} which is correct but differs from what you've stated in the question. To address your second bit of code, you're performing assignment operation on each iteration of the loop, so the value of d is being re-written every time you access a new item of i Personally I would recommend using Counter for this problem: >>> from collections import Counter >>> z = ['blue', 'red', 'blue', 'yellow', 'blue', 'red'] >>> Counter(z) Counter({'blue': 3, 'red': 2, 'yellow': 1})
for loops within dictionaries vs dictionaries within for loops?
Hi I have a question about iterating through a list and adding items and their frequency within the list to a dictionary. i = ['apple','pear','red','apple','red','red','pear','pear','pear'] d = {x:i.count(x) for x in i} print (d) outputs {'pear': 4, 'apple': 2, 'red': 3} However i = ['apple','pear','red','apple','red','red','pear', 'pear', 'pear'] d = {} for x in i: d={x:i.count(x)} print(d) outputs {'pear': 4} I need to iterate through the list while adding each iteration within the dictionary to a new list. However I can't understand why the two different codes are giving different results. It's encouraging to seee that the count function works on the second one. But I am confused as to where apple and red dissapeared to. Sorry for bad wording etcetera been working on this hours and is driving me crazy. Thanks so much for taking time to help I am confused as to why the two results are different
[ "The problem is that you must add key:value pairs in the second loop instead of overwriting d with every loop.\ni = ['apple','pear','red','apple','red','red','pear','pear','pear']\nd = {}\n\nfor x in i:\n d[x] = i.count(x)\n\nprint(d)\n\nwill output the same as your first function.\n{'pear': 4, 'apple': 2, 'red': 3}\n\nBasically in your second example when you do d={x:i.count(x)} you have a one element dictionary and for every loop you overwrite that. Then it only shows pear: 4 because pear is the last element in your i list.\n", "i = ['apple','pear','red','apple','red','red','pear', 'pear', 'pear']\nd = {} \nlog = []\nfor x in i: \n log.append({x:i.count(x)})\n\nlog is\n[{'apple': 2},\n {'pear': 4},\n {'red': 3},\n {'apple': 2},\n {'red': 3},\n {'red': 3},\n {'pear': 4},\n {'pear': 4},\n {'pear': 4}]\n\n", "I just ran your first bit of code and it gave me\n{'apple': 2, 'pear': 4, 'red': 3}\n\nwhich is correct but differs from what you've stated in the question.\nTo address your second bit of code, you're performing assignment operation on each iteration of the loop, so the value of d is being re-written every time you access a new item of i\nPersonally I would recommend using Counter for this problem:\n>>> from collections import Counter\n>>> z = ['blue', 'red', 'blue', 'yellow', 'blue', 'red']\n>>> Counter(z)\nCounter({'blue': 3, 'red': 2, 'yellow': 1})\n\n" ]
[ 2, 0, 0 ]
[ "varLs = ['apple','pear','red','apple','red','red','pear','pear','pear']\n\ndef frequency(varLs): \n counters = {}\n\n for item in varLs:\n if item not in counters:\n counters[item] = 1\n else:\n counters[item]+= 1\n return counters\n\nprint(frequency(varLs))\n\nreturns {'apple': 2, 'pear': 4, 'red': 3}\n" ]
[ -1 ]
[ "dictionary", "for_loop", "python", "python_3.x" ]
stackoverflow_0074468784_dictionary_for_loop_python_python_3.x.txt
Q: This little script I wrote to monitor my plex sever freezes whenever there's an update I have this python script running on a raspberry pi thats plugged up to a monitor so I can passively monitor my plex server. It displays the current streams, how many of them are transcode streams, and if there's a plex update available. Whenever there's an update available, the whole thing gets stuck and no longer updates with the live info. # Import the required library from tkinter import * from plexapi.server import PlexServer baseurl = 'SERVERIP' token = 'PLEXTOKEN' plex = PlexServer(baseurl, token) import time root=Tk() # frame = Frame(master=root, width=1920, height=200) # frame.pack() root.overrideredirect(True) root.wm_attributes("-topmost", True) root.geometry("1920x200") textv=Text(root, width=200, height=15, fg='red', bg='black', font=('helvetica',30), highlightthickness=0, borderwidth=0) textv.place(x=0, y=0) trans_stream=Text(root, width=20, height=15, fg='red', bg='black', font=('helvetica',30), highlightthickness=0, borderwidth=0) trans_stream.place(x=1500, y=0) update_avail=Text(root, width=20, height=10, fg='red', bg='black', font=('helvetica',12), highlightthickness=0, borderwidth=0) update_avail.place(x=1500, y=120) root.update() def update(): textv['text']=textv.delete("1.0", "end") active = [] for session in plex.sessions(): active.append("{} | {}".format(session.title, str(session.usernames)[2:-2])) for n in active: textv['text'] = textv.insert(END, n + '\n') trans_stream['text']=trans_stream.delete("1.0", "end") trans_count = 0 for trans_ses in plex.transcodeSessions(): trans_count += 1 trans_stream['text'] = trans_stream.insert(END, 'Transcode Streams: ' + str(trans_count)) update_avail['text']=update_avail.delete("1.0", "end") if plex.isLatest() != True: update_avail['text'] = trans_stream.insert(END, '\nServer not up to date') root.after(1000, update) else: update_avail['text'] = trans_stream.insert(END, '\nServer up to date') root.after(1000, update) update() root.mainloop() I tried commenting out the root.after in the if statement that checks if theres an update availble, and adding this to the end instead. while True: time.sleep(1) update() But this produced the same results. A: I'm an idiot! I copied over from a previous section of code the part that changes the text when an update is available, and forgot to change which section of text it's changing. Here's what I changed from: update_avail['text']=update_avail.delete("1.0", "end") if plex.isLatest() != True: update_avail['text'] = trans_stream.insert(END, '\nServer not up to date') root.after(1000, update) else: update_avail['text'] = trans_stream.insert(END, '\nServer up to date') root.after(1000, update) to: update_avail['text']=update_avail.delete("1.0", "end") if plex.isLatest() != True: update_avail['text'] = update_avail.insert(END, '\nServer not up to date') root.after(1000, update) else: update_avail['text'] = update_avail.insert(END, '\nServer up to date') root.after(1000, update)
This little script I wrote to monitor my plex sever freezes whenever there's an update
I have this python script running on a raspberry pi thats plugged up to a monitor so I can passively monitor my plex server. It displays the current streams, how many of them are transcode streams, and if there's a plex update available. Whenever there's an update available, the whole thing gets stuck and no longer updates with the live info. # Import the required library from tkinter import * from plexapi.server import PlexServer baseurl = 'SERVERIP' token = 'PLEXTOKEN' plex = PlexServer(baseurl, token) import time root=Tk() # frame = Frame(master=root, width=1920, height=200) # frame.pack() root.overrideredirect(True) root.wm_attributes("-topmost", True) root.geometry("1920x200") textv=Text(root, width=200, height=15, fg='red', bg='black', font=('helvetica',30), highlightthickness=0, borderwidth=0) textv.place(x=0, y=0) trans_stream=Text(root, width=20, height=15, fg='red', bg='black', font=('helvetica',30), highlightthickness=0, borderwidth=0) trans_stream.place(x=1500, y=0) update_avail=Text(root, width=20, height=10, fg='red', bg='black', font=('helvetica',12), highlightthickness=0, borderwidth=0) update_avail.place(x=1500, y=120) root.update() def update(): textv['text']=textv.delete("1.0", "end") active = [] for session in plex.sessions(): active.append("{} | {}".format(session.title, str(session.usernames)[2:-2])) for n in active: textv['text'] = textv.insert(END, n + '\n') trans_stream['text']=trans_stream.delete("1.0", "end") trans_count = 0 for trans_ses in plex.transcodeSessions(): trans_count += 1 trans_stream['text'] = trans_stream.insert(END, 'Transcode Streams: ' + str(trans_count)) update_avail['text']=update_avail.delete("1.0", "end") if plex.isLatest() != True: update_avail['text'] = trans_stream.insert(END, '\nServer not up to date') root.after(1000, update) else: update_avail['text'] = trans_stream.insert(END, '\nServer up to date') root.after(1000, update) update() root.mainloop() I tried commenting out the root.after in the if statement that checks if theres an update availble, and adding this to the end instead. while True: time.sleep(1) update() But this produced the same results.
[ "I'm an idiot!\nI copied over from a previous section of code the part that changes the text when an update is available, and forgot to change which section of text it's changing.\nHere's what I changed\nfrom:\n update_avail['text']=update_avail.delete(\"1.0\", \"end\")\n if plex.isLatest() != True:\n update_avail['text'] = trans_stream.insert(END, '\\nServer not up to date')\n root.after(1000, update)\n else:\n update_avail['text'] = trans_stream.insert(END, '\\nServer up to date')\n root.after(1000, update)\n\nto:\n update_avail['text']=update_avail.delete(\"1.0\", \"end\")\n if plex.isLatest() != True:\n update_avail['text'] = update_avail.insert(END, '\\nServer not up to date')\n root.after(1000, update)\n else:\n update_avail['text'] = update_avail.insert(END, '\\nServer up to date')\n root.after(1000, update)\n\n" ]
[ 1 ]
[]
[]
[ "plex", "python", "tkinter" ]
stackoverflow_0074469003_plex_python_tkinter.txt
Q: Python Socket Programming: sending and receiving int data between server and client? I'm working on some code which establishes a client and server using socket for python. I want to take user input in my client, send that data over to my server, and then have the server send that info back into my client and store it as an int Here is my server code: import socket s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind((socket.gethostname(), 8890)) s.listen(5) while True: s.listen() conn, addr = s.accept() print(f"Connection established from address {addr}") choice = s.recv(2048) conn.send(str(choice).encode('utf8')) conn.close() Here is my code for the client side: import socket choice = input("Select: ") s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((socket.gethostname(), 8890)) s.send(str(choice).encode('utf8')) msg = s.recv(2048) strings = msg.decode('utf8') num = int(strings) Currently, this code gives me this error message: Traceback (most recent call last): File "/Users/user/PycharmProjects/test/client.py", line 12, in <module> msg = s.recv(2048) ConnectionResetError: [Errno 54] Connection reset by peer Any input would be appreciated. A: It was pretty close to working. On the server side choice = s.recv(2048) should be choice = conn.recv(2048). Also s.listen() and conn, addr = s.accept() should be outside the loop. #the server import socket s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind((socket.gethostname(), 8890)) s.listen() conn, addr = s.accept() with conn: print(f"Connection established from address {addr}") while True: choice = conn.recv(2048) if not choice: break conn.send(str(choice).encode('utf8')) #the client import socket choice = input("Select: ") s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((socket.gethostname(), 8890)) s.send(str(choice).encode('utf8')) msg = s.recv(2048) strings = msg.decode('utf8') # not sure if best solution but it's a quick fix cleaned_num = int(strings.strip("b\'")) print(f"Number is: {cleaned_num} ") Sources
Python Socket Programming: sending and receiving int data between server and client?
I'm working on some code which establishes a client and server using socket for python. I want to take user input in my client, send that data over to my server, and then have the server send that info back into my client and store it as an int Here is my server code: import socket s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind((socket.gethostname(), 8890)) s.listen(5) while True: s.listen() conn, addr = s.accept() print(f"Connection established from address {addr}") choice = s.recv(2048) conn.send(str(choice).encode('utf8')) conn.close() Here is my code for the client side: import socket choice = input("Select: ") s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((socket.gethostname(), 8890)) s.send(str(choice).encode('utf8')) msg = s.recv(2048) strings = msg.decode('utf8') num = int(strings) Currently, this code gives me this error message: Traceback (most recent call last): File "/Users/user/PycharmProjects/test/client.py", line 12, in <module> msg = s.recv(2048) ConnectionResetError: [Errno 54] Connection reset by peer Any input would be appreciated.
[ "It was pretty close to working. On the server side choice = s.recv(2048) should be choice = conn.recv(2048). Also s.listen() and conn, addr = s.accept() should be outside the loop.\n#the server\nimport socket\n\ns = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\ns.bind((socket.gethostname(), 8890))\n\ns.listen()\nconn, addr = s.accept()\nwith conn:\n \n print(f\"Connection established from address {addr}\")\n while True:\n choice = conn.recv(2048)\n if not choice:\n break\n conn.send(str(choice).encode('utf8'))\n\n#the client\nimport socket\n\nchoice = input(\"Select: \")\n\ns = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\ns.connect((socket.gethostname(), 8890))\n\ns.send(str(choice).encode('utf8'))\n\nmsg = s.recv(2048)\nstrings = msg.decode('utf8')\n# not sure if best solution but it's a quick fix\ncleaned_num = int(strings.strip(\"b\\'\"))\nprint(f\"Number is: {cleaned_num} \")\n\nSources\n" ]
[ 0 ]
[]
[]
[ "python", "sockets" ]
stackoverflow_0074468725_python_sockets.txt
Q: Scipy.integrate.odeint TypeError: Float object is not callable I am new to Python, and am trying to solve this differential equation using scipy.interpolate.odeint. However, I keep getting a TypeError. I have looked, and cannot find how to fix this issue to get the odeint module to work. Below is my code: import pandas as pd import matplotlib.pyplot as plt import numpy as np from scipy.interpolate import CubicSpline from scipy.integrate import odeint from math import * # Define parameters alpha = 52.875 beta = 13.345 gamma = -1.44 delta = 2.29 # Define model def model(h,t,om): halflife = alpha - (beta(-delta+gamma*om)) k = log(2)/halflife dherb_dt = -kh return dherb_dt # Initial condition h0 = 4.271 # mg/kg, assuming a 2.67 pt/acre application of Dual II Magnum # Time, in days, to interpolate over t = np.linspace(0, 20) # Solve ODE om = 2.5 y1 = odeint(model, h0, t, args=(om,)) # Plot plt.plot(t,y1) plt.xlabel("Months") plt.ylabel("Dose") plt.show() However, the following error occurs: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Input In [17], in <cell line: 22>() 20 # Solve ODE 21 om = 2.5 ---> 22 y1 = odeint(model, h0, t, args=(om,)) 24 # Plot 25 plt.plot(t,y1) File ~\anaconda3\envs\agron893\lib\site-packages\scipy\integrate\_odepack_py.py:241, in odeint(func, y0, t, args, Dfun, col_deriv, full_output, ml, mu, rtol, atol, tcrit, h0, hmax, hmin, ixpr, mxstep, mxhnil, mxordn, mxords, printmessg, tfirst) 239 t = copy(t) 240 y0 = copy(y0) --> 241 output = _odepack.odeint(func, y0, t, args, Dfun, col_deriv, ml, mu, 242 full_output, rtol, atol, tcrit, h0, hmax, hmin, 243 ixpr, mxstep, mxhnil, mxordn, mxords, 244 int(bool(tfirst))) 245 if output[-1] < 0: 246 warning_msg = _msgs[output[-1]] + " Run with full_output = 1 to get quantitative information." Input In [17], in model(h, t, om) 8 def model(h,t,om): ----> 9 halflife = alpha - (beta(-delta+gamma*om)) 10 k = log(2)/halflife 11 dherb_dt = -kh TypeError: 'float' object is not callable How can I fix this issue so that I can solve the equation? A: halflife = alpha - (beta(-delta+gamma*om)) You're trying to use typical math notation a(b) to multiply a * b, but that's not how Python syntax works. You have to explicitly use the symbol * to perform multiplication. To Python, beta(-delta+gamma*om) looks like a function call. Use this instead: halflife = alpha - (beta * (-delta+gamma*om))
Scipy.integrate.odeint TypeError: Float object is not callable
I am new to Python, and am trying to solve this differential equation using scipy.interpolate.odeint. However, I keep getting a TypeError. I have looked, and cannot find how to fix this issue to get the odeint module to work. Below is my code: import pandas as pd import matplotlib.pyplot as plt import numpy as np from scipy.interpolate import CubicSpline from scipy.integrate import odeint from math import * # Define parameters alpha = 52.875 beta = 13.345 gamma = -1.44 delta = 2.29 # Define model def model(h,t,om): halflife = alpha - (beta(-delta+gamma*om)) k = log(2)/halflife dherb_dt = -kh return dherb_dt # Initial condition h0 = 4.271 # mg/kg, assuming a 2.67 pt/acre application of Dual II Magnum # Time, in days, to interpolate over t = np.linspace(0, 20) # Solve ODE om = 2.5 y1 = odeint(model, h0, t, args=(om,)) # Plot plt.plot(t,y1) plt.xlabel("Months") plt.ylabel("Dose") plt.show() However, the following error occurs: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Input In [17], in <cell line: 22>() 20 # Solve ODE 21 om = 2.5 ---> 22 y1 = odeint(model, h0, t, args=(om,)) 24 # Plot 25 plt.plot(t,y1) File ~\anaconda3\envs\agron893\lib\site-packages\scipy\integrate\_odepack_py.py:241, in odeint(func, y0, t, args, Dfun, col_deriv, full_output, ml, mu, rtol, atol, tcrit, h0, hmax, hmin, ixpr, mxstep, mxhnil, mxordn, mxords, printmessg, tfirst) 239 t = copy(t) 240 y0 = copy(y0) --> 241 output = _odepack.odeint(func, y0, t, args, Dfun, col_deriv, ml, mu, 242 full_output, rtol, atol, tcrit, h0, hmax, hmin, 243 ixpr, mxstep, mxhnil, mxordn, mxords, 244 int(bool(tfirst))) 245 if output[-1] < 0: 246 warning_msg = _msgs[output[-1]] + " Run with full_output = 1 to get quantitative information." Input In [17], in model(h, t, om) 8 def model(h,t,om): ----> 9 halflife = alpha - (beta(-delta+gamma*om)) 10 k = log(2)/halflife 11 dherb_dt = -kh TypeError: 'float' object is not callable How can I fix this issue so that I can solve the equation?
[ "halflife = alpha - (beta(-delta+gamma*om))\n\nYou're trying to use typical math notation a(b) to multiply a * b, but that's not how Python syntax works. You have to explicitly use the symbol * to perform multiplication.\nTo Python, beta(-delta+gamma*om) looks like a function call.\nUse this instead:\nhalflife = alpha - (beta * (-delta+gamma*om))\n\n" ]
[ 3 ]
[]
[]
[ "python", "scipy" ]
stackoverflow_0074469055_python_scipy.txt
Q: Is it possible to be better than O(N+M) for Codility lesson MaxCounters using python? This is the code I am using for the Codility lesson: MaxCounters def solution(N, A): counters = [0] * N max_c = 0 for el in A: if el >= 1 and el <= N: counters[el-1] += 1 max_c = max(counters[el-1], max_c) elif el > N: counters = [max_c] * N return counters Every test passes but the last one ("all max_counter operations") times out after 7 seconds so the result is just 88% with time complexity of O(N+M). Is it possible to improve the time complexity of the algorithm and get 100% test result using Python? The MaxCounters task You are given N counters, initially set to 0, and you have two possible operations on them: increase(X) − counter X is increased by 1, max counter − all counters are set to the maximum value of any counter. A non-empty array A of M integers is given. This array represents consecutive operations: if A[K] = X, such that 1 ≤ X ≤ N, then operation K is increase(X), if A[K] = N + 1 then operation K is max counter. Write an efficient algorithm for the following assumptions: N and M are integers within the range [1..100,000]; each element of array A is an integer within the range [1..N + 1]. A: EDIT: following up on the discussion in the comments to this answer, tracking the last operation to avoid unnecessarily resetting the array in successive max_counter operations was the key to achieving the goal. Here's what the different solutions (one keeping track of the max and the second calculating the max on demand) would look like implementing that change: def solution(N, A): counters = [0] * N max_c = 0 last_was_max = False for el in A: if el <= N: counters[el - 1] += 1 max_c = max(counters[el - 1], max_c) last_was_max = False elif not last_was_max: counters = [max_c] * N last_was_max = True return counters def solution2_1(N, A): counters = [0] * N last_was_max = False for el in A: if el <= N: counters[el - 1] += 1 last_was_max = False elif not last_was_max: counters = [max(counters)] * N last_was_max = True return counters I am not aware which implementation was used in the submission. First, you're wasting some time in your if conditions: there's no need to check for the integer being greater or equal to 1, that's a given from the exercise. Then, there's no need to evaluate a second elif-statement, simply go for an else there. If the first condition is not met, the second will be met by definition of the exercise Second, according to my testing, just calculating the maximum at the time it is needed is much faster than keeping track of it through all the runs. This is likely due to the fact that the max-operation will only occur very rarely, especially for large values of M and therefore you're wasting time keeping track of stuff many times vs only calculating the maximum a few times during the run. Addressing @Nearoo's comment, it appears reassigning the array does in fact change the physical address, but according to some tests I ran reassigning is much faster than the for loop anyway. def solution2(N, A): counters = [0] * N for el in A: if el <= N: counters[el - 1] += 1 else: counters = [max(counters)] * N return counters This solution I came up with outperformed your solution by a solid factor of 2-3 on several test runs with different seed values. Here's the code to reproduce: import random random.seed(101) N = random.randint(1, 100000) M = random.randint(1, 100000) A = [random.randint(1, N + 1) for i in range(M)] %timeit solution(N,A) >>> 11.7 ms ± 805 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) %timeit solution2(N,A) >>> 3.63 ms ± 169 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) A: Here's a 100% solution: # MaxCounters def solution(N, A): count = [0] * N max_count = 0 last_max = False for val in A: if val == N + 1 and last_max == False: count = [max_count] * N last_max = True continue if val <= N: count[val - 1] += 1 max_count = max(count[val - 1], max_count) last_max = False return count A: # MaxCounters def solution(N, A): max_counter = 0 list_counters = [0]*N if N < min(A): return list_counters for i in A: if i <= N: list_counters[i-1] += 1 max_counter = max(list_counters[i-1], max_counter) else: list_counters = [max_counter]*N return list_counters A: I dont know Python. But my kotlin solution even run faster when more MaxCounter at the extreme test. Check the Report ▶extreme_large all max_counter operations✔OK 0.280s OK 0.264s OK I believe this test is about avoid the full update on the array during max process. If you can avoid that you should able to get the 100%. So try your best to stop a full array update no matter on which language. I get a 88% at my first try, and I used an hour to getting this original new answer. After a max operation all counter has same value, so its meaning less to keep check each counter until the final max operation. I create the array AFTER I found the last Max Operation, so there has no array full update require at all A: actually, this gives a 100% score: def solution(N, A): # write your code in Python 3.6 maxcount=0 counter=[maxcount]*N can_be_updated = True for J in A: if(J<=N ): counter[J-1]+=1 maxcount = max(maxcount,counter[J-1]) can_be_updated = True else: if(can_be_updated): counter = [maxcount]*N can_be_updated = False return(counter) pass A: Actually, every solution that involves creating new list every time there is an update will fail on "large_random2" test. You have to cache two max values. One for global max (minimum for every value) and current max (current maximum value in whole array). def solution(N, A): global_max = 0 current_max = 0 result = [0] * N canUpdate = True for i in A: if i <= N: if result[i-1] <= global_max: result[i-1] = global_max + 1 else: result[i-1] += 1 current_max = max(current_max, result[i-1]) canUpdate = True elif canUpdate: canUpdate = False global_max = current_max # print(result) for i, val in enumerate(result): if val < global_max: result[i] = global_max return result
Is it possible to be better than O(N+M) for Codility lesson MaxCounters using python?
This is the code I am using for the Codility lesson: MaxCounters def solution(N, A): counters = [0] * N max_c = 0 for el in A: if el >= 1 and el <= N: counters[el-1] += 1 max_c = max(counters[el-1], max_c) elif el > N: counters = [max_c] * N return counters Every test passes but the last one ("all max_counter operations") times out after 7 seconds so the result is just 88% with time complexity of O(N+M). Is it possible to improve the time complexity of the algorithm and get 100% test result using Python? The MaxCounters task You are given N counters, initially set to 0, and you have two possible operations on them: increase(X) − counter X is increased by 1, max counter − all counters are set to the maximum value of any counter. A non-empty array A of M integers is given. This array represents consecutive operations: if A[K] = X, such that 1 ≤ X ≤ N, then operation K is increase(X), if A[K] = N + 1 then operation K is max counter. Write an efficient algorithm for the following assumptions: N and M are integers within the range [1..100,000]; each element of array A is an integer within the range [1..N + 1].
[ "EDIT: following up on the discussion in the comments to this answer, tracking the last operation to avoid unnecessarily resetting the array in successive max_counter operations was the key to achieving the goal. Here's what the different solutions (one keeping track of the max and the second calculating the max on demand) would look like implementing that change:\ndef solution(N, A):\n counters = [0] * N\n max_c = 0\n last_was_max = False\n for el in A:\n if el <= N:\n counters[el - 1] += 1\n max_c = max(counters[el - 1], max_c)\n last_was_max = False\n elif not last_was_max:\n counters = [max_c] * N\n last_was_max = True\n return counters\n\n\ndef solution2_1(N, A):\n counters = [0] * N\n last_was_max = False\n for el in A:\n if el <= N:\n counters[el - 1] += 1\n last_was_max = False\n elif not last_was_max:\n counters = [max(counters)] * N\n last_was_max = True\n return counters\n\nI am not aware which implementation was used in the submission.\n\nFirst, you're wasting some time in your if conditions: there's no need to check for the integer being greater or equal to 1, that's a given from the exercise. Then, there's no need to evaluate a second elif-statement, simply go for an else there. If the first condition is not met, the second will be met by definition of the exercise\nSecond, according to my testing, just calculating the maximum at the time it is needed is much faster than keeping track of it through all the runs. This is likely due to the fact that the max-operation will only occur very rarely, especially for large values of M and therefore you're wasting time keeping track of stuff many times vs only calculating the maximum a few times during the run.\nAddressing @Nearoo's comment, it appears reassigning the array does in fact change the physical address, but according to some tests I ran reassigning is much faster than the for loop anyway.\ndef solution2(N, A):\n counters = [0] * N\n for el in A:\n if el <= N:\n counters[el - 1] += 1\n else:\n counters = [max(counters)] * N\n return counters\n\nThis solution I came up with outperformed your solution by a solid factor of 2-3 on several test runs with different seed values. Here's the code to reproduce:\nimport random\n\nrandom.seed(101)\nN = random.randint(1, 100000)\nM = random.randint(1, 100000)\nA = [random.randint(1, N + 1) for i in range(M)]\n\n%timeit solution(N,A)\n>>> 11.7 ms ± 805 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n%timeit solution2(N,A)\n>>> 3.63 ms ± 169 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n\n", "Here's a 100% solution:\n# MaxCounters\ndef solution(N, A):\n count = [0] * N\n max_count = 0\n last_max = False\n for val in A:\n if val == N + 1 and last_max == False:\n count = [max_count] * N\n last_max = True\n continue\n if val <= N:\n count[val - 1] += 1\n max_count = max(count[val - 1], max_count)\n last_max = False\n return count\n\n\n", " # MaxCounters\ndef solution(N, A):\n max_counter = 0\n list_counters = [0]*N\n if N < min(A):\n return list_counters\n for i in A:\n if i <= N:\n list_counters[i-1] += 1\n max_counter = max(list_counters[i-1], max_counter)\n else:\n list_counters = [max_counter]*N\n return list_counters\n\n", "I dont know Python.\nBut my kotlin solution even run faster when more MaxCounter at the extreme test.\nCheck the Report\n▶extreme_large\nall max_counter operations✔OK\n\n0.280s OK\n0.264s OK\n\nI believe this test is about avoid the full update on the array during max process.\nIf you can avoid that you should able to get the 100%.\nSo try your best to stop a full array update no matter on which language.\nI get a 88% at my first try, and I used an hour to getting this original new answer. After a max operation all counter has same value, so its meaning less to keep check each counter until the final max operation. I create the array AFTER I found the last Max Operation, so there has no array full update require at all\n", "actually, this gives a 100% score:\ndef solution(N, A):\n # write your code in Python 3.6\n maxcount=0\n counter=[maxcount]*N\n can_be_updated = True\n for J in A:\n if(J<=N ):\n counter[J-1]+=1\n maxcount = max(maxcount,counter[J-1])\n can_be_updated = True\n else:\n if(can_be_updated):\n counter = [maxcount]*N\n can_be_updated = False\n return(counter) \npass\n\n", "Actually, every solution that involves creating new list every time there is an update will fail on \"large_random2\" test. You have to cache two max values. One for global max (minimum for every value) and current max (current maximum value in whole array).\ndef solution(N, A):\nglobal_max = 0\ncurrent_max = 0\nresult = [0] * N\ncanUpdate = True\nfor i in A:\n if i <= N:\n if result[i-1] <= global_max:\n result[i-1] = global_max + 1\n else:\n result[i-1] += 1\n current_max = max(current_max, result[i-1])\n canUpdate = True\n elif canUpdate:\n canUpdate = False\n global_max = current_max\n # print(result)\nfor i, val in enumerate(result):\n if val < global_max:\n result[i] = global_max\nreturn result\n\n" ]
[ 3, 2, 0, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0058854370_python.txt
Q: Why can I assign elements to a list that I doesn't have a setter? I've been working on a school OOP python project and I stumbled upon this problem: class AList: def __init__(self, l): self.__a_private_attribute = l @property def l(self): return self.__a_private_attribute if __name__ == '__main__': li = AList([0]) li.l[0] = "this shouldn't work" print(li.l) The output of this is ["this shouldn't work"] How am I able to call methods on a list that does only have a getter and no setter. I know that privacy is not a strength of python, but I can't figure out why I can assign items in a list. I was expecting a AttributeError to be raised by python because I am trying to reassign something without a setter. Does anyone know why am I not raising an Error? A: How am I able to call methods on a list that does only have a getter and no setter Because you never tried to set the list. You got the list, and then changed its first element. This is similar to doing: some_var = li.l some_var[0] = "this shouldn't work" It works because lists are mutable, and you can set elements of the list without reassigning the list itself. If you tried to do li.l = ["this shouldn't work"], it wouldn't work, because this is actually trying to set li.l
Why can I assign elements to a list that I doesn't have a setter?
I've been working on a school OOP python project and I stumbled upon this problem: class AList: def __init__(self, l): self.__a_private_attribute = l @property def l(self): return self.__a_private_attribute if __name__ == '__main__': li = AList([0]) li.l[0] = "this shouldn't work" print(li.l) The output of this is ["this shouldn't work"] How am I able to call methods on a list that does only have a getter and no setter. I know that privacy is not a strength of python, but I can't figure out why I can assign items in a list. I was expecting a AttributeError to be raised by python because I am trying to reassign something without a setter. Does anyone know why am I not raising an Error?
[ "\nHow am I able to call methods on a list that does only have a getter and no setter\n\nBecause you never tried to set the list. You got the list, and then changed its first element. This is similar to doing:\nsome_var = li.l\nsome_var[0] = \"this shouldn't work\"\n\nIt works because lists are mutable, and you can set elements of the list without reassigning the list itself.\nIf you tried to do li.l = [\"this shouldn't work\"], it wouldn't work, because this is actually trying to set li.l\n" ]
[ 3 ]
[]
[]
[ "encapsulation", "oop", "python", "python_3.x" ]
stackoverflow_0074469038_encapsulation_oop_python_python_3.x.txt
Q: Why do I need to run the second loop to get the sigle value in django? The project was to create a filter page where users can filter the model data based on their chosen criteria. The whole thing is working but a specific part is not clear and not making sense. Here is my model- class Author(models.Model): name=models.CharField(max_length=30) def __str__(self): return self.name class Kategory(models.Model): name=models.CharField(max_length=20) def __str__(self): return self.name class Data(models.Model): title=models.CharField(max_length=120) author=models.ForeignKey(Author, on_delete=models.CASCADE) categories=models.ManyToManyField(Kategory) publish_date=models.DateTimeField() views=models.IntegerField(default=0) reviewed=models.BooleanField(default=False) def __str__(self): return self.title Data is the main model while Author was added as ForeignKey and Kategory as many to many fields. My main issue is around this categories field in Data model which has many to many relationship with Kategory model. Here is my views.py file- # This view is for the filter page def is_valid_queryparam(param): return param !='' and param is not None def filter(request): qs=Data.objects.all() kategory=Kategory.objects.all() title_contains_query=request.GET.get('title_contains') id_exact_query=request.GET.get('id_exact') title_or_author_query=request.GET.get('title_or_author') view_count_min=request.GET.get('view_count_min') view_count_max=request.GET.get('view_count_max') date_min=request.GET.get('date_min') date_max=request.GET.get('date_max') category=request.GET.get('category') if is_valid_queryparam(title_contains_query): qs=qs.filter(title__icontains=title_contains_query) if is_valid_queryparam(id_exact_query): qs=qs.filter(id=id_exact_query) if is_valid_queryparam(title_or_author_query): qs=qs.filter(Q(title__icontains=title_or_author_query) | Q(author__name__icontains=title_or_author_query)) if is_valid_queryparam(view_count_min): qs=qs.filter(views__gte=view_count_min) if is_valid_queryparam(view_count_max): qs=qs.filter(views__lte=view_count_max) if is_valid_queryparam(date_min): qs=qs.filter(publish_date__gte=date_min) if is_valid_queryparam(date_max): qs=qs.filter(publish_date__lte=date_max) if is_valid_queryparam(category): qs=qs.filter(categories=category) test=Data.objects.only('author') context={'queryset':qs, 'kategory':kategory, 'test':test} return render(request, 'myapp/filter.html', context) As you can see in the views.py that I have 2 variables holding all of Data and Kategory model data. I have worked with this type of things many times before. When displaying the data in a table on django template we just have to run a for loop. This time I just cannot get the variable in the categories field. All other fields work fine. Instead of giving me the value it either prints out None or myapp.kategory.none or something similar. I had to run an extra loop to get the categories value. I just cannot make any sense why do I have to run this extra loop to get the value. If anyone can shed some light on it and explain a bit, or even lead me to some literature that explains it that would be helpful. Following is the template code to show you the extra loop I am running. Look at the 3rd tag to see the extra loop. I should get the value just by journal.categories. Why do I need to run that loop?- <table class="three"> <tr > <th>Title</th> <th>Author</th> <th>Category</th> <th>Views</th> <th>Date Published</th> </tr> {% for journal in queryset %} <tr > <td>{{ journal.title }}</td> <td>{{ journal.author }}</td> <td>{% for cat in journal.categories.all %} {{ cat }} {% endfor %}</td> <td>{{ journal.views }}</td> <td>{{ journal.publish_date }}</td> </tr> {% endfor %} </table> A: You haven't defined ForeignKey.related_name so you should use default so: <table class="three"> <tr> <th>Title</th> <th>Author</th> <th>Category</th> <th>Views</th> <th>Date Published</th> </tr> {% for journal in queryset %} <tr> <td>{{ journal.title }}</td> <td>{{ journal.author }}</td> <td>{% for cat in journal.categories %} {{ cat }} {% endfor %}</td> <td>{{ journal.views }}</td> <td>{{ journal.publish_date }}</td> </tr> {% endfor %} </table> Additionally, I'd recommend you to use f-stings in __str__() method of models so: class Author(models.Model): name=models.CharField(max_length=30) def __str__(self): return f"{self.name}" class Kategory(models.Model): name=models.CharField(max_length=20) def __str__(self): return f"{self.name}" class Data(models.Model): title=models.CharField(max_length=120) author=models.ForeignKey(Author, on_delete=models.CASCADE) categories=models.ManyToManyField(Kategory) publish_date=models.DateTimeField() views=models.IntegerField(default=0) reviewed=models.BooleanField(default=False) def __str__(self): return f"{self.title}"
Why do I need to run the second loop to get the sigle value in django?
The project was to create a filter page where users can filter the model data based on their chosen criteria. The whole thing is working but a specific part is not clear and not making sense. Here is my model- class Author(models.Model): name=models.CharField(max_length=30) def __str__(self): return self.name class Kategory(models.Model): name=models.CharField(max_length=20) def __str__(self): return self.name class Data(models.Model): title=models.CharField(max_length=120) author=models.ForeignKey(Author, on_delete=models.CASCADE) categories=models.ManyToManyField(Kategory) publish_date=models.DateTimeField() views=models.IntegerField(default=0) reviewed=models.BooleanField(default=False) def __str__(self): return self.title Data is the main model while Author was added as ForeignKey and Kategory as many to many fields. My main issue is around this categories field in Data model which has many to many relationship with Kategory model. Here is my views.py file- # This view is for the filter page def is_valid_queryparam(param): return param !='' and param is not None def filter(request): qs=Data.objects.all() kategory=Kategory.objects.all() title_contains_query=request.GET.get('title_contains') id_exact_query=request.GET.get('id_exact') title_or_author_query=request.GET.get('title_or_author') view_count_min=request.GET.get('view_count_min') view_count_max=request.GET.get('view_count_max') date_min=request.GET.get('date_min') date_max=request.GET.get('date_max') category=request.GET.get('category') if is_valid_queryparam(title_contains_query): qs=qs.filter(title__icontains=title_contains_query) if is_valid_queryparam(id_exact_query): qs=qs.filter(id=id_exact_query) if is_valid_queryparam(title_or_author_query): qs=qs.filter(Q(title__icontains=title_or_author_query) | Q(author__name__icontains=title_or_author_query)) if is_valid_queryparam(view_count_min): qs=qs.filter(views__gte=view_count_min) if is_valid_queryparam(view_count_max): qs=qs.filter(views__lte=view_count_max) if is_valid_queryparam(date_min): qs=qs.filter(publish_date__gte=date_min) if is_valid_queryparam(date_max): qs=qs.filter(publish_date__lte=date_max) if is_valid_queryparam(category): qs=qs.filter(categories=category) test=Data.objects.only('author') context={'queryset':qs, 'kategory':kategory, 'test':test} return render(request, 'myapp/filter.html', context) As you can see in the views.py that I have 2 variables holding all of Data and Kategory model data. I have worked with this type of things many times before. When displaying the data in a table on django template we just have to run a for loop. This time I just cannot get the variable in the categories field. All other fields work fine. Instead of giving me the value it either prints out None or myapp.kategory.none or something similar. I had to run an extra loop to get the categories value. I just cannot make any sense why do I have to run this extra loop to get the value. If anyone can shed some light on it and explain a bit, or even lead me to some literature that explains it that would be helpful. Following is the template code to show you the extra loop I am running. Look at the 3rd tag to see the extra loop. I should get the value just by journal.categories. Why do I need to run that loop?- <table class="three"> <tr > <th>Title</th> <th>Author</th> <th>Category</th> <th>Views</th> <th>Date Published</th> </tr> {% for journal in queryset %} <tr > <td>{{ journal.title }}</td> <td>{{ journal.author }}</td> <td>{% for cat in journal.categories.all %} {{ cat }} {% endfor %}</td> <td>{{ journal.views }}</td> <td>{{ journal.publish_date }}</td> </tr> {% endfor %} </table>
[ "You haven't defined ForeignKey.related_name so you should use default so:\n\n\n<table class=\"three\">\n <tr>\n <th>Title</th>\n <th>Author</th>\n <th>Category</th>\n <th>Views</th>\n <th>Date Published</th>\n \n \n </tr>\n \n {% for journal in queryset %}\n \n \n <tr>\n <td>{{ journal.title }}</td>\n <td>{{ journal.author }}</td>\n <td>{% for cat in journal.categories %}\n {{ cat }}\n {% endfor %}</td>\n <td>{{ journal.views }}</td>\n <td>{{ journal.publish_date }}</td>\n \n \n \n </tr> \n {% endfor %} \n</table>\n\n\n\nAdditionally, I'd recommend you to use f-stings in __str__() method of models so:\nclass Author(models.Model):\n name=models.CharField(max_length=30)\n\n def __str__(self):\n return f\"{self.name}\"\n\nclass Kategory(models.Model):\n name=models.CharField(max_length=20)\n\n def __str__(self):\n return f\"{self.name}\" \n\n\nclass Data(models.Model):\n title=models.CharField(max_length=120)\n author=models.ForeignKey(Author, on_delete=models.CASCADE)\n categories=models.ManyToManyField(Kategory)\n publish_date=models.DateTimeField()\n views=models.IntegerField(default=0)\n reviewed=models.BooleanField(default=False)\n\n def __str__(self):\n return f\"{self.title}\"\n\n" ]
[ 1 ]
[]
[]
[ "django", "django_queryset", "django_templates", "django_views", "python" ]
stackoverflow_0074468833_django_django_queryset_django_templates_django_views_python.txt
Q: why am I not able to input the data I was learning python as a beginner through YouTube. In the video I was following the output was shown in terminal, but not in my case. It doesn't even accept taking in data for the variable. What am I doing wrong? the code was simply : a = input("Enter name") print(a) but the output would only show the text, but wont let me type the input A: You code should be working fine. Maybe, you are trying to edit the text Enter name, which is not possible in the terminal. Try typing the name and pressing <Enter> when the text Enter name shows up. You can test it here: https://pythonsandbox.com/code/pythonsandbox_u20054_20A4kkNBC961W5aZFb2NJCBW_v0.py But keep in mind that this is running on the web, not in a terminal, so it will show an input box in which you can input the data. A: Firstly, please ensure that you installed Python extension. Then you can run the file by click Run Python File button. If you install the code-runner extension and use Run Code button. You need to add the following code to your settings.json (shortcuts "Ctrl+shift+P" and type Preferences : Open User Settings): "code-runner.runInTerminal": true, By the way, read docs is a good choice.
why am I not able to input the data
I was learning python as a beginner through YouTube. In the video I was following the output was shown in terminal, but not in my case. It doesn't even accept taking in data for the variable. What am I doing wrong? the code was simply : a = input("Enter name") print(a) but the output would only show the text, but wont let me type the input
[ "You code should be working fine. Maybe, you are trying to edit the text Enter name, which is not possible in the terminal.\nTry typing the name and pressing <Enter> when the text Enter name shows up.\nYou can test it here: https://pythonsandbox.com/code/pythonsandbox_u20054_20A4kkNBC961W5aZFb2NJCBW_v0.py\nBut keep in mind that this is running on the web, not in a terminal, so it will show an input box in which you can input the data.\n", "Firstly, please ensure that you installed Python extension.\nThen you can run the file by click Run Python File button.\n\nIf you install the code-runner extension and use Run Code button. You need to add the following code to your settings.json (shortcuts \"Ctrl+shift+P\" and type Preferences : Open User Settings):\n\"code-runner.runInTerminal\": true,\n\nBy the way, read docs is a good choice.\n" ]
[ 0, 0 ]
[]
[]
[ "input", "python", "visual_studio_code" ]
stackoverflow_0074460575_input_python_visual_studio_code.txt
Q: When trying to apply a simple function I am getting this error "The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item()...." I have a dataframe with tick data that is below and I am trying to apply a simple function that will allow me to compare whether or not the last price was at the bid or ask and thus representing aggressive buying or selling. However when I apply the function I receive the error "The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()." enter image description here def VD(x): if x > ES_Data['Bid']: result = ES_Data['Volume'] else: result = (ES_Data['Volume']*-1) return result I have tried using a lambda function instead but am getting the same error. I have been messing around with this for a couple hours now and have made no progress A: The problem is that in this if statement if x > ES_Data['Bid'] the result of x > ES_Data['Bid'] is a False/True series comparing the given x to the each element in ES_Data['Bid']. That is why you are getting the error telling you that the if statement is being applied to a full series. If you are trying to apply this to all rows, you could do something like this: result = ES_Data['Volume'] * np.where(x > ES_Data['Bid'], 1, -1) the np.where will return a full array of 1s where the condition is met and -1s where the condition is False. Putting that in a function will look like this: def VD(x): return ES_Data['Volume'] * np.where(x > ES_Data['Bid'], 1, -1)
When trying to apply a simple function I am getting this error "The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item()...."
I have a dataframe with tick data that is below and I am trying to apply a simple function that will allow me to compare whether or not the last price was at the bid or ask and thus representing aggressive buying or selling. However when I apply the function I receive the error "The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()." enter image description here def VD(x): if x > ES_Data['Bid']: result = ES_Data['Volume'] else: result = (ES_Data['Volume']*-1) return result I have tried using a lambda function instead but am getting the same error. I have been messing around with this for a couple hours now and have made no progress
[ "The problem is that in this if statement if x > ES_Data['Bid'] the result of x > ES_Data['Bid'] is a False/True series comparing the given x to the each element in ES_Data['Bid']. That is why you are getting the error telling you that the if statement is being applied to a full series.\nIf you are trying to apply this to all rows, you could do something like this:\nresult = ES_Data['Volume'] * np.where(x > ES_Data['Bid'], 1, -1)\n\nthe np.where will return a full array of 1s where the condition is met and -1s where the condition is False.\nPutting that in a function will look like this:\ndef VD(x):\n return ES_Data['Volume'] * np.where(x > ES_Data['Bid'], 1, -1)\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074469043_python.txt
Q: FastAPI, SQLalchemy; By using Postman I can't post raw JSON body request correctly. It works fine with params not with raw JSON body My @router.post is like this: @router.post("/filter/filtering") async def filter_test(skip: int = 0, limit: int = 100, company_name: str = None, db: Session = Depends(get_db)): _audits = crud.filter_test(db, company_name) return Response(status="Ok", code="200", message="Success fetch all data", result=_audits) From crud.py, filter_test function is: def filter_test(db: Session, company_name: str = None): # if company_name: return db.query(Audit).filter(Audit.company_name == company_name).all() When I post request with params as "company_name" as key and "panda" as value, it gives me correct answers. But when I try to do it in a Body format, fails me. With params output: { "code": "200", "status": "Ok", "message": "Success fetch all data", "result": [ { "product_name": "dondurma", "audit_date": "2022-06-06T00:00:00", "request_date": "2022-11-14T18:11:29.450855", "correct_company_name": null, "company_name": "panda", "image_link": "www.test.com", "id": 5, "correct_name": null, "correction_date": null }, { "product_name": "dondurma1", "audit_date": "2022-06-06T00:00:00", "request_date": "2022-11-14T18:40:18.925370", "correct_company_name": null, "company_name": "panda", "image_link": "www.test.com", "id": 6, "correct_name": null, "correction_date": null }, { "product_name": "dondurma1", "audit_date": "2022-06-06T00:00:00", "request_date": "2022-11-14T18:40:18.925370", "correct_company_name": null, "company_name": "panda", "image_link": "www.test.com", "id": 7, "correct_name": null, "correction_date": null } ] } Here is my JSON BODY format: { "parameter":{ "company_name":"panda" } } This one is the output of the JSON Body format: { "code": "200", "status": "Ok", "message": "Success fetch all data", "result": [ { "product_name": "dondurma1", "audit_date": "2022-06-06T00:00:00", "request_date": "2022-11-14T18:40:18.925370", "correct_company_name": "kratos", "company_name": null, "image_link": "www.test.com", "id": 8, "correct_name": "kratos", "correction_date": null } ] } What might be the problem is? Thanks in advance Tried with params, it works fine as for json body format it fails. Screenshot for postman is here: A: You either need to define a pydantic model as the body, or else use Body which is fine in your case as you're only accepting one value: from typing import Optional from fastapi import Body # not mentioning all the other fastapi imports here for brevity @router.post("/filter/filtering") async def filter_test(skip: int = 0, limit: int = 100, company_name: Optional[str] = Body(default=None, embed=True), db: Session = Depends(get_db)): _audits = crud.filter_test(db, company_name) return {'status': "Ok", 'code': "200", 'message': "Success fetch all data", 'result': _audits} This will work passing a request body of {} or {"company_name": null} or {"company_name": "foo"} - the first two cases the company_name variable will default to None. The Response in your example doesn't work for me if it's a fastAPI Response, is that a custom class you have? Here I'm just returning a dictionary which will work fine provided it's json serializable. You might consider dropping the status and code from that dict as the framework is already returning HTTP status 200 by default (i.e. not part of the body), or you can pass a custom status_code via returning a JSONResponse.
FastAPI, SQLalchemy; By using Postman I can't post raw JSON body request correctly. It works fine with params not with raw JSON body
My @router.post is like this: @router.post("/filter/filtering") async def filter_test(skip: int = 0, limit: int = 100, company_name: str = None, db: Session = Depends(get_db)): _audits = crud.filter_test(db, company_name) return Response(status="Ok", code="200", message="Success fetch all data", result=_audits) From crud.py, filter_test function is: def filter_test(db: Session, company_name: str = None): # if company_name: return db.query(Audit).filter(Audit.company_name == company_name).all() When I post request with params as "company_name" as key and "panda" as value, it gives me correct answers. But when I try to do it in a Body format, fails me. With params output: { "code": "200", "status": "Ok", "message": "Success fetch all data", "result": [ { "product_name": "dondurma", "audit_date": "2022-06-06T00:00:00", "request_date": "2022-11-14T18:11:29.450855", "correct_company_name": null, "company_name": "panda", "image_link": "www.test.com", "id": 5, "correct_name": null, "correction_date": null }, { "product_name": "dondurma1", "audit_date": "2022-06-06T00:00:00", "request_date": "2022-11-14T18:40:18.925370", "correct_company_name": null, "company_name": "panda", "image_link": "www.test.com", "id": 6, "correct_name": null, "correction_date": null }, { "product_name": "dondurma1", "audit_date": "2022-06-06T00:00:00", "request_date": "2022-11-14T18:40:18.925370", "correct_company_name": null, "company_name": "panda", "image_link": "www.test.com", "id": 7, "correct_name": null, "correction_date": null } ] } Here is my JSON BODY format: { "parameter":{ "company_name":"panda" } } This one is the output of the JSON Body format: { "code": "200", "status": "Ok", "message": "Success fetch all data", "result": [ { "product_name": "dondurma1", "audit_date": "2022-06-06T00:00:00", "request_date": "2022-11-14T18:40:18.925370", "correct_company_name": "kratos", "company_name": null, "image_link": "www.test.com", "id": 8, "correct_name": "kratos", "correction_date": null } ] } What might be the problem is? Thanks in advance Tried with params, it works fine as for json body format it fails. Screenshot for postman is here:
[ "You either need to define a pydantic model as the body, or else use Body which is fine in your case as you're only accepting one value:\nfrom typing import Optional\nfrom fastapi import Body # not mentioning all the other fastapi imports here for brevity\n\n@router.post(\"/filter/filtering\")\nasync def filter_test(skip: int = 0, limit: int = 100, company_name: Optional[str] = Body(default=None, embed=True), db: Session = Depends(get_db)):\n _audits = crud.filter_test(db, company_name)\n return {'status': \"Ok\", 'code': \"200\", 'message': \"Success fetch all data\", 'result': _audits}\n\nThis will work passing a request body of {} or {\"company_name\": null} or {\"company_name\": \"foo\"} - the first two cases the company_name variable will default to None.\nThe Response in your example doesn't work for me if it's a fastAPI Response, is that a custom class you have? Here I'm just returning a dictionary which will work fine provided it's json serializable.\nYou might consider dropping the status and code from that dict as the framework is already returning HTTP status 200 by default (i.e. not part of the body), or you can pass a custom status_code via returning a JSONResponse.\n" ]
[ 1 ]
[]
[]
[ "database", "fastapi", "postgresql", "postman", "python" ]
stackoverflow_0074463307_database_fastapi_postgresql_postman_python.txt
Q: How can I access specific columns from a CSV file and add it to a list without using external modules? def loadCSVData(filename): list = [] fileContent = open(filename, 'r', encoding = 'utf8') for line in fileContent: # HERE fileContent.close() return list If I were to have a csv file that has 3 columns: name job pay 1 2 3 4 how can I access the name column and add the contents to the list? I want to be able to acess this without the need of pandas or numpy or anything else. A: You can split the columns in each row yourself - at your own risk. Assuming this is a traditional CSV file, you could def loadCSVData(filename): with open(filename) as fileobj: header = next(fileobj).strip().split(",") if header != ["name", "job", "pay"]: raise ValueError("Invalid CSV file") return [line.strip().split(",")[0] for line in fileobj] This will work fine if the values of the CSV don't have commas in them. But suppose name was "Doe, John", (an interior comma, surrounding quotes), then you'd have a problem because you would split on that comma. If that's the case, you could write a more complex parser... or just use the existing csv module in standard lib.
How can I access specific columns from a CSV file and add it to a list without using external modules?
def loadCSVData(filename): list = [] fileContent = open(filename, 'r', encoding = 'utf8') for line in fileContent: # HERE fileContent.close() return list If I were to have a csv file that has 3 columns: name job pay 1 2 3 4 how can I access the name column and add the contents to the list? I want to be able to acess this without the need of pandas or numpy or anything else.
[ "You can split the columns in each row yourself - at your own risk. Assuming this is a traditional CSV file, you could\ndef loadCSVData(filename):\n with open(filename) as fileobj:\n header = next(fileobj).strip().split(\",\")\n if header != [\"name\", \"job\", \"pay\"]:\n raise ValueError(\"Invalid CSV file\")\n return [line.strip().split(\",\")[0] for line in fileobj]\n\nThis will work fine if the values of the CSV don't have commas in them. But suppose name was \"Doe, John\", (an interior comma, surrounding quotes), then you'd have a problem because you would split on that comma. If that's the case, you could write a more complex parser... or just use the existing csv module in standard lib.\n" ]
[ 0 ]
[]
[]
[ "excel", "for_loop", "list", "python", "python_3.x" ]
stackoverflow_0074468559_excel_for_loop_list_python_python_3.x.txt
Q: How can I shift columns based on certain row? My datatable is below. menu_nm dtl rcp 0 sandwich amazing sandwich!!! bread 10g 1 hamburger bread 20g, vegetable 10g ??? 2 salad fresh salad!!! apple sauce 10g, banana 40g, cucumber 5g 3 juice sweet juice!! orange 50g, water 100ml 4 fruits strawberry 10g, grape 20g, melon 10g ??? and I want to get this datatable menu_nm dtl rcp 0 sandwich amazing sandwich!!! bread 10g 1 hamburger bread 20g, vegetable 10g 2 salad fresh salad!!! apple sauce 10g, banana 40g, cucumber 5g 3 juice sweet juice!! orange 50g, water 100ml 4 fruits strawberry 10g, grape 20g, melon 10g I want to shift row 1, 4 to rcp column, but I can't find method or logic that I try. I just know that shifting all row and all column, I don't know how I can shift certain row and column. If you know hint or answer, please tell me. thanks. A: assumption: rcp column contains "???" that needs to be replaced with the a values from dtl # create a filter where value under rcp is "???" m=df['rcp'].eq('???') # using loc, shift the values df.loc[m, 'rcp'] = df['dtl'] df.loc[m, 'dtl'] = "" df menu_nm dtl rcp 0 sandwich amazing sandwich!!! bread 10g 1 hamburger bread 20g, vegetable 10g 2 salad fresh salad!!! apple sauce 10g, banana 40g, cucumber 5g 3 juice sweet juice!! orange 50g, water 100ml 4 fruits strawberry 10g, grape 20g, melon 10g A: You can access index location using .iloc as below: >>> df=pd.DataFrame({"COLA":[1,2,3,4], "COLB":[100,200,300,400], "COLC":[1000,2000,3000,4000]}) >>> df COLA COLB COLC 0 1 100 1000 1 2 200 2000 2 3 300 3000 3 4 400 4000 >>> df['COLC'].iloc[1]=df['COLB'].iloc[1] >>> df COLA COLB COLC 0 1 100 1000 1 2 200 200 2 3 300 3000 3 4 400 4000 >>> df['COLB'].iloc[1]='' >>> df COLA COLB COLC 0 1 100 1000 1 2 200 2 3 300 3000 3 4 400 4000 Follow similar steps for row 4.
How can I shift columns based on certain row?
My datatable is below. menu_nm dtl rcp 0 sandwich amazing sandwich!!! bread 10g 1 hamburger bread 20g, vegetable 10g ??? 2 salad fresh salad!!! apple sauce 10g, banana 40g, cucumber 5g 3 juice sweet juice!! orange 50g, water 100ml 4 fruits strawberry 10g, grape 20g, melon 10g ??? and I want to get this datatable menu_nm dtl rcp 0 sandwich amazing sandwich!!! bread 10g 1 hamburger bread 20g, vegetable 10g 2 salad fresh salad!!! apple sauce 10g, banana 40g, cucumber 5g 3 juice sweet juice!! orange 50g, water 100ml 4 fruits strawberry 10g, grape 20g, melon 10g I want to shift row 1, 4 to rcp column, but I can't find method or logic that I try. I just know that shifting all row and all column, I don't know how I can shift certain row and column. If you know hint or answer, please tell me. thanks.
[ "assumption: rcp column contains \"???\" that needs to be replaced with the a values from dtl\n# create a filter where value under rcp is \"???\"\nm=df['rcp'].eq('???')\n\n# using loc, shift the values\n\ndf.loc[m, 'rcp'] = df['dtl']\ndf.loc[m, 'dtl'] = \"\"\ndf\n\n menu_nm dtl rcp\n0 sandwich amazing sandwich!!! bread 10g\n1 hamburger bread 20g, vegetable 10g\n2 salad fresh salad!!! apple sauce 10g, banana 40g, cucumber 5g\n3 juice sweet juice!! orange 50g, water 100ml\n4 fruits strawberry 10g, grape 20g, melon 10g\n\n", "You can access index location using .iloc as below:\n>>> df=pd.DataFrame({\"COLA\":[1,2,3,4], \"COLB\":[100,200,300,400], \"COLC\":[1000,2000,3000,4000]})\n>>> df\n COLA COLB COLC\n0 1 100 1000\n1 2 200 2000\n2 3 300 3000\n3 4 400 4000\n>>> df['COLC'].iloc[1]=df['COLB'].iloc[1]\n>>> df\n COLA COLB COLC\n0 1 100 1000\n1 2 200 200\n2 3 300 3000\n3 4 400 4000\n>>> df['COLB'].iloc[1]=''\n>>> df\n COLA COLB COLC\n0 1 100 1000\n1 2 200\n2 3 300 3000\n3 4 400 4000\n\nFollow similar steps for row 4.\n" ]
[ 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074469073_pandas_python.txt
Q: Jupyter Notebooks not running on VS Code Python extension I have installed latest Python 3 (python-3.11.0-amd64) and latest VS Code (VSCodeUserSetup-x64-1.73.1). I also installed the "Python Extension for Visual Studio Code". which as you can see it claimed that it comes with Jupyter Notebooks feature to Create and edit Jupyter Notebooks I have selected the interpreter: and selected the Kernel: which is listed as: but when I ran the cell, I am getting this error message, asking for Jupyter Package: Jupyter cannot be started. Error attempting to locate Jupyter: Running cells with 'Python 3.11.0 64-bit' requires notebook and jupyter package. Run the following command to install 'jupyter and notebook' into the Python environment. Command: 'python -m pip install jupyter notebook -U or conda install jupyter notebook -U' Click here for more info. As you can see "Juputer" has been installed but Why is this happening? A: The error prompt actually tells you how to solve the problem. Click install can solve it. The Jupyter Notebook is an extension which needs jupyter package. So you have to install jupyter package by using command pip install jupyter notebook. The use steps in github also specify: Install Anaconda/Miniconda or another Python environment in which you've installed the Jupyter package
Jupyter Notebooks not running on VS Code Python extension
I have installed latest Python 3 (python-3.11.0-amd64) and latest VS Code (VSCodeUserSetup-x64-1.73.1). I also installed the "Python Extension for Visual Studio Code". which as you can see it claimed that it comes with Jupyter Notebooks feature to Create and edit Jupyter Notebooks I have selected the interpreter: and selected the Kernel: which is listed as: but when I ran the cell, I am getting this error message, asking for Jupyter Package: Jupyter cannot be started. Error attempting to locate Jupyter: Running cells with 'Python 3.11.0 64-bit' requires notebook and jupyter package. Run the following command to install 'jupyter and notebook' into the Python environment. Command: 'python -m pip install jupyter notebook -U or conda install jupyter notebook -U' Click here for more info. As you can see "Juputer" has been installed but Why is this happening?
[ "The error prompt actually tells you how to solve the problem. Click install can solve it.\nThe Jupyter Notebook is an extension which needs jupyter package. So you have to install jupyter package by using command\npip install jupyter notebook.\nThe use steps in github also specify: Install Anaconda/Miniconda or another Python environment in which you've installed the Jupyter package\n" ]
[ 0 ]
[]
[]
[ "jupyter_notebook", "python", "visual_studio_code" ]
stackoverflow_0074466717_jupyter_notebook_python_visual_studio_code.txt
Q: How to escape characters in Pango markup? My program has a gtk.TreeView which displays a gtk.ListStore. The gtk.ListStore contains strings like this: "<span size='medium'><b>"+site_title+"</b></span>"+"\n"+URL Where URL is (obviously) a URL string. Sometimes there are characters in URL that cause pango to fail to parse the markup. Is there a way to escape URL as a whole so that pango will just ignore it so it will be displayed literally? If not, how should I "escape" special characters in URLs? A: glib.markup_escape_text may be a more canonical approach when using GTK. A: You need to escape the values. I'm not sure what exact format Pango requires, but it looks like HTML and the cgi.escape function may be all you need. import cgi print "<span size='medium'><b>%s</b></span>\n%s" % (cgi.escape(site_title), cgi.escape(URL)) A: //edit queue is full, so post here GLib.markup_escape_text from PyGObject demo >>> from gi.repository import GLib >>> GLib.markup_escape_text('abc \b \f < & >') 'abc &#x8; &#xc; &lt; &amp; &gt;' >>> py api docs https://lazka.github.io/pgi-docs/#GLib-2.0/functions.html#GLib.markup_escape_text https://pygobject.readthedocs.io c api docs https://docs.gtk.org/glib/func.markup_escape_text.html
How to escape characters in Pango markup?
My program has a gtk.TreeView which displays a gtk.ListStore. The gtk.ListStore contains strings like this: "<span size='medium'><b>"+site_title+"</b></span>"+"\n"+URL Where URL is (obviously) a URL string. Sometimes there are characters in URL that cause pango to fail to parse the markup. Is there a way to escape URL as a whole so that pango will just ignore it so it will be displayed literally? If not, how should I "escape" special characters in URLs?
[ "glib.markup_escape_text may be a more canonical approach when using GTK.\n", "You need to escape the values. I'm not sure what exact format Pango requires, but it looks like HTML and the cgi.escape function may be all you need.\nimport cgi\nprint \"<span size='medium'><b>%s</b></span>\\n%s\" %\n (cgi.escape(site_title), cgi.escape(URL))\n\n", "//edit queue is full, so post here\nGLib.markup_escape_text from PyGObject\ndemo\n>>> from gi.repository import GLib\n>>> GLib.markup_escape_text('abc \\b \\f < & >')\n'abc &#x8; &#xc; &lt; &amp; &gt;'\n>>> \n\n\npy api docs\nhttps://lazka.github.io/pgi-docs/#GLib-2.0/functions.html#GLib.markup_escape_text\nhttps://pygobject.readthedocs.io\nc api docs\nhttps://docs.gtk.org/glib/func.markup_escape_text.html\n" ]
[ 22, 2, 0 ]
[]
[]
[ "gtk", "pango", "pygtk", "python" ]
stackoverflow_0001760070_gtk_pango_pygtk_python.txt
Q: Mapping two different together on columns or index I want to ask a question that how can I mapping one dataframe into another dataframe. The idea is like this, I have two dataframes, one have around 1,500 pools, and other dataframe contains around 25 rows. I want to match the price from second dataframe, into the first dataframe, by using the rate range as a factor. Currently I do have any code written because I have no idea how to construct it. Would anyone gives me some idea about how can I started it. Hi guys, I come back for more details. So here will be the elaborated information: Considering I have two dataframe, while dataframe A is a detailed spreadsheet contains the details of different bonds. Dataframe B provided the price of the bond. Now I want to map the price from dataframe B into dataframe A like following: dataframe A: Bond Interest 0 1 2 3 4 5 ...... dataframe B: Interest Price 0 1 2 3 ...... Combined dataframe: Bond Interest Price 0 1 2 3 4 ...... Noticed that dataframe A has thousands of rows, but dataframe only have 25. I want to use the interest from dataframe A to match the interest range in dataframe B, and mapping the price into the dataframe A. Does anyone have any solutions about this one. Thank you so much A: With the minimal description and no examples provided. I think what you are searching for is merge. What you would do will look something like this: # given df1, and df2 shared_col_name = 'rate range' df1.merge(df2, how='left', on=shared_col_name ) You can refer to documentation for more details. https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html
Mapping two different together on columns or index
I want to ask a question that how can I mapping one dataframe into another dataframe. The idea is like this, I have two dataframes, one have around 1,500 pools, and other dataframe contains around 25 rows. I want to match the price from second dataframe, into the first dataframe, by using the rate range as a factor. Currently I do have any code written because I have no idea how to construct it. Would anyone gives me some idea about how can I started it. Hi guys, I come back for more details. So here will be the elaborated information: Considering I have two dataframe, while dataframe A is a detailed spreadsheet contains the details of different bonds. Dataframe B provided the price of the bond. Now I want to map the price from dataframe B into dataframe A like following: dataframe A: Bond Interest 0 1 2 3 4 5 ...... dataframe B: Interest Price 0 1 2 3 ...... Combined dataframe: Bond Interest Price 0 1 2 3 4 ...... Noticed that dataframe A has thousands of rows, but dataframe only have 25. I want to use the interest from dataframe A to match the interest range in dataframe B, and mapping the price into the dataframe A. Does anyone have any solutions about this one. Thank you so much
[ "With the minimal description and no examples provided. I think what you are searching for is merge. What you would do will look something like this:\n# given df1, and df2\nshared_col_name = 'rate range'\n\ndf1.merge(df2, how='left', on=shared_col_name )\n\n\nYou can refer to documentation for more details.\nhttps://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html\n" ]
[ 0 ]
[]
[]
[ "dataframe", "python" ]
stackoverflow_0074469117_dataframe_python.txt
Q: How to add input in the middle of a string? I'm very new to programming, only started learning python ~4 days ago and I'm having trouble figuring out how to print a user input as a string, in between other strings on the same line. Being so new to programming, I feel like the answer is staring me right in the face but I don't have the tools or the knowledge to figure it out lol. what I'm trying to do is: Wow (PlayerName) that's cool so far what I have is: name = input("Name? ") print("Wow") (print(name)) (print("that's cool")) python came back with an error saying object 'NoneType' is not callable, so instead i tried to write it as a function and call that instead: name = input("Name? ") def name_call(): print(name) print("Wow") (name_call()) (print("that's cool")) same issue, I tried various similar things, but at this point I'm just throwing darts I'm not 100% sure why neither of these worked, but I do know that it probably has something to do with me writing it incorrectly. I could just print the name on a new line, but I want to try and put them all on the same line if possible. A: you can try this code: # Python3 code to demonstrate working of # Add Phrase in middle of String # Using split() + slicing + join() # initializing string test_str = 'Wow that\'s cool!' # printing original string print("The original string is : " + str(test_str)) # initializing mid string mid_str = (input('Please input name = ')) # splitting string to list temp = test_str.split() mid_pos = len(temp) // 3 # joining and construction using single line res = ' '.join(temp[:mid_pos] + [mid_str] + temp[mid_pos:]) # printing result print("Formulated String : " + str(res)) The result will be like this: The original string is : Wow that's cool! Please input name = Alice Formulated String : Wow Alice that's cool! you can input any name to the program. A: As others have said, I think you're looking for string interpolation. As of Python 3.6 we have f-strings. name = input("Name? ") print(f"Wow {name} that's cool") https://www.programiz.com/python-programming/string-interpolation A: Your print's need to be on new lines. name = input("Name? ") print("Wow") print(name) print("that's cool") Python thinks you are trying to call the result of the print function (which returns None) as a function of its own. | V you are accidentally calling the return value here print("Wow")(print(name)) A: val = 'name' print(f"Wow {val} that's cool.") Btw, if you want name_call() to play a role, the following code also works def name_call(): return ('name') print(f"Wow {name_call()} that's cool.") A: You may use the format method to insert name into the string's placeholder {}: print("Wow {} that's cool".format(str(name))) A: x = str(input('Name: ')) print('user entered {} as their name'.format(x))
How to add input in the middle of a string?
I'm very new to programming, only started learning python ~4 days ago and I'm having trouble figuring out how to print a user input as a string, in between other strings on the same line. Being so new to programming, I feel like the answer is staring me right in the face but I don't have the tools or the knowledge to figure it out lol. what I'm trying to do is: Wow (PlayerName) that's cool so far what I have is: name = input("Name? ") print("Wow") (print(name)) (print("that's cool")) python came back with an error saying object 'NoneType' is not callable, so instead i tried to write it as a function and call that instead: name = input("Name? ") def name_call(): print(name) print("Wow") (name_call()) (print("that's cool")) same issue, I tried various similar things, but at this point I'm just throwing darts I'm not 100% sure why neither of these worked, but I do know that it probably has something to do with me writing it incorrectly. I could just print the name on a new line, but I want to try and put them all on the same line if possible.
[ "you can try this code:\n# Python3 code to demonstrate working of\n# Add Phrase in middle of String\n# Using split() + slicing + join()\n \n# initializing string\ntest_str = 'Wow that\\'s cool!'\n \n# printing original string\nprint(\"The original string is : \" + str(test_str))\n \n# initializing mid string\nmid_str = (input('Please input name = '))\n \n# splitting string to list\ntemp = test_str.split()\nmid_pos = len(temp) // 3\n \n# joining and construction using single line\nres = ' '.join(temp[:mid_pos] + [mid_str] + temp[mid_pos:])\n \n# printing result\nprint(\"Formulated String : \" + str(res))\n\nThe result will be like this:\nThe original string is : Wow that's cool!\nPlease input name = Alice\nFormulated String : Wow Alice that's cool!\n\nyou can input any name to the program.\n", "As others have said, I think you're looking for string interpolation. As of Python 3.6 we have f-strings.\nname = input(\"Name? \")\nprint(f\"Wow {name} that's cool\")\n\nhttps://www.programiz.com/python-programming/string-interpolation\n", "Your print's need to be on new lines.\nname = input(\"Name? \")\n\nprint(\"Wow\")\nprint(name)\nprint(\"that's cool\")\n\nPython thinks you are trying to call the result of the print function (which returns None) as a function of its own.\n |\n V you are accidentally calling the return value here\nprint(\"Wow\")(print(name))\n\n", "val = 'name'\nprint(f\"Wow {val} that's cool.\")\n\nBtw, if you want name_call() to play a role, the following code also works\ndef name_call(): \n return ('name')\n\nprint(f\"Wow {name_call()} that's cool.\")\n\n", "You may use the format method to insert name into the string's placeholder {}:\nprint(\"Wow {} that's cool\".format(str(name)))\n\n", "x = str(input('Name: '))\nprint('user entered {} as their name'.format(x))\n\n" ]
[ 4, 3, 0, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074468954_python.txt
Q: Strange behavior assigning builtin methods as class attributes I encountered a strange behavior that I cannot explain when assigning builtin methods as attributes to a class in Python. If I run the following python file: class A: a = bin b = lambda x: bin(x) print(A().a(2)) print(A().b(2)) The call to A().a(2) returns a byte string, but the call to A().b(2) raises: TypeError: <lambda>() takes 1 positional argument but 2 were given The signature of the builtin function bin is supposedly bin(number, /), which seems identical to the signature provided by the lambda above. However, it appears as if A().a is treated as a static method, whereas A().b is treated like an "instance" method (with a self argument implicitly added to the provided lambda). There is an explanation of a similar issue here (calling a function saved in a class attribute: different behavior with built-in function vs. normal function), which claims that the reason these two are treated differently is because one is a builtin_function_or_method and the other is a function type. However, there is inconsistent behavior even within builtins. class B(int): c = pow d = round print(B(1).d(2)) print(B(1).c(2)) In the case of pow and round, round is treated like an instance method while pow is treated as a static method. Both of these builtins are callables capable of taking two unnamed arguments. This behavior exists across all the versions of Python 2.* and 3.* I've tried. A: The answer you've referenced is correct. In the counter-example you gave the two built-in functions are indeed treated the same, that is no bound method object is created: B(1).d(2) == round(2) # not round(B(1), 2) B(1).c(2) == pow(2) # not pow(B(1), 2) the issue arises from passing only one argument to pow which takes at least 2, as opposed to round which does only need one
Strange behavior assigning builtin methods as class attributes
I encountered a strange behavior that I cannot explain when assigning builtin methods as attributes to a class in Python. If I run the following python file: class A: a = bin b = lambda x: bin(x) print(A().a(2)) print(A().b(2)) The call to A().a(2) returns a byte string, but the call to A().b(2) raises: TypeError: <lambda>() takes 1 positional argument but 2 were given The signature of the builtin function bin is supposedly bin(number, /), which seems identical to the signature provided by the lambda above. However, it appears as if A().a is treated as a static method, whereas A().b is treated like an "instance" method (with a self argument implicitly added to the provided lambda). There is an explanation of a similar issue here (calling a function saved in a class attribute: different behavior with built-in function vs. normal function), which claims that the reason these two are treated differently is because one is a builtin_function_or_method and the other is a function type. However, there is inconsistent behavior even within builtins. class B(int): c = pow d = round print(B(1).d(2)) print(B(1).c(2)) In the case of pow and round, round is treated like an instance method while pow is treated as a static method. Both of these builtins are callables capable of taking two unnamed arguments. This behavior exists across all the versions of Python 2.* and 3.* I've tried.
[ "The answer you've referenced is correct.\nIn the counter-example you gave the two built-in functions are indeed treated the same, that is no bound method object is created:\nB(1).d(2) == round(2) # not round(B(1), 2)\nB(1).c(2) == pow(2) # not pow(B(1), 2)\n\nthe issue arises from passing only one argument to pow which takes at least 2, as opposed to round which does only need one\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074469027_python.txt
Q: Python IndexError: list index out of range... Student Records Project, At a loss trying to trace the error I've cleaned up my code a bit according to some great recommendations from the community. However I still get the error. All of my close disclosing the error message is posted below the error message. Hopefully this is correctly formatted; I greatly appreciate the help of this community. DELETING STUDENT INFORMATION : ENTER STUDENT'S LAST NAME: blow Traceback (most recent call last): File "/Users/jake./PycharmProjects/StudentRecords/StudentRecords.py", line 281, in <module> SMS.delete_student_information() File "/Users/jake./PycharmProjects/StudentRecords/StudentRecords.py", line 70, in delete_student_information student_mobile_number.remove(student_mobile_number[LOC]) IndexError: list index out of range Process finished with exit code 1 import sys first_name = [] last_name = [] student_address = [] student_email = [] student_age = [] student_mobile_number = [] student_id = [] class student_management_system: @staticmethod def add_student_information(): print("ADDING STUDENT INFORMATION : \n") print("ENTER STUDENT FIRST NAME :", end=" ") NAME = input().upper() first_name.append(NAME) print("ENTER STUDENT LAST NAME :", end=" ") lname = str(input()) last_name.append(lname) print("ENTER STUDENT AGE :", end=" ") AGE = int(input()) student_age.append(AGE) print("ENTER STUDENT ID :", end=" ") ID = input().upper() student_id.append(ID) print("ENTER STUDENT E-MAIL ID :", end=" ") EMAIL_ID = input().upper() student_email.append(EMAIL_ID) print("ENTER STUDENT ADDRESS :", end=" ") ADDRESS = input().upper() student_address.append(ADDRESS) print("ENTER STUDENT MOBILE NUMBER :", end=" ") MOBILE_NUMBER = input() MOBILE_NUMBER_LEN = len(MOBILE_NUMBER) if MOBILE_NUMBER_LEN < 10: print("\t PLEASE ENTER VALID TEN DIGIT MOBILE NUMBER.") else: student_mobile_number.append(MOBILE_NUMBER) print("\n") print("\t STUDENT INFORMATION ADDED SUCCESSFULLY.") print("\n") @staticmethod # THIS FUNCTION HELP TO 'DELETE' DATA OF STUDENT def delete_student_information(): print("DELETING STUDENT INFORMATION : \n") if len(first_name) == 0 and len(last_name) == 0 and len(student_age) == 0 and len( student_id) == 0 and len(student_mobile_number) == 0 and len(student_address) == 0 and len( student_email) == 0: print("\n") print("\t\t\t 'PLEASE FILL SOME INFORMATION DON'T KEEP IT EMPTY") print("\n") else: print("ENTER STUDENT'S LAST NAME:", end=" ") l_name = str(input()) LOC = last_name.index(l_name) last_name.remove(last_name[LOC]) first_name.remove(first_name[LOC]) student_mobile_number.remove(student_mobile_number[LOC]) student_age.remove(student_age[LOC]) student_address.remove(student_address[LOC]) student_email.remove(student_email[LOC]) student_id.remove(student_id[LOC]) print("\n") print("\t\t STUDENT INFORMATION DELETED SUCCESSFULLY.") print("\n") @staticmethod # THIS FUNCTION HELP TO 'UPDATE' DATA OF STUDENT. def update_student_information(): print("UPDATE STUDENT INFORMATION : \n") if len(first_name) == 0 and len(last_name) == 0 and len(student_age) == 0 and len( student_id) == 0 and len(student_mobile_number) == 0 and len(student_address) == 0 and len( student_email) == 0: print("\n") print("\t\t\t 'PLEASE FILL SOME INFORMATION DON'T KEEP IT EMPTY") print("\n") else: print("ENTER STUDENT ATTRIBUTE YOU WANT TO DELETE :", end="\n") print("LIKE 'NAME, ROLL NUMBER, AGE, MOBILE NUMBER, ADDRESS, EMAIL, CLASS.") print("ENTER HERE :", end=" ") ATTRIBUTE = input().upper() if ATTRIBUTE == 'NAME': print("ENTER 'OLD FIRST NAME' :", end=" ") OLD_NAME = input() LOC_NAME = first_name.index(OLD_NAME) print("ENTER 'NEW FIRST NAME' :", end=" ") NEW_NAME = input() first_name[LOC_NAME] = NEW_NAME print("\t 'FIRST NAME UPDATED SUCCESSFULLY.") print("\n") elif ATTRIBUTE == 'LAST NAME': print("ENTER 'OLD LAST NAME' :", end=" ") old_last_name = str(input()) LOC_ROLL = last_name.index(old_last_name) print("ENTER 'NEW ROLL NUMBER' :", end=" ") NEW_NAME = int(input()) last_name[LOC_ROLL] = NEW_NAME print("\t 'ROLL NUMBER UPDATED SUCCESSFULLY.") print("\n") elif ATTRIBUTE == 'AGE': print("ENTER 'OLD AGE' :", end=" ") OLD_AGE = int(input()) LOC_ROLL = student_age.index(OLD_AGE) print("ENTER 'NEW AGE' :", end=" ") NEW_AGE = int(input()) student_age[LOC_ROLL] = NEW_AGE print("\t 'AGE UPDATED SUCCESSFULLY.") print("\n") elif ATTRIBUTE == 'ADDRESS': print("ENTER 'OLD ADDRESS' :", end=" ") OLD_ADDRESS = input() LOC_ADDRESS = student_address.index(OLD_ADDRESS) print("ENTER 'NEW ADDRESS' :", end=" ") NEW_ADDRESS = input() student_address[LOC_ADDRESS] = NEW_ADDRESS print("\t 'ADDRESS UPDATED SUCCESSFULLY.") print("\n") elif ATTRIBUTE == 'EMAIL': print("ENTER 'OLD EMAIL' :", end=" ") OLD_EMAIL = input() LOC_EMAIL = student_email.index(OLD_EMAIL) print("ENTER 'NEW EMAIL' :", end=" ") NEW_EMAIL = input() student_email[LOC_EMAIL] = NEW_EMAIL print("\t 'EMAIL - ID UPDATED SUCCESSFULLY.") print("\n") elif ATTRIBUTE == 'ID': print("ENTER 'OLD STUDENT ID' :", end=" ") OLD_CLASS = input() LOC_CLASS = student_id.index(OLD_CLASS) print("ENTER 'NEW STUDENT ID' :", end=" ") NEW_CLASS = input() student_id[LOC_CLASS] = NEW_CLASS print("\t 'CLASS UPDATED SUCCESSFULLY.") print("\n") elif ATTRIBUTE == 'MOBILE NUMBER': print("ENTER 'OLD MOBILE NUMBER' :", end=" ") OLD_MOBILE = input() print("ENTER 'NEW MOBILE NUMBER' :", end=" ") NEW_MOBILE = input() MOBILE_NUMBER_LEN = len(OLD_MOBILE) M_N_LEN = len(NEW_MOBILE) if MOBILE_NUMBER_LEN < 10: print(end="\n") print("PLEASE ENTER TEN DIGIT MOBILE NUMBER.") print("SYSTEM HAS STOP, PLEASE TRY AGAIN.") sys.exit() elif M_N_LEN < 10: print(end="\n") print("\t PLEASE ENTER VALID TEN DIGIT MOBILE NUMBER.") print("\t SYSTEM WORKING HAS STOP PLEASE TRY AGAIN.") sys.exit() else: LOC_MOBILE = student_mobile_number.index(OLD_MOBILE) student_mobile_number[LOC_MOBILE] = NEW_MOBILE print("\t 'MOBILE NUMBER UPDATED SUCCESSFULLY.") print("\n") @staticmethod # THIS FUNCTION HELP TO UPDATE 'DATA' OF STUDENT. def DISPLAY_STUDENT_INFORMATION(): print("DISPLAYING STUDENTS INFORMATION : \n") if len(first_name) == 0 and len(last_name) == 0 and len(student_age) == 0 and len( student_id) == 0 and len(student_mobile_number) == 0 and len(student_address) == 0 and len( student_email) == 0: print("\n") print("\t\t\t 'OOPS ! NOTHING TO DISPLAY, BECAUSE NO DATA IS THERE.") print("\n") else: print("STUDENT'S FIRST NAME : ", end="\n") for x in first_name: print(x) print() print(end="\n") print("STUDENT'S LAST NAME :", end="\n") for y in last_name: print(y) print() print(end="\n") print("STUDENT'S AGE :", end="\n") for z in student_age: print(z) print() print(end="\n") print("STUDENT'S MOBILE NUMBER :", end="\n") for x in student_mobile_number: print(x) print() print(end="\n") print("STUDENT'S EMAIL :", end="\n") for y in student_email: print(y) print() print(end="\n") print("STUDENT'S ID :", end="\n") for z in student_id: print(z) print() print(end="\n") print("STUDENT'S ADDRESS :", end="\n") for x in student_address: print(x) print() print(end="\n") SMS = student_management_system() if __name__ == '__main__': print("\n") print("' STUDENT RECORDS ' \n") run = True while run: print("PRESS FROM THE FOLLOWING OPTION : \n") print("PRESS 1 : TO ADD STUDENT INFORMATION.") print("PRESS 2 : TO DELETE STUDENT INFORMATION.") print("PRESS 3 : TO UPDATE STUDENT INFORMATION.") print("PRESS 4 : TO DISPLAY STUDENT INFORMATION.") print("PRESS 5 : TO EXIT SYSTEM.") OPTION = int(input("ENTER YOUR OPTION : ")) print("\n") print(end="\n") if OPTION == 1: SMS.add_student_information() elif OPTION == 2: SMS.delete_student_information() elif OPTION == 3: SMS.update_student_information() elif OPTION == 4: SMS.display_student_information() elif OPTION == 5: print("THANK YOU ! VISIT AGAIN.") run = False else: print("PLEASE CHOOSE CORRECT OPTION FROM THE FOLLOWING.") print("\n") Removing multiple instances/bloated code A: When adding a phone number in add_student_information there is a check if the number is too short. If it is, a message is displayed but a new number is not retried, so the length of student_mobile_number will get out of sync with the rest of the arrays. The delete function operates by the index of the student's last name, so if last_name = ["blay", "blow"] and student_mobile_number = ['1234567890'], trying to delete "blow" will result in index error, regardless of if the short number was entered for student "blay" or "blow". I suggest modifying the function as follows: print("ENTER STUDENT MOBILE NUMBER :", end=" ") MOBILE_NUMBER = input() while len(MOBILE_NUMBER) < 10: print("PLEASE ENTER VALID TEN DIGIT MOBILE NUMBER :", end=" ") MOBILE_NUMBER = input() student_mobile_number.append(MOBILE_NUMBER) print("\n") print("\t STUDENT INFORMATION ADDED SUCCESSFULLY.") print("\n") A: Here you have your code working with some small changes. The changes are marked with # <-- comments. But see recommendations at the botton, I only made the minimum changes to make it working. import sys first_name = [] last_name = [] student_address = [] student_email = [] student_age = [] student_mobile_number = [] student_id = [] class student_management_system: @staticmethod def add_student_information(): print("ADDING STUDENT INFORMATION : \n") print("ENTER STUDENT FIRST NAME :", end=" ") NAME = input('').upper() # <-- Added empty string first_name.append(NAME) print("ENTER STUDENT LAST NAME :", end=" ") lname = str(input('')) last_name.append(lname) print("ENTER STUDENT AGE :", end=" ") AGE = int(input('')) student_age.append(AGE) print("ENTER STUDENT ID :", end=" ") ID = input('').upper() student_id.append(ID) print("ENTER STUDENT E-MAIL ID :", end=" ") EMAIL_ID = input('').upper() student_email.append(EMAIL_ID) print("ENTER STUDENT ADDRESS :", end=" ") ADDRESS = input('').upper() student_address.append(ADDRESS) print("ENTER STUDENT MOBILE NUMBER :", end=" ") MOBILE_NUMBER = input('') MOBILE_NUMBER_LEN = len(MOBILE_NUMBER) if MOBILE_NUMBER_LEN < 10: student_mobile_number.append(None) # <-- Add something print("\t PLEASE ENTER VALID TEN DIGIT MOBILE NUMBER.") else: student_mobile_number.append(MOBILE_NUMBER) print("\n") print("\t STUDENT INFORMATION ADDED SUCCESSFULLY.") print("\n") @staticmethod # THIS FUNCTION HELP TO 'DELETE' DATA OF STUDENT def delete_student_information(): print("DELETING STUDENT INFORMATION : \n") if len(first_name) == 0 and len(last_name) == 0 and len(student_age) == 0 and len( student_id) == 0 and len(student_mobile_number) == 0 and len(student_address) == 0 and len( student_email) == 0: print("\n") print("\t\t\t 'PLEASE FILL SOME INFORMATION DON'T KEEP IT EMPTY") print("\n") else: print("ENTER STUDENT'S LAST NAME:", end=" ") l_name = str(input('')) LOC = last_name.index(l_name) last_name.remove(last_name[LOC]) first_name.remove(first_name[LOC]) student_mobile_number.remove(student_mobile_number[LOC]) student_age.remove(student_age[LOC]) student_address.remove(student_address[LOC]) student_email.remove(student_email[LOC]) student_id.remove(student_id[LOC]) print("\n") print("\t\t STUDENT INFORMATION DELETED SUCCESSFULLY.") print("\n") @staticmethod # THIS FUNCTION HELP TO 'UPDATE' DATA OF STUDENT. def update_student_information(): print("UPDATE STUDENT INFORMATION : \n") if len(first_name) == 0 and len(last_name) == 0 and len(student_age) == 0 and len( student_id) == 0 and len(student_mobile_number) == 0 and len(student_address) == 0 and len( student_email) == 0: print("\n") print("\t\t\t 'PLEASE FILL SOME INFORMATION DON'T KEEP IT EMPTY") print("\n") else: print("ENTER STUDENT ATTRIBUTE YOU WANT TO DELETE :", end="\n") print("LIKE 'NAME, ROLL NUMBER, AGE, MOBILE NUMBER, ADDRESS, EMAIL, CLASS.") print("ENTER HERE :", end=" ") ATTRIBUTE = input('').upper() if ATTRIBUTE == 'NAME': print("ENTER 'OLD FIRST NAME' :", end=" ") OLD_NAME = input('') try: LOC_NAME = first_name.index(OLD_NAME.upper()) # <- upper!!! print("ENTER 'NEW FIRST NAME' :", end=" ") NEW_NAME = input('') # <- upper!!! first_name[LOC_NAME] = NEW_NAME print("\t 'FIRST NAME UPDATED SUCCESSFULLY.") print("\n") except ValueError: # <-- What happens if doesn't exist?? pass elif ATTRIBUTE == 'LAST NAME': print("ENTER 'OLD LAST NAME' :", end=" ") old_last_name = str(input('')) LOC_ROLL = last_name.index(old_last_name) print("ENTER 'NEW ROLL NUMBER' :", end=" ") NEW_NAME = int(input('')) last_name[LOC_ROLL] = NEW_NAME print("\t 'ROLL NUMBER UPDATED SUCCESSFULLY.") print("\n") elif ATTRIBUTE == 'AGE': print("ENTER 'OLD AGE' :", end=" ") OLD_AGE = int(input('')) LOC_ROLL = student_age.index(OLD_AGE) print("ENTER 'NEW AGE' :", end=" ") NEW_AGE = int(input('')) student_age[LOC_ROLL] = NEW_AGE print("\t 'AGE UPDATED SUCCESSFULLY.") print("\n") elif ATTRIBUTE == 'ADDRESS': print("ENTER 'OLD ADDRESS' :", end=" ") OLD_ADDRESS = input('') LOC_ADDRESS = student_address.index(OLD_ADDRESS) print("ENTER 'NEW ADDRESS' :", end=" ") NEW_ADDRESS = input('') student_address[LOC_ADDRESS] = NEW_ADDRESS print("\t 'ADDRESS UPDATED SUCCESSFULLY.") print("\n") elif ATTRIBUTE == 'EMAIL': print("ENTER 'OLD EMAIL' :", end=" ") OLD_EMAIL = input('') LOC_EMAIL = student_email.index(OLD_EMAIL) print("ENTER 'NEW EMAIL' :", end=" ") NEW_EMAIL = input('') student_email[LOC_EMAIL] = NEW_EMAIL print("\t 'EMAIL - ID UPDATED SUCCESSFULLY.") print("\n") elif ATTRIBUTE == 'ID': print("ENTER 'OLD STUDENT ID' :", end=" ") OLD_CLASS = input('') LOC_CLASS = student_id.index(OLD_CLASS) print("ENTER 'NEW STUDENT ID' :", end=" ") NEW_CLASS = input('') student_id[LOC_CLASS] = NEW_CLASS print("\t 'CLASS UPDATED SUCCESSFULLY.") print("\n") elif ATTRIBUTE == 'MOBILE NUMBER': print("ENTER 'OLD MOBILE NUMBER' :", end=" ") OLD_MOBILE = input('') print("ENTER 'NEW MOBILE NUMBER' :", end=" ") NEW_MOBILE = input('') MOBILE_NUMBER_LEN = len(OLD_MOBILE) M_N_LEN = len(NEW_MOBILE) if MOBILE_NUMBER_LEN < 10: print(end="\n") print("PLEASE ENTER TEN DIGIT MOBILE NUMBER.") print("SYSTEM HAS STOP, PLEASE TRY AGAIN.") sys.exit() elif M_N_LEN < 10: print(end="\n") print("\t PLEASE ENTER VALID TEN DIGIT MOBILE NUMBER.") print("\t SYSTEM WORKING HAS STOP PLEASE TRY AGAIN.") sys.exit() else: LOC_MOBILE = student_mobile_number.index(OLD_MOBILE) student_mobile_number[LOC_MOBILE] = NEW_MOBILE print("\t 'MOBILE NUMBER UPDATED SUCCESSFULLY.") print("\n") @staticmethod # THIS FUNCTION HELP TO UPDATE 'DATA' OF STUDENT. def display_student_information(): print("DISPLAYING STUDENTS INFORMATION : \n") if len(first_name) == 0 and len(last_name) == 0 and len(student_age) == 0 and len( student_id) == 0 and len(student_mobile_number) == 0 and len(student_address) == 0 and len( student_email) == 0: print("\n") print("\t\t\t 'OOPS ! NOTHING TO DISPLAY, BECAUSE NO DATA IS THERE.") print("\n") else: print("STUDENT'S FIRST NAME : ", end="\n") for x in first_name: print(x) print() print(end="\n") print("STUDENT'S LAST NAME :", end="\n") for y in last_name: print(y) print() print(end="\n") print("STUDENT'S AGE :", end="\n") for z in student_age: print(z) print() print(end="\n") print("STUDENT'S MOBILE NUMBER :", end="\n") for x in student_mobile_number: print(x) print() print(end="\n") print("STUDENT'S EMAIL :", end="\n") for y in student_email: print(y) print() print(end="\n") print("STUDENT'S ID :", end="\n") for z in student_id: print(z) print() print(end="\n") print("STUDENT'S ADDRESS :", end="\n") for x in student_address: print(x) print() print(end="\n") SMS = student_management_system() if __name__ == '__main__': print("\n") print("' STUDENT RECORDS ' \n") run = True while run: print("PRESS FROM THE FOLLOWING OPTION : \n") print("PRESS 1 : TO ADD STUDENT INFORMATION.") print("PRESS 2 : TO DELETE STUDENT INFORMATION.") print("PRESS 3 : TO UPDATE STUDENT INFORMATION.") print("PRESS 4 : TO DISPLAY STUDENT INFORMATION.") print("PRESS 5 : TO EXIT SYSTEM.") OPTION = int(input("ENTER YOUR OPTION : ")) print("\n") print(end="\n") if OPTION == 1: SMS.add_student_information() elif OPTION == 2: SMS.delete_student_information() elif OPTION == 3: SMS.update_student_information() elif OPTION == 4: SMS.display_student_information() elif OPTION == 5: print("THANK YOU ! VISIT AGAIN.") run = False else: print("PLEASE CHOOSE CORRECT OPTION FROM THE FOLLOWING.") print("\n") But this code is so far to be acceptable for many different reasons: Take care about the lists. Use append always or never, not depending on something (phone number in your case). I have appended None phone if is not a valid phone number. Namings. See Python naming conventions Add docstrings to classes and methods. input. Add the messages here instead of printing them before. Moreover, input() is wrong, at least write input('') Use upper() always or never, but don't mix string with and without upper(), otherwise you never will find the existing string. I have changed this only with NAME. Try to use variables inside the class. Use try-except to catch exceptions: more info. I have added just one in your code (when updating NAME), but you should add in many more places. This is a good programming practice, and you could learn more about exceptions and possible tracebacks. The most important thing: Understand how classes and Object-Oriented-Programming work. You are not using classes properly. More info about Object-Oriented-Programming
Python IndexError: list index out of range... Student Records Project, At a loss trying to trace the error
I've cleaned up my code a bit according to some great recommendations from the community. However I still get the error. All of my close disclosing the error message is posted below the error message. Hopefully this is correctly formatted; I greatly appreciate the help of this community. DELETING STUDENT INFORMATION : ENTER STUDENT'S LAST NAME: blow Traceback (most recent call last): File "/Users/jake./PycharmProjects/StudentRecords/StudentRecords.py", line 281, in <module> SMS.delete_student_information() File "/Users/jake./PycharmProjects/StudentRecords/StudentRecords.py", line 70, in delete_student_information student_mobile_number.remove(student_mobile_number[LOC]) IndexError: list index out of range Process finished with exit code 1 import sys first_name = [] last_name = [] student_address = [] student_email = [] student_age = [] student_mobile_number = [] student_id = [] class student_management_system: @staticmethod def add_student_information(): print("ADDING STUDENT INFORMATION : \n") print("ENTER STUDENT FIRST NAME :", end=" ") NAME = input().upper() first_name.append(NAME) print("ENTER STUDENT LAST NAME :", end=" ") lname = str(input()) last_name.append(lname) print("ENTER STUDENT AGE :", end=" ") AGE = int(input()) student_age.append(AGE) print("ENTER STUDENT ID :", end=" ") ID = input().upper() student_id.append(ID) print("ENTER STUDENT E-MAIL ID :", end=" ") EMAIL_ID = input().upper() student_email.append(EMAIL_ID) print("ENTER STUDENT ADDRESS :", end=" ") ADDRESS = input().upper() student_address.append(ADDRESS) print("ENTER STUDENT MOBILE NUMBER :", end=" ") MOBILE_NUMBER = input() MOBILE_NUMBER_LEN = len(MOBILE_NUMBER) if MOBILE_NUMBER_LEN < 10: print("\t PLEASE ENTER VALID TEN DIGIT MOBILE NUMBER.") else: student_mobile_number.append(MOBILE_NUMBER) print("\n") print("\t STUDENT INFORMATION ADDED SUCCESSFULLY.") print("\n") @staticmethod # THIS FUNCTION HELP TO 'DELETE' DATA OF STUDENT def delete_student_information(): print("DELETING STUDENT INFORMATION : \n") if len(first_name) == 0 and len(last_name) == 0 and len(student_age) == 0 and len( student_id) == 0 and len(student_mobile_number) == 0 and len(student_address) == 0 and len( student_email) == 0: print("\n") print("\t\t\t 'PLEASE FILL SOME INFORMATION DON'T KEEP IT EMPTY") print("\n") else: print("ENTER STUDENT'S LAST NAME:", end=" ") l_name = str(input()) LOC = last_name.index(l_name) last_name.remove(last_name[LOC]) first_name.remove(first_name[LOC]) student_mobile_number.remove(student_mobile_number[LOC]) student_age.remove(student_age[LOC]) student_address.remove(student_address[LOC]) student_email.remove(student_email[LOC]) student_id.remove(student_id[LOC]) print("\n") print("\t\t STUDENT INFORMATION DELETED SUCCESSFULLY.") print("\n") @staticmethod # THIS FUNCTION HELP TO 'UPDATE' DATA OF STUDENT. def update_student_information(): print("UPDATE STUDENT INFORMATION : \n") if len(first_name) == 0 and len(last_name) == 0 and len(student_age) == 0 and len( student_id) == 0 and len(student_mobile_number) == 0 and len(student_address) == 0 and len( student_email) == 0: print("\n") print("\t\t\t 'PLEASE FILL SOME INFORMATION DON'T KEEP IT EMPTY") print("\n") else: print("ENTER STUDENT ATTRIBUTE YOU WANT TO DELETE :", end="\n") print("LIKE 'NAME, ROLL NUMBER, AGE, MOBILE NUMBER, ADDRESS, EMAIL, CLASS.") print("ENTER HERE :", end=" ") ATTRIBUTE = input().upper() if ATTRIBUTE == 'NAME': print("ENTER 'OLD FIRST NAME' :", end=" ") OLD_NAME = input() LOC_NAME = first_name.index(OLD_NAME) print("ENTER 'NEW FIRST NAME' :", end=" ") NEW_NAME = input() first_name[LOC_NAME] = NEW_NAME print("\t 'FIRST NAME UPDATED SUCCESSFULLY.") print("\n") elif ATTRIBUTE == 'LAST NAME': print("ENTER 'OLD LAST NAME' :", end=" ") old_last_name = str(input()) LOC_ROLL = last_name.index(old_last_name) print("ENTER 'NEW ROLL NUMBER' :", end=" ") NEW_NAME = int(input()) last_name[LOC_ROLL] = NEW_NAME print("\t 'ROLL NUMBER UPDATED SUCCESSFULLY.") print("\n") elif ATTRIBUTE == 'AGE': print("ENTER 'OLD AGE' :", end=" ") OLD_AGE = int(input()) LOC_ROLL = student_age.index(OLD_AGE) print("ENTER 'NEW AGE' :", end=" ") NEW_AGE = int(input()) student_age[LOC_ROLL] = NEW_AGE print("\t 'AGE UPDATED SUCCESSFULLY.") print("\n") elif ATTRIBUTE == 'ADDRESS': print("ENTER 'OLD ADDRESS' :", end=" ") OLD_ADDRESS = input() LOC_ADDRESS = student_address.index(OLD_ADDRESS) print("ENTER 'NEW ADDRESS' :", end=" ") NEW_ADDRESS = input() student_address[LOC_ADDRESS] = NEW_ADDRESS print("\t 'ADDRESS UPDATED SUCCESSFULLY.") print("\n") elif ATTRIBUTE == 'EMAIL': print("ENTER 'OLD EMAIL' :", end=" ") OLD_EMAIL = input() LOC_EMAIL = student_email.index(OLD_EMAIL) print("ENTER 'NEW EMAIL' :", end=" ") NEW_EMAIL = input() student_email[LOC_EMAIL] = NEW_EMAIL print("\t 'EMAIL - ID UPDATED SUCCESSFULLY.") print("\n") elif ATTRIBUTE == 'ID': print("ENTER 'OLD STUDENT ID' :", end=" ") OLD_CLASS = input() LOC_CLASS = student_id.index(OLD_CLASS) print("ENTER 'NEW STUDENT ID' :", end=" ") NEW_CLASS = input() student_id[LOC_CLASS] = NEW_CLASS print("\t 'CLASS UPDATED SUCCESSFULLY.") print("\n") elif ATTRIBUTE == 'MOBILE NUMBER': print("ENTER 'OLD MOBILE NUMBER' :", end=" ") OLD_MOBILE = input() print("ENTER 'NEW MOBILE NUMBER' :", end=" ") NEW_MOBILE = input() MOBILE_NUMBER_LEN = len(OLD_MOBILE) M_N_LEN = len(NEW_MOBILE) if MOBILE_NUMBER_LEN < 10: print(end="\n") print("PLEASE ENTER TEN DIGIT MOBILE NUMBER.") print("SYSTEM HAS STOP, PLEASE TRY AGAIN.") sys.exit() elif M_N_LEN < 10: print(end="\n") print("\t PLEASE ENTER VALID TEN DIGIT MOBILE NUMBER.") print("\t SYSTEM WORKING HAS STOP PLEASE TRY AGAIN.") sys.exit() else: LOC_MOBILE = student_mobile_number.index(OLD_MOBILE) student_mobile_number[LOC_MOBILE] = NEW_MOBILE print("\t 'MOBILE NUMBER UPDATED SUCCESSFULLY.") print("\n") @staticmethod # THIS FUNCTION HELP TO UPDATE 'DATA' OF STUDENT. def DISPLAY_STUDENT_INFORMATION(): print("DISPLAYING STUDENTS INFORMATION : \n") if len(first_name) == 0 and len(last_name) == 0 and len(student_age) == 0 and len( student_id) == 0 and len(student_mobile_number) == 0 and len(student_address) == 0 and len( student_email) == 0: print("\n") print("\t\t\t 'OOPS ! NOTHING TO DISPLAY, BECAUSE NO DATA IS THERE.") print("\n") else: print("STUDENT'S FIRST NAME : ", end="\n") for x in first_name: print(x) print() print(end="\n") print("STUDENT'S LAST NAME :", end="\n") for y in last_name: print(y) print() print(end="\n") print("STUDENT'S AGE :", end="\n") for z in student_age: print(z) print() print(end="\n") print("STUDENT'S MOBILE NUMBER :", end="\n") for x in student_mobile_number: print(x) print() print(end="\n") print("STUDENT'S EMAIL :", end="\n") for y in student_email: print(y) print() print(end="\n") print("STUDENT'S ID :", end="\n") for z in student_id: print(z) print() print(end="\n") print("STUDENT'S ADDRESS :", end="\n") for x in student_address: print(x) print() print(end="\n") SMS = student_management_system() if __name__ == '__main__': print("\n") print("' STUDENT RECORDS ' \n") run = True while run: print("PRESS FROM THE FOLLOWING OPTION : \n") print("PRESS 1 : TO ADD STUDENT INFORMATION.") print("PRESS 2 : TO DELETE STUDENT INFORMATION.") print("PRESS 3 : TO UPDATE STUDENT INFORMATION.") print("PRESS 4 : TO DISPLAY STUDENT INFORMATION.") print("PRESS 5 : TO EXIT SYSTEM.") OPTION = int(input("ENTER YOUR OPTION : ")) print("\n") print(end="\n") if OPTION == 1: SMS.add_student_information() elif OPTION == 2: SMS.delete_student_information() elif OPTION == 3: SMS.update_student_information() elif OPTION == 4: SMS.display_student_information() elif OPTION == 5: print("THANK YOU ! VISIT AGAIN.") run = False else: print("PLEASE CHOOSE CORRECT OPTION FROM THE FOLLOWING.") print("\n") Removing multiple instances/bloated code
[ "When adding a phone number in add_student_information there is a check if the number is too short. If it is, a message is displayed but a new number is not retried, so the length of student_mobile_number will get out of sync with the rest of the arrays. The delete function operates by the index of the student's last name, so if last_name = [\"blay\", \"blow\"] and student_mobile_number = ['1234567890'], trying to delete \"blow\" will result in index error, regardless of if the short number was entered for student \"blay\" or \"blow\".\nI suggest modifying the function as follows:\n print(\"ENTER STUDENT MOBILE NUMBER :\", end=\" \")\n MOBILE_NUMBER = input()\n while len(MOBILE_NUMBER) < 10:\n print(\"PLEASE ENTER VALID TEN DIGIT MOBILE NUMBER :\", end=\" \")\n MOBILE_NUMBER = input()\n student_mobile_number.append(MOBILE_NUMBER)\n\n print(\"\\n\")\n print(\"\\t STUDENT INFORMATION ADDED SUCCESSFULLY.\")\n print(\"\\n\")\n\n", "Here you have your code working with some small changes. The changes are marked with # <-- comments. But see recommendations at the botton, I only made the minimum changes to make it working.\nimport sys\n\nfirst_name = []\nlast_name = []\nstudent_address = []\nstudent_email = []\nstudent_age = []\nstudent_mobile_number = []\nstudent_id = []\n\n\nclass student_management_system:\n @staticmethod\n def add_student_information():\n print(\"ADDING STUDENT INFORMATION : \\n\")\n print(\"ENTER STUDENT FIRST NAME :\", end=\" \")\n NAME = input('').upper() # <-- Added empty string\n first_name.append(NAME)\n\n print(\"ENTER STUDENT LAST NAME :\", end=\" \")\n lname = str(input(''))\n last_name.append(lname)\n\n print(\"ENTER STUDENT AGE :\", end=\" \")\n AGE = int(input(''))\n student_age.append(AGE)\n\n print(\"ENTER STUDENT ID :\", end=\" \")\n ID = input('').upper()\n student_id.append(ID)\n\n print(\"ENTER STUDENT E-MAIL ID :\", end=\" \")\n EMAIL_ID = input('').upper()\n student_email.append(EMAIL_ID)\n\n print(\"ENTER STUDENT ADDRESS :\", end=\" \")\n ADDRESS = input('').upper()\n student_address.append(ADDRESS)\n\n print(\"ENTER STUDENT MOBILE NUMBER :\", end=\" \")\n MOBILE_NUMBER = input('')\n MOBILE_NUMBER_LEN = len(MOBILE_NUMBER)\n\n if MOBILE_NUMBER_LEN < 10:\n student_mobile_number.append(None) # <-- Add something\n print(\"\\t PLEASE ENTER VALID TEN DIGIT MOBILE NUMBER.\")\n else:\n student_mobile_number.append(MOBILE_NUMBER)\n print(\"\\n\")\n print(\"\\t STUDENT INFORMATION ADDED SUCCESSFULLY.\")\n print(\"\\n\")\n\n @staticmethod\n # THIS FUNCTION HELP TO 'DELETE' DATA OF STUDENT\n def delete_student_information():\n print(\"DELETING STUDENT INFORMATION : \\n\")\n\n if len(first_name) == 0 and len(last_name) == 0 and len(student_age) == 0 and len(\n student_id) == 0 and len(student_mobile_number) == 0 and len(student_address) == 0 and len(\n student_email) == 0:\n print(\"\\n\")\n print(\"\\t\\t\\t 'PLEASE FILL SOME INFORMATION DON'T KEEP IT EMPTY\")\n print(\"\\n\")\n else:\n print(\"ENTER STUDENT'S LAST NAME:\", end=\" \")\n l_name = str(input(''))\n LOC = last_name.index(l_name)\n\n last_name.remove(last_name[LOC])\n first_name.remove(first_name[LOC])\n student_mobile_number.remove(student_mobile_number[LOC])\n student_age.remove(student_age[LOC])\n student_address.remove(student_address[LOC])\n student_email.remove(student_email[LOC])\n student_id.remove(student_id[LOC])\n\n print(\"\\n\")\n print(\"\\t\\t STUDENT INFORMATION DELETED SUCCESSFULLY.\")\n print(\"\\n\")\n\n @staticmethod\n # THIS FUNCTION HELP TO 'UPDATE' DATA OF STUDENT.\n def update_student_information():\n print(\"UPDATE STUDENT INFORMATION : \\n\")\n\n if len(first_name) == 0 and len(last_name) == 0 and len(student_age) == 0 and len(\n student_id) == 0 and len(student_mobile_number) == 0 and len(student_address) == 0 and len(\n student_email) == 0:\n print(\"\\n\")\n print(\"\\t\\t\\t 'PLEASE FILL SOME INFORMATION DON'T KEEP IT EMPTY\")\n print(\"\\n\")\n else:\n print(\"ENTER STUDENT ATTRIBUTE YOU WANT TO DELETE :\", end=\"\\n\")\n print(\"LIKE 'NAME, ROLL NUMBER, AGE, MOBILE NUMBER, ADDRESS, EMAIL, CLASS.\")\n print(\"ENTER HERE :\", end=\" \")\n ATTRIBUTE = input('').upper()\n\n if ATTRIBUTE == 'NAME':\n print(\"ENTER 'OLD FIRST NAME' :\", end=\" \")\n OLD_NAME = input('')\n try:\n LOC_NAME = first_name.index(OLD_NAME.upper()) # <- upper!!!\n print(\"ENTER 'NEW FIRST NAME' :\", end=\" \")\n NEW_NAME = input('') # <- upper!!!\n first_name[LOC_NAME] = NEW_NAME\n print(\"\\t 'FIRST NAME UPDATED SUCCESSFULLY.\")\n print(\"\\n\")\n except ValueError: # <-- What happens if doesn't exist??\n pass\n\n elif ATTRIBUTE == 'LAST NAME':\n print(\"ENTER 'OLD LAST NAME' :\", end=\" \")\n old_last_name = str(input(''))\n LOC_ROLL = last_name.index(old_last_name)\n\n print(\"ENTER 'NEW ROLL NUMBER' :\", end=\" \")\n NEW_NAME = int(input(''))\n last_name[LOC_ROLL] = NEW_NAME\n print(\"\\t 'ROLL NUMBER UPDATED SUCCESSFULLY.\")\n print(\"\\n\")\n\n elif ATTRIBUTE == 'AGE':\n print(\"ENTER 'OLD AGE' :\", end=\" \")\n OLD_AGE = int(input(''))\n LOC_ROLL = student_age.index(OLD_AGE)\n\n print(\"ENTER 'NEW AGE' :\", end=\" \")\n NEW_AGE = int(input(''))\n student_age[LOC_ROLL] = NEW_AGE\n print(\"\\t 'AGE UPDATED SUCCESSFULLY.\")\n print(\"\\n\")\n\n elif ATTRIBUTE == 'ADDRESS':\n print(\"ENTER 'OLD ADDRESS' :\", end=\" \")\n OLD_ADDRESS = input('')\n LOC_ADDRESS = student_address.index(OLD_ADDRESS)\n\n print(\"ENTER 'NEW ADDRESS' :\", end=\" \")\n NEW_ADDRESS = input('')\n student_address[LOC_ADDRESS] = NEW_ADDRESS\n print(\"\\t 'ADDRESS UPDATED SUCCESSFULLY.\")\n print(\"\\n\")\n\n elif ATTRIBUTE == 'EMAIL':\n print(\"ENTER 'OLD EMAIL' :\", end=\" \")\n OLD_EMAIL = input('')\n LOC_EMAIL = student_email.index(OLD_EMAIL)\n\n print(\"ENTER 'NEW EMAIL' :\", end=\" \")\n NEW_EMAIL = input('')\n student_email[LOC_EMAIL] = NEW_EMAIL\n print(\"\\t 'EMAIL - ID UPDATED SUCCESSFULLY.\")\n print(\"\\n\")\n\n elif ATTRIBUTE == 'ID':\n print(\"ENTER 'OLD STUDENT ID' :\", end=\" \")\n OLD_CLASS = input('')\n LOC_CLASS = student_id.index(OLD_CLASS)\n\n print(\"ENTER 'NEW STUDENT ID' :\", end=\" \")\n NEW_CLASS = input('')\n student_id[LOC_CLASS] = NEW_CLASS\n print(\"\\t 'CLASS UPDATED SUCCESSFULLY.\")\n print(\"\\n\")\n\n elif ATTRIBUTE == 'MOBILE NUMBER':\n print(\"ENTER 'OLD MOBILE NUMBER' :\", end=\" \")\n OLD_MOBILE = input('')\n\n print(\"ENTER 'NEW MOBILE NUMBER' :\", end=\" \")\n NEW_MOBILE = input('')\n MOBILE_NUMBER_LEN = len(OLD_MOBILE)\n M_N_LEN = len(NEW_MOBILE)\n\n if MOBILE_NUMBER_LEN < 10:\n print(end=\"\\n\")\n print(\"PLEASE ENTER TEN DIGIT MOBILE NUMBER.\")\n print(\"SYSTEM HAS STOP, PLEASE TRY AGAIN.\")\n sys.exit()\n elif M_N_LEN < 10:\n print(end=\"\\n\")\n print(\"\\t PLEASE ENTER VALID TEN DIGIT MOBILE NUMBER.\")\n print(\"\\t SYSTEM WORKING HAS STOP PLEASE TRY AGAIN.\")\n sys.exit()\n else:\n LOC_MOBILE = student_mobile_number.index(OLD_MOBILE)\n student_mobile_number[LOC_MOBILE] = NEW_MOBILE\n print(\"\\t 'MOBILE NUMBER UPDATED SUCCESSFULLY.\")\n print(\"\\n\")\n\n @staticmethod\n # THIS FUNCTION HELP TO UPDATE 'DATA' OF STUDENT.\n def display_student_information():\n print(\"DISPLAYING STUDENTS INFORMATION : \\n\")\n\n if len(first_name) == 0 and len(last_name) == 0 and len(student_age) == 0 and len(\n student_id) == 0 and len(student_mobile_number) == 0 and len(student_address) == 0 and len(\n student_email) == 0:\n print(\"\\n\")\n print(\"\\t\\t\\t 'OOPS ! NOTHING TO DISPLAY, BECAUSE NO DATA IS THERE.\")\n print(\"\\n\")\n else:\n print(\"STUDENT'S FIRST NAME : \", end=\"\\n\")\n\n for x in first_name:\n print(x)\n print()\n\n print(end=\"\\n\")\n\n print(\"STUDENT'S LAST NAME :\", end=\"\\n\")\n\n for y in last_name:\n print(y)\n print()\n\n print(end=\"\\n\")\n\n print(\"STUDENT'S AGE :\", end=\"\\n\")\n\n for z in student_age:\n print(z)\n print()\n\n print(end=\"\\n\")\n\n print(\"STUDENT'S MOBILE NUMBER :\", end=\"\\n\")\n\n for x in student_mobile_number:\n print(x)\n print()\n\n print(end=\"\\n\")\n\n print(\"STUDENT'S EMAIL :\", end=\"\\n\")\n\n for y in student_email:\n print(y)\n print()\n\n print(end=\"\\n\")\n\n print(\"STUDENT'S ID :\", end=\"\\n\")\n\n for z in student_id:\n print(z)\n print()\n\n print(end=\"\\n\")\n\n print(\"STUDENT'S ADDRESS :\", end=\"\\n\")\n\n for x in student_address:\n print(x)\n print()\n\n print(end=\"\\n\")\n\n\nSMS = student_management_system()\n\nif __name__ == '__main__':\n print(\"\\n\")\n\n print(\"' STUDENT RECORDS ' \\n\")\n run = True\n\n while run:\n print(\"PRESS FROM THE FOLLOWING OPTION : \\n\")\n\n print(\"PRESS 1 : TO ADD STUDENT INFORMATION.\")\n print(\"PRESS 2 : TO DELETE STUDENT INFORMATION.\")\n print(\"PRESS 3 : TO UPDATE STUDENT INFORMATION.\")\n print(\"PRESS 4 : TO DISPLAY STUDENT INFORMATION.\")\n print(\"PRESS 5 : TO EXIT SYSTEM.\")\n\n OPTION = int(input(\"ENTER YOUR OPTION : \"))\n print(\"\\n\")\n print(end=\"\\n\")\n\n if OPTION == 1:\n SMS.add_student_information()\n elif OPTION == 2:\n SMS.delete_student_information()\n elif OPTION == 3:\n SMS.update_student_information()\n elif OPTION == 4:\n SMS.display_student_information()\n elif OPTION == 5:\n print(\"THANK YOU ! VISIT AGAIN.\")\n run = False\n else:\n print(\"PLEASE CHOOSE CORRECT OPTION FROM THE FOLLOWING.\")\n print(\"\\n\")\n\nBut this code is so far to be acceptable for many different reasons:\n\nTake care about the lists. Use append always or never, not depending on something (phone number in your case). I have appended None phone if is not a valid phone number.\nNamings. See Python naming conventions\nAdd docstrings to classes and methods.\ninput. Add the messages here instead of printing them before. Moreover, input() is wrong, at least write input('')\nUse upper() always or never, but don't mix string with and without upper(), otherwise you never will find the existing string. I have changed this only with NAME.\nTry to use variables inside the class.\nUse try-except to catch exceptions: more info. I have added just one in your code (when updating NAME), but you should add in many more places. This is a good programming practice, and you could learn more about exceptions and possible tracebacks.\nThe most important thing: Understand how classes and Object-Oriented-Programming work. You are not using classes properly. More info about Object-Oriented-Programming\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074468662_python.txt
Q: creating list from a string and than change that new list Hy guys this is my first time here.I am a beginner and i wantend to check how can i from a given string (which is: string="5,93,14,2,33" ) make a list, after that to get the square of each number from the list and than to return that list (with a squared values) in to string? input should to be: string = "5,93,14,2,33" output: string = "25,8649,196,4,1089" i tried to make new list with .split() and than to do square of each element, but i understand that i didnt convert the string with int().I just cant put all that together so i hope that you guys can help.Thanks and sorry if this question was stupid, i just started learning A: Yes you started correctly. # First, let's split into a list: list_of_str = your_list.split(',') # '2,3,3,4,5' -> ['2','3','4','5'] # Then, with list comprehension, we transform each string into integer # (assuming there will only be integers) list_of_numbers = [int(number) for number in list_of_str] Now you have your list of integers! A: You can do it using the following approach: Split the string of numbers into an array of integers Square them Convert them back to a string Join them to a string Print the output # Input input = "5,93,14,2,33" # Get numbers as integers numbers = [int(x) for x in input.split(",")] # Stringify while squaring and join them into a string output = ",".join([str(x ** 2) for x in numbers]) # Print the output print(output)
creating list from a string and than change that new list
Hy guys this is my first time here.I am a beginner and i wantend to check how can i from a given string (which is: string="5,93,14,2,33" ) make a list, after that to get the square of each number from the list and than to return that list (with a squared values) in to string? input should to be: string = "5,93,14,2,33" output: string = "25,8649,196,4,1089" i tried to make new list with .split() and than to do square of each element, but i understand that i didnt convert the string with int().I just cant put all that together so i hope that you guys can help.Thanks and sorry if this question was stupid, i just started learning
[ "Yes you started correctly.\n# First, let's split into a list:\nlist_of_str = your_list.split(',') # '2,3,3,4,5' -> ['2','3','4','5']\n\n# Then, with list comprehension, we transform each string into integer \n# (assuming there will only be integers)\nlist_of_numbers = [int(number) for number in list_of_str]\n\nNow you have your list of integers!\n", "You can do it using the following approach:\n\nSplit the string of numbers into an array of integers\nSquare them\nConvert them back to a string\nJoin them to a string\nPrint the output\n\n# Input\ninput = \"5,93,14,2,33\"\n# Get numbers as integers\nnumbers = [int(x) for x in input.split(\",\")]\n# Stringify while squaring and join them into a string\noutput = \",\".join([str(x ** 2) for x in numbers])\n# Print the output\nprint(output)\n\n" ]
[ 1, 1 ]
[]
[]
[ "list", "python", "string" ]
stackoverflow_0074469207_list_python_string.txt
Q: pycharm not showing anything on my laptop enter image description here every time I open my pycharm it is no doing anything just this screen It is not even showing my files I tried reinstalling after deleting allthe files A: I'm new too but I believe all you have to do is create a new .py by right-clicking on project, then new, then new .py
pycharm not showing anything on my laptop
enter image description here every time I open my pycharm it is no doing anything just this screen It is not even showing my files I tried reinstalling after deleting allthe files
[ "I'm new too but I believe all you have to do is create a new .py by right-clicking on project, then new, then new .py\n" ]
[ 0 ]
[]
[]
[ "pycharm", "python" ]
stackoverflow_0074469176_pycharm_python.txt
Q: Issue with new isort extension installed as from VS-Code Update October 2022 (version 1.73) I'm using VS-Code version 1.73.1, with MS Python extension v2022.18.2, on Windows 10 Pro, Build 10.0.19045. After installing the October 2022 update of VS Code, when writing Python code I noticed nagging error diagnostics being issued by the isort extension about the import order of modules. Previously, I had never encountered such diagnostics. I traced this behaviour back to the VS Code release notes for the Update October 2022. These announce the migration of VS Code to a new stand-alone isort extension, instead of the isort support built into the Python extension, by automatically installing it alongside the Python extension. When opening a file in which the imports do not follow isort standards, the extension is intended to issue an error diagnostic and display a Code Action to fix the import order. Whilst the extension seems to work as intended, I found the issues described below: 1. Even after having executed the Code Action to fix the import order, a 'light-bulb' with the same error diagnostic and Code Action again pops up on moving the cursor to a new line of code. 2. The error diagnostic and Code Action 'light-bulb' are also displayed when moving the cursor to any new line of code, even when all lines of code in the file have been commented out; that is, effectively, there are no longer any import statements in the code, and therefore also nothing to be sorted. I'd appreciate comments on whether this is a recognised issue in VS Code, and if so, whether any workarounds are available. It defeats the purpose of having an 'error lightbulb' pop up on every line of code, just to find a code action recommending to fix the import order, even when this requires no fixing. I have opened this question on this forum as recommended on the GitHub 'Contributing to VS Code' page. A: Upgrade the isort extension version to latest(v2022.8.0).
Issue with new isort extension installed as from VS-Code Update October 2022 (version 1.73)
I'm using VS-Code version 1.73.1, with MS Python extension v2022.18.2, on Windows 10 Pro, Build 10.0.19045. After installing the October 2022 update of VS Code, when writing Python code I noticed nagging error diagnostics being issued by the isort extension about the import order of modules. Previously, I had never encountered such diagnostics. I traced this behaviour back to the VS Code release notes for the Update October 2022. These announce the migration of VS Code to a new stand-alone isort extension, instead of the isort support built into the Python extension, by automatically installing it alongside the Python extension. When opening a file in which the imports do not follow isort standards, the extension is intended to issue an error diagnostic and display a Code Action to fix the import order. Whilst the extension seems to work as intended, I found the issues described below: 1. Even after having executed the Code Action to fix the import order, a 'light-bulb' with the same error diagnostic and Code Action again pops up on moving the cursor to a new line of code. 2. The error diagnostic and Code Action 'light-bulb' are also displayed when moving the cursor to any new line of code, even when all lines of code in the file have been commented out; that is, effectively, there are no longer any import statements in the code, and therefore also nothing to be sorted. I'd appreciate comments on whether this is a recognised issue in VS Code, and if so, whether any workarounds are available. It defeats the purpose of having an 'error lightbulb' pop up on every line of code, just to find a code action recommending to fix the import order, even when this requires no fixing. I have opened this question on this forum as recommended on the GitHub 'Contributing to VS Code' page.
[ "Upgrade the isort extension version to latest(v2022.8.0).\n\n" ]
[ 3 ]
[]
[]
[ "python", "visual_studio_code" ]
stackoverflow_0074461394_python_visual_studio_code.txt
Q: Creating a List and maintaining integer value I am new to python a bit. I am trying to convert a dataframe to list after changing the datatype of a particular column to integer. The funny thing is when converted to list, the column still has float. There are three columns in the dataframe, first two is float and I want the last to be integer, but it still comes as float. If I change all to integer, then the list creates as integer. 0 1.53 3.13 0.0 1 0.58 2.83 0.0 2 0.28 2.69 0.0 3 1.14 2.14 0.0 4 1.46 3.39 0.0 ... ... ... ... 495 2.37 0.93 1.0 496 2.85 0.52 1.0 497 2.35 0.39 1.0 498 2.96 1.68 1.0 499 2.56 0.16 1.0 Above is the Dataframe. Below is the last column converted #convert last column to integer datatype data[6] = data[6].astype(dtype ='int64') display(data.dtypes) The below is converting the dataframe to list. #Turn DF to list data_to_List = data.values.tolist() data_to_List #below is what is shown now. [[1.53, 3.13, 0.0], [0.58, 2.83, 0.0], [0.28, 2.69, 0.0], [1.14, 2.14, 0.0], [3.54, 0.75, 1.0], [3.04, 0.15, 1.0], [2.49, 0.15, 1.0], [2.27, 0.39, 1.0], [3.65, 1.5, 1.0], I want the last column to be just 0 or 1 and not 0.0 or 1.0 A: Yes, you are correct pandas is converting int to float when you use data.values You can convert your float to int by using the below list comprehension: data_to_List = [[x[0],x[1],int(x[2])] for x in data.values.tolist()] print(data_to_List) [[1.53, 3.13, 0], [0.58, 2.83, 0], [0.28, 2.69, 0], [1.14, 2.14, 0], [1.46, 3.39, 0]]
Creating a List and maintaining integer value
I am new to python a bit. I am trying to convert a dataframe to list after changing the datatype of a particular column to integer. The funny thing is when converted to list, the column still has float. There are three columns in the dataframe, first two is float and I want the last to be integer, but it still comes as float. If I change all to integer, then the list creates as integer. 0 1.53 3.13 0.0 1 0.58 2.83 0.0 2 0.28 2.69 0.0 3 1.14 2.14 0.0 4 1.46 3.39 0.0 ... ... ... ... 495 2.37 0.93 1.0 496 2.85 0.52 1.0 497 2.35 0.39 1.0 498 2.96 1.68 1.0 499 2.56 0.16 1.0 Above is the Dataframe. Below is the last column converted #convert last column to integer datatype data[6] = data[6].astype(dtype ='int64') display(data.dtypes) The below is converting the dataframe to list. #Turn DF to list data_to_List = data.values.tolist() data_to_List #below is what is shown now. [[1.53, 3.13, 0.0], [0.58, 2.83, 0.0], [0.28, 2.69, 0.0], [1.14, 2.14, 0.0], [3.54, 0.75, 1.0], [3.04, 0.15, 1.0], [2.49, 0.15, 1.0], [2.27, 0.39, 1.0], [3.65, 1.5, 1.0], I want the last column to be just 0 or 1 and not 0.0 or 1.0
[ "Yes, you are correct pandas is converting int to float when you use data.values\nYou can convert your float to int by using the below list comprehension:\ndata_to_List = [[x[0],x[1],int(x[2])] for x in data.values.tolist()]\n\nprint(data_to_List)\n\n[[1.53, 3.13, 0],\n [0.58, 2.83, 0],\n [0.28, 2.69, 0],\n [1.14, 2.14, 0],\n [1.46, 3.39, 0]]\n\n" ]
[ 1 ]
[]
[]
[ "list", "pandas", "python" ]
stackoverflow_0074469122_list_pandas_python.txt
Q: How to summarize pytorch model Hello I am building a DQN model for reinforcement learning on cartpole and want to print my model summary like keras model.summary() function Here is my model class. class DQN(): ''' Deep Q Neural Network class. ''' def __init__(self, state_dim, action_dim, hidden_dim=64, lr=0.05): super(DQN, self).__init__() self.criterion = torch.nn.MSELoss() self.model = torch.nn.Sequential( torch.nn.Linear(state_dim, hidden_dim), torch.nn.ReLU(), torch.nn.Linear(hidden_dim, hidden_dim*2), torch.nn.ReLU(), torch.nn.Linear(hidden_dim*2, action_dim) ) self.optimizer = torch.optim.Adam(self.model.parameters(), lr) def update(self, state, y): """Update the weights of the network given a training sample. """ y_pred = self.model(torch.Tensor(state)) loss = self.criterion(y_pred, Variable(torch.Tensor(y))) self.optimizer.zero_grad() loss.backward() self.optimizer.step() def predict(self, state): """ Compute Q values for all actions using the DQL. """ with torch.no_grad(): return self.model(torch.Tensor(state)) Here is the model instance with the parameters passed. # Number of states = 4 n_state = env.observation_space.shape[0] # Number of actions = 2 n_action = env.action_space.n # Number of episodes episodes = 150 # Number of hidden nodes in the DQN n_hidden = 50 # Learning rate lr = 0.001 simple_dqn = DQN(n_state, n_action, n_hidden, lr) I tried using torchinfo summary but I get an AttributeError: 'DQN' object has no attribute 'named_parameters' from torchinfo import summary simple_dqn = DQN(n_state, n_action, n_hidden, lr) summary(simple_dqn, input_size=(4, 2, 50)) Any help is appreciated. A: Your DQN should be a subclass of nn.Module class DQN(nn.Module): def __init__(self, state_dim, action_dim, hidden_dim=64, lr=0.05): ...
How to summarize pytorch model
Hello I am building a DQN model for reinforcement learning on cartpole and want to print my model summary like keras model.summary() function Here is my model class. class DQN(): ''' Deep Q Neural Network class. ''' def __init__(self, state_dim, action_dim, hidden_dim=64, lr=0.05): super(DQN, self).__init__() self.criterion = torch.nn.MSELoss() self.model = torch.nn.Sequential( torch.nn.Linear(state_dim, hidden_dim), torch.nn.ReLU(), torch.nn.Linear(hidden_dim, hidden_dim*2), torch.nn.ReLU(), torch.nn.Linear(hidden_dim*2, action_dim) ) self.optimizer = torch.optim.Adam(self.model.parameters(), lr) def update(self, state, y): """Update the weights of the network given a training sample. """ y_pred = self.model(torch.Tensor(state)) loss = self.criterion(y_pred, Variable(torch.Tensor(y))) self.optimizer.zero_grad() loss.backward() self.optimizer.step() def predict(self, state): """ Compute Q values for all actions using the DQL. """ with torch.no_grad(): return self.model(torch.Tensor(state)) Here is the model instance with the parameters passed. # Number of states = 4 n_state = env.observation_space.shape[0] # Number of actions = 2 n_action = env.action_space.n # Number of episodes episodes = 150 # Number of hidden nodes in the DQN n_hidden = 50 # Learning rate lr = 0.001 simple_dqn = DQN(n_state, n_action, n_hidden, lr) I tried using torchinfo summary but I get an AttributeError: 'DQN' object has no attribute 'named_parameters' from torchinfo import summary simple_dqn = DQN(n_state, n_action, n_hidden, lr) summary(simple_dqn, input_size=(4, 2, 50)) Any help is appreciated.
[ "Your DQN should be a subclass of nn.Module\nclass DQN(nn.Module):\n def __init__(self, state_dim, action_dim, hidden_dim=64, lr=0.05):\n ...\n\n" ]
[ 0 ]
[]
[]
[ "python", "pytorch" ]
stackoverflow_0074464424_python_pytorch.txt