Dataset columns:
content: string (85 to 101k characters)
title: string (0 to 150 characters)
question: string (15 to 48k characters)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (35 to 137 characters)
How to calculate the first Monday of the month; python 3.3+
I need to run a monthly report on the first Monday of the month and calculate this day with Python. The code I have so far will go into a module in our ETL program and will determine if the date is actually the first day of the month. Ideally, what I need is if the Monday is the first Monday of the month, run the report (execute = 1) only on this day. Otherwise, do not run anything (execute = 0). What I have:

# Calculate first Monday of the month

# import module(s)
from datetime import datetime, date, timedelta

today = date.today() # - timedelta(days = 1)
datee = datetime.strptime(str(today), "%Y-%m-%d")
print(f'Today: {today}')

# function finds first Monday of the month given the date passed in "today"
def find_first_monday(year, month, day):
    d = datetime(year, int(month), int(day))
    offset = 0-d.weekday() #weekday = 0 means monday
    if offset < 0:
        offset+=7
    return d+timedelta(offset)

# converts datetime object to date
first_monday_of_month = find_first_monday(datee.year, datee.month, datee.day).date()

# prints the next Monday given the date that is passed as "today"
print(f'Today\'s date: {today}')
print(f'First Monday of the month date: {first_monday_of_month}')

# if first Monday is true, execute = 1, else execute = 0; 1 will execute the next module of code
if today == first_monday_of_month:
    execute = 1
    print(execute)
else:
    execute = 0
    print(execute)

It works assuming the date in "today" is not after the first Monday of the month. When "today" is after the first Monday of the month, it prints the next coming Monday. Our ETL scheduler allows us to run daily, weekly, or monthly. I'm thinking I'll have to run this daily, even though this is a monthly report, and the module with this code will determine if "today" is the first Monday of the month or not. If it's not the first Monday, it will not execute the next code modules (execute = 0). I'm not confident this will actually run if "today" is the first Monday of the month since it prints the next coming Monday for any date passed in "today." I can't seem to find the answer I need for making sure it only calculates the first Monday of the month and only runs the report on that day. Thanks in advance.
[ "One way to do this is to ignore the passed in day value, and just use 7 instead; then you can simply subtract the weekday offset:\ndef find_first_monday(year, month, day):\n d = datetime(year, int(month), 7)\n offset = -d.weekday() #weekday = 0 means monday\n return d + timedelta(offset)\n\n", "With numpy, calculating the first Monday of the month is much simpler that that.\nimport datetime\nimport numpy as np\n\nany_date_in_month = datetime.datetime(year, month, day)\n\nyear_month = any_date_in_month.strftime('%Y-%m')\nfirst_monday = np.busday_offset(year_month, 0, roll='forward', weekmask='Mon')\n\nso just check anything you need against first_monday and you are set.\n", "A slightly different method – The date.weekday() function gives your an index of the day of the week (where Monday is 0 and Sunday is 6). You can use this value to directly calculate the which date any day of the week will fall on. For Mondays, something like this...\ndef first_monday(year, month):\n day = (8 - datetime.date(year, month, 1).weekday()) % 7\n return datetime.date(year, month, day)\n\nOf course, you could make a generic version, that let you specify which day of the week you were after, like this:\ndef first_dow(year, month, dow):\n day = ((8 + dow) - datetime.date(year, month, 1).weekday()) % 7\n return datetime.date(year, month, day)\n\nIt accepts the same indexes as the date.weekday() function returns (Monday is 0, Sunday is 6). For example, to find the first Wednesday (2) of July 2022...\n>>> first_dow(2022, 7, 2)\ndatetime.date(2022, 7, 6)\n\n", "here is a function that will find the first occurrence of a day in a given month.\ndef find_first_week_day(year: int, month: int, week_day: int) -> date:\n \"\"\"\n Return the first weekday of the given month.\n\n :param year: Year\n :param month: Month\n :param week_day: Weekday to find [0-6]\n\n :return: Date of the first weekday of the given month\n \"\"\"\n first_day = date(year, month, 1)\n if first_day.weekday() == week_day:\n return first_day\n if first_day.weekday() < week_day:\n return first_day + timedelta(week_day - first_day.weekday())\n return first_day + timedelta(7 - first_day.weekday() + week_day)\n\n" ]
[ 9, 1, 0, 0 ]
[]
[]
[ "date", "function", "python", "python_3.x", "python_datetime" ]
stackoverflow_0067378357_date_function_python_python_3.x_python_datetime.txt
I need to parse an XML file using python, but I cannot import any library that requires pip
The situation is I need the book title & number value under Score and place them on a 2d list. My current code, can retrieve the book title and score place them on a list, but the problem is there's some sections in the XML file where the score is not present, and I need to be able to leave an indicator (ex. N/A) on the list to indicate that value is empty for that particular book title. Please Note: Asked this question previously, but the answer included a library function using pip, and was too narrow in scope. It included an answer that assumed the problem only appears once in the xml file, as it does here in the sample xml file. This is a sample, simplified version of the xml file. There are in 100+ book titles and scores in the full xml file. Some contain scores, some do not. Thus no code can use, [1] as an index to get past this problem. This question is being posted again to avoid that very problem. <bookstore> <book>[A-23] Everyday Italian</book> <author>Giada De Laurentiis</author> <year>2005</year> <price>30.00</price> <field></field> <key id="6408">[A-23]Everyday Italian</key> <brief>Everyday Italian</brief> <success></success> <province> id="256" key=".com.place.fieldtypes:float"> <name>Post</name> <numbers> <number></number> </numbers> </province> <province> id="490" key=".com.ave.fieldtypes:float"> <name>Score</name> <numbers> <number>4.0</number> </numbers> </province> <province> id="531" key=".com.spot.fieldtypes:float"> <name>Doc</name> <numbers> <number></number> </numbers> </province> </bookstore> <bookstore> <book>[A-42] Pottery</book> <author>Leo Di Plos</author> <year>2012</year> <price>25.00</price> <field></field> <key id="4502">[A-42] Pottery</key> <brief>Pottery</brief> <success></success> <province> id="627" key=".com.tri.fieldtypes:float"> <name>Post</name> <numbers> <number></number> </numbers> </province> <province> id="124" key=".com.doct.fieldtypes:float"> <name>Doc</name> <numbers> <number></number> </numbers> </province> </bookstore> <bookstore> <book>[A-12] Skipping the Line</book> <author>Gloria Gasol</author> <year>1999</year> <price>22.00</price> <field></field> <key id="1468">[A-23]Skipping the Line</key> <brief>Skipping the Line</brief> <success></success> <province> id="754" key=".com.cit.fieldtypes:float"> <name>Post</name> <numbers> <number></number> </numbers> </province> <province> id="211" key=".com.soct.fieldtypes:float"> <name>Score</name> <numbers> <number>12.0</number> </numbers> </province> <province> id="458" key=".com.lot.fieldtypes:float"> <name>Doc</name> <numbers> <number></number> </numbers> </province> </bookstore> ........................ and so on 100+ more times The is the code: import xml.etree.ElementTree as ET import re tree = ET.parse('book.xml') root = tree.getroot() book = [] for book in root.iter('book'): item1 = book.text book.append(item1) score = [] for province in root.iter('province'): for child in province: for grandchild in child: if re.match('^[+-]?\d*?\.\d+$', grandchild.text) != None: item2 = float(grandchild.text) score.append(item2) print(book, score) The expected output is: ([A-23] Everyday Italian, 4.0), ([A-42] Pottery, N/A), ([A-12] Skipping the Line, 12.0), ..... etc up to 100+ items on this list Actual output is: ([A-23] Everyday Italian, 4.0), ([A-42] Pottery, 12.0), ([A-12] Skipping the Line)
[ "Try:\nimport xml.etree.ElementTree as ET\n\ntree = ET.parse(\"your_xml_file.xml\")\nroot = tree.getroot()\n\nout = []\nfor bookstore in root.iter(\"bookstore\"):\n name = bookstore.find(\"book\").text\n score = bookstore.find('.//*[name=\"Score\"]')\n if score:\n score = score.find(\".//number\").text\n out.append((name, score or \"N/A\"))\n\nprint(out)\n\nPrints:\n[\n (\"[A-23] Everyday Italian\", \"4.0\"),\n (\"[A-42] Pottery\", \"N/A\"),\n (\"[A-12] Skipping the Line\", \"12.0\"),\n]\n\n" ]
[ 2 ]
[]
[]
[ "list", "parsing", "python", "xml" ]
stackoverflow_0074480263_list_parsing_python_xml.txt
How to input 2d list in function?
I want function maxmintuple(m), that takes m, a 2D list returns a tuple with the min value and max value in the corresponding brackets. eg:

maxmintuple ([[3,5],[6,8]])
(3,8)

This is how I call it: maxmintuple([1,5],[2,8]) and it returns this :

Traceback (most recent call last):
  File "<pyshell#17>", line 1, in <module>
    maxmintuple([1,5],[2,8])
TypeError: maxmintuple() takes 1 positional argument but 2 were given

Here's what I have, but it keeps saying maxmintuple() takes 1 positional argument but 2 were given
Here's what I did:

def maxmintuple(m):
    max1 = m[O][O]
    min1 = m[O][O]
    for zero in m:
        for one in zero:
            if one > max1:
                max1 = one
            if one < min1:
                min1 = one
    return (min1,max1)
[ "I think you have to change your function to:\ndef maxmintuple(m):\n\n min1 = min(m[0])\n max1 = max(m[1])\n\n return (min1,max1)\n\nLet's apply it with your 2D list example:\nmaxmintuple([[3,5],[6,8]])\n\nOutput\n(3, 8)\n\n", "You are calling the function wrong. you need to call like below\nmaxmintuple([[1,5],[2,8]])\n" ]
[ 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0074480742_python.txt
Python Plotly display other information on Hover
Here is the code that I have tried: # import pandas as pd import numpy as np import plotly.graph_objects as go from plotly.subplots import make_subplots df = pd.read_csv("resultant_data.txt", index_col = 0, sep = ",") display=df[["Velocity", "WinLoss"]] pos = lambda col : col[col > 0].sum() neg = lambda col : col[col < 0].sum() Related_Display_Info = df.groupby("RacerCount").agg(Counts=("Velocity","count"), WinLoss=("WinLoss","sum"), Positives=("WinLoss", pos), Negatives=("WinLoss", neg), ) # Create figure with secondary y-axis fig = make_subplots(specs=[[{"secondary_y": True}]]) # Add traces fig.add_trace( go.Scatter(x=display.index, y=display["Velocity"], name="Velocity", mode="markers"), secondary_y=False ) fig.add_trace( go.Scatter(x=Related_Display_Info.index, y=Related_Display_Info["WinLoss"], name="Win/Loss", mode="markers", marker=dict( color=( (Related_Display_Info["WinLoss"] < 0) ).astype('int'), colorscale=[[0, 'green'], [1, 'red']] ) ), secondary_y=True, ) # Add figure title fig.update_layout( title_text="Race Analysis" ) # Set x-axis title fig.update_xaxes(title_text="<b>Racer Counts</b>") # Set y-axes titles fig.update_yaxes(title_text="<b>Velocity</b>", secondary_y=False) fig.update_yaxes(title_text="<b>Win/Loss/b>", secondary_y=True) fig.update_layout(hovermode="x unified") fig.show() The output is: But I was willing to display the following information when I hover on the point: RaceCount = From Display dataframe value Number of the race corresponding to the dot I hover on. Velocity = From Display Dataframe value Velocity at that point Counts = From Related_Display_Info Column WinLoss = From Related_Display_Info Column Positives = From Related_Display_Info Column Negatives = From Related_Display_Info Column Please can anyone tell me what to do to get this information on my chart? I have checked this but was not helpful since I got many errors: Python/Plotly: How to customize hover-template on with what information to show? Data: RacerCount,Velocity,WinLoss 111,0.36,1 141,0.31,1 156,0.3,1 141,0.23,1 147,0.23,1 156,0.22,1 165,0.2,1 174,0.18,1 177,0.18,1 183,0.18,1 114,0.32,1 117,0.3,1 120,0.29,1 123,0.29,1 126,0.28,1 129,0.27,1 120,0.32,1 144,0.3,1 147,0.3,1 159,0.27,1 165,0.26,1 168,0.25,1 156,0.29,1 165,0.26,1 168,0.26,1 165,0.28,1 213,0.17,1 243,0.15,1 249,0.14,1 228,0.54,1 177,0.67,1 180,0.66,1 183,0.65,1 192,0.66,1 195,0.62,1 198,0.6,1 180,0.66,1 222,0.56,1 114,0.41,1 81,0.82,1 102,0.56,1 111,0.55,1 90,1.02,1 93,1.0,1 90,1.18,1 90,1.18,1 93,1.1,1 96,1.07,1 99,1.04,1 102,0.99,1 105,0.94,1 108,0.92,1 111,0.9,1 162,0.66,1 159,0.63,1 162,0.65,-1 162,0.66,-1 168,0.64,-1 159,0.68,-1 162,0.67,-1 174,0.62,-1 168,0.65,-1 171,0.64,-1 198,0.55,-1 300,0.47,-1 201,0.56,-1 174,0.63,-1 180,0.61,-1 171,0.64,-1 174,0.62,-1 303,0.47,-1 312,0.48,-1 258,0.51,-1 261,0.51,-1 264,0.5,-1 279,0.47,-1 288,0.48,-1 294,0.47,-1 258,0.52,-1 261,0.51,-1 267,0.5,-1 222,0.53,-1 171,0.64,-1 177,0.63,-1 177,0.63,-1
[ "Essentially, this code ungroups the data frame before plotting to create the hovertemplate you're looking for.\nAs stated in the comments, the data has to have the same number of rows to be shown in the hovertemplate. At the end of my answer, I added the code all in one chunk.\n\nSince you have hovermode as x unified, you probably only want one of these traces to have hover content.\n\nI slightly modified the creation of Related_Display_Info. Instead of WinLoss, which is already in the parent data frame, I modified it to WinLoss_sum, so there wouldn't be a naming conflict when I ungrouped.\nRelated_Display_Info = df.groupby(\"RacerCount\").agg(\n Counts=(\"Velocity\",\"count\"), WinLoss_sum=(\"WinLoss\",\"sum\"),\n Positives=(\"WinLoss\", pos), Negatives=(\"WinLoss\", neg))\n\nNow it's time to ungroup the data you grouped. I created dui (stands for display info ungrouped).\ndui = pd.merge(df, Related_Display_Info, how = \"outer\", on=\"RacerCount\", \n suffixes=(False, False))\n\nI created the hovertemplate for both traces. I passed the entire ungrouped data frame to customdata. It looks like the only column that isn't in the template is the original WinLoss.\n# create hover template for all traces\nht=\"<br>\".join([\"<br>RacerCount: %{customdata[0]}\",\n \"Velocity: %{customdata[1]:.2f}\",\n \"Counts: %{customdata[3]}\",\n \"Winloss: %{customdata[4]}\",\n \"Positives: %{customdata[5]}\",\n \"Negatives: %{customdata[6]}<br>\"])\n\nThe creation of fig is unchanged. However, the traces are both based on dui. Additionally, the index isn't RacerCount, so I used the literal field instead.\n# Create figure with secondary y-axis\nfig = make_subplots(specs=[[{\"secondary_y\": True}]])\n\n# Add traces\nfig.add_trace(go.Scatter(x=dui[\"RacerCount\"], y=dui[\"Velocity\"], \n name=\"Velocity\", mode=\"markers\",\n customdata=dui, hovertemplate=ht), \n secondary_y=False)\n\nfig.add_trace(\n go.Scatter(x = dui[\"RacerCount\"], y=dui[\"WinLoss_sum\"], customdata=dui,\n name=\"Win/Loss\", mode=\"markers\", \n marker=dict(color=((dui[\"WinLoss_sum\"] < 0)).astype('int'),\n colorscale=[[0, 'green'], [1, 'red']]), \n hovertemplate=ht),\n secondary_y=True)\n\n\n\nAll the code altogether (for easier copy + paste)\nimport pandas as pd\nimport numpy as np\nimport plotly.graph_objects as go\nfrom plotly.subplots import make_subplots\n\ndf = pd.read_clipboard(sep = ',')\n\ndisplay=df[[\"Velocity\", \"WinLoss\"]]\n\npos = lambda col : col[col > 0].sum()\nneg = lambda col : col[col < 0].sum()\n\nRelated_Display_Info = df.groupby(\"RacerCount\").agg(\n Counts=(\"Velocity\",\"count\"), WinLoss_sum=(\"WinLoss\",\"sum\"),\n Positives=(\"WinLoss\", pos), Negatives=(\"WinLoss\", neg))\n\n# ungroup the data for the hovertemplate\ndui = pd.merge(df, Related_Display_Info, how = \"outer\", on=\"RacerCount\", \n suffixes=(False, False))\n\n# create hover template for all traces\nht=\"<br>\".join([\"<br>RacerCount: %{customdata[0]}\",\n \"Velocity: %{customdata[1]:.2f}\",\n \"Counts: %{customdata[3]}\",\n \"Winloss: %{customdata[4]}\",\n \"Positives: %{customdata[5]}\",\n \"Negatives: %{customdata[6]}<br>\"])\n\n# Create figure with secondary y-axis\nfig = make_subplots(specs=[[{\"secondary_y\": True}]])\n\n# Add traces\nfig.add_trace(go.Scatter(x=dui[\"RacerCount\"], y=dui[\"Velocity\"], \n name=\"Velocity\", mode=\"markers\",\n customdata=dui, hovertemplate=ht), \n secondary_y=False)\n\nfig.add_trace(\n go.Scatter(x = dui[\"RacerCount\"], y=dui[\"WinLoss_sum\"], customdata=dui,\n name=\"Win/Loss\", mode=\"markers\", \n 
marker=dict(color=((dui[\"WinLoss_sum\"] < 0)).astype('int'),\n colorscale=[[0, 'green'], [1, 'red']]), \n hovertemplate=ht),\n secondary_y=True)\n\n# Add figure title\nfig.update_layout(\n title_text=\"Race Analysis\"\n)\n\n# Set x-axis title\nfig.update_xaxes(title_text=\"<b>Racer Counts</b>\")\n\n# Set y-axes titles\nfig.update_yaxes(title_text=\"<b>Velocity</b>\", secondary_y=False)\nfig.update_yaxes(title_text=\"<b>Win/Loss/b>\", secondary_y=True)\nfig.update_layout(hovermode=\"x unified\")\nfig.show()\n\n" ]
[ 2 ]
[]
[]
[ "plotly", "python", "python_3.x" ]
stackoverflow_0074476392_plotly_python_python_3.x.txt
batch execute_write to neo4j with Python SDK
I aim to iterate through a dataframe to extract values, then create multiple Node in a batch manner to neo4j via the Python SDK. However, execute_write seems to allow on a single statement per query

{code: Neo.ClientError.Statement.SyntaxError}
{message: Expected exactly one statement per query but got: 3542

(there are 3542 rows in my df)

My attempt:

def create_Person(tx, df):
    query_string = """"""
    for i, row in df.iterrows():
        query_string = query_string + f"""
        MERGE (l:Person {{id: "{row['col']}"}})
        SET l.name = "{row['col1']}",
            l.Person_Type = "{row['col2']}";
        """
    return tx.run(query_string)

with neo4j_driver.session() as session:
    # Run the unit of work within a Read Transaction
    result = session.execute_write(create_Person, df)
session.close()
[ "not sure if it is the best but this works for me:\nwith neo4j_driver.session() as session:\n# Run the unit of work within a Read Transaction\nwith session.begin_transaction() as tx:\n for i, row in df.iterrows():\n tx.run(f\"\"\"\n MERGE (l:Person {{id: \"{row['col']}\"}})\n SET l.name = \"{row['col1']}\",\n l.Person_Type = \"{row['col2']}\";\n \"\"\")\n tx.commit()\n\nsession.close()\n\n" ]
[ 0 ]
[]
[]
[ "cypher", "neo4j", "python" ]
stackoverflow_0074479685_cypher_neo4j_python.txt
Python, comment that wraps entire code
been trying to wrap entire code in a comment, how do i do that? i tried #, """, with no success, and as a question, is this even possible? i guess im stacking comments on top of other comments but im sure there is a way, im wrapping this code because i want to keep it in one file along with other projects in the same file but i dont want to activate ALL of the code. here's the code i want to wrap as a comment: """Artithmetic expressions""" addition = 1 + 1; subtraction = 2-1; miltiplication = 2*2; division = 5/3; """5/3 = 1""" """Variables and Assignment""" a, b = addition, subtraction; """a = addition, b = subtraction""" """ prints 2 1""" print a, b """Strings, indexing strings""" string1 = "hello world hell" string2 = string1[2] """prints 1""" print string2 """string extraction""" string3 = string1[0:5] """ hello """ print string3 """Finding""" stringfind1 = string1.find("hell", 4) """ prints 12 """ print stringfind1 """Python 2""" """If statement""" if (3 < 10): print "true" else: print "false" """ Logical Operators""" if (3 and 4 < 10): print "true" """may use or""" """Loops, ex prints 10 iterations""" count = 0 while (count < 10): print 'The count is: ', count count = count + 1 print "Good bye!" """converting between numbers and strings: str(one) converts int to string""" """use 'ord' for string --> int, lastly chr = """ one = 1 convert = str(one) if convert == 1: print "true" else: print "false" '''returns one character string from number input''' var1 = chr(65) print var1 """splitting strings: () prints all words in a string""" """ ' ', 1 prints all except the last word?""" string10 = "fucking hell i hate your life" var2 = string10.split() print var2 print string10.split(' ', 1) """Looping through strings with 'for' loop, ex prints all chars in 'string10' in new lines""" for fuckoff in string10: print 'Current letter :', fuckoff
[ "You can't: Python comments are single line. And docstrings are not comments. However, during development if you need to \"switch off\" a block of code you can put it into an if False: block.\nEg:\nif False:\n addition = 1 + 1;\n subtraction = 2-1;\n miltiplication = 2*2;\n division = 5/3;\n\n", "After seven years of being not right answered here the right answer to the question above:\nYou can 'switch off' a block of code using appropriate triple quotes.\nIt will not always be possible (if the code uses both kinds of triple quoted strings), but in the case of code in the question it is possible to use ''' single quoted triple quoted string to achieve what you are after.\nBelow how it looks like:\n'''\n\"\"\"Artithmetic expressions\"\"\"\n\naddition = 1 + 1;\nsubtraction = 2-1;\nmiltiplication = 2*2;\ndivision = 5/3; \"\"\"5/3 = 1\"\"\"\n\n\"\"\"Variables and Assignment\"\"\"\n\na, b = addition, subtraction; \"\"\"a = addition, b = subtraction\"\"\"\n\n\"\"\" prints 2 1\"\"\"\nprint a, b \n\n\"\"\"Strings, indexing strings\"\"\"\n\nstring1 = \"hello world hell\"\n\nstring2 = string1[2] \n\"\"\"prints 1\"\"\"\nprint string2 \n\n\"\"\"string extraction\"\"\"\n\nstring3 = string1[0:5]\n\"\"\" hello \"\"\"\nprint string3 \n\n\"\"\"Finding\"\"\"\n\nstringfind1 = string1.find(\"hell\", 4)\n\"\"\" prints 12 \"\"\"\nprint stringfind1 \n\n\n\n\n\n\n\"\"\"Python 2\"\"\"\n\"\"\"If statement\"\"\"\nif (3 < 10):\nprint \"true\"\n\nelse:\nprint \"false\"\n\n\n\"\"\" Logical Operators\"\"\"\n\nif (3 and 4 < 10): \nprint \"true\"\n\"\"\"may use or\"\"\"\n\n\n\"\"\"Loops, ex prints 10 iterations\"\"\"\ncount = 0\nwhile (count < 10):\nprint 'The count is: ', count\ncount = count + 1\n\nprint \"Good bye!\"\n\n\"\"\"converting between numbers and strings: str(one) converts int to string\"\"\"\n\"\"\"use 'ord' for string --> int, lastly chr = \"\"\"\none = 1\nconvert = str(one) \nif convert == 1:\nprint \"true\"\n\nelse:\nprint \"false\"\n\n'''returns one character string from number input'''\nvar1 = chr(65)\nprint var1\n\n\n\"\"\"splitting strings: () prints all words in a string\"\"\"\n\"\"\" ' ', 1 prints all except the last word?\"\"\"\nstring10 = \"fucking hell i hate your life\"\nvar2 = string10.split()\n\nprint var2\nprint string10.split(' ', 1)\n\n\n\"\"\"Looping through strings with 'for' loop, ex prints all chars in 'string10' in new lines\"\"\"\n\nfor fuckoff in string10:\nprint 'Current letter :', fuckoff\n'''\n\nYou can see the evidence that it works as expected from the kind of highlighting of the above piece of code in its code textbox.\n" ]
[ 1, 0 ]
[]
[]
[ "comments", "python" ]
stackoverflow_0032186685_comments_python.txt
How to append and pair coordinate values in nested for loop
I am finding the distance between two pairs of random points, I am then duplicating the points in a 3 x 3 pattern so that the same points are seen after a certain distance, which is done with a nested for loop. I am trying to find the distance between the newly created points from the a for loop. I tried using append within the loop to store the points, which gives me the distances, but it is only giving me 24 distances when there should be a lot more between 9 copies of 4 points. Am I not implementing append correcting to account for additional distances?

Code

import numpy as np
import matplotlib.pyplot as plt
import random
import math

dist = []

#scale of the plot
scalevalue = 10

x = [random.uniform(1, 10) for n in range(4)]
y = [random.uniform(1, 10) for n in range(4)]

tiles = np.linspace(-scalevalue, scalevalue, 3)

for i in tiles:
    for j in tiles:
        bg_tile = plt.scatter(x + i,y + j, c="black", s=3)
        dist.append(i)
        dist.append(j)
        pairs = list(zip(x + i,y + j))

plt.show()

def distance(x, y):
    return math.sqrt((x[0]-x[1])**2 + (y[0]-y[1])**2)

for i in range(len(pairs)):
    for j in range(i+1,len(pairs)):
        dist.append(distance(pairs[i],pairs[j]))

print(dist)
[ "Run your code:\nx (and y) is a list of numbers (4):\nIn [553]: x\nOut[553]: [8.699962201099193, 3.1643082386096975, 5.245385542599207, 3.0412506367299033]\n\ntiles is an array:\nIn [554]: tiles\nOut[554]: array([-10., 0., 10.])\n\nAnd the first iteration - without the plot, and doing one (i,j) append, rather than the sequential. This better separates the i values from the j ones:\nIn [558]: dist=[]\n ...: for i in tiles:\n ...: for j in tiles:\n ...: dist.append((i,j))\n ...: pairs = list(zip(x + i,y + j))\n\nIn [559]: dist\nOut[559]: \n[(-10.0, -10.0), # that just reflects how you iterate on tiles\n (-10.0, 0.0),\n (-10.0, 10.0),\n (0.0, -10.0),\n (0.0, 0.0),\n (0.0, 10.0),\n (10.0, -10.0),\n (10.0, 0.0),\n (10.0, 10.0)]\n\nThat flat list you show in the comment confuses those values. Why are you doing this?\npairs ends up with the last i,j values; earlier iterations are thrown away.\nIn [560]: pairs\nOut[560]: \n[(18.699962201099193, 18.63063210113664),\n (13.164308238609697, 12.329695190243902),\n (15.245385542599207, 16.685778921185936),\n (13.041250636729902, 15.89730196643608)]\n\nSo the first column is:\nIn [561]: i\nOut[561]: 10.0\nIn [562]: x+i\nOut[562]: array([18.6999622 , 13.16430824, 15.24538554, 13.04125064])\n\nx is a list, but i is np.float64, so the addition is array addition (list 'addition' is join).\npairs\nWith that last pairs:\nIn [567]: alist = []\n ...: for i in range(len(pairs)):\n ...: for j in range(i+1,len(pairs)):\n ...: alist.append(distance(pairs[i],pairs[j]))\n ...: \n\nIn [568]: alist\nOut[568]: \n[0.8374876734992962,\n 1.442060937629651,\n 2.8568926932380996,\n 1.664725810930718,\n 2.9755013255616056,\n 3.1987125977481807]\n\nWhat the iteration is doing is get the 6 combinations of these 4 pairs\nIn [574]: distance(pairs[0],pairs[1])\nOut[574]: 0.8374876734992962\n\nThose 6 values (different in my case because of different random numbers) have nothing to do with the tile values that you previously accumulated in dist.\nIf I make a 2d array from pairs:\nIn [575]: arr = np.array(pairs); arr\nOut[575]: \narray([[18.6999622 , 18.6306321 ],\n [13.16430824, 12.32969519],\n [15.24538554, 16.68577892],\n [13.04125064, 15.89730197]])\n\nI can replicate the distance with:\nIn [576]: (arr[:,1]-arr[:,0])**2\nOut[576]: array([4.80666276e-03, 6.96578941e-01, 2.07473309e+00, 8.15702920e+00])\n\nIn [577]: np.sqrt(np.sum(_[:2]))\nOut[577]: 0.8374876734992962\n\nI don't know what's the significance of this. pairs is just the x,y values with an added 10:\nIn [579]: np.column_stack((x,y))+10\nOut[579]: \narray([[18.6999622 , 18.6306321 ],\n [13.16430824, 12.32969519],\n [15.24538554, 16.68577892],\n [13.04125064, 15.89730197]])\n\n" ]
[ 0 ]
[]
[]
[ "append", "distance", "numpy", "python" ]
stackoverflow_0074480125_append_distance_numpy_python.txt
Python pandas: Write variable to excel cell in existing sheet
I'm using python pandas to read a large dataset from excel. I then do some calculations and want to write a variable to a single cell in a existing excel file in an existing sheet. So far I have only seen documentation to write a dataframe with pandas. Is this the way to go? If so, I then will make a dataframe only containing that one variable. What is the best and easiest way to go forward here?
[ "One option is doing something like this using pandas dataframe and passing only the variable to the dataframe:\nwith pd.ExcelWriter(\"tom_test.xlsx\", mode=\"a\", engine=\"openpyxl\", if_sheet_exists='overlay') as writer:\n test = pd.DataFrame([fkbareal_tot])\n test.to_excel(writer, sheet_name=\"sheet 1\", startrow=1, startcol=1, index=False, header=False)\n\n" ]
[ 0 ]
[]
[]
[ "excel", "pandas", "python" ]
stackoverflow_0074480554_excel_pandas_python.txt
Q: Python protofub: how to pass response message from one grpc call to another Im new to grpc/protobuf so please excuse any terminology errors in my question. I need to take a response from one gRPC request and feed it into the next request. I cant figure out how to populate the "spec" line. Proto file1: message UpdateClusterRequest { string service_name = 3; ClusterTemplate spec = 4; string config_revision = 5; string deploy_strategy = 6; } Proto file2: message ClusterTemplate { message AppSettings { string version = 1; repeated InstanceType instance_layout = 2; repeated ClientIDTemplate client_ids = 3; } AppSettings app = 1; } So in my code, the template_response captures the output from the get_template_revisions gRPC API call. I then need to pass the contents to request.spec to the next gRPC API request, which is what I need help with. template_response=get_template_revisions(client_stub,payload_project_id,metadata_okta_token_and_env)grpc_logger.debug(template_response.revisions[0].template.app) request=app_pb2.UpdateClusterRequest() request.spec = ??? response=client_stub.get_grpc_app_stub(grpc_stub_method).UpdateCluster(request=request,metadata=metadata_okta_token_and_env) This is a heavily nested message mapping and I have tried many permutations without success below and not limited to: request.spec.extend([template_response.revisions[0].template.app]) request.spec = template_response.revisions[0].template request.spec.MergeFromString(template_response.revisions[0].template.app) I've read all the python protobuf documentation and I just cant get it. Help is aprreciated... A: It's unclear from your question because the (message) type of template_response isn't explicit but hinted (template_response.revisions[0].template.app). So...if the Proto were: foo.proto: syntax = "proto3"; message UpdateClusterRequest { string service_name = 3; ClusterTemplate spec = 4; string config_revision = 5; string deploy_strategy = 6; } message ClusterTemplate { message AppSettings { string version = 1; // repeated InstanceType instance_layout = 2; // repeated ClientIDTemplate client_ids = 3; } AppSettings app = 1; } // Assume TemplateResponse.Revision's template is a ClusterTemplate message TemplateResponse { message Revision { ClusterTemplate template = 1; } repeated Revision revisions = 1; } NOTE I've commented out InstanceType and ClientIDTemplate because they're also undefined but not necessary for the explanation. And: python3 \ -m grpc_tools.protoc \ --proto_path=${PWD} \ --python_out=${PWD} \ ${PWD}/foo.proto Then: from google.protobuf.json_format import ParseDict import foo_pb2 d = { "revisions":[ { "template": { "app": { "version": "1", } } }, { "template": { "app": { "version": "2", } } } ] } template_response = foo_pb2.TemplateResponse() # Create a TemplateResponse from the dictionary ParseDict(d,template_response) # Its type is <class 'foo_pb2.ClusterTemplate'> print(type(template_response.revisions[0].template)) # Create `UpdateClusterResponse` update_cluster_request = foo_pb2.UpdateClusterRequest() # Scalar assignments update_cluster_request.service_name = "xxx" update_cluster_request.config_revision = "xxx" update_cluster_request.deploy_strategy = "xxx" # Uses `google.protobuf.message.CopyFrom` # Can't assign messages update_cluster_request.spec.CopyFrom(template_response.revisions[0].template) print(update_cluster_request) Python is a little gnarly around protocol buffers. 
In other language implementations, you'd be able to assign the message more "idiomatically" but, in Python, it's not possible to assign messages (among other quirks). So, assuming that template_response.revisions[*].template is exactly the same type as UpdateClusterRequest's spec type, then you can use CopyFrom to achieve this.
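To tie that back to the exact variable names used in the question, here is a minimal sketch (assuming template_response.revisions[0].template really is the same ClusterTemplate message type as UpdateClusterRequest.spec, and reusing the stub call from the question as-is):
request = app_pb2.UpdateClusterRequest()
# Submessage fields cannot be assigned directly in Python; copy into them instead.
request.spec.CopyFrom(template_response.revisions[0].template)
response = client_stub.get_grpc_app_stub(grpc_stub_method).UpdateCluster(
    request=request,
    metadata=metadata_okta_token_and_env,
)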
Python protobuf: how to pass response message from one grpc call to another
Im new to grpc/protobuf so please excuse any terminology errors in my question. I need to take a response from one gRPC request and feed it into the next request. I cant figure out how to populate the "spec" line. Proto file1: message UpdateClusterRequest { string service_name = 3; ClusterTemplate spec = 4; string config_revision = 5; string deploy_strategy = 6; } Proto file2: message ClusterTemplate { message AppSettings { string version = 1; repeated InstanceType instance_layout = 2; repeated ClientIDTemplate client_ids = 3; } AppSettings app = 1; } So in my code, the template_response captures the output from the get_template_revisions gRPC API call. I then need to pass the contents to request.spec to the next gRPC API request, which is what I need help with. template_response=get_template_revisions(client_stub,payload_project_id,metadata_okta_token_and_env)grpc_logger.debug(template_response.revisions[0].template.app) request=app_pb2.UpdateClusterRequest() request.spec = ??? response=client_stub.get_grpc_app_stub(grpc_stub_method).UpdateCluster(request=request,metadata=metadata_okta_token_and_env) This is a heavily nested message mapping and I have tried many permutations without success below and not limited to: request.spec.extend([template_response.revisions[0].template.app]) request.spec = template_response.revisions[0].template request.spec.MergeFromString(template_response.revisions[0].template.app) I've read all the python protobuf documentation and I just cant get it. Help is aprreciated...
[ "It's unclear from your question because the (message) type of template_response isn't explicit but hinted (template_response.revisions[0].template.app).\nSo...if the Proto were:\nfoo.proto:\nsyntax = \"proto3\";\n\n\nmessage UpdateClusterRequest {\n string service_name = 3;\n\n ClusterTemplate spec = 4;\n string config_revision = 5;\n string deploy_strategy = 6;\n\n}\n\nmessage ClusterTemplate {\n message AppSettings {\n string version = 1;\n // repeated InstanceType instance_layout = 2;\n // repeated ClientIDTemplate client_ids = 3;\n }\n\n AppSettings app = 1;\n}\n\n// Assume TemplateResponse.Revision's template is a ClusterTemplate\nmessage TemplateResponse {\n message Revision {\n ClusterTemplate template = 1;\n }\n\n repeated Revision revisions = 1;\n}\n\n\nNOTE I've commented out InstanceType and ClientIDTemplate because they're also undefined but not necessary for the explanation.\n\nAnd:\npython3 \\\n-m grpc_tools.protoc \\\n--proto_path=${PWD} \\\n--python_out=${PWD} \\\n${PWD}/foo.proto\n\nThen:\nfrom google.protobuf.json_format import ParseDict\n\nimport foo_pb2\n\nd = {\n \"revisions\":[\n {\n \"template\": {\n \"app\": {\n \"version\": \"1\",\n }\n }\n },\n {\n \"template\": {\n \"app\": {\n \"version\": \"2\",\n }\n }\n }\n\n ]\n}\ntemplate_response = foo_pb2.TemplateResponse()\n\n# Create a TemplateResponse from the dictionary\nParseDict(d,template_response)\n\n# Its type is <class 'foo_pb2.ClusterTemplate'>\nprint(type(template_response.revisions[0].template))\n\n\n# Create `UpdateClusterResponse`\nupdate_cluster_request = foo_pb2.UpdateClusterRequest()\n\n# Scalar assignments\nupdate_cluster_request.service_name = \"xxx\"\nupdate_cluster_request.config_revision = \"xxx\"\nupdate_cluster_request.deploy_strategy = \"xxx\"\n\n# Uses `google.protobuf.message.CopyFrom`\n# Can't assign messages\nupdate_cluster_request.spec.CopyFrom(template_response.revisions[0].template)\n\n\nprint(update_cluster_request)\n\nPython is a little gnarly around protocol buffers. In other language implementations, you'd be able to assign the message more \"idiomatically\" but, in Python, it's not possible to assign messages (among other quirks).\nSo, assuming that template_response.revisions[*].template is exactly the same type as UpdateClusterRequest's spec type, then you can use CopyFrom to achieve this.\n" ]
[ 0 ]
[]
[]
[ "grpc_python", "protocol_buffers", "python" ]
stackoverflow_0074469727_grpc_python_protocol_buffers_python.txt
Q: Select an array based on another array in Python I created these two arrays students = np.array([['Hannah'],['Alonzo'], ['Antoinette'], ['Latasha'], ['Phil']]) grades = np.array([[86, 94], [83, 79], [97, 95], [90, 87], [73, 76]]) how do I select all rows from grade based on the student name, for example Alonzo? I tried to select it all using index but for some reason the syntax was wrong, and I'm not sure how to select it. A: import numpy as np students = np.array([['Hannah'],['Alonzo'], ['Antoinette'], ['Latasha'], ['Phil']]) grades = np.array([[86, 94], [83, 79], [97, 95], [90, 87], [73, 76]]) for index,student in enumerate(students): if student == 'Alonzo': print(grades[index]) output: [83 79] A: You're probably using the wrong data structure - look into using a dictionary instead (the same concept is often called a map in other languages.) If you want to push forward with what you have, look for an indexOf operation going by name, and then use that to access something from grades. Though, that syntax looks strange to me - I think you're creating a tuple? In general it's difficult to dynamically index into a tuple - I assume there's a way to do it in python, but that's really not what they're for. A: find the index from first array and then extract that index from grades: find = 'Alonzo' for i,student in enumerate(students): if student == find: print(grades[i]) break output: >> [83, 79] A: students = np.array([['Hannah'],['Alonzo'], ['Antoinette'], ['Latasha'], ['Phil']]) grades = np.array([[86, 94], [83, 79], [97, 95], [90, 87], [73, 76]]) grades[(students == ['Alonzo']).flatten()]
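To make the earlier dictionary suggestion concrete, a small sketch using the same sample data (it builds the name-to-grades mapping once, then looks names up in constant time):
import numpy as np

students = np.array([['Hannah'], ['Alonzo'], ['Antoinette'], ['Latasha'], ['Phil']])
grades = np.array([[86, 94], [83, 79], [97, 95], [90, 87], [73, 76]])

# Pair each name with its row of grades; ravel() flattens the (5, 1) name array.
grade_by_name = dict(zip(students.ravel(), grades))
print(grade_by_name['Alonzo'])  # [83 79]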
Select an array based on another array in Python
I created these two arrays students = np.array([['Hannah'],['Alonzo'], ['Antoinette'], ['Latasha'], ['Phil']]) grades = np.array([[86, 94], [83, 79], [97, 95], [90, 87], [73, 76]]) how do I select all rows from grade based on the student name, for example Alonzo? I tried to select it all using index but for some reason the syntax was wrong, and I'm not sure how to select it.
[ "import numpy as np\nstudents = np.array([['Hannah'],['Alonzo'], ['Antoinette'], ['Latasha'], ['Phil']])\n\ngrades = np.array([[86, 94], [83, 79], [97, 95], [90, 87], [73, 76]])\n\nfor index,student in enumerate(students):\n if student == 'Alonzo':\n print(grades[index])\n\noutput:\n[83 79]\n\n", "You're probably using the wrong data structure - look into using a dictionary instead (the same concept is often called a map in other languages.)\nIf you want to push forward with what you have, look for an indexOf operation going by name, and then use that to access something from grades.\nThough, that syntax looks strange to me - I think you're creating a tuple? In general it's difficult to dynamically index into a tuple - I assume there's a way to do it in python, but that's really not what they're for.\n", "find the index from first array and then extract that index from grades:\nfind = 'Alonzo'\n\nfor i,student in enumerate(students):\n if student == find:\n print(grades[i])\n break\n\noutput:\n>>\n[83, 79]\n\n", "students = np.array([['Hannah'],['Alonzo'], ['Antoinette'], ['Latasha'], ['Phil']])\ngrades = np.array([[86, 94], [83, 79], [97, 95], [90, 87], [73, 76]])\ngrades[(students == ['Alonzo']).flatten()]\n\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074480824_python.txt
Q: Parsing an XML document using python. Cannot use any library that requires pip I'm parsing an XML document, and I need the book title & number value under Score and place them on a 2d list. My current code, can retrieve that data and place it on a list, but the problem is there's some sections in the XML file where the score is not present, and I need to be able to leave an indicator (ex. N/A) on the list to indicate that value is empty for that particular book title. This is a sample, simplified version of the xml file. Please note, that this problem repeats throughout the much longer version of the xml file. So no code can use, 1 as an index to get past this problem. <bookstore> <book>[A-23] Everyday Italian</book> <author>Giada De Laurentiis</author> <year>2005</year> <price>30.00</price> <field></field> <key id="6408">[A-23]Everyday Italian</key> <brief>Everyday Italian</brief> <success></success> <province> id="256" key=".com.place.fieldtypes:float"> <name>Post</name> <numbers> <number></number> </numbers> </province> <province> id="490" key=".com.ave.fieldtypes:float"> <name>Score</name> <numbers> <number>4.0</number> </numbers> </province> <province> id="531" key=".com.spot.fieldtypes:float"> <name>Doc</name> <numbers> <number></number> </numbers> </province> </bookstore> <bookstore> <book>[A-42] Pottery</book> <author>Leo Di Plos</author> <year>2012</year> <price>25.00</price> <field></field> <key id="4502">[A-42] Pottery</key> <brief>Pottery</brief> <success></success> <province> id="627" key=".com.tri.fieldtypes:float"> <name>Post</name> <numbers> <number></number> </numbers> </province> <province> id="124" key=".com.doct.fieldtypes:float"> <name>Doc</name> <numbers> <number></number> </numbers> </province> </bookstore> <bookstore> <book>[A-12] Skipping the Line</book> <author>Gloria Gasol</author> <year>1999</year> <price>22.00</price> <field></field> <key id="1468">[A-23]Skipping the Line</key> <brief>Skipping the Line</brief> <success></success> <province> id="754" key=".com.cit.fieldtypes:float"> <name>Post</name> <numbers> <number></number> </numbers> </province> <province> id="211" key=".com.soct.fieldtypes:float"> <name>Score</name> <numbers> <number>12.0</number> </numbers> </province> <province> id="458" key=".com.lot.fieldtypes:float"> <name>Doc</name> <numbers> <number></number> </numbers> </province> </bookstore> This is my current code: book = [] for book in root.iter('book'): item1 = book.text title.append(item1) score = [] for province in root.iter('province'): for child in province: for grandchild in child: if re.match('^[+-]?\d*?\.\d+$', grandchild.text) != None: item2 = float(grandchild.text) score.append(item2) print(book, score) The expected output is: ([A-23] Everyday Italian, 4.0), ([A-42] Pottery, N/A), ([A-12] Skipping the Line, 12.0) But the actual output is: ([A-23] Everyday Italian, 4.0), ([A-42] Pottery, 12.0), ([A-12] Skipping the Line) A: python's strength is the speed in creating a solution, among others, using ready-made libraries. Why you don't use lib like xmltodict? 
for single bookstore: <bookstore> <book>[A-23] Everyday Italian</book>** <author>Giada De Laurentiis</author> <year>2005</year> <price>30.00</price> <field></field> <key id="6408">[A-23]Everyday Italian</key> <brief>Everyday Italian</brief> <success></success> <province> id="256" key=".com.place.fieldtypes:float"> <name>Post</name> <numbers> <number></number> </numbers> </province> <province> id="490" key=".com.ave.fieldtypes:float"> ** <name>Score</name>** <numbers> ** <number>4.0</number>** </numbers> </province> <province> id="531" key=".com.spot.fieldtypes:float"> <name>Doc</name> <numbers> <number></number> </numbers> </province> </bookstore> python code for read it: import xmltodict dict_data = xmltodict.parse(xml_data) dict_data title = dict_data["bookstore"]["book"] score = dict_data["bookstore"]["province"][1]["numbers"]["number"] Are You sure that your xml is correct? You should create something like list of bookstore objects e.g.: <BookstoreList> <Bookstore> //data here </Bookstore> <Bookstore> //data here </Bookstore> // etc. </BookstoreList> A: Here we go.. import xml.etree.ElementTree as ET xml = '''<r> <bookstore> <book>[A-23] Everyday Italian</book> <author>Giada De Laurentiis</author> <year>2005</year> <price>30.00</price> <field></field> <key id="6408">[A-23]Everyday Italian</key> <brief>Everyday Italian</brief> <success></success> <province> id="256" key=".com.place.fieldtypes:float"> <name>Post</name> <numbers> <number></number> </numbers> </province> <province> id="490" key=".com.ave.fieldtypes:float"> <name>Score</name> <numbers> <number>4.0</number> </numbers> </province> <province> id="531" key=".com.spot.fieldtypes:float"> <name>Doc</name> <numbers> <number></number> </numbers> </province> </bookstore> <bookstore> <book>[A-42] Pottery</book> <author>Leo Di Plos</author> <year>2012</year> <price>25.00</price> <field></field> <key id="4502">[A-42] Pottery</key> <brief>Pottery</brief> <success></success> <province> id="627" key=".com.tri.fieldtypes:float"> <name>Post</name> <numbers> <number></number> </numbers> </province> <province> id="124" key=".com.doct.fieldtypes:float"> <name>Doc</name> <numbers> <number></number> </numbers> </province> </bookstore> <bookstore> <book>[A-12] Skipping the Line</book> <author>Gloria Gasol</author> <year>1999</year> <price>22.00</price> <field></field> <key id="1468">[A-23]Skipping the Line</key> <brief>Skipping the Line</brief> <success></success> <province> id="754" key=".com.cit.fieldtypes:float"> <name>Post</name> <numbers> <number></number> </numbers> </province> <province> id="211" key=".com.soct.fieldtypes:float"> <name>Score</name> <numbers> <number>12.0</number> </numbers> </province> <province> id="458" key=".com.lot.fieldtypes:float"> <name>Doc</name> <numbers> <number></number> </numbers> </province> </bookstore> </r> ''' root = ET.fromstring(xml) data = [] for bs in root.findall('.//bookstore'): book = bs.find('book').text scores = [s.text for s in bs.findall('.//number') if s.text] score = 'N/A' if not scores else scores[0] data.append((book, score)) print(data) output [('[A-23] Everyday Italian', '4.0'), ('[A-42] Pottery', 'N/A'), ('[A-12] Skipping the Line', '12.0')]
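Both answers embed the XML as a string for demonstration; for the real file the same ElementTree code can read straight from disk. A minimal sketch follows (the filename is hypothetical, and it assumes the real file has a single enclosing root element, since several sibling <bookstore> elements on their own are not well-formed XML):
import xml.etree.ElementTree as ET

root = ET.parse('books.xml').getroot()  # hypothetical path to the real file

data = []
for bs in root.findall('.//bookstore'):
    book = bs.find('book').text
    scores = [s.text for s in bs.findall('.//number') if s.text]
    data.append((book, scores[0] if scores else 'N/A'))
print(data)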
Parsing an XML document using python. Cannot use any library that requires pip
I'm parsing an XML document, and I need the book title & number value under Score and place them on a 2d list. My current code, can retrieve that data and place it on a list, but the problem is there's some sections in the XML file where the score is not present, and I need to be able to leave an indicator (ex. N/A) on the list to indicate that value is empty for that particular book title. This is a sample, simplified version of the xml file. Please note, that this problem repeats throughout the much longer version of the xml file. So no code can use, 1 as an index to get past this problem. <bookstore> <book>[A-23] Everyday Italian</book> <author>Giada De Laurentiis</author> <year>2005</year> <price>30.00</price> <field></field> <key id="6408">[A-23]Everyday Italian</key> <brief>Everyday Italian</brief> <success></success> <province> id="256" key=".com.place.fieldtypes:float"> <name>Post</name> <numbers> <number></number> </numbers> </province> <province> id="490" key=".com.ave.fieldtypes:float"> <name>Score</name> <numbers> <number>4.0</number> </numbers> </province> <province> id="531" key=".com.spot.fieldtypes:float"> <name>Doc</name> <numbers> <number></number> </numbers> </province> </bookstore> <bookstore> <book>[A-42] Pottery</book> <author>Leo Di Plos</author> <year>2012</year> <price>25.00</price> <field></field> <key id="4502">[A-42] Pottery</key> <brief>Pottery</brief> <success></success> <province> id="627" key=".com.tri.fieldtypes:float"> <name>Post</name> <numbers> <number></number> </numbers> </province> <province> id="124" key=".com.doct.fieldtypes:float"> <name>Doc</name> <numbers> <number></number> </numbers> </province> </bookstore> <bookstore> <book>[A-12] Skipping the Line</book> <author>Gloria Gasol</author> <year>1999</year> <price>22.00</price> <field></field> <key id="1468">[A-23]Skipping the Line</key> <brief>Skipping the Line</brief> <success></success> <province> id="754" key=".com.cit.fieldtypes:float"> <name>Post</name> <numbers> <number></number> </numbers> </province> <province> id="211" key=".com.soct.fieldtypes:float"> <name>Score</name> <numbers> <number>12.0</number> </numbers> </province> <province> id="458" key=".com.lot.fieldtypes:float"> <name>Doc</name> <numbers> <number></number> </numbers> </province> </bookstore> This is my current code: book = [] for book in root.iter('book'): item1 = book.text title.append(item1) score = [] for province in root.iter('province'): for child in province: for grandchild in child: if re.match('^[+-]?\d*?\.\d+$', grandchild.text) != None: item2 = float(grandchild.text) score.append(item2) print(book, score) The expected output is: ([A-23] Everyday Italian, 4.0), ([A-42] Pottery, N/A), ([A-12] Skipping the Line, 12.0) But the actual output is: ([A-23] Everyday Italian, 4.0), ([A-42] Pottery, 12.0), ([A-12] Skipping the Line)
[ "python's strength is the speed in creating a solution, among others, using ready-made libraries.\nWhy you don't use lib like xmltodict?\nfor single bookstore:\n<bookstore>\n <book>[A-23] Everyday Italian</book>**\n\n <author>Giada De Laurentiis</author>\n <year>2005</year>\n <price>30.00</price>\n <field></field>\n <key id=\"6408\">[A-23]Everyday Italian</key>\n <brief>Everyday Italian</brief>\n <success></success>\n <province> id=\"256\" key=\".com.place.fieldtypes:float\">\n <name>Post</name>\n <numbers>\n <number></number>\n </numbers>\n </province>\n <province> id=\"490\" key=\".com.ave.fieldtypes:float\">\n **\n <name>Score</name>**\n \n <numbers>\n **\n <number>4.0</number>**\n \n </numbers>\n </province>\n <province> id=\"531\" key=\".com.spot.fieldtypes:float\">\n <name>Doc</name>\n <numbers>\n <number></number>\n </numbers>\n </province>\n</bookstore>\n\npython code for read it:\nimport xmltodict\n\ndict_data = xmltodict.parse(xml_data)\ndict_data\n\ntitle = dict_data[\"bookstore\"][\"book\"]\nscore = dict_data[\"bookstore\"][\"province\"][1][\"numbers\"][\"number\"]\n\nAre You sure that your xml is correct? You should create something like list of bookstore objects e.g.:\n<BookstoreList>\n <Bookstore>\n //data here\n </Bookstore>\n <Bookstore>\n //data here\n </Bookstore>\n // etc.\n</BookstoreList>\n\n", "Here we go..\nimport xml.etree.ElementTree as ET\n\nxml = '''<r>\n <bookstore>\n <book>[A-23] Everyday Italian</book>\n <author>Giada De Laurentiis</author>\n <year>2005</year>\n <price>30.00</price>\n <field></field>\n <key id=\"6408\">[A-23]Everyday Italian</key>\n <brief>Everyday Italian</brief>\n <success></success>\n <province> id=\"256\" key=\".com.place.fieldtypes:float\">\n <name>Post</name>\n <numbers>\n <number></number>\n </numbers>\n </province>\n <province> id=\"490\" key=\".com.ave.fieldtypes:float\">\n <name>Score</name>\n <numbers>\n <number>4.0</number>\n </numbers>\n </province>\n <province> id=\"531\" key=\".com.spot.fieldtypes:float\">\n <name>Doc</name>\n <numbers>\n <number></number>\n </numbers>\n </province>\n </bookstore>\n <bookstore>\n <book>[A-42] Pottery</book>\n <author>Leo Di Plos</author>\n <year>2012</year>\n <price>25.00</price>\n <field></field>\n <key id=\"4502\">[A-42] Pottery</key>\n <brief>Pottery</brief>\n <success></success>\n <province> id=\"627\" key=\".com.tri.fieldtypes:float\">\n <name>Post</name>\n <numbers>\n <number></number>\n </numbers>\n </province>\n <province> id=\"124\" key=\".com.doct.fieldtypes:float\">\n <name>Doc</name>\n <numbers>\n <number></number>\n </numbers>\n </province>\n </bookstore>\n <bookstore>\n <book>[A-12] Skipping the Line</book>\n <author>Gloria Gasol</author>\n <year>1999</year>\n <price>22.00</price>\n <field></field>\n <key id=\"1468\">[A-23]Skipping the Line</key>\n <brief>Skipping the Line</brief>\n <success></success>\n <province> id=\"754\" key=\".com.cit.fieldtypes:float\">\n <name>Post</name>\n <numbers>\n <number></number>\n </numbers>\n </province>\n <province> id=\"211\" key=\".com.soct.fieldtypes:float\">\n <name>Score</name>\n <numbers>\n <number>12.0</number>\n </numbers>\n </province>\n <province> id=\"458\" key=\".com.lot.fieldtypes:float\">\n <name>Doc</name>\n <numbers>\n <number></number>\n </numbers>\n </province>\n </bookstore>\n</r>\n'''\nroot = ET.fromstring(xml)\ndata = []\nfor bs in root.findall('.//bookstore'):\n book = bs.find('book').text\n scores = [s.text for s in bs.findall('.//number') if s.text]\n score = 'N/A' if not scores else scores[0]\n data.append((book, 
score))\nprint(data)\n\noutput\n[('[A-23] Everyday Italian', '4.0'), ('[A-42] Pottery', 'N/A'), ('[A-12] Skipping the Line', '12.0')]\n\n" ]
[ 2, 1 ]
[]
[]
[ "list", "parsing", "python", "xml" ]
stackoverflow_0074478984_list_parsing_python_xml.txt
Q: Create new list from two lists and create "helper" key to match 2 keys Weird title, but the question is pretty complex. (Please don't hesitate to change the title if you know a better one) I need to create a fresh new list with altered keys from other list, substrings from keys to check key name of other list and match these key substrings with another key from list. I hope it gets clear when I try to clarify what I need. First list named ansible_facts["ansible_net_virtual-systems"][0].vsys_zonelist outputs this: { "ansible_facts": { "ansible_net_virtual-systems": [ { "vsys_zonelist": [ "L3_v0123_Zone1", "L3_v0124_Zone2", "L3_v0125_Zone3", "L3_Trans_v0020_Zone4" ] } ] } } Second list ansible_facts.ansible_net_routing_table: { "ansible_facts": { "ansible_net_routing_table": [ { "virtual_router": "Internal", "destination": "10.12.123.0/24", "nexthop": "0.0.0.0", "metric": "10", "flags": " Oi ", "age": "3924798", "interface": "ae1.123", "route_table": "unicast" }, { "virtual_router": "Internal", "destination": "10.12.124.0/24", "nexthop": "0.0.0.0", "metric": "10", "flags": " Oi ", "age": "3924798", "interface": "ae1.124", "route_table": "unicast" }, { "virtual_router": "Internal", "destination": "10.12.125.0/24", "nexthop": "0.0.0.0", "metric": "10", "flags": " Oi ", "age": "3924798", "interface": "ae1.125", "route_table": "unicast" }, { "virtual_router": "Internal", "destination": "10.12.20.0/24", "nexthop": "0.0.0.0", "metric": "10", "flags": " Oi ", "age": "3924798", "interface": "ae1.20", "route_table": "unicast" } ] } } Now I have the substring v0123 from first list and interface: ae1.123 from second list. That means that they belong together. I now need the destination from the second list for each matching lists and also alter the name I get from ansible_facts["ansible_net_virtual-systems"][0].vsys_zonelist. What I need: Create a list that should look like this: ("interface": "ae1.123" is not needed anymore. Just a helper to match everything) { "result_list": [ { "name": "n-x-123-Zone1", "destination": "10.12.123.0/24" }, { "name": "n-x-124-Zone2", "destination": "10.12.124.0/24" }, { "name": "n-x-125-Zone3", "destination": "10.12.125.0/24" }, { "name": "n-x-20-Zone4", "destination": "10.12.20.0/24" } ] } I tried many different ways but somehow I cant manage to get it to work as everything I've done, doesn't help me to create my needed list. 
Some input for what I've already tried: - name: DEBUG list with split and loop ansible.builtin.debug: # creates # n-x-01-Name # but no list(!), just messages, but could be useful to create a loop msg: "n-x-{% if item.split('_')[1].startswith('Client') %}{{ item[3:100] }}{% else %}{{ item.split('_')[1] | regex_replace('v','') }}-{% endif %}{% if item.split('_')[2] is defined and item.split('_')[2].startswith('Trans') %}{{ item[3:50] }}{% elif item.split('_')[1].startswith('Clients')%}{% else %}{{ item[9:100] | default('') }}{% endif %}" loop: '{{ ansible_facts["ansible_net_virtual-systems"][0].vsys_zonelist }}' delegate_to: 127.0.0.1 - name: create extract_interface ansible.builtin.set_fact: # creates (also see next task) # { # { # "interface": "ae1.123" # }, # { # "interface": "ae1.124" # } # } extract_interface: "{{ ansible_facts.ansible_net_routing_table | map(attribute='interface') | map('community.general.dict_kv', 'interface') | list }}" delegate_to: 127.0.0.1 - name: create map_destination_to_interface ansible.builtin.set_fact: # { # "ae1.123": "10.12.123.0/24", # "ae1.124": "10.12.124.0/24" # } map_destination_to_interface: "{{ ansible_facts.ansible_net_routing_table | zip(extract_interface) | map('combine') | items2dict(key_name='interface', value_name='destination') }}" delegate_to: 127.0.0.1 Maybe someone can understand what's needed. Thanks to everyone in advance! A: You've tagged this question with python, so I'm going to answer in python. Some string manipulation and a couple of loops can extract what you need. # not needed, but nice for printing out the result import json ansible_facts = { "ansible_facts": { "ansible_net_virtual-systems": [ { "vsys_zonelist": [ "L3_v0123_Zone1", "L3_v0124_Zone2", "L3_v0125_Zone3", "L3_Trans_v0020_Zone4", ] } ], "ansible_net_routing_table": [ { "virtual_router": "Internal", "destination": "10.12.123.0/24", "nexthop": "0.0.0.0", "metric": "10", "flags": " Oi ", "age": "3924798", "interface": "ae1.123", "route_table": "unicast", }, { "virtual_router": "Internal", "destination": "10.12.124.0/24", "nexthop": "0.0.0.0", "metric": "10", "flags": " Oi ", "age": "3924798", "interface": "ae1.124", "route_table": "unicast", }, { "virtual_router": "Internal", "destination": "10.12.125.0/24", "nexthop": "0.0.0.0", "metric": "10", "flags": " Oi ", "age": "3924798", "interface": "ae1.125", "route_table": "unicast", }, { "virtual_router": "Internal", "destination": "10.12.20.0/24", "nexthop": "0.0.0.0", "metric": "10", "flags": " Oi ", "age": "3924798", "interface": "ae1.20", "route_table": "unicast", }, ], } } result = {"result_list": []} for vs in ansible_facts["ansible_facts"]["ansible_net_virtual-systems"][0]["vsys_zonelist"]: # work from the last element as that's the consistent part # turn to int to remove leading zeros vs_vers = int(vs.split("_")[-2].replace("v", "")) for nrt in ansible_facts["ansible_facts"]["ansible_net_routing_table"]: nrt_vers = int(nrt["interface"].split(".")[-1]) # note that the 3rd octet of the destination IP address also seems to be the version # you can use that to compare as well, as such: # nrt_vers = int(nrt["destination"].split(".")[2]) if nrt_vers == vs_vers: # work from the last element as that's the consistent part vs_zone = vs.split("_")[-1] # f-strings to turn it into the correct name vs_name = f"n-x-{vs_vers}-{vs_zone}" nrt_destination = nrt["destination"] result["result_list"].append({"name": vs_name, "destination": nrt_destination}) # break to stop needless iteration break print(json.dumps(result, indent=4)) 
output { "result_list": [ { "name": "n-x-123-Zone1", "destination": "10.12.123.0/24" }, { "name": "n-x-124-Zone2", "destination": "10.12.124.0/24" }, { "name": "n-x-125-Zone3", "destination": "10.12.125.0/24" }, { "name": "n-x-20-Zone4", "destination": "10.12.20.0/24" } ] } A: Declare the variables. Zip the lists and create the structure lists: "{{ ansible_facts.ansible_net_routing_table| zip(ansible_facts['ansible_net_virtual-systems'][0].vsys_zonelist) }}" result_list_str: | {% for i in lists %} {% set arr=i.1|split('_') %} - destination: {{ i.0.destination }} name: n-x-{{ arr[-2][1:]|int }}-{{ arr[-1] }} {% endfor %} result_list: "{{ result_list_str|from_yaml }}" gives result_list: - destination: 10.12.123.0/24 name: n-x-123-Zone1 - destination: 10.12.124.0/24 name: n-x-124-Zone2 - destination: 10.12.125.0/24 name: n-x-125-Zone3 - destination: 10.12.20.0/24 name: n-x-20-Zone4 Example of a complete playbook for testing - hosts: localhost vars: ansible_facts: ansible_net_routing_table: - age: '3924798' destination: 10.12.123.0/24 flags: ' Oi ' interface: ae1.123 metric: '10' nexthop: 0.0.0.0 route_table: unicast virtual_router: Internal - age: '3924798' destination: 10.12.124.0/24 flags: ' Oi ' interface: ae1.124 metric: '10' nexthop: 0.0.0.0 route_table: unicast virtual_router: Internal - age: '3924798' destination: 10.12.125.0/24 flags: ' Oi ' interface: ae1.125 metric: '10' nexthop: 0.0.0.0 route_table: unicast virtual_router: Internal - age: '3924798' destination: 10.12.20.0/24 flags: ' Oi ' interface: ae1.20 metric: '10' nexthop: 0.0.0.0 route_table: unicast virtual_router: Internal ansible_net_virtual-systems: - vsys_zonelist: - L3_v0123_Zone1 - L3_v0124_Zone2 - L3_v0125_Zone3 - L3_Trans_v0020_Zone4 lists: "{{ ansible_facts.ansible_net_routing_table| zip(ansible_facts['ansible_net_virtual-systems'][0].vsys_zonelist) }}" result_list_str: | {% for i in lists %} {% set arr=i.1|split('_') %} - destination: {{ i.0.destination }} name: n-x-{{ arr[-2][1:]|int }}-{{ arr[-1] }} {% endfor %} result_list: "{{ result_list_str|from_yaml }}" tasks: - debug: var: result_list
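A variant of the Python answer above, closer in spirit to the question's own map_destination_to_interface attempt: build one dictionary keyed on the interface suffix, then resolve each zone in a single pass. It is only a sketch and rests on the same assumption as before, namely that the digits after "v" in the zone name match the digits after the dot in the interface name:
routes = ansible_facts["ansible_facts"]["ansible_net_routing_table"]
zones = ansible_facts["ansible_facts"]["ansible_net_virtual-systems"][0]["vsys_zonelist"]

# interface suffix (e.g. 123 from "ae1.123") -> destination
dest_by_suffix = {int(r["interface"].split(".")[-1]): r["destination"] for r in routes}

result = {"result_list": []}
for zone in zones:
    parts = zone.split("_")
    suffix = int(parts[-2].lstrip("v"))   # "v0123" -> 123
    result["result_list"].append({
        "name": f"n-x-{suffix}-{parts[-1]}",
        "destination": dest_by_suffix.get(suffix, "N/A"),
    })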
Create new list from two lists and create "helper" key to match 2 keys
Weird title, but the question is pretty complex. (Please don't hesitate to change the title if you know a better one) I need to create a fresh new list with altered keys from other list, substrings from keys to check key name of other list and match these key substrings with another key from list. I hope it gets clear when I try to clarify what I need. First list named ansible_facts["ansible_net_virtual-systems"][0].vsys_zonelist outputs this: { "ansible_facts": { "ansible_net_virtual-systems": [ { "vsys_zonelist": [ "L3_v0123_Zone1", "L3_v0124_Zone2", "L3_v0125_Zone3", "L3_Trans_v0020_Zone4" ] } ] } } Second list ansible_facts.ansible_net_routing_table: { "ansible_facts": { "ansible_net_routing_table": [ { "virtual_router": "Internal", "destination": "10.12.123.0/24", "nexthop": "0.0.0.0", "metric": "10", "flags": " Oi ", "age": "3924798", "interface": "ae1.123", "route_table": "unicast" }, { "virtual_router": "Internal", "destination": "10.12.124.0/24", "nexthop": "0.0.0.0", "metric": "10", "flags": " Oi ", "age": "3924798", "interface": "ae1.124", "route_table": "unicast" }, { "virtual_router": "Internal", "destination": "10.12.125.0/24", "nexthop": "0.0.0.0", "metric": "10", "flags": " Oi ", "age": "3924798", "interface": "ae1.125", "route_table": "unicast" }, { "virtual_router": "Internal", "destination": "10.12.20.0/24", "nexthop": "0.0.0.0", "metric": "10", "flags": " Oi ", "age": "3924798", "interface": "ae1.20", "route_table": "unicast" } ] } } Now I have the substring v0123 from first list and interface: ae1.123 from second list. That means that they belong together. I now need the destination from the second list for each matching lists and also alter the name I get from ansible_facts["ansible_net_virtual-systems"][0].vsys_zonelist. What I need: Create a list that should look like this: ("interface": "ae1.123" is not needed anymore. Just a helper to match everything) { "result_list": [ { "name": "n-x-123-Zone1", "destination": "10.12.123.0/24" }, { "name": "n-x-124-Zone2", "destination": "10.12.124.0/24" }, { "name": "n-x-125-Zone3", "destination": "10.12.125.0/24" }, { "name": "n-x-20-Zone4", "destination": "10.12.20.0/24" } ] } I tried many different ways but somehow I cant manage to get it to work as everything I've done, doesn't help me to create my needed list. 
Some input for what I've already tried: - name: DEBUG list with split and loop ansible.builtin.debug: # creates # n-x-01-Name # but no list(!), just messages, but could be useful to create a loop msg: "n-x-{% if item.split('_')[1].startswith('Client') %}{{ item[3:100] }}{% else %}{{ item.split('_')[1] | regex_replace('v','') }}-{% endif %}{% if item.split('_')[2] is defined and item.split('_')[2].startswith('Trans') %}{{ item[3:50] }}{% elif item.split('_')[1].startswith('Clients')%}{% else %}{{ item[9:100] | default('') }}{% endif %}" loop: '{{ ansible_facts["ansible_net_virtual-systems"][0].vsys_zonelist }}' delegate_to: 127.0.0.1 - name: create extract_interface ansible.builtin.set_fact: # creates (also see next task) # { # { # "interface": "ae1.123" # }, # { # "interface": "ae1.124" # } # } extract_interface: "{{ ansible_facts.ansible_net_routing_table | map(attribute='interface') | map('community.general.dict_kv', 'interface') | list }}" delegate_to: 127.0.0.1 - name: create map_destination_to_interface ansible.builtin.set_fact: # { # "ae1.123": "10.12.123.0/24", # "ae1.124": "10.12.124.0/24" # } map_destination_to_interface: "{{ ansible_facts.ansible_net_routing_table | zip(extract_interface) | map('combine') | items2dict(key_name='interface', value_name='destination') }}" delegate_to: 127.0.0.1 Maybe someone can understand what's needed. Thanks to everyone in advance!
[ "You've tagged this question with python, so I'm going to answer in python.\nSome string manipulation and a couple of loops can extract what you need.\n# not needed, but nice for printing out the result\nimport json\n\n\nansible_facts = {\n \"ansible_facts\": {\n \"ansible_net_virtual-systems\": [\n {\n \"vsys_zonelist\": [\n \"L3_v0123_Zone1\",\n \"L3_v0124_Zone2\",\n \"L3_v0125_Zone3\",\n \"L3_Trans_v0020_Zone4\",\n ]\n }\n ],\n \"ansible_net_routing_table\": [\n {\n \"virtual_router\": \"Internal\",\n \"destination\": \"10.12.123.0/24\",\n \"nexthop\": \"0.0.0.0\",\n \"metric\": \"10\",\n \"flags\": \" Oi \",\n \"age\": \"3924798\",\n \"interface\": \"ae1.123\",\n \"route_table\": \"unicast\",\n },\n {\n \"virtual_router\": \"Internal\",\n \"destination\": \"10.12.124.0/24\",\n \"nexthop\": \"0.0.0.0\",\n \"metric\": \"10\",\n \"flags\": \" Oi \",\n \"age\": \"3924798\",\n \"interface\": \"ae1.124\",\n \"route_table\": \"unicast\",\n },\n {\n \"virtual_router\": \"Internal\",\n \"destination\": \"10.12.125.0/24\",\n \"nexthop\": \"0.0.0.0\",\n \"metric\": \"10\",\n \"flags\": \" Oi \",\n \"age\": \"3924798\",\n \"interface\": \"ae1.125\",\n \"route_table\": \"unicast\",\n },\n {\n \"virtual_router\": \"Internal\",\n \"destination\": \"10.12.20.0/24\",\n \"nexthop\": \"0.0.0.0\",\n \"metric\": \"10\",\n \"flags\": \" Oi \",\n \"age\": \"3924798\",\n \"interface\": \"ae1.20\",\n \"route_table\": \"unicast\",\n },\n ],\n }\n}\n\nresult = {\"result_list\": []}\nfor vs in ansible_facts[\"ansible_facts\"][\"ansible_net_virtual-systems\"][0][\"vsys_zonelist\"]:\n # work from the last element as that's the consistent part\n # turn to int to remove leading zeros\n vs_vers = int(vs.split(\"_\")[-2].replace(\"v\", \"\"))\n for nrt in ansible_facts[\"ansible_facts\"][\"ansible_net_routing_table\"]:\n nrt_vers = int(nrt[\"interface\"].split(\".\")[-1])\n # note that the 3rd octet of the destination IP address also seems to be the version\n # you can use that to compare as well, as such:\n # nrt_vers = int(nrt[\"destination\"].split(\".\")[2])\n if nrt_vers == vs_vers:\n # work from the last element as that's the consistent part\n vs_zone = vs.split(\"_\")[-1]\n # f-strings to turn it into the correct name\n vs_name = f\"n-x-{vs_vers}-{vs_zone}\"\n nrt_destination = nrt[\"destination\"]\n result[\"result_list\"].append({\"name\": vs_name, \"destination\": nrt_destination})\n # break to stop needless iteration\n break \n\n\nprint(json.dumps(result, indent=4))\n\noutput\n{\n \"result_list\": [\n {\n \"name\": \"n-x-123-Zone1\",\n \"destination\": \"10.12.123.0/24\"\n },\n {\n \"name\": \"n-x-124-Zone2\",\n \"destination\": \"10.12.124.0/24\"\n },\n {\n \"name\": \"n-x-125-Zone3\",\n \"destination\": \"10.12.125.0/24\"\n },\n {\n \"name\": \"n-x-20-Zone4\",\n \"destination\": \"10.12.20.0/24\"\n }\n ]\n}\n\n", "Declare the variables. 
Zip the lists and create the structure\nlists: \"{{ ansible_facts.ansible_net_routing_table|\n zip(ansible_facts['ansible_net_virtual-systems'][0].vsys_zonelist) }}\"\nresult_list_str: |\n {% for i in lists %}\n {% set arr=i.1|split('_') %}\n - destination: {{ i.0.destination }}\n name: n-x-{{ arr[-2][1:]|int }}-{{ arr[-1] }}\n {% endfor %}\nresult_list: \"{{ result_list_str|from_yaml }}\"\n\ngives\nresult_list:\n - destination: 10.12.123.0/24\n name: n-x-123-Zone1\n - destination: 10.12.124.0/24\n name: n-x-124-Zone2\n - destination: 10.12.125.0/24\n name: n-x-125-Zone3\n - destination: 10.12.20.0/24\n name: n-x-20-Zone4\n\n\n\nExample of a complete playbook for testing\n- hosts: localhost\n\n vars:\n\n ansible_facts:\n ansible_net_routing_table:\n - age: '3924798'\n destination: 10.12.123.0/24\n flags: ' Oi '\n interface: ae1.123\n metric: '10'\n nexthop: 0.0.0.0\n route_table: unicast\n virtual_router: Internal\n - age: '3924798'\n destination: 10.12.124.0/24\n flags: ' Oi '\n interface: ae1.124\n metric: '10'\n nexthop: 0.0.0.0\n route_table: unicast\n virtual_router: Internal\n - age: '3924798'\n destination: 10.12.125.0/24\n flags: ' Oi '\n interface: ae1.125\n metric: '10'\n nexthop: 0.0.0.0\n route_table: unicast\n virtual_router: Internal\n - age: '3924798'\n destination: 10.12.20.0/24\n flags: ' Oi '\n interface: ae1.20\n metric: '10'\n nexthop: 0.0.0.0\n route_table: unicast\n virtual_router: Internal\n ansible_net_virtual-systems:\n - vsys_zonelist:\n - L3_v0123_Zone1\n - L3_v0124_Zone2\n - L3_v0125_Zone3\n - L3_Trans_v0020_Zone4\n\n lists: \"{{ ansible_facts.ansible_net_routing_table|\n zip(ansible_facts['ansible_net_virtual-systems'][0].vsys_zonelist) }}\"\n result_list_str: |\n {% for i in lists %}\n {% set arr=i.1|split('_') %}\n - destination: {{ i.0.destination }}\n name: n-x-{{ arr[-2][1:]|int }}-{{ arr[-1] }}\n {% endfor %}\n result_list: \"{{ result_list_str|from_yaml }}\"\n\n tasks:\n\n - debug:\n var: result_list\n\n\n" ]
[ 1, 1 ]
[]
[]
[ "ansible", "ansible_facts", "jinja2", "python" ]
stackoverflow_0074472849_ansible_ansible_facts_jinja2_python.txt
Q: Tkinter 2 Entries calculator in python the answer is always blank The calculator that I made has 2 entries , each one of them is supposed to hold a number and be stored in a variable , then when one of the buttons is pressed a window is supposed to pop out with the answer. The problem is it's giving me a blank window from tkinter import * from tkinter.messagebox import * def addition(): showinfo("Message",(num1+num2)) def soust(): showinfo("Message",(num1-num2)) def multi(): showinfo("Message",num1*num2) def divi(): showinfo("Message",num1/num2) def annuler(): e1.delete(0,END) e2.delete(0,END) root= Tk() root.title("Exemple 6") nombre1 =Label(root,text="nombre1",background="green") nombre1.pack() e1=Entry(root) e1.pack() num1=e1.get() nombre2= Label(root,text="nombre2",background="green") nombre2.pack() e2= Entry(root) e2.pack() num2=e2.get() button1=Button(root,text=" + ",command=addition,activebackground="red") button1.pack(side=LEFT) button2=Button(root,text=" - ",command=soust,activebackground="red") button2.pack(side= LEFT) button3=Button(root,text=" * ",command=multi,activebackground="red") button3.pack(side=LEFT) button4=Button(root,text=" / ",command=divi,activebackground="red") button4.pack(side=LEFT) button5=Button(root,text=" C ",command=annuler,activebackground="red") button5.pack(side=LEFT) root.mainloop() I tried printing num1 and num2 but it gives this {} A: Your problem is that you are not taking the numbers you want at the time you want. The moment you read the values to num1 and num2 variables the value of e1 and e2 are both empty, so you reading empty to the variables. You can easily solve your problem just reading the values of e1 and e2 right before make the operation. Follow my corrections to your code: from tkinter import * from tkinter.messagebox import * root= Tk() root.title("Exemple 6") nombre1 =Label(root,text="nombre1",background="green") nombre1.pack() e1=Entry(root) e1.pack() nombre2= Label(root,text="nombre2",background="green") nombre2.pack() e2= Entry(root) e2.pack() def get_nums(): return int(e1.get()),int(e2.get()) def addition(): num1,num2 = get_nums() showinfo("Message",num1 + num2) def soust(): num1,num2 = get_nums() showinfo("Message",num1-num2) def multi(): num1,num2 = get_nums() showinfo("Message",num1*num2) def divi(): num1,num2 = get_nums() showinfo("Message",num1/num2) def annuler(): e1.delete(0,END) e2.delete(0,END) button1=Button(root,text=" + ",command=addition,activebackground="red") button1.pack(side=LEFT) button2=Button(root,text=" - ",command=soust,activebackground="red") button2.pack(side= LEFT) button3=Button(root,text=" * ",command=multi,activebackground="red") button3.pack(side=LEFT) button4=Button(root,text=" / ",command=divi,activebackground="red") button4.pack(side=LEFT) button5=Button(root,text=" C ",command=annuler,activebackground="red") button5.pack(side=LEFT) root.mainloop() And I get the expected results in my messagebox.
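One optional hardening on top of the corrected code, since int() raises ValueError on empty or non-numeric entries and divi() can divide by zero. This is just a sketch of drop-in replacements for get_nums and divi from the answer above (it still relies on the e1 and e2 entries defined there):
from tkinter.messagebox import showerror, showinfo

def get_nums():
    try:
        return int(e1.get()), int(e2.get())
    except ValueError:
        showerror("Message", "Please enter two whole numbers")
        return None

def divi():
    nums = get_nums()
    if nums is None:
        return
    num1, num2 = nums
    if num2 == 0:
        showerror("Message", "Cannot divide by zero")
        return
    showinfo("Message", num1 / num2)
The other operation callbacks (addition, soust, multi) would need the same None check if you adopt this pattern.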
Tkinter 2 Entries calculator in python the answer is always blank
The calculator that I made has 2 entries , each one of them is supposed to hold a number and be stored in a variable , then when one of the buttons is pressed a window is supposed to pop out with the answer. The problem is it's giving me a blank window from tkinter import * from tkinter.messagebox import * def addition(): showinfo("Message",(num1+num2)) def soust(): showinfo("Message",(num1-num2)) def multi(): showinfo("Message",num1*num2) def divi(): showinfo("Message",num1/num2) def annuler(): e1.delete(0,END) e2.delete(0,END) root= Tk() root.title("Exemple 6") nombre1 =Label(root,text="nombre1",background="green") nombre1.pack() e1=Entry(root) e1.pack() num1=e1.get() nombre2= Label(root,text="nombre2",background="green") nombre2.pack() e2= Entry(root) e2.pack() num2=e2.get() button1=Button(root,text=" + ",command=addition,activebackground="red") button1.pack(side=LEFT) button2=Button(root,text=" - ",command=soust,activebackground="red") button2.pack(side= LEFT) button3=Button(root,text=" * ",command=multi,activebackground="red") button3.pack(side=LEFT) button4=Button(root,text=" / ",command=divi,activebackground="red") button4.pack(side=LEFT) button5=Button(root,text=" C ",command=annuler,activebackground="red") button5.pack(side=LEFT) root.mainloop() I tried printing num1 and num2 but it gives this {}
[ "Your problem is that you are not taking the numbers you want at the time you want. The moment you read the values to num1 and num2 variables the value of e1 and e2 are both empty, so you reading empty to the variables.\nYou can easily solve your problem just reading the values of e1 and e2 right before make the operation.\nFollow my corrections to your code:\nfrom tkinter import *\nfrom tkinter.messagebox import *\n\nroot= Tk()\nroot.title(\"Exemple 6\")\nnombre1 =Label(root,text=\"nombre1\",background=\"green\")\nnombre1.pack()\ne1=Entry(root)\ne1.pack()\n\nnombre2= Label(root,text=\"nombre2\",background=\"green\")\nnombre2.pack()\ne2= Entry(root)\ne2.pack()\n\ndef get_nums():\n return int(e1.get()),int(e2.get())\n\ndef addition():\n num1,num2 = get_nums()\n showinfo(\"Message\",num1 + num2)\ndef soust():\n num1,num2 = get_nums()\n showinfo(\"Message\",num1-num2)\ndef multi():\n num1,num2 = get_nums()\n showinfo(\"Message\",num1*num2)\ndef divi():\n num1,num2 = get_nums()\n showinfo(\"Message\",num1/num2)\ndef annuler():\n e1.delete(0,END)\n e2.delete(0,END)\n\nbutton1=Button(root,text=\" + \",command=addition,activebackground=\"red\")\nbutton1.pack(side=LEFT)\nbutton2=Button(root,text=\" - \",command=soust,activebackground=\"red\")\nbutton2.pack(side= LEFT)\nbutton3=Button(root,text=\" * \",command=multi,activebackground=\"red\")\nbutton3.pack(side=LEFT)\nbutton4=Button(root,text=\" / \",command=divi,activebackground=\"red\")\nbutton4.pack(side=LEFT)\nbutton5=Button(root,text=\" C \",command=annuler,activebackground=\"red\")\nbutton5.pack(side=LEFT)\n\nroot.mainloop()\n\nAnd I get the expected results in my messagebox.\n" ]
[ 0 ]
[]
[]
[ "calculator", "messagebox", "python", "tkinter", "tkinter_entry" ]
stackoverflow_0074480578_calculator_messagebox_python_tkinter_tkinter_entry.txt
Q: No report was found for sonar.python.coverage.reportPaths using pattern coverage-reports/coverage.xml My sonar branch coverage results are not importing into sonarqube. coverage.xml are generating in jenkins workspace. following are the below jenkins and error details : WARN: No report was found for sonar.python.coverage.reportPaths using pattern coverage-reports/coverage.xml I have tried in my ways but nothing worked. withSonarQubeEnv('Sonarqube') { sh "${scannerHome}/bin/sonar-scanner -Dsonar.login=$USERNAME -Dsonar.password=$PASSWORD -Dsonar.projectKey=${params.PROJECT_KEY} -Dsonar.projectName=${params.PROJECT_NAME} -Dsonar.branch=${params.GIT_BRANCH} -Dsonar.projectVersion=${params.PROJECT_VERSION} -Dsonar.sources=. -Dsonar.language=${params.LANGUAGE} -Dsonar.python.pylint=${params.PYLINT_PATH} -Dsonar.python.pylint_config=${params.PYLINT} -Dsonar.python.pylint.reportPath=${params.PYLINT_REPORT} -Dsonar.sourceEncoding=${params.ENCODING} -Dsonar.python.xunit.reportPath=${params.NOSE} -Dsonar.python.coverage.reportPaths=${params.COVERAGE}" } I expect my coverage results to reflect on sonar A: You are having that error because you are specifying the coverage report path option wrong, and therefore sonar is using the default location coverage-reports/coverage.xml. The correct option is -Dsonar.python.coverage.reportPath (in singular). A: I still have this problem on Azure Pipelines. Tried many ways without success. WARN: No report was found for sonar.python.coverage.reportPaths using pattern coverage.xml
No report was found for sonar.python.coverage.reportPaths using pattern coverage-reports/coverage.xml
My sonar branch coverage results are not importing into sonarqube. coverage.xml are generating in jenkins workspace. following are the below jenkins and error details : WARN: No report was found for sonar.python.coverage.reportPaths using pattern coverage-reports/coverage.xml I have tried in my ways but nothing worked. withSonarQubeEnv('Sonarqube') { sh "${scannerHome}/bin/sonar-scanner -Dsonar.login=$USERNAME -Dsonar.password=$PASSWORD -Dsonar.projectKey=${params.PROJECT_KEY} -Dsonar.projectName=${params.PROJECT_NAME} -Dsonar.branch=${params.GIT_BRANCH} -Dsonar.projectVersion=${params.PROJECT_VERSION} -Dsonar.sources=. -Dsonar.language=${params.LANGUAGE} -Dsonar.python.pylint=${params.PYLINT_PATH} -Dsonar.python.pylint_config=${params.PYLINT} -Dsonar.python.pylint.reportPath=${params.PYLINT_REPORT} -Dsonar.sourceEncoding=${params.ENCODING} -Dsonar.python.xunit.reportPath=${params.NOSE} -Dsonar.python.coverage.reportPaths=${params.COVERAGE}" } I expect my coverage results to reflect on sonar
[ "You are having that error because you are specifying the coverage report path option wrong, and therefore sonar is using the default location coverage-reports/coverage.xml.\nThe correct option is -Dsonar.python.coverage.reportPath (in singular).\n", "I still have this problem on Azure Pipelines. Tried many ways without success.\nWARN: No report was found for sonar.python.coverage.reportPaths using pattern coverage.xml\n" ]
[ 1, 0 ]
[]
[]
[ "jenkins_pipeline", "python" ]
stackoverflow_0055317792_jenkins_pipeline_python.txt
Q: Extract Created date and last login from firebase authentication using Python Currently my python code gets the user id and email of all users from firebase authentication using the firebase admin SDK, however I am unable find the correct syntax to extract the user metadata such as the created and last login date/time (which according to the documentation is in milliseconds). There are a few documentation that shows ways to do this, but it is not working for me. My code: from firebase_admin import credentials from firebase_admin import auth cred=credentials.Certificate('firebasesdk.json') firebase_admin.initialize_app(cred, { "databaseURL": "myurl", }) page = auth.list_users() while page: for user in page.users: print('User email: ' + user.email) print('User id: ' + user.uid) I tried using user.metadata.creationTimeand user.madata.getLastSignInTimestamp and but it does not work with python. I was looking through this post regarding this issue. A: It looks like the ExportedUsers result for list_users does not include the metadata for the users. In that case you'll need to call get_user(...) for each UID in the result to get the full UserRecord, and then find the timestamps in the metadata. A: The correct syntax is user.metadata.creation_timestamp and user.metadata.last_sign_in_timestamp. See API reference: https://firebase.google.com/docs/reference/admin/python/firebase_admin.auth#firebase_admin.auth.UserMetadata A: To get creationTime and getLastSignInTimeStamp, you can access creation_timestamp and last_sign_in_timestamp from the user_metadata field. For example from firebase_admin import auth page = auth.list_users() while page: for user in page.users: print('User creation time: ' + user.user_metadata.creation_timestamp) print('User last sign in time: ' + user.user_metadata.last_sign_in_timestamp) page = page.get_next_page() or from firebase_admin import auth for user in auth.list_users().iterate_all(): print('User creation time: ' + user.user_metadata.creation_timestamp) print('User last sign in time: ' + user.user_metadata.last_sign_in_timestamp) Each timestamp is an integer representing milliseconds since the epoch. More info: user is a ExportedUserRecord which inherits from UserRecord user_metadata is UserMetadata
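A short sketch of one way to read those values and turn the millisecond timestamps into datetime objects (it uses firebase_admin's list_users().iterate_all() and the user_metadata attribute of each UserRecord; last_sign_in_timestamp can be None for accounts that have never signed in):
from datetime import datetime, timezone
from firebase_admin import auth

def ms_to_dt(ms):
    # Timestamps are milliseconds since the epoch, or None/0 if absent.
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc) if ms else None

for user in auth.list_users().iterate_all():
    meta = user.user_metadata
    print(user.uid, user.email,
          ms_to_dt(meta.creation_timestamp),
          ms_to_dt(meta.last_sign_in_timestamp))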
Extract Created date and last login from firebase authentication using Python
Currently my python code gets the user id and email of all users from firebase authentication using the firebase admin SDK, however I am unable find the correct syntax to extract the user metadata such as the created and last login date/time (which according to the documentation is in milliseconds). There are a few documentation that shows ways to do this, but it is not working for me. My code: from firebase_admin import credentials from firebase_admin import auth cred=credentials.Certificate('firebasesdk.json') firebase_admin.initialize_app(cred, { "databaseURL": "myurl", }) page = auth.list_users() while page: for user in page.users: print('User email: ' + user.email) print('User id: ' + user.uid) I tried using user.metadata.creationTimeand user.madata.getLastSignInTimestamp and but it does not work with python. I was looking through this post regarding this issue.
[ "It looks like the ExportedUsers result for list_users does not include the metadata for the users. In that case you'll need to call get_user(...) for each UID in the result to get the full UserRecord, and then find the timestamps in the metadata.\n", "The correct syntax is user.metadata.creation_timestamp and user.metadata.last_sign_in_timestamp. See API reference: https://firebase.google.com/docs/reference/admin/python/firebase_admin.auth#firebase_admin.auth.UserMetadata\n", "To get creationTime and getLastSignInTimeStamp, you can access creation_timestamp and last_sign_in_timestamp from the user_metadata field. For example\nfrom firebase_admin import auth\n\npage = auth.list_users()\n while page:\n for user in page.users:\n print('User creation time: ' + user.user_metadata.creation_timestamp)\n print('User last sign in time: ' + user.user_metadata.last_sign_in_timestamp)\n page = page.get_next_page()\n\nor\nfrom firebase_admin import auth\n\nfor user in auth.list_users().iterate_all():\n print('User creation time: ' + user.user_metadata.creation_timestamp)\n print('User last sign in time: ' + user.user_metadata.last_sign_in_timestamp)\n\nEach timestamp is an integer representing milliseconds since the epoch.\nMore info:\nuser is a ExportedUserRecord which inherits from UserRecord\nuser_metadata is UserMetadata\n" ]
[ 3, 2, 0 ]
[]
[]
[ "firebase_admin", "firebase_authentication", "python" ]
stackoverflow_0065535902_firebase_admin_firebase_authentication_python.txt
Q: How to convert value datatype in pandas column with JSON from big number to int64? I'm reading a nested Bigquery table with read_gbq and getting list of jsons with some big numbers data = pd.read_gbq(sql, project_id=project) Here is one of the cells with array with jsons in it [{'key': 'firebase_screen_id', 'value': {'string_value': None, 'int_value': -2.047602554786245e+18, 'float_value': None, 'double_value': None}}, {'key': 'ga_session_id', 'value': {'string_value': None, 'int_value': 1620765482.0, 'float_value': None, 'double_value': None}}] inside is 'int_value': -2.047602554786245e+18 but it should be -2047602554786245165 i tried to convert column to string with data['events'].astype(str) and to int then string data.astype("Int64").astype(str)) but it still an object with array and has modified big number in t how can i get full int inside this cells and how to apply this to column? [{'key': 'firebase_screen_id', 'value': {'string_value': None, 'int_value': -2047602554786245165, 'float_value': None, 'double_value': None}}, {'key': 'ga_session_id', 'value': {'string_value': None, 'int_value': 1620765482.0, 'float_value': None, 'double_value': None}}] A: with further investigation i found out that this value was float and come out with this function Not the best use of Exceptions but fine for one time def values_to_int(json_data): result = {} for c in json_data: value = [e for c, e in c['value'].items() if e or e == 0] result[c["key"]] = value try: if type(result["firebase_screen_id"][0]) == float: result["firebase_screen_id"][0] = int(result["firebase_screen_id"][0]) except Exception: continue return result data[col] = data[col].apply(lambda x: values_to_int(x))
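One caveat before trying to convert: a 64-bit float only carries about 15 to 16 significant decimal digits, so by the time the value shows up as -2.047602554786245e+18 the last digits of -2047602554786245165 are already gone, and casting back to int cannot recover them. A tiny illustration:
original = -2047602554786245165
as_float = float(original)        # -2.047602554786245e+18
print(int(as_float) == original)  # False; floats near 2e18 are multiples of 256
If the exact id matters, it has to be kept out of float form upstream, for example by casting the column to a string in the BigQuery SQL before read_gbq pulls it into pandas.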
How to convert value datatype in pandas column with JSON from big number to int64?
I'm reading a nested Bigquery table with read_gbq and getting list of jsons with some big numbers data = pd.read_gbq(sql, project_id=project) Here is one of the cells with array with jsons in it [{'key': 'firebase_screen_id', 'value': {'string_value': None, 'int_value': -2.047602554786245e+18, 'float_value': None, 'double_value': None}}, {'key': 'ga_session_id', 'value': {'string_value': None, 'int_value': 1620765482.0, 'float_value': None, 'double_value': None}}] inside is 'int_value': -2.047602554786245e+18 but it should be -2047602554786245165 i tried to convert column to string with data['events'].astype(str) and to int then string data.astype("Int64").astype(str)) but it still an object with array and has modified big number in t how can i get full int inside this cells and how to apply this to column? [{'key': 'firebase_screen_id', 'value': {'string_value': None, 'int_value': -2047602554786245165, 'float_value': None, 'double_value': None}}, {'key': 'ga_session_id', 'value': {'string_value': None, 'int_value': 1620765482.0, 'float_value': None, 'double_value': None}}]
[ "with further investigation i found out that this value was float and come out with this function\nNot the best use of Exceptions but fine for one time\ndef values_to_int(json_data):\n result = {}\n for c in json_data:\n value = [e for c, e in c['value'].items() if e or e == 0]\n result[c[\"key\"]] = value\n try:\n if type(result[\"firebase_screen_id\"][0]) == float:\n result[\"firebase_screen_id\"][0] = int(result[\"firebase_screen_id\"][0])\n except Exception:\n continue\n return result\n\ndata[col] = data[col].apply(lambda x: values_to_int(x))\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "pandas", "python" ]
stackoverflow_0074477306_arrays_pandas_python.txt
Q: Different reslults with np.searchsorted and np.argmin during finding nearest indexes I have a set of timestamp (arr) data and list with starts and ends (cuts), the purpose is to intercept the data of the timestamp between the start and end and generate a new array. I have tried with two methodes, with np.searchsorted() and np.argmin(), but they give the different results. Any explication for this? Thank you! Here is my code: import numpy as np # Initialization data arr = np.arange(761.55643, 1525.5704932002686, 1/ 1000) cuts = [[810.211186646, 899.102014549], [903.520741867, 982.000921478], [985.201032795, 993.400610844], [998.303881868, 1085.500698357], [1090.200656211, 1168.101925871], [1171.299249968, 1179.611318749], [1184.610645285, 1271.597569677], [1275.600586067, 1363.696138556], [1368.301122947, 1455.500707533]] # Function vector_validity = np.zeros(len(arr)) new_arr_with_argmin = np.zeros(0) for cut in cuts: vector_validity[int(np.searchsorted(arr, cut[0])) : int(np.searchsorted(arr, cut[1]))] = 1 print(f"np.searchsorted start: {np.searchsorted(arr, cut[0])}") print(f"np.argmin start: {np.argmin(abs(arr - cut[0]))}") print(f"np.searchsorted end: {np.searchsorted(arr, cut[1])}") print(f"np.argmin end: {np.argmin(abs(arr - cut[1]))}") new_arr_with_argmin = np.concatenate((new_arr_with_argmin, arr[np.argmin(abs(arr - cut[0])) : np.argmin(abs(arr - cut[1]))])) new_arr_with_searchsorted = arr[vector_validity == 1] The result of the print: > np.searchsorted start: 48655 > np.argmin start: 48655 > np.searchsorted end: 137546 > np.argmin end: 137546 > np.searchsorted start: 141965 > np.argmin start: 141964 > np.searchsorted end: 220445 > np.argmin end: 220444 > np.searchsorted start: 223645 > np.argmin start: 223645 > np.searchsorted end: 231845 > np.argmin end: 231844 > np.searchsorted start: 236748 > np.argmin start: 236747 > np.searchsorted end: 323945 > np.argmin end: 323944 > np.searchsorted start: 328645 > np.argmin start: 328644 > np.searchsorted end: 406546 > np.argmin end: 406545 > np.searchsorted start: 409743 > np.argmin start: 409743 > np.searchsorted end: 418055 > np.argmin end: 418055 > np.searchsorted start: 423055 > np.argmin start: 423054 > np.searchsorted end: 510042 > np.argmin end: 510041 > np.searchsorted start: 514045 > np.argmin start: 514044 > np.searchsorted end: 602140 > np.argmin end: 602140 > np.searchsorted start: 606745 > np.argmin start: 606745 > np.searchsorted end: 693945 > np.argmin end: 693944 So we can find that from interval 2, two methodes give different indexes. Any explication for this result? A: The argmin method finds the index of closest value, which is not what searchsorted does. Here's a simple example: In [130]: a = np.array([1, 2]) For inputs such as v=1.05 and v=1.95 (both between 1 and 2), the position returned by searchsorted(a, v) is 1: In [131]: np.searchsorted(a, [1.05, 1.95]) Out[131]: array([1, 1]) Your method based on argmin does not give the same result for input values that are closer to 1 than 2: In [137]: np.argmin(abs(a - 1.05)) Out[137]: 0 In [138]: np.argmin(abs(a - 1.5)) Out[138]: 0 In [139]: np.argmin(abs(a - 1.51)) Out[139]: 1 In [140]: np.argmin(abs(a - 1.95)) Out[140]: 1
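If the goal is specifically "index of the nearest value", the two functions can be reconciled: take the insertion point from searchsorted and then compare the neighbour on each side. A minimal sketch for a sorted 1-D array:
import numpy as np

def nearest_index(sorted_arr, value):
    i = np.searchsorted(sorted_arr, value)
    if i == 0:
        return 0
    if i == len(sorted_arr):
        return len(sorted_arr) - 1
    # Pick whichever neighbour is closer; ties go to the lower index, like argmin.
    return i if (sorted_arr[i] - value) < (value - sorted_arr[i - 1]) else i - 1
For the arrays in the question, nearest_index(arr, cut[0]) should agree with np.argmin(abs(arr - cut[0])) while avoiding the full scan that argmin performs.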
Different results with np.searchsorted and np.argmin when finding nearest indexes
I have a set of timestamp (arr) data and list with starts and ends (cuts), the purpose is to intercept the data of the timestamp between the start and end and generate a new array. I have tried with two methodes, with np.searchsorted() and np.argmin(), but they give the different results. Any explication for this? Thank you! Here is my code: import numpy as np # Initialization data arr = np.arange(761.55643, 1525.5704932002686, 1/ 1000) cuts = [[810.211186646, 899.102014549], [903.520741867, 982.000921478], [985.201032795, 993.400610844], [998.303881868, 1085.500698357], [1090.200656211, 1168.101925871], [1171.299249968, 1179.611318749], [1184.610645285, 1271.597569677], [1275.600586067, 1363.696138556], [1368.301122947, 1455.500707533]] # Function vector_validity = np.zeros(len(arr)) new_arr_with_argmin = np.zeros(0) for cut in cuts: vector_validity[int(np.searchsorted(arr, cut[0])) : int(np.searchsorted(arr, cut[1]))] = 1 print(f"np.searchsorted start: {np.searchsorted(arr, cut[0])}") print(f"np.argmin start: {np.argmin(abs(arr - cut[0]))}") print(f"np.searchsorted end: {np.searchsorted(arr, cut[1])}") print(f"np.argmin end: {np.argmin(abs(arr - cut[1]))}") new_arr_with_argmin = np.concatenate((new_arr_with_argmin, arr[np.argmin(abs(arr - cut[0])) : np.argmin(abs(arr - cut[1]))])) new_arr_with_searchsorted = arr[vector_validity == 1] The result of the print: > np.searchsorted start: 48655 > np.argmin start: 48655 > np.searchsorted end: 137546 > np.argmin end: 137546 > np.searchsorted start: 141965 > np.argmin start: 141964 > np.searchsorted end: 220445 > np.argmin end: 220444 > np.searchsorted start: 223645 > np.argmin start: 223645 > np.searchsorted end: 231845 > np.argmin end: 231844 > np.searchsorted start: 236748 > np.argmin start: 236747 > np.searchsorted end: 323945 > np.argmin end: 323944 > np.searchsorted start: 328645 > np.argmin start: 328644 > np.searchsorted end: 406546 > np.argmin end: 406545 > np.searchsorted start: 409743 > np.argmin start: 409743 > np.searchsorted end: 418055 > np.argmin end: 418055 > np.searchsorted start: 423055 > np.argmin start: 423054 > np.searchsorted end: 510042 > np.argmin end: 510041 > np.searchsorted start: 514045 > np.argmin start: 514044 > np.searchsorted end: 602140 > np.argmin end: 602140 > np.searchsorted start: 606745 > np.argmin start: 606745 > np.searchsorted end: 693945 > np.argmin end: 693944 So we can find that from interval 2, two methodes give different indexes. Any explication for this result?
[ "The argmin method finds the index of closest value, which is not what searchsorted does.\nHere's a simple example:\nIn [130]: a = np.array([1, 2])\n\nFor inputs such as v=1.05 and v=1.95 (both between 1 and 2), the position returned by searchsorted(a, v) is 1:\nIn [131]: np.searchsorted(a, [1.05, 1.95])\nOut[131]: array([1, 1])\n\nYour method based on argmin does not give the same result for input values that are closer to 1 than 2:\nIn [137]: np.argmin(abs(a - 1.05))\nOut[137]: 0\n\nIn [138]: np.argmin(abs(a - 1.5))\nOut[138]: 0\n\nIn [139]: np.argmin(abs(a - 1.51))\nOut[139]: 1\n\nIn [140]: np.argmin(abs(a - 1.95))\nOut[140]: 1\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "numpy", "python" ]
stackoverflow_0074479800_arrays_numpy_python.txt
Q: How to translate a letter into a specific word using the dictionaries I created? I want to write a program that asks the user for a message, then converts the message using the telephony codes, codes that translate each letter into a specific word. Here is sample output from the program: This program will translate a message using telephony codes. What is your message? I love you, mom! India Lima Oscar Victor Echo Yankee Oscar Uniform Mike Oscar Mike The solution I can think of is to replace the letter a to alfa and then b and then rest of the list but it is just time consuming; My question is: how can I use a for loop (maybe?) to set conditions and to convert all letters? Basically you need to convert every letter into a new word using the dictionaries "A": "Alfa", "B": "Bravo", "C": "Charlie", "D": "Delta", "E": "Echo", "F": "Foxtrot", "G": "Golf", "H": "Hotel", "I": "India", "J": "Juliett", "K": "Kilo", "L": "Lima", "M": "Mike", "N": "November", "O": "Oscar", "P": "Papa", "Q": "Quebec", "R": "Romeo", "S": "Sierra", "T": "Tango", "U": "Uniform", "V": "Victor", "W": "Whiskey", "X": "X-ray", "Y": "Yankee", "Z": "Zulu", A: output = '' for letter in list(word): if output == '': output = dictionary[letter] else: output = output + ' ' + dictionary[letter] I hope this helps. It checks if it is the first word added to the output, and then determines whether or not to add a space. word is the input, output is the result A: sample = "This program will translate a message using telephony codes. What is your message? I love you, mom! India Lima Oscar Victor Echo Yankee Oscar Uniform Mike Oscar Mike" for letter, word in {"A": "Alfa", "B": "Bravo", "C": "Charlie", "D": "Delta", "E": "Echo", "F": "Foxtrot", "G": "Golf", "H": "Hotel", "I": "India", "J": "Juliett", "K": "Kilo", "L": "Lima", "M": "Mike", "N": "November", "O": "Oscar", "P": "Papa", "Q": "Quebec", "R": "Romeo", "S": "Sierra", "T": "Tango", "U": "Uniform", "V": "Victor", "W": "Whiskey", "X": "X-ray", "Y": "Yankee", "Z": "Zulu"}.items(): sample = sample.replace(word, letter) sample >>> 'This program will translate a message using telephony codes. What is your message? I love you, mom! I L O V E Y O U M O M' A: so first before parsing the string I would suggest removing the spaces and special characters like punctuations. Either remove the punctuations or add that to the lookup dictionary lookup = {",": 'com',"A": "Alfa", "B": "Bravo", "C": "Charlie", "D": "Delta", "E": "Echo", "F": "Foxtrot", "G": "Golf", "H": "Hotel", "I": "India", "J": "Juliett", "K": "Kilo", "L": "Lima", "M": "Mike", "N": "November", "O": "Oscar", "P": "Papa", "Q": "Quebec", "R": "Romeo", "S": "Sierra", "T": "Tango", "U": "Uniform", "V": "Victor", "W": "Whiskey", "X": "X-ray", "Y": "Yankee", "Z": "Zulu"} #here punctuations have been removed before hand st = "This program will translate a message using telephony codes What is your message I love you mom India Lima Oscar Victor Echo Yankee Oscar Uniform Mike Oscar Mike".upper() st = st.replace(' ', ',') #replacing spaces with comas print(''.join(list(map(lambda x:lookup[x], st))))
How to translate a letter into a specific word using the dictionaries I created?
I want to write a program that asks the user for a message, then converts the message using the telephony codes, codes that translate each letter into a specific word. Here is sample output from the program: This program will translate a message using telephony codes. What is your message? I love you, mom! India Lima Oscar Victor Echo Yankee Oscar Uniform Mike Oscar Mike The solution I can think of is to replace the letter a to alfa and then b and then rest of the list but it is just time consuming; My question is: how can I use a for loop (maybe?) to set conditions and to convert all letters? Basically you need to convert every letter into a new word using the dictionaries "A": "Alfa", "B": "Bravo", "C": "Charlie", "D": "Delta", "E": "Echo", "F": "Foxtrot", "G": "Golf", "H": "Hotel", "I": "India", "J": "Juliett", "K": "Kilo", "L": "Lima", "M": "Mike", "N": "November", "O": "Oscar", "P": "Papa", "Q": "Quebec", "R": "Romeo", "S": "Sierra", "T": "Tango", "U": "Uniform", "V": "Victor", "W": "Whiskey", "X": "X-ray", "Y": "Yankee", "Z": "Zulu",
[ "output = ''\nfor letter in list(word):\n if output == '':\n output = dictionary[letter]\n else:\n output = output + ' ' + dictionary[letter]\n\n\nI hope this helps. It checks if it is the first word added to the output, and then determines whether or not to add a space.\nword is the input, output is the result\n", "sample = \"This program will translate a message using telephony codes. What is your message? I love you, mom! India Lima Oscar Victor Echo Yankee Oscar Uniform Mike Oscar Mike\"\n\nfor letter, word in {\"A\": \"Alfa\", \"B\": \"Bravo\", \"C\": \"Charlie\", \"D\": \"Delta\", \"E\": \"Echo\", \"F\": \"Foxtrot\", \"G\": \"Golf\", \"H\": \"Hotel\", \"I\": \"India\", \"J\": \"Juliett\", \"K\": \"Kilo\", \"L\": \"Lima\", \"M\": \"Mike\", \"N\": \"November\", \"O\": \"Oscar\", \"P\": \"Papa\", \"Q\": \"Quebec\", \"R\": \"Romeo\", \"S\": \"Sierra\", \"T\": \"Tango\", \"U\": \"Uniform\", \"V\": \"Victor\", \"W\": \"Whiskey\", \"X\": \"X-ray\", \"Y\": \"Yankee\", \"Z\": \"Zulu\"}.items():\n sample = sample.replace(word, letter)\nsample\n>>> 'This program will translate a message using telephony codes. What is your message? I love you, mom! I L O V E Y O U M O M'\n\n", "so first before parsing the string I would suggest removing the spaces and special characters like punctuations. Either remove the punctuations or add that to the lookup dictionary\nlookup = {\",\": 'com',\"A\": \"Alfa\", \"B\": \"Bravo\", \"C\": \"Charlie\", \"D\": \"Delta\", \"E\": \"Echo\", \"F\": \"Foxtrot\", \"G\": \"Golf\", \"H\": \"Hotel\", \"I\": \"India\", \"J\": \"Juliett\", \"K\": \"Kilo\", \"L\": \"Lima\", \"M\": \"Mike\", \"N\": \"November\", \"O\": \"Oscar\", \"P\": \"Papa\", \"Q\": \"Quebec\", \"R\": \"Romeo\", \"S\": \"Sierra\", \"T\": \"Tango\", \"U\": \"Uniform\", \"V\": \"Victor\", \"W\": \"Whiskey\", \"X\": \"X-ray\", \"Y\": \"Yankee\", \"Z\": \"Zulu\"}\n#here punctuations have been removed before hand\nst = \"This program will translate a message using telephony codes What is your message I love you mom India Lima Oscar Victor Echo Yankee Oscar Uniform Mike Oscar Mike\".upper()\nst = st.replace(' ', ',') #replacing spaces with comas\nprint(''.join(list(map(lambda x:lookup[x], st))))\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "dictionary", "python", "telephony" ]
stackoverflow_0074480932_dictionary_python_telephony.txt
Q: DJANGO: QueryDict obj has no attribute 'status_code' I am a bit shy. This is my first question here and my English isn't great. So I made CreateAdvert CBV(CreateView) and overrode the 'post' method for it. I need to update QueryDict and append field 'user' to it. But when I am trying to return the context. It says the error in the title. Views: class CreateAdvert(CreateView): form_class = CreateAdvert template_name = 'marketapp/createadvert.html' context_object_name = 'advert' def post(self, request): #Because I don't want to give QueryDict 'user' field right from the form, I override the #post method here. user = User.objects.filter(email=self.request.user)[0].id context = self.request.POST.copy() context['user'] = user return context Forms: class CreateAdvert(forms.ModelForm): class Meta: model = Advertisment fields = ['category', 'title', 'description', 'image', ] I have tried to cover context with HttpRequest(). It didn't give me a lot of result. but I tried. A: See: https://docs.djangoproject.com/en/4.1/topics/http/views/ You should return a response object, and not the context dictionary from the CreateView: Like: from django.http import HttpResponse import datetime def current_datetime(request): now = datetime.datetime.now() html = "<html><body>It is now %s.</body></html>" % now return HttpResponse(html) From the error: obj has no attribute 'status_code' after the return happens django is trying to find status_code which is an expected property of an HttpResponse object. Or: def post(self, request): #Because I don't want to give QueryDict 'user' field right from the form, I override the #post method here. user = User.objects.filter(email=self.request.user)[0].id context = self.request.POST.copy() service.create_advert(context, user) context['user'] = user return HttpResponse('ok', status=200) In regards to where/how to set the context I think this happens in a different method usually. class CreateAdvert(CreateView): form_class = CreateAdvert template_name = 'marketapp/createadvert.html' context_object_name = 'advert' def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) user = User.objects.filter(email=self.request.user)[0].id context = self.request.POST.copy() context['user'] = user return context
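With a generic CreateView there is also a common alternative to overriding post: set the user in form_valid, so the normal flow (validation, save, redirect) still runs and a real HttpResponse is returned. A minimal sketch, assuming the Advertisment model has a user field and renaming the classes so the form and the view no longer both claim the name CreateAdvert:

from django.views.generic import CreateView

class CreateAdvertView(CreateView):
    # CreateAdvertForm is the ModelForm from the question, renamed so the form
    # and the view no longer share the same name
    form_class = CreateAdvertForm
    template_name = 'marketapp/createadvert.html'

    def form_valid(self, form):
        # attach the logged-in user before the object is saved
        form.instance.user = self.request.user
        return super().form_valid(form)

Because form_valid only adjusts the instance and then defers to the parent class, the 'status_code' error disappears without touching the QueryDict at all.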
DJANGO: QueryDict obj has no attribute 'status_code'
I am a bit shy. This is my first question here and my English isn't great. So I made CreateAdvert CBV(CreateView) and overrode the 'post' method for it. I need to update QueryDict and append field 'user' to it. But when I am trying to return the context. It says the error in the title. Views: class CreateAdvert(CreateView): form_class = CreateAdvert template_name = 'marketapp/createadvert.html' context_object_name = 'advert' def post(self, request): #Because I don't want to give QueryDict 'user' field right from the form, I override the #post method here. user = User.objects.filter(email=self.request.user)[0].id context = self.request.POST.copy() context['user'] = user return context Forms: class CreateAdvert(forms.ModelForm): class Meta: model = Advertisment fields = ['category', 'title', 'description', 'image', ] I have tried to cover context with HttpRequest(). It didn't give me a lot of result. but I tried.
[ "See: https://docs.djangoproject.com/en/4.1/topics/http/views/\nYou should return a response object, and not the context dictionary from the CreateView:\nLike:\nfrom django.http import HttpResponse\nimport datetime\n\ndef current_datetime(request):\n now = datetime.datetime.now()\n html = \"<html><body>It is now %s.</body></html>\" % now\n return HttpResponse(html)\n\n\nFrom the error: obj has no attribute 'status_code' after the return happens django is trying to find status_code which is an expected property of an HttpResponse object.\nOr:\n def post(self, request):\n #Because I don't want to give QueryDict 'user' field right from the form, I override the\n #post method here.\n user = User.objects.filter(email=self.request.user)[0].id\n context = self.request.POST.copy()\n service.create_advert(context, user)\n context['user'] = user\n return HttpResponse('ok', status=200)\n\nIn regards to where/how to set the context I think this happens in a different method usually.\nclass CreateAdvert(CreateView):\n form_class = CreateAdvert\n template_name = 'marketapp/createadvert.html'\n context_object_name = 'advert'\n\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n user = User.objects.filter(email=self.request.user)[0].id\n context = self.request.POST.copy()\n context['user'] = user\n return context\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_forms", "django_views", "python" ]
stackoverflow_0074480990_django_django_forms_django_views_python.txt
Q: Images getting squished when embedded in Flask I am working on a project to get better at Flask, and I am using an image which is stored in my assets file (maindirectory>static>assets>myimage). While I got the images to appear just fine, they are weirdly squished, regardless of how I adjusted their heights. These heights worked just fine when I built the pages and embedded in HTML, so I am a bit confused. My embed: <img src="{{ url_for('static', filename='assets/t1.jpg' )}}" height="30%"> I embedded this image with <img src="{{ url_for('static', filename='assets/t1.jpg' )}}" height="30%"> I know it is properly grabbing the image since it's showing up, but it is squishing for reasons I cannot understand. I was expecting it to look like it did when I used <img src="../static/assets/t1.jpg"> (which looked fine) A: I hope you find one of these two options useful. write object-fit:cover to img tag; Write the width in percentages and the height:auto;
Images getting squished when embedded in Flask
I am working on a project to get better at Flask, and I am using an image which is stored in my assets file (maindirectory>static>assets>myimage). While I got the images to appear just fine, they are weirdly squished, regardless of how I adjusted their heights. These heights worked just fine when I built the pages and embedded in HTML, so I am a bit confused. My embed: <img src="{{ url_for('static', filename='assets/t1.jpg' )}}" height="30%"> I embedded this image with <img src="{{ url_for('static', filename='assets/t1.jpg' )}}" height="30%"> I know it is properly grabbing the image since it's showing up, but it is squishing for reasons I cannot understand. I was expecting it to look like it did when I used <img src="../static/assets/t1.jpg"> (which looked fine)
[ "I hope you find one of these two options useful.\n\nwrite object-fit:cover to img tag;\nWrite the width in percentages and the height:auto;\n\n" ]
[ 0 ]
[]
[]
[ "flask", "html", "jinja2", "python" ]
stackoverflow_0074481072_flask_html_jinja2_python.txt
Q: Periodically call a function in pygtk's main loop What's the pygtk equivalent for after method in tkinter? I want to periodically call a function in the main loop. What is the better way to achieve it? A: Use gobject.timeout_add: import gobject gobject.timeout_add(milliseconds, callback) For example here is a progress bar that uses timeout_add to update the progress (HScale) value: import gobject import gtk class Bar(object): def __init__(self,widget): self.val=0 self.scale = gtk.HScale() self.scale.set_range(0, 100) self.scale.set_update_policy(gtk.UPDATE_CONTINUOUS) self.scale.set_value(self.val) widget.add(self.scale) gobject.timeout_add(100, self.timeout) def timeout(self): self.val +=1 self.scale.set_value(self.val) return True if __name__=='__main__': win = gtk.Window() win.set_default_size(300,50) win.connect("destroy", gtk.main_quit) bar=Bar(win) win.show_all() gtk.main() A: If you're using the new Python GObject Introspection API, you should use GLib.timeout_add(). Note that the documentation seems to be incorrect. It is actually: timeout_add(interval, function, *user_data, **kwargs) Here's an example. Note that run is a callable object, but it could be any ordinary function or method. from gi.repository import GLib class Runner: def __init__(self, num_times): self.num_times = num_times self.count = 0 def __call__(self, *args): self.count += 1 print("Periodic timer [{}]: args={}".format(self.count, args)) return self.count < self.num_times run = Runner(5) interval_ms = 1000 GLib.timeout_add(interval_ms, run, 'foo', 123) loop = GLib.MainLoop() loop.run() Output: $ python3 glib_timeout.py Periodic timer [1]: args=('foo', 123) Periodic timer [2]: args=('foo', 123) Periodic timer [3]: args=('foo', 123) Periodic timer [4]: args=('foo', 123) Periodic timer [5]: args=('foo', 123) <messages stop but main loop keeps running> A: or the simplest test code // base on Jonathon Reinhart's answer from gi.repository import GLib i = 0 def test1(*args): global i i+=1 print('test1:', i, args) if i<3: return True # keep running else: return False # end timer # call test1 every 1000ms, until it return False GLib.timeout_add(1000, test1, 'foo', 123) loop = GLib.MainLoop() # just for test without UI loop.run() outputs: $ python3 ../test_gtk_timeout.py test1: 1 ('foo', 123) test1: 2 ('foo', 123) test1: 3 ('foo', 123)
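A detail worth knowing on top of these examples: timeout_add returns a source id, so the periodic call can be cancelled either by returning False from the callback or by removing the source explicitly. A short sketch with the GLib API:

from gi.repository import GLib

def tick():
    print("tick")
    return True   # True (or GLib.SOURCE_CONTINUE) keeps the timer running

source_id = GLib.timeout_add(1000, tick)   # call tick() every 1000 ms

# ... later, when the periodic call is no longer needed:
GLib.source_remove(source_id)

Returning GLib.SOURCE_REMOVE (i.e. False) from the callback has the same effect as source_remove, which is how the examples above stop themselves.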
Periodically call a function in pygtk's main loop
What's the pygtk equivalent of tkinter's after method? I want to periodically call a function in the main loop. What is the best way to achieve it?
[ "Use gobject.timeout_add:\nimport gobject\ngobject.timeout_add(milliseconds, callback)\n\nFor example here is a progress bar that uses timeout_add to update the progress (HScale) value:\nimport gobject\nimport gtk\n\nclass Bar(object):\n def __init__(self,widget):\n self.val=0\n self.scale = gtk.HScale()\n self.scale.set_range(0, 100)\n self.scale.set_update_policy(gtk.UPDATE_CONTINUOUS)\n self.scale.set_value(self.val)\n widget.add(self.scale)\n gobject.timeout_add(100, self.timeout)\n def timeout(self):\n self.val +=1\n self.scale.set_value(self.val)\n return True\n\nif __name__=='__main__':\n win = gtk.Window()\n win.set_default_size(300,50)\n win.connect(\"destroy\", gtk.main_quit)\n bar=Bar(win)\n win.show_all()\n gtk.main()\n\n", "If you're using the new Python GObject Introspection API, you should use GLib.timeout_add().\nNote that the documentation seems to be incorrect. It is actually:\ntimeout_add(interval, function, *user_data, **kwargs)\n\nHere's an example. Note that run is a callable object, but it could be any ordinary function or method.\nfrom gi.repository import GLib\n\nclass Runner:\n def __init__(self, num_times):\n self.num_times = num_times\n self.count = 0\n\n def __call__(self, *args):\n self.count += 1\n print(\"Periodic timer [{}]: args={}\".format(self.count, args))\n\n return self.count < self.num_times\n\nrun = Runner(5)\n\ninterval_ms = 1000\nGLib.timeout_add(interval_ms, run, 'foo', 123)\n\nloop = GLib.MainLoop()\nloop.run()\n\nOutput:\n$ python3 glib_timeout.py \nPeriodic timer [1]: args=('foo', 123)\nPeriodic timer [2]: args=('foo', 123)\nPeriodic timer [3]: args=('foo', 123)\nPeriodic timer [4]: args=('foo', 123)\nPeriodic timer [5]: args=('foo', 123)\n<messages stop but main loop keeps running>\n\n", "or the simplest test code\n// base on Jonathon Reinhart's answer\nfrom gi.repository import GLib\n\ni = 0\ndef test1(*args):\n global i\n i+=1\n print('test1:', i, args)\n if i<3:\n return True # keep running\n else:\n return False # end timer\n\n# call test1 every 1000ms, until it return False\nGLib.timeout_add(1000, test1, 'foo', 123)\n\nloop = GLib.MainLoop() # just for test without UI\nloop.run()\n\noutputs:\n$ python3 ../test_gtk_timeout.py\ntest1: 1 ('foo', 123)\ntest1: 2 ('foo', 123)\ntest1: 3 ('foo', 123)\n\n" ]
[ 17, 1, 0 ]
[]
[]
[ "pygtk", "python" ]
stackoverflow_0007309782_pygtk_python.txt
Q: Can't figure out why my list index is out of range i created a function to count the value of a blackjack hand with a for loop but it keep telling me that the index is out of range and i can't figure out why i tried switching from "for card in total_cards" to a "for card in range(0, len(total_cards))" hoping that that would solve my problem, but i keep getting the same error. Since both errors seems to originate from the function, what am i missing here? Thank you all in advance. import random def count_total(total_cards): total = 0 for card in total_cards: total += total_cards[card] return total cards = [11, 2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10] house_cards = [] player_cards = [] for i in range (1, 5): if i % 2 == 0: player_cards.append(cards[random.randint(0, len(cards) - 1)]) elif i % 2 != 0: house_cards.append(cards[random.randint(0, len(cards) - 1)]) print(house_cards) print(player_cards) should_continue = True while should_continue: action = input("Typr 'y' to ask for a card or 'n' to stop: ") if action == "n": should_continue = False break elif action == "y": player_cards.append(cards[random.randint(0, len(cards) - 1)]) count_total(player_cards) if count_total(player_cards) > 21: should_continue = False print("You have gone over 21, you lost!") break A: This is the problem: for card in total_cards: total += total_cards[card] You don't need to index into the collection - the for loop is doing that for you. Just change it to: for card in total_cards: total += card A: I'm relatively new, but I believe when you iterate through a list in python using the for loop, you have already "pulled" the data out of it. So: for card in total_cards: total += total_cards[card] should be: for card in total_cards: total += card A: player_cards contain values from 0 to len(cards)-1=12. At the first call to count_total, player_cards has 3 elements, one from for-loop i==2 and i==4 and one from line above call. In count_total you use "for card in total_cards" meaning that card take the values of total_cards which is player_cards. Then you try to retrieve elements from a list of length three with index "card" = values from 0 to 12. If your using range you need to subtract 1 from the upper limit: for card in range(0, len(total_cards)-1)
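Once the loop iterates over the values themselves, the whole helper collapses to the built-in sum, and random.choice is a simpler way to draw a card than indexing with randint; a short sketch, assuming the cards and player_cards lists from the question:

import random

def count_total(total_cards):
    # total_cards already holds the card values, so just add them up
    return sum(total_cards)

# drawing a random card without manual index arithmetic
player_cards.append(random.choice(cards))

Both lines behave the same as the corrected loop in the answers above; they are just the more idiomatic spellings.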
Can't figure out why my list index is out of range
i created a function to count the value of a blackjack hand with a for loop but it keep telling me that the index is out of range and i can't figure out why i tried switching from "for card in total_cards" to a "for card in range(0, len(total_cards))" hoping that that would solve my problem, but i keep getting the same error. Since both errors seems to originate from the function, what am i missing here? Thank you all in advance. import random def count_total(total_cards): total = 0 for card in total_cards: total += total_cards[card] return total cards = [11, 2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10] house_cards = [] player_cards = [] for i in range (1, 5): if i % 2 == 0: player_cards.append(cards[random.randint(0, len(cards) - 1)]) elif i % 2 != 0: house_cards.append(cards[random.randint(0, len(cards) - 1)]) print(house_cards) print(player_cards) should_continue = True while should_continue: action = input("Typr 'y' to ask for a card or 'n' to stop: ") if action == "n": should_continue = False break elif action == "y": player_cards.append(cards[random.randint(0, len(cards) - 1)]) count_total(player_cards) if count_total(player_cards) > 21: should_continue = False print("You have gone over 21, you lost!") break
[ "This is the problem:\nfor card in total_cards:\n total += total_cards[card]\n\nYou don't need to index into the collection - the for loop is doing that for you. Just change it to:\nfor card in total_cards:\n total += card\n\n", "I'm relatively new, but I believe when you iterate through a list in python using the for loop, you have already \"pulled\" the data out of it. So:\nfor card in total_cards:\n total += total_cards[card]\n\nshould be:\nfor card in total_cards:\n total += card\n\n", "player_cards contain values from 0 to len(cards)-1=12. At the first call to count_total, player_cards has 3 elements, one from for-loop i==2 and i==4 and one from line above call. In count_total you use \"for card in total_cards\" meaning that card take the values of total_cards which is player_cards. Then you try to retrieve elements from a list of length three with index \"card\" = values from 0 to 12.\nIf your using range you need to subtract 1 from the upper limit:\nfor card in range(0, len(total_cards)-1)\n" ]
[ 3, 2, 0 ]
[]
[]
[ "list", "python", "python_3.x" ]
stackoverflow_0074480966_list_python_python_3.x.txt
Q: Variable number of nested for loops in Python I am having trouble getting this to work and any help would be greatly appreciated. I want to have a variable number of nested for loops for the following code. The idea is to write every combination possible to a csv file. here is my code: ` ka = [0.217, 0.445] kb = [0.03, 0.05] kc = [10] kd = [0.15625, 0.7] ke = [1.02, 0.78] La = [0.15, 0.25] Lb = [0.025, 0.075] tc = [0.002, 0.007] Ld = [0.025, 0.115] Le = [0.07, 0.2] NUMBER_OF_VARIABLES = 10 with open('test.csv', 'w') as file: writer = csv.writer(file, lineterminator = '\n') row = [0] * len(NUMBER_OF_VARIABLES) for E in Le: for D in Ld: for C in tc: for B in Lb: for A in La: for e in ke: for d in kd: for c in kc: for b in kb: for a in ka: row[0] = a row[1] = b row[2] = c row[3] = d row[4] = e row[5] = A row[6] = B row[7] = C row[8] = D row[9] = E writer.writerow(row) ` the idea is I would like to be able to add more or remove variables. the k and L of each letter are related. For example to add another variable would include a Lf and kf. I would like to do it without manually adding more loops. The variable structure does not have to remain if it would be better to make it one list. I feel like I need to write a recursive function but am having trouble figuring this out, any help would be greatly appreciated. I have tried importing a csv file where each line has a variable but can not figure out the variable number of for loops. A: What you need is itertools.product. It will handle all of this for you. import itertools ka = [0.217, 0.445] kb = [0.03, 0.05] kc = [10] kd = [0.15625, 0.7] ke = [1.02, 0.78] La = [0.15, 0.25] Lb = [0.025, 0.075] tc = [0.002, 0.007] Ld = [0.025, 0.115] Le = [0.07, 0.2] for row in itertools.product(ka,kb,kc,kd,ke,La,Lb,tc,Ld,Le): writer.writerow(row) You can probably even do that in a single line: writer.writerows(itertools.product(ka,kb,kc,kd,ke,La,Lb,tc,Ld,Le)) A: Try using itertools.product: ka = [0.217, 0.445] kb = [0.03, 0.05] kc = [10] kd = [0.15625, 0.7] ke = [1.02, 0.78] La = [0.15, 0.25] Lb = [0.025, 0.075] tc = [0.002, 0.007] Ld = [0.025, 0.115] Le = [0.07, 0.2] from itertools import product with open('test.csv', 'w') as file: writer = csv.writer(file, lineterminator = '\n') writer.writerows(product(ka, kb, kc, kd, ke, La,Lb, tc, Ld, Le) As you can see - Python does have built-in tools to deal with this situations. Otherwise, if the iterttols package was not there, the way to do this kind of thing is by using functions, and calling then recursively - something like def product(*args): if not args: return [] remainder = product(*args[1:]) result = [] for item in args[0]: if remainder: for part in remainder: row = [item, *part] result.append(row) else: result.append([item,]) return result
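Because itertools.product accepts any number of iterables, the "variable number of variables" requirement can also be met by keeping the lists in one collection and unpacking it, so adding or removing a variable means editing a single list; a sketch, assuming the ka ... Le lists from the question are already defined:

import csv
import itertools

variables = [ka, kb, kc, kd, ke, La, Lb, tc, Ld, Le]  # add or remove lists here

with open('test.csv', 'w', newline='') as file:
    writer = csv.writer(file)
    writer.writerows(itertools.product(*variables))

The *variables unpacking passes however many lists the collection holds, so the same two lines work for 5, 10 or 20 variables.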
Variable number of nested for loops in Python
I am having trouble getting this to work and any help would be greatly appreciated. I want to have a variable number of nested for loops for the following code. The idea is to write every combination possible to a csv file. here is my code: ` ka = [0.217, 0.445] kb = [0.03, 0.05] kc = [10] kd = [0.15625, 0.7] ke = [1.02, 0.78] La = [0.15, 0.25] Lb = [0.025, 0.075] tc = [0.002, 0.007] Ld = [0.025, 0.115] Le = [0.07, 0.2] NUMBER_OF_VARIABLES = 10 with open('test.csv', 'w') as file: writer = csv.writer(file, lineterminator = '\n') row = [0] * len(NUMBER_OF_VARIABLES) for E in Le: for D in Ld: for C in tc: for B in Lb: for A in La: for e in ke: for d in kd: for c in kc: for b in kb: for a in ka: row[0] = a row[1] = b row[2] = c row[3] = d row[4] = e row[5] = A row[6] = B row[7] = C row[8] = D row[9] = E writer.writerow(row) ` the idea is I would like to be able to add more or remove variables. the k and L of each letter are related. For example to add another variable would include a Lf and kf. I would like to do it without manually adding more loops. The variable structure does not have to remain if it would be better to make it one list. I feel like I need to write a recursive function but am having trouble figuring this out, any help would be greatly appreciated. I have tried importing a csv file where each line has a variable but can not figure out the variable number of for loops.
[ "What you need is itertools.product. It will handle all of this for you.\nimport itertools\nka = [0.217, 0.445]\nkb = [0.03, 0.05]\nkc = [10]\nkd = [0.15625, 0.7]\nke = [1.02, 0.78]\nLa = [0.15, 0.25]\nLb = [0.025, 0.075]\ntc = [0.002, 0.007]\nLd = [0.025, 0.115]\nLe = [0.07, 0.2]\n\nfor row in itertools.product(ka,kb,kc,kd,ke,La,Lb,tc,Ld,Le):\n writer.writerow(row)\n\nYou can probably even do that in a single line:\nwriter.writerows(itertools.product(ka,kb,kc,kd,ke,La,Lb,tc,Ld,Le))\n\n", "Try using itertools.product:\nka = [0.217, 0.445]\nkb = [0.03, 0.05]\nkc = [10]\nkd = [0.15625, 0.7]\nke = [1.02, 0.78]\nLa = [0.15, 0.25]\nLb = [0.025, 0.075]\ntc = [0.002, 0.007]\nLd = [0.025, 0.115]\nLe = [0.07, 0.2]\n\nfrom itertools import product\nwith open('test.csv', 'w') as file:\n writer = csv.writer(file, lineterminator = '\\n')\n writer.writerows(product(ka, kb, kc, kd, ke, La,Lb, tc, Ld, Le)\n\nAs you can see - Python does have built-in tools to deal with this situations. Otherwise, if the iterttols package was not there, the way to do this kind of thing is by using functions, and calling then recursively - something like\ndef product(*args):\n if not args: return []\n remainder = product(*args[1:])\n result = []\n for item in args[0]:\n if remainder:\n for part in remainder:\n row = [item, *part]\n result.append(row)\n else:\n result.append([item,])\n return result\n\n" ]
[ 1, 0 ]
[]
[]
[ "csv", "nested", "python" ]
stackoverflow_0074481137_csv_nested_python.txt
Q: Cvlib not showing boxes, labels and confidence I am trying to replicate a simple object detection that I found in on website. import cv2 import matplotlib.pyplot as plt import cvlib as cv from cvlib.object_detection import draw_bbox im = cv2.imread('downloads.jpeg') bbox, label, conf = cv.detect_common_objects(im) output_image = draw_bbox(im, bbox, label, conf) plt.imshow(output_image) plt.show() All required libraries are installed and there are no errors running the code. However, it does not show the output image with the boxes, labels and confidence. How do I fix it? A: #After loading an image use an assert: img = cv2.imread('downloads.jpeg') assert not isinstance(img,type(None)), 'image not found'
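If the image does load, two other things are worth checking: OpenCV returns images in BGR order while matplotlib expects RGB, and draw_bbox returns the picture unchanged when nothing was detected. A sketch of the question's script with both points handled (the simplified assert, the print and the colour conversion are additions, not part of the answer above):

import cv2
import matplotlib.pyplot as plt
import cvlib as cv
from cvlib.object_detection import draw_bbox

im = cv2.imread('downloads.jpeg')
assert im is not None, 'image not found'        # imread returns None on a bad path

bbox, label, conf = cv.detect_common_objects(im)
print(label, conf)                               # empty lists mean nothing was detected

output_image = draw_bbox(im, bbox, label, conf)
plt.imshow(cv2.cvtColor(output_image, cv2.COLOR_BGR2RGB))  # convert BGR -> RGB for matplotlib
plt.show()

If label and conf print as empty lists, the detector simply found nothing in the picture, which also produces an output without boxes even though no error is raised.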
Cvlib not showing boxes, labels and confidence
I am trying to replicate a simple object detection that I found on a website. import cv2 import matplotlib.pyplot as plt import cvlib as cv from cvlib.object_detection import draw_bbox im = cv2.imread('downloads.jpeg') bbox, label, conf = cv.detect_common_objects(im) output_image = draw_bbox(im, bbox, label, conf) plt.imshow(output_image) plt.show() All required libraries are installed and there are no errors running the code. However, it does not show the output image with the boxes, labels and confidence. How do I fix it?
[ "#After loading an image use an assert:\nimg = cv2.imread('downloads.jpeg')\n\nassert not isinstance(img,type(None)), 'image not found'\n\n" ]
[ 0 ]
[]
[]
[ "cvlib", "opencv", "python", "python_3.x" ]
stackoverflow_0062566552_cvlib_opencv_python_python_3.x.txt
Q: Is it possible to initialise more than one array at a time? I have this array: A = array([[450., 0., 509., 395., 0., 0., 449.], [490., 0., 572., 357., 0., 0., 489.], [568., 0., 506., 227., 0., 0., 567.]]) A.shape = (3, 7) I want to create 3 distinct empty arrays with names that progressively increase and append each row of the original one. example: array1 = [450., 0., 509., 395., 0., 0., 449.] array2 = [490., 0., 572., 357., 0., 0., 489.] array3 = [568., 0., 506., 227., 0., 0., 567.] this is just an example, consider that I'm supposed to work with an initial array with many more rows than this one. So, let's say that I need to create as many arrays as the number of rows of my original one and append each row in order. Hope I was clear. update: I have a swc file with 500 rows with are similar to the one of the array A. With a for loop I have to find the rows that in the 1st column have the same value of one of the 6th column of the array A. If this condition is met that column should be appended to the appropriate array. example: in my swc file I have this 3 rows that meet my condition: [449. 0. 510. 394. 0. 0. 448.] [489. 0. 571. 357. 0. 0. 488.] [567. 0. 505. 228. 0. 0. 566.] so I need to append them to the appropriate array. Let's suppose that I have manually created my arrays: array1 = [450., 0., 509., 395., 0., 0., 449.] array2 = [490., 0., 572., 357., 0., 0., 489.] array3 = [568., 0., 506., 227., 0., 0., 567.], I would expect this: array1 = [450., 0., 509., 395., 0., 0., 449.], [449. 0. 510. 394. 0. 0. 448.] array2 = [490., 0., 572., 357., 0., 0., 489.], [489. 0. 571. 357. 0. 0. 488.] array3 = [568., 0., 506., 227., 0., 0., 567.], [567. 0. 505. 228. 0. 0. 566.] A: The Short Answer The following 2-liner will give you the the desired result cond = A[:,0]-1 == A[:,6] # compare columns 1 and 7, find ones that match (well, with -1...check details) np.concatenate((A[cond,:],A[cond,:]),axis=0) # append rows that meet the condition to respective rows The Details and Explanation This can be fairly efficiently done in the following way (note, you said the 1st and 6th columns must match but in the question, you show results where the value in the 7th column is -1 of the value in the 1st and the value of the 2nd column is equal to the 6th column. It isn't clear what you want so I'll assume it's the former (doesn't matter, really, it can easily be updated)). Find rows that meet the condition # find which rows satisfy your condition in your swc array # compare columns 1 and 7 (note, Python indexes from 0, not 1) cond = A[:,0]-1==A[:,6] # can also do A[:,1]==A[:,5] ..you get the point # return the rows that meet the condition of equal column values within rows A[cond,:] # this is just for illustrative purposes, not really needed for the task Append rows that meet the condition to rows that meet the condition For the final part appending part, it isn't exactly clear from your question how you'd like to append. Row-wise appending: Since your desired result is not actually a valid assignment array1 = [450., 0., 509., 395., 0., 0., 449.], [449. 0. 510. 394. 0. 0. 448.] I can only assume you actually meant array1 = np.array([[450., 0., 509., 395., 0., 0., 449.], [449. , 0., 510. ,394. , 0. , 0. ,448.]]) which would mean that array1 is a 2x7 array so the desired appending is row-wise. 
(column-wise would give you a 1x14 array instead) Assuming my inference is right then you just concatenate the matrix as such: # append (row-wise, as your operation of appending to `array1` would suggest) to relevant subset B = np.concatenate((A[cond,:],A[cond,:]),axis=0) B.shape So the B matrix is of shape 6x7 (3x7 original plus 3x7 copy), as desired.
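On the other part of the question, creating array1, array2, array3 programmatically: rather than generating variable names, a dictionary keyed by the matching column value does the same job and scales to any number of rows. A rough sketch, assuming swc_rows is an iterable of 7-element rows from the swc file and that the match is between an swc row's 1st column and an A row's 7th column, as in the example given:

import numpy as np

# start one group per row of A, keyed by that row's 7th column
groups = {row[6]: [row] for row in A}

for swc_row in swc_rows:
    key = swc_row[0]            # 1st column of the swc row
    if key in groups:           # exact float equality; use rounding or np.isclose if values are not exact
        groups[key].append(swc_row)

# stack each group into a 2-D array, matching the expected output
arrays = {key: np.vstack(rows) for key, rows in groups.items()}

Here arrays[449.0], arrays[489.0] and arrays[567.0] play the roles of array1, array2 and array3, without any dynamically named variables.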
Is it possible to initialise more than one array at a time?
I have this array: A = array([[450., 0., 509., 395., 0., 0., 449.], [490., 0., 572., 357., 0., 0., 489.], [568., 0., 506., 227., 0., 0., 567.]]) A.shape = (3, 7) I want to create 3 distinct empty arrays with names that progressively increase and append each row of the original one. example: array1 = [450., 0., 509., 395., 0., 0., 449.] array2 = [490., 0., 572., 357., 0., 0., 489.] array3 = [568., 0., 506., 227., 0., 0., 567.] this is just an example, consider that I'm supposed to work with an initial array with many more rows than this one. So, let's say that I need to create as many arrays as the number of rows of my original one and append each row in order. Hope I was clear. update: I have a swc file with 500 rows with are similar to the one of the array A. With a for loop I have to find the rows that in the 1st column have the same value of one of the 6th column of the array A. If this condition is met that column should be appended to the appropriate array. example: in my swc file I have this 3 rows that meet my condition: [449. 0. 510. 394. 0. 0. 448.] [489. 0. 571. 357. 0. 0. 488.] [567. 0. 505. 228. 0. 0. 566.] so I need to append them to the appropriate array. Let's suppose that I have manually created my arrays: array1 = [450., 0., 509., 395., 0., 0., 449.] array2 = [490., 0., 572., 357., 0., 0., 489.] array3 = [568., 0., 506., 227., 0., 0., 567.], I would expect this: array1 = [450., 0., 509., 395., 0., 0., 449.], [449. 0. 510. 394. 0. 0. 448.] array2 = [490., 0., 572., 357., 0., 0., 489.], [489. 0. 571. 357. 0. 0. 488.] array3 = [568., 0., 506., 227., 0., 0., 567.], [567. 0. 505. 228. 0. 0. 566.]
[ "The Short Answer\nThe following 2-liner will give you the the desired result\ncond = A[:,0]-1 == A[:,6] # compare columns 1 and 7, find ones that match (well, with -1...check details)\nnp.concatenate((A[cond,:],A[cond,:]),axis=0) # append rows that meet the condition to respective rows\n\nThe Details and Explanation\nThis can be fairly efficiently done in the following way (note, you said the 1st and 6th columns must match but in the question, you show results where the value in the 7th column is -1 of the value in the 1st and the value of the 2nd column is equal to the 6th column. It isn't clear what you want so I'll assume it's the former (doesn't matter, really, it can easily be updated)).\nFind rows that meet the condition\n# find which rows satisfy your condition in your swc array\n# compare columns 1 and 7 (note, Python indexes from 0, not 1)\ncond = A[:,0]-1==A[:,6] # can also do A[:,1]==A[:,5] ..you get the point\n\n# return the rows that meet the condition of equal column values within rows\nA[cond,:] # this is just for illustrative purposes, not really needed for the task\n\nAppend rows that meet the condition to rows that meet the condition\nFor the final part appending part, it isn't exactly clear from your question how you'd like to append.\nRow-wise appending: Since your desired result is not actually a valid assignment\narray1 = [450., 0., 509., 395., 0., 0., 449.], [449. 0. 510. 394. 0. 0. 448.]\n\nI can only assume you actually meant\narray1 = np.array([[450., 0., 509., 395., 0., 0., 449.], [449. , 0., 510. ,394. , 0. , 0. ,448.]])\n\nwhich would mean that array1 is a 2x7 array so the desired appending is row-wise. (column-wise would give you a 1x14 array instead)\nAssuming my inference is right then you just concatenate the matrix as such:\n# append (row-wise, as your operation of appending to `array1` would suggest) to relevant subset\nB = np.concatenate((A[cond,:],A[cond,:]),axis=0)\nB.shape\n\nSo the B matrix is of shape 6x7 (3x7 original plus 3x7 copy), as desired.\n" ]
[ 0 ]
[]
[]
[ "append", "arrays", "python" ]
stackoverflow_0074480535_append_arrays_python.txt
Q: Summarising pandas data frame by multiple fields and collapsing into a single column I am trying to group and summarise a pandas dataframe into a single column ID LayerName Name Count A SC B 2 A SC R 8 A BLD S 7 A BLD K 6 I will like the resulting table to be summarised by the LayerName, Name and Count into a single output field like thi ID Output A 10 - SC : (B,R) ; 13 - BLD : (S,K) A: You need a double groupby.agg: (df.groupby(['ID', 'LayerName'], as_index=False, sort=False) .agg({'Name': ','.join, 'Count': 'sum'}) .assign(Output=lambda d: d['Count'].astype(str) +' - '+d['LayerName'] +' : ('+d['Name']+')') .groupby('ID', as_index=False, sort=False) .agg({'Output': ' ; '.join}) ) Output: ID Output 0 A 10 - SC : (B,R) ; 13 - BLD : (S,K) A: df.groupby(["ID", "LayerName"], sort=False).\ apply(lambda x: f"{x.Count.sum()} - {x.LayerName.iloc[0]}: ({','.join(x.Name.to_list())})").\ str.cat(sep="; ") # '10 - SC: (B,R); 13 - BLD: (S,K)'
Summarising pandas data frame by multiple fields and collapsing into a single column
I am trying to group and summarise a pandas dataframe into a single column ID LayerName Name Count A SC B 2 A SC R 8 A BLD S 7 A BLD K 6 I would like the resulting table to be summarised by the LayerName, Name and Count into a single output field like this ID Output A 10 - SC : (B,R) ; 13 - BLD : (S,K)
[ "You need a double groupby.agg:\n(df.groupby(['ID', 'LayerName'],\n as_index=False, sort=False)\n .agg({'Name': ','.join, 'Count': 'sum'})\n .assign(Output=lambda d: d['Count'].astype(str)\n +' - '+d['LayerName']\n +' : ('+d['Name']+')')\n .groupby('ID', as_index=False, sort=False)\n .agg({'Output': ' ; '.join})\n)\n\nOutput:\n ID Output\n0 A 10 - SC : (B,R) ; 13 - BLD : (S,K)\n\n", "df.groupby([\"ID\", \"LayerName\"], sort=False).\\\napply(lambda x: f\"{x.Count.sum()} - {x.LayerName.iloc[0]}: ({','.join(x.Name.to_list())})\").\\\nstr.cat(sep=\"; \")\n# '10 - SC: (B,R); 13 - BLD: (S,K)'\n\n" ]
[ 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074480957_pandas_python.txt
Q: i am un able to get the script to print the output dynamicaly when running the following script nothing gets printed but when i press ctrl+c and end the task the complete output is printed. i want to make it in a way that each line of list is dynamicaly printed as the script is running itself the function i am trying to run is... ` `def passive_scan(interface): result=[] ip=[] packets = [] mode="p" sniff(iface="eth0", prn=filter_packets(packets),timeout= 10000) for p in packets: if p[0]["ARP"].op==2: src_mac=p[0]["ARP"].hwsrc src_ip=p[0]["ARP"].psrc ip.append(src_ip) dict={"mac":src_mac,"ip":src_ip} result.append(dict) for client in result: client["count"]=countOf(ip,client["ip"]) print_result(result,mode)` ` the output printing function i am using is ..... ``` `print("Interface: "+interface+"\t\tMode: Passive\t\t\tFound "+str(len(list))+" hosts") print("----------------------------------------------------------------------------") print("MAC\t\t\t\tIP\t\t\tHost Activity") print("----------------------------------------------------------------------------") for client in list: print(client["mac"] + "\t\t" + client["ip"]+"\t\t"+str(client["count"]))` ``` ``` ` i was able to properly format the output but to get that to print i have to end the task than only it prints the required output should look like.. [enter image description here](https://i.stack.imgur.com/4g14d.png) A: That's the expected behavior of sniff() which is a blocking function. "The sniff() function listens for an infinite period of time until the user interrupts." You should use the AsyncSniffer : https://scapy.readthedocs.io/en/latest/usage.html#asynchronous-sniffing
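To make that concrete, scapy's AsyncSniffer runs the capture in a background thread, so each packet can be handled the moment it arrives; a minimal sketch (the handle callback here is illustrative, not the original filter_packets):

from scapy.all import AsyncSniffer, ARP

def handle(pkt):
    # called for every sniffed packet while the capture is running
    if ARP in pkt and pkt[ARP].op == 2:              # ARP "is-at" reply
        print(pkt[ARP].hwsrc, pkt[ARP].psrc)         # printed immediately, not at the end

sniffer = AsyncSniffer(iface="eth0", prn=handle, filter="arp", store=False)
sniffer.start()
# ... the rest of the script keeps running (build counts, print partial tables, ...) ...
sniffer.stop()

Because the main thread is not blocked inside sniff(), each line can be written out while the capture is still in progress instead of only after Ctrl+C.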
I am unable to get the script to print the output dynamically
when running the following script nothing gets printed but when i press ctrl+c and end the task the complete output is printed. i want to make it in a way that each line of list is dynamicaly printed as the script is running itself the function i am trying to run is... ` `def passive_scan(interface): result=[] ip=[] packets = [] mode="p" sniff(iface="eth0", prn=filter_packets(packets),timeout= 10000) for p in packets: if p[0]["ARP"].op==2: src_mac=p[0]["ARP"].hwsrc src_ip=p[0]["ARP"].psrc ip.append(src_ip) dict={"mac":src_mac,"ip":src_ip} result.append(dict) for client in result: client["count"]=countOf(ip,client["ip"]) print_result(result,mode)` ` the output printing function i am using is ..... ``` `print("Interface: "+interface+"\t\tMode: Passive\t\t\tFound "+str(len(list))+" hosts") print("----------------------------------------------------------------------------") print("MAC\t\t\t\tIP\t\t\tHost Activity") print("----------------------------------------------------------------------------") for client in list: print(client["mac"] + "\t\t" + client["ip"]+"\t\t"+str(client["count"]))` ``` ``` ` i was able to properly format the output but to get that to print i have to end the task than only it prints the required output should look like.. [enter image description here](https://i.stack.imgur.com/4g14d.png)
[ "That's the expected behavior of sniff() which is a blocking function.\n\"The sniff() function listens for an infinite period of time until the user interrupts.\"\nYou should use the AsyncSniffer : https://scapy.readthedocs.io/en/latest/usage.html#asynchronous-sniffing\n" ]
[ 0 ]
[]
[]
[ "packet_sniffers", "python", "scapy", "security" ]
stackoverflow_0074480955_packet_sniffers_python_scapy_security.txt
Q: groupby: opitimizing code for multiple operations in single line I have three lines need to convert in one line how can I do this with pandas and python .. ml= 1000 1.line: agg_2 = main_df.groupby(['id_1','id_2'])['value'].agg(['min','max']) 2.line: tot = agg_2['max'].sub(agg_2['min']).shift(1) 3.line: main_df['hos_eve'] = (145 - (main_df.groupby(['id_1','id_2'])['vio_eve'].sum()* ml)/ tot) main_df.shape: (11065065, 14) main_df.groupby(['id_1','id_2'])['vio_eve'].sum().shape: (2013,) agg_2['max'].sub(agg_2['min']).shift(1).shape: (2013,) can I optimize first line and put in the divide section of third line other wise showing error. or someone can tell me why this is error ? because of attaching output of 3rd line to the main_df which has different shape? if it is true then how can append the result to the main_df. the error is A: If you are concerned about the multiple .groupby calls you can refactor the common expression out to an intermediary variable. The group_by is only performed once and you help keep the code readable. main_gb = main_df.groupby(['id_1','id_2']) agg_2 = main_gb['value'].agg(['min','max']) tot = agg_2['max'].sub(agg_2['min']).shift(1) main_df['hos_eve'] = (145 - (main_gb['vio_eve'].sum()* ml)/ tot)
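About the shape error hinted at in the question: the aggregated series has one row per (id_1, id_2) group (2013 rows), so assigning it straight into the 11-million-row frame fails; groupby(...).transform broadcasts group results back onto the original index. A sketch of that idea, which deliberately leaves out the shift(1) step (that shift acts on the group-level table and would have to be mapped back separately):

ml = 1000
g = main_df.groupby(['id_1', 'id_2'])

# per-row copies of each group's statistics, aligned with main_df's index
vio_sum = g['vio_eve'].transform('sum')
value_range = g['value'].transform('max') - g['value'].transform('min')

main_df['hos_eve'] = 145 - (vio_sum * ml) / value_range

transform returns a series the same length as main_df, which is what makes the column assignment valid; whether dropping the shift(1) is acceptable depends on what that shift was meant to express.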
groupby: optimizing code for multiple operations in a single line
I have three lines that I need to combine into a single line. How can I do this with pandas and Python? ml= 1000 1.line: agg_2 = main_df.groupby(['id_1','id_2'])['value'].agg(['min','max']) 2.line: tot = agg_2['max'].sub(agg_2['min']).shift(1) 3.line: main_df['hos_eve'] = (145 - (main_df.groupby(['id_1','id_2'])['vio_eve'].sum()* ml)/ tot) main_df.shape: (11065065, 14) main_df.groupby(['id_1','id_2'])['vio_eve'].sum().shape: (2013,) agg_2['max'].sub(agg_2['min']).shift(1).shape: (2013,) Can I optimize the first line and fold it into the division in the third line? Otherwise it shows an error. Or can someone tell me why this error occurs? Is it because the output of the third line, which has a different shape, is being attached to main_df? If so, how can I append the result to main_df? The error is
[ "If you are concerned about the multiple .groupby calls you can refactor the common expression out to an intermediary variable.\nThe group_by is only performed once and you help keep the code readable.\nmain_gb = main_df.groupby(['id_1','id_2'])\nagg_2 = main_gb['value'].agg(['min','max'])\ntot = agg_2['max'].sub(agg_2['min']).shift(1)\nmain_df['hos_eve'] = (145 - (main_gb['vio_eve'].sum()* ml)/ tot)\n\n" ]
[ 0 ]
[]
[]
[ "group_by", "pandas", "python" ]
stackoverflow_0074474021_group_by_pandas_python.txt
Q: GCP: Allow Service Account to Impersonate a User Account with Google Analytics Scopes I am trying to create a script that enables a Service Account ga@googleanalytics.iam.gserviceaccount.com to impersonate a user account ga@domain.tld with the following GA scopes: target_scopes = ['https://www.googleapis.com/auth/analytics', 'https://www.googleapis.com/auth/analytics.edit', 'https://www.googleapis.com/auth/analytics.manage.users', 'https://www.googleapis.com/auth/analytics.provision', 'https://www.googleapis.com/auth/analytics.user.deletion'] So it can add properties to other GA accounts that the user account (ga@domain.tld) has previously been given access to. This is the code I've written that includes impersonation: from google.auth import impersonated_credentials from google.oauth2 import service_account target_scopes = ['https://www.googleapis.com/auth/analytics','https://www.googleapis.com/auth/analytics.edit','https://www.googleapis.com/auth/analytics.manage.users','https://www.googleapis.com/auth/analytics.provision','https://www.googleapis.com/auth/analytics.user.deletion'] source_credentials = service_account.Credentials.from_service_account_file( 'ga-1234567890.json', scopes=target_scopes) target_credentials = impersonated_credentials.Credentials( source_credentials=source_credentials, target_principal='ga@domain.tld', target_scopes=target_scopes, lifetime=500) client = AnalyticsAdminServiceClient(credentials=target_credentials) Which returns the exception: >Oops! <class 'google.api_core.exceptions.ServiceUnavailable'> occurred. grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: status = StatusCode.UNAVAILABLE details = "Getting metadata from plugin failed with error: ('Unable to acquire impersonated credentials', '{\n "error": {\n"code": 404,\n"message": "Not found; Gaia id not found for email ga@domain.tld",\n "status": "NOT_FOUND"\n }\n}\n')" debug_error_string = "UNKNOWN:Error received from peer analyticsadmin.googleapis.com:443 {created_time:"2022-11-17T15:28:49.7504959+00:00", grpc_status:14, grpc_message:"Getting metadata from plugin failed with error: (\'Unable to acquire impersonated credentials\', \'{\\n \"error\": {\\n\"code\": 404,\\n\"message\": \"Not found; Gaia id not found for email ga@domain.tld\",\\n\"status\": \"NOT_FOUND\"\\n }\\n}\\n\')"}" When I attempt to run the below code without impersonation: from google.auth import impersonated_credentials from google.oauth2 import service_account target_scopes = ['https://www.googleapis.com/auth/analytics','https://www.googleapis.com/auth/analytics.edit','https://www.googleapis.com/auth/analytics.manage.users','https://www.googleapis.com/auth/analytics.provision','https://www.googleapis.com/auth/analytics.user.deletion'] source_credentials = service_account.Credentials.from_service_account_file( 'ga-1234567890.json', scopes=target_scopes) client = AnalyticsAdminServiceClient(credentials=source_credentials) It returns the exception: Oops! <class 'google.api_core.exceptions.Unauthenticated'> occurred. 
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: status = StatusCode.UNAVAILABLE details = "Getting metadata from plugin failed with error: ('Unable to acquire impersonated credentials', '{\n "error": {\n"code": 404,\n"message": "Not found; Gaia id not found for email ga@domain.tld",\n "status": "NOT_FOUND"\n }\n}\n')" debug_error_string = "UNKNOWN:Error received from peer analyticsadmin.googleapis.com:443 {created_time:"2022-11-17T15:28:49.7504959+00:00", grpc_status:14, grpc_message:"Getting metadata from plugin failed with error: ('Unable to acquire impersonated credentials', '{\n "error": {\n"code": 404,\n"message": "Not found; Gaia id not found for email ga@domain.tld",\n"status": "NOT_FOUND"\n }\n}\n')"}" The service account ga@googleanalytics.iam.gserviceaccount.com has principal ga@domain.tld with roles Service Account Token Creator and Service Account User The service account ga@googleanalytics.iam.gserviceaccount.com has Domain-wide Delegation configured in Google Workspace Admin with scopes https://www.googleapis.com/auth/analytics https://www.googleapis.com/auth/analytics.edit https://www.googleapis.com/auth/analytics.manage.users https://www.googleapis.com/auth/analytics.provision https://www.googleapis.com/auth/analytics.user.deletion Not sure what I am missing here, any advice would be greatly appreciated. A: Assuming that you configured domain wide delegation to the service account though your google workspace. And configured it to a user who has access to the google analytics account. The same code used to delegate to the other apis should work as well. credentials = service_account.Credentials.from_service_account_file('my_json.json', scopes=['https://www.googleapis.com/auth/adwords']) delegated_credentials = credentials.with_subject("user@yourdomain.com") client = AnalyticsAdminServiceClient(credentials=delegated_credentials) Now by the looks of your error messages im wondering if the system even supports it. Im going to send an email off to the team, before we start chasing this lets check delegation is supported.
GCP: Allow Service Account to Impersonate a User Account with Google Analytics Scopes
I am trying to create a script that enables a Service Account ga@googleanalytics.iam.gserviceaccount.com to impersonate a user account ga@domain.tld with the following GA scopes: target_scopes = ['https://www.googleapis.com/auth/analytics', 'https://www.googleapis.com/auth/analytics.edit', 'https://www.googleapis.com/auth/analytics.manage.users', 'https://www.googleapis.com/auth/analytics.provision', 'https://www.googleapis.com/auth/analytics.user.deletion'] So it can add properties to other GA accounts that the user account (ga@domain.tld) has previously been given access to. This is the code I've written that includes impersonation: from google.auth import impersonated_credentials from google.oauth2 import service_account target_scopes = ['https://www.googleapis.com/auth/analytics','https://www.googleapis.com/auth/analytics.edit','https://www.googleapis.com/auth/analytics.manage.users','https://www.googleapis.com/auth/analytics.provision','https://www.googleapis.com/auth/analytics.user.deletion'] source_credentials = service_account.Credentials.from_service_account_file( 'ga-1234567890.json', scopes=target_scopes) target_credentials = impersonated_credentials.Credentials( source_credentials=source_credentials, target_principal='ga@domain.tld', target_scopes=target_scopes, lifetime=500) client = AnalyticsAdminServiceClient(credentials=target_credentials) Which returns the exception: >Oops! <class 'google.api_core.exceptions.ServiceUnavailable'> occurred. grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: status = StatusCode.UNAVAILABLE details = "Getting metadata from plugin failed with error: ('Unable to acquire impersonated credentials', '{\n "error": {\n"code": 404,\n"message": "Not found; Gaia id not found for email ga@domain.tld",\n "status": "NOT_FOUND"\n }\n}\n')" debug_error_string = "UNKNOWN:Error received from peer analyticsadmin.googleapis.com:443 {created_time:"2022-11-17T15:28:49.7504959+00:00", grpc_status:14, grpc_message:"Getting metadata from plugin failed with error: (\'Unable to acquire impersonated credentials\', \'{\\n \"error\": {\\n\"code\": 404,\\n\"message\": \"Not found; Gaia id not found for email ga@domain.tld\",\\n\"status\": \"NOT_FOUND\"\\n }\\n}\\n\')"}" When I attempt to run the below code without impersonation: from google.auth import impersonated_credentials from google.oauth2 import service_account target_scopes = ['https://www.googleapis.com/auth/analytics','https://www.googleapis.com/auth/analytics.edit','https://www.googleapis.com/auth/analytics.manage.users','https://www.googleapis.com/auth/analytics.provision','https://www.googleapis.com/auth/analytics.user.deletion'] source_credentials = service_account.Credentials.from_service_account_file( 'ga-1234567890.json', scopes=target_scopes) client = AnalyticsAdminServiceClient(credentials=source_credentials) It returns the exception: Oops! <class 'google.api_core.exceptions.Unauthenticated'> occurred. 
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: status = StatusCode.UNAVAILABLE details = "Getting metadata from plugin failed with error: ('Unable to acquire impersonated credentials', '{\n "error": {\n"code": 404,\n"message": "Not found; Gaia id not found for email ga@domain.tld",\n "status": "NOT_FOUND"\n }\n}\n')" debug_error_string = "UNKNOWN:Error received from peer analyticsadmin.googleapis.com:443 {created_time:"2022-11-17T15:28:49.7504959+00:00", grpc_status:14, grpc_message:"Getting metadata from plugin failed with error: ('Unable to acquire impersonated credentials', '{\n "error": {\n"code": 404,\n"message": "Not found; Gaia id not found for email ga@domain.tld",\n"status": "NOT_FOUND"\n }\n}\n')"}" The service account ga@googleanalytics.iam.gserviceaccount.com has principal ga@domain.tld with roles Service Account Token Creator and Service Account User The service account ga@googleanalytics.iam.gserviceaccount.com has Domain-wide Delegation configured in Google Workspace Admin with scopes https://www.googleapis.com/auth/analytics https://www.googleapis.com/auth/analytics.edit https://www.googleapis.com/auth/analytics.manage.users https://www.googleapis.com/auth/analytics.provision https://www.googleapis.com/auth/analytics.user.deletion Not sure what I am missing here, any advice would be greatly appreciated.
[ "Assuming that you configured domain wide delegation to the service account though your google workspace. And configured it to a user who has access to the google analytics account.\nThe same code used to delegate to the other apis should work as well.\ncredentials = service_account.Credentials.from_service_account_file('my_json.json', scopes=['https://www.googleapis.com/auth/adwords'])\n\ndelegated_credentials = credentials.with_subject(\"user@yourdomain.com\")\n\nclient = AnalyticsAdminServiceClient(credentials=delegated_credentials)\n\nNow by the looks of your error messages im wondering if the system even supports it.\nIm going to send an email off to the team, before we start chasing this lets check delegation is supported.\n" ]
[ 0 ]
[]
[]
[ "google_analytics_api", "google_api_python_client", "google_workspace", "python", "service_accounts" ]
stackoverflow_0074479630_google_analytics_api_google_api_python_client_google_workspace_python_service_accounts.txt
Q: Analysis on most popular product combination I would need your help with the following Our goal is to increase our overall share in the market - To do this, we would like to know whether introducing a specific combination of products to different countries would have an impact on our market share. Following is a mockup data over a period of August and September of 2021 and 2022 Year Country Product Aug_Sales_Euros Sept_Sales_Euros 2022 Kenya 20MB_Internet 12000 8000 2022 Kenya 200min_Call 7000 9000 2022 Kenya 10MB_100min 6000 5000 2021 USA 10MB_100min 9000 10000 2022 USA 20MB_Internet 60000 50000 2022 USA 900MB_Internet 12000 8000 2022 USA 400min_Call 70000 8000 2022 USA 200min_Call 12000 8000 2021 USA 400min_Call 50000 8000 2021 USA 200min_Call 12000 8000 2022 FRANCE 200min_Call 12000 8000 2021 FRANCE 200min_Call 12000 8000 We would like to know, for instance, which product should be introduced with 200min_call in France such that our overall market share is increased? or which existing product combination has the best results? FYI: we use python for our analysis. There is a lot more data, with lot more combination of products and countries How should I approach this problem, or even better, is there an example that I can refer to? Thanks, Justin A: I believe that your question should be a technical question, you are asking about analytical work as I long as I understood, from a python/pandas point of view that is how you analyse a dataset with the kinda data you have, the code below will allow you to answer a lot of the analytical question you have asked above. #your data dfS = pd.read_csv('salesData.csv') #groupby year, country, product, apply sum to the other columns dfS = dfS.groupby(['Year','Country','Product']).agg({'Aug_Sales_Euros':'sum','Sept_Sales_Euros':'sum'}) #then you can filter by year in this case I did 2021 and by country #so you can see the best sales for a specific country per year, you can change for any country you have in your data set #using this very same filter dfS[(dfS.index.get_level_values(0) == 2021) & (dfS.index.get_level_values(1) == 'FRANCE')] #here you can select the year and check the most sold products. 3 largest product sold in the month of August dfS[dfS.index.get_level_values(0) == 2022].nlargest(3, 'Aug_Sales_Euros') #here you can select the country and check the most sold products. 3 largest product sold in the month of September dfS[dfS.index.get_level_values(1) == 'USA'].nlargest(3, 'Sept_Sales_Euros') #you can also filter country, product as you please dfS[(dfS.index.get_level_values(0) == 2021) & (dfS.index.get_level_values(1) == 'FRANCE') & (dfS.index.get_level_values(2) == '200min_Call')] A: Its very subjective question. One thing you can try is cluster the regions based on few relevant variables and observe the regions which have introduced new product along with 200min and change in sales A: Broad recommendation Many approaches can be taken to do the analysis. I will recommend some materials and libraries to help: Towards data science has very comprehensive material for data analysis and data science in general. Example. Online courses are another alternative to learning how to approach data analysis. Example. If you are interested in new techniques and state-of-the-art, I recommend looking directly into the conference proceedings such as this. 
Keep in mind that this has a more research-oriented focus.
These are a few of the important libraries in data analysis (considering you are using Python): Numpy, Pandas, Matplotlib, Seaborn, Scikit-learn, and Beautiful Soup.

These are just a few recommendations. The question is very open, with many possible suggestions. You should first have a basic understanding of the tools and methods.
Directly answer:
To give you a direct answer to the question, I would first group by country and by product, and look at the value.
To do that, you just have to:
In [11]: df.groupby(['col5', 'col2']).size()
Out[11]:
col5  col2
1     A       1
      D       3
2     B       2
3     A       3
      C       1
4     B       1
5     B       2
6     B       1
dtype: int64

with the result as follows:
In [12]: df.groupby(['col5', 'col2']).size().groupby(level=1).max()
Out[12]:
col2
A    3
B    2
C    1
D    3
dtype: int64

This would give a first understanding of which product in which country has the most relevance in terms of value. From that result, a deeper analysis needs to be done to understand if it is possible to enter a specific market.
A: The mockup data that you show didn't convey a lot of information about the dynamic between different products, which is what you try to find.
If you have more granular data such as transactional data, you can consider Market Basket Analysis (MBA). A popular algorithm is the Apriori algorithm. It is commonly used by large retailers to identify the association between different products (the purchase pattern of the customer), which helps to increase sales.
Resource: example, blog, library
A: This is a simple python3 script that extracts the data from your csv. You could call:

mysheets = [('hello.csv', '<number_of_columns>'), ('mom.csv', '<number_of_columns>')]

sheets = Sheets(mysheets)

import csv

# This is a mock script i drew up to extract csv excel data to file in class form.

# Sheets class

class CSV:

    def __init__(self, filename, number_of_columns):
        self.data = {}
        with open(filename, 'r') as csvfile:
            self.document = csv.reader(csvfile, delimiter=' ', quotechar='|')
            for row in self.document:
                try:
                    test = row[number_of_columns - 1]
                except:
                    raise Exception('csv incorrectly formatted')

                year = row[0]
                self.data[year] = {}
                self.data[year]['Country'] = row[1]
                self.data[year]['Product'] = row[2]
                self.data[year]['Jan_Sales_Euros'] = row[3]
                self.data[year]['Feb_Sales_Euros'] = row[4]
                self.data[year]['Mar_Sales_Euros'] = row[5]
                self.data[year]['Apr_Sales_Euros'] = row[6]
                self.data[year]['May_Sales_Euros'] = row[7]
                self.data[year]['Jun_Sales_Euros'] = row[8]
                self.data[year]['Jul_Sales_Euros'] = row[9]
                self.data[year]['Aug_Sales_Euros'] = row[10]
                self.data[year]['Sep_Sales_Euros'] = row[11]
                self.data[year]['Oct_Sales_Euros'] = row[12]
                self.data[year]['Nov_Sales_Euros'] = row[13]
                self.data[year]['Dec_Sales_Euros'] = row[14]

class Sheets:

    def __init__(self, sheet_filenames: list):
        self.statistics = ['sales', 'product', 'august_sales', 'sept_sales']

        self.information = {}
        self.sheets = set(x[0] for x in sheet_filenames)
        self.columns = set(x[1] for x in sheet_filenames)

        # build a CSV object for each (filename, column-count) pair
        for filename, number_of_columns in sheet_filenames:
            self.information[filename] = CSV(filename, number_of_columns)

    def compareSheet(self, sheet1, sheet2, to_compare):
        sheet1_data = self.information[sheet1].data[to_compare]
        sheet2_data = self.information[sheet2].data[to_compare]

        if sheet1_data > sheet2_data:
            print("this is the what you would see")

sheets = Sheets([('hello.csv', 15), ('world.csv', 15), ('hi.csv', 15), ('mom.csv', 15)])
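The Market Basket Analysis answer above names the Apriori algorithm; below is a minimal sketch of it with the mlxtend library. The transactions are hypothetical — the monthly sales table in the question is aggregated, so real basket-level (transactional) data would be needed for this to say anything about product combinations.

import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# hypothetical baskets: products taken together by individual customers
transactions = [
    ['200min_Call', '20MB_Internet'],
    ['200min_Call', '10MB_100min'],
    ['20MB_Internet', '900MB_Internet'],
    ['200min_Call', '20MB_Internet', '400min_Call'],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions),
                      columns=te.columns_)

# frequent itemsets and the association rules between them
itemsets = apriori(onehot, min_support=0.25, use_colnames=True)
rules = association_rules(itemsets, metric='lift', min_threshold=1.0)
print(rules[['antecedents', 'consequents', 'support', 'confidence', 'lift']])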
Analysis on most popular product combination
I would need your help with the following Our goal is to increase our overall share in the market - To do this, we would like to know whether introducing a specific combination of products to different countries would have an impact on our market share. Following is a mockup data over a period of August and September of 2021 and 2022 Year Country Product Aug_Sales_Euros Sept_Sales_Euros 2022 Kenya 20MB_Internet 12000 8000 2022 Kenya 200min_Call 7000 9000 2022 Kenya 10MB_100min 6000 5000 2021 USA 10MB_100min 9000 10000 2022 USA 20MB_Internet 60000 50000 2022 USA 900MB_Internet 12000 8000 2022 USA 400min_Call 70000 8000 2022 USA 200min_Call 12000 8000 2021 USA 400min_Call 50000 8000 2021 USA 200min_Call 12000 8000 2022 FRANCE 200min_Call 12000 8000 2021 FRANCE 200min_Call 12000 8000 We would like to know, for instance, which product should be introduced with 200min_call in France such that our overall market share is increased? or which existing product combination has the best results? FYI: we use python for our analysis. There is a lot more data, with lot more combination of products and countries How should I approach this problem, or even better, is there an example that I can refer to? Thanks, Justin
[ "I believe that your question should be a technical question, you are asking about analytical work as I long as I understood, from a python/pandas point of view that is how you analyse a dataset with the kinda data you have, the code below will allow you to answer a lot of the analytical question you have asked above.\n#your data\ndfS = pd.read_csv('salesData.csv')\n\n#groupby year, country, product, apply sum to the other columns\ndfS = dfS.groupby(['Year','Country','Product']).agg({'Aug_Sales_Euros':'sum','Sept_Sales_Euros':'sum'})\n\n#then you can filter by year in this case I did 2021 and by country\n#so you can see the best sales for a specific country per year, you can change for any country you have in your data set\n#using this very same filter\ndfS[(dfS.index.get_level_values(0) == 2021) & (dfS.index.get_level_values(1) == 'FRANCE')]\n\n#here you can select the year and check the most sold products. 3 largest product sold in the month of August\ndfS[dfS.index.get_level_values(0) == 2022].nlargest(3, 'Aug_Sales_Euros')\n\n#here you can select the country and check the most sold products. 3 largest product sold in the month of September\ndfS[dfS.index.get_level_values(1) == 'USA'].nlargest(3, 'Sept_Sales_Euros')\n\n#you can also filter country, product as you please\ndfS[(dfS.index.get_level_values(0) == 2021) & (dfS.index.get_level_values(1) == 'FRANCE') & (dfS.index.get_level_values(2) == '200min_Call')]\n\n", "Its very subjective question. One thing you can try is cluster the regions based on few relevant variables and observe the regions which have introduced new product along with 200min and change in sales\n", "Broad recommendation\nMany approaches can be taken to do the analysis.\nI will recommend some materials and libraries to help:\n\nTowards data science has very comprehensive material for data analysis and data science in general. Example.\nOnline courses are another alternative to learning how to approach data analysis. Example.\nIf you are interested in new techniques and state-of-the-art, I recommend looking directly into the conference proceedings such as this. Keep in mind that this has a more\nThese are a few of the important libraries in data analysis (considering you are using Python): Numpy, Pandas, Matplotlib, Seaborn, Scikit-learn, and Beutiful Soup.\n\nThese are just a few recommendations. The question is very open, with many possible suggestions. You should first have a basic understanding of the tools and methods.\nDirectly answer:\nTo give you a direct answer to the question, I would first group by country and by-product the value.\nTo do that, you just have to:\nIn [11]: df.groupby(['col5', 'col2']).size()\nOut[11]:\ncol5 col2\n1 A 1\n D 3\n2 B 2\n3 A 3\n C 1\n4 B 1\n5 B 2\n6 B 1\ndtype: int64\n\nwith the result as follow:\nIn [12]: df.groupby(['col5', 'col2']).size().groupby(level=1).max()\nOut[12]:\ncol2\nA 3\nB 2\nC 1\nD 3\ndtype: int64\n\nThis would give a first understanding of which product in which country has the most relevance in terms of value. From that result, a deeper analysis needs to be done to understand if it is possible to enter a specific market.\n", "The mockup data that you show didn't convey a lot of information about the dynamic between different products, which is what you try to find.\nIf you have more granular data such as transactional data, you can consider Market Basket Analysis (MBA). A popular algorithm is Apriori algorithm. 
It is commonly used by large retailers to identify the association between different products (the purchase pattern of the customer), which helps to increase sales.\nResource: example, blog, library\n", "This is a simple python3 script that extracts the data from your csv\nyou could call\n\nmysheets = [('hello.csv', '<number_of_columns>'),('mom.csv', '<number_of_columns>')]\n\nsheets = Sheets(mysheets)\n\n\n\nimport csv\n\n# This is a mock script i drew up to extract csv excel data to file in class form. \n\n#Sheets class \n\n\nclass CSV:\n \n def __init__(self, filename, number_of_columns):\n \n self.data = {}\n \n with open(filename, 'r') as csvfile:\n self.document = csv.reader(csvfile, delimiter=' ', quotechar='|')\n for row in self.document:\n try:\n test = row[number_of_columns-1]\n except:\n raise Exception('csv incorrectly formatted')\n \n year = row[0]\n self.data[year] = {}\n self.data[year]['Country'] = row[1]\n self.data[year]['Product'] = row[2]\n self.data[year]['Jan_Sales_Euros'] = row[3]\n self.data[year]['Feb_Sales_Euros'] = row[4]\n self.data[year]['Mar_Sales_Euros'] = row[5]\n self.data[year]['Apr_Sales_Euros'] = row[6]\n self.data[year]['May_Sales_Euros'] = row[7]\n self.data[year]['Jun_Sales_Euros'] = row[8]\n self.data[year]['Jul_Sales_Euros'] = row[9]\n self.data[year]['Aug_Sales_Euros'] = row[10]\n self.data[year]['Sep_Sales_Euros'] = row[11]\n self.data[year]['Oct_Sales_Euros'] = row[12]\n self.data[year]['Nov_Sales_Euros'] = row[13]\n self.data[year]['Dec_Sales_Euros'] = row[14]\n\nclass Sheets:\n \n def __init__(self, sheet_filenames: list):\n \n self.statistics = ['sales', 'product', 'august_sales', 'sept_sales']\n \n self.information = {}\n self.sheets = set(x[0] for x in sheet_filenames)\n self.columns = set(x[1] for x in sheet_filenames)\n\n\n for x in self.sheets:\n self.information[x] = CSV(x)\n \n def compareSheet(self, sheet1, sheet2, to_compare):\n \n sheet1_data = self.information[sheet1].data[to_compare]\n \n sheet2_data = self.information[sheet1].data[to_compare]\n \n if sheet1_data>sheet2_data:\n \n print(\"this is the what you would see\")\n \n \n \nsheets = Sheets([('hello.csv',15), ('world.csv',15), ('hi.csv',15), ('mom.csv',15)]) \n \n\n\n" ]
[ 0, 0, 0, 0, 0 ]
[]
[]
[ "analytics", "linear_regression", "logistic_regression", "machine_learning", "python" ]
stackoverflow_0074373967_analytics_linear_regression_logistic_regression_machine_learning_python.txt
Q: Intensity normalization of image using Python+PIL - Speed issues I'm working on a little problem in my sparetime involving analysis of some images obtained through a microscope. It is a wafer with some stuff here and there, and ultimately I want to make a program to detect when certain materials show up. Anyways, first step is to normalize the intensity across the image, since the lens does not give uniform lightning. Currently I use an image, with no stuff on, only the substrate, as a background, or reference, image. I find the maximum of the three (intensity) values for RGB. from PIL import Image from PIL import ImageDraw rmax = 0;gmax = 0;bmax = 0;rmin = 300;gmin = 300;bmin = 300 im_old = Image.open("test_image.png") im_back = Image.open("background.png") maxx = im_old.size[0] #Import the size of the image maxy = im_old.size[1] im_new = Image.new("RGB", (maxx,maxy)) pixback = im_back.load() for x in range(maxx): for y in range(maxy): if pixback[x,y][0] > rmax: rmax = pixback[x,y][0] if pixback[x,y][1] > gmax: gmax = pixback[x,y][1] if pixback[x,y][2] > bmax: bmax = pixback[x,y][2] pixnew = im_new.load() pixold = im_old.load() for x in range(maxx): for y in range(maxy): r = float(pixold[x,y][0]) / ( float(pixback[x,y][0])*rmax ) g = float(pixold[x,y][1]) / ( float(pixback[x,y][1])*gmax ) b = float(pixold[x,y][2]) / ( float(pixback[x,y][2])*bmax ) pixnew[x,y] = (r,g,b) The first part of the code determines the maximum intensity of the RED, GREEN and BLUE channels, pixel by pixel, of the background image, but needs only be done once. The second part takes the "real" image (with stuff on it), and normalizes the RED, GREEN and BLUE channels, pixel by pixel, according to the background. This takes some time, 5-10 seconds for an 1280x960 image, which is way too slow if I need to do this to several images. What can I do to improve the speed? I thought of moving all the images to numpy arrays, but I can't seem to find a fast way to do that for RGB images. I'd rather not move away from python, since my C++ is quite low-level, and getting a working FORTRAN code would probably take longer than I could ever save in terms of speed :P A: import numpy as np from PIL import Image def normalize(arr): """ Linear normalization http://en.wikipedia.org/wiki/Normalization_%28image_processing%29 """ arr = arr.astype('float') # Do not touch the alpha channel for i in range(3): minval = arr[...,i].min() maxval = arr[...,i].max() if minval != maxval: arr[...,i] -= minval arr[...,i] *= (255.0/(maxval-minval)) return arr def demo_normalize(): img = Image.open(FILENAME).convert('RGBA') arr = np.array(img) new_img = Image.fromarray(normalize(arr).astype('uint8'),'RGBA') new_img.save('/tmp/normalized.png') A: See http://docs.scipy.org/doc/scipy/reference/generated/scipy.misc.fromimage.html#scipy.misc.fromimage You can say databack = scipy.misc.fromimage(pixback) rmax = numpy.max(databack[:,:,0]) gmax = numpy.max(databack[:,:,1]) bmax = numpy.max(databack[:,:,2]) which should be much faster than looping over all (r,g,b) triplets of your image. Then you can do dataold = scip.misc.fromimage(pixold) r = dataold[:,:,0] / (pixback[:,:,0] * rmax ) g = dataold[:,:,1] / (pixback[:,:,1] * gmax ) b = dataold[:,:,2] / (pixback[:,:,2] * bmax ) datanew = numpy.array((r,g,b)) imnew = scipy.misc.toimage(datanew) The code is not tested, but should work somehow with minor modifications. 
A: This is partially from FolksTalk webpage: from PIL import Image import numpy as np # Read image file in_file = "my_image.png" # convert('RGB') for PNG file type image = Image.open(in_file).convert('RGB') pixels = np.asarray(image) # Convert from integers to floats pixels = pixels.astype('float32') # Normalize to the range 0-1 pixels /= 255.0
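The answers above rescale against the global min/max; a minimal sketch (untested) of the original goal — dividing the image by the background reference channel by channel — using plain NumPy arrays instead of the per-pixel loops is shown below. Note that scipy.misc.fromimage/toimage used in the second answer were removed in later SciPy releases, so np.asarray and Image.fromarray are used here instead.

import numpy as np
from PIL import Image

# load both images as float arrays
img = np.asarray(Image.open('test_image.png').convert('RGB'), dtype='float64')
back = np.asarray(Image.open('background.png').convert('RGB'), dtype='float64')

back = np.clip(back, 1.0, None)        # avoid division by zero
corrected = img / back                 # per-pixel, per-channel ratio
corrected *= 255.0 / corrected.max()   # rescale into 0..255

Image.fromarray(corrected.astype('uint8'), 'RGB').save('normalized.png')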
Intensity normalization of image using Python+PIL - Speed issues
I'm working on a little problem in my sparetime involving analysis of some images obtained through a microscope. It is a wafer with some stuff here and there, and ultimately I want to make a program to detect when certain materials show up. Anyways, first step is to normalize the intensity across the image, since the lens does not give uniform lightning. Currently I use an image, with no stuff on, only the substrate, as a background, or reference, image. I find the maximum of the three (intensity) values for RGB. from PIL import Image from PIL import ImageDraw rmax = 0;gmax = 0;bmax = 0;rmin = 300;gmin = 300;bmin = 300 im_old = Image.open("test_image.png") im_back = Image.open("background.png") maxx = im_old.size[0] #Import the size of the image maxy = im_old.size[1] im_new = Image.new("RGB", (maxx,maxy)) pixback = im_back.load() for x in range(maxx): for y in range(maxy): if pixback[x,y][0] > rmax: rmax = pixback[x,y][0] if pixback[x,y][1] > gmax: gmax = pixback[x,y][1] if pixback[x,y][2] > bmax: bmax = pixback[x,y][2] pixnew = im_new.load() pixold = im_old.load() for x in range(maxx): for y in range(maxy): r = float(pixold[x,y][0]) / ( float(pixback[x,y][0])*rmax ) g = float(pixold[x,y][1]) / ( float(pixback[x,y][1])*gmax ) b = float(pixold[x,y][2]) / ( float(pixback[x,y][2])*bmax ) pixnew[x,y] = (r,g,b) The first part of the code determines the maximum intensity of the RED, GREEN and BLUE channels, pixel by pixel, of the background image, but needs only be done once. The second part takes the "real" image (with stuff on it), and normalizes the RED, GREEN and BLUE channels, pixel by pixel, according to the background. This takes some time, 5-10 seconds for an 1280x960 image, which is way too slow if I need to do this to several images. What can I do to improve the speed? I thought of moving all the images to numpy arrays, but I can't seem to find a fast way to do that for RGB images. I'd rather not move away from python, since my C++ is quite low-level, and getting a working FORTRAN code would probably take longer than I could ever save in terms of speed :P
[ "import numpy as np\nfrom PIL import Image\n\ndef normalize(arr):\n \"\"\"\n Linear normalization\n http://en.wikipedia.org/wiki/Normalization_%28image_processing%29\n \"\"\"\n arr = arr.astype('float')\n # Do not touch the alpha channel\n for i in range(3):\n minval = arr[...,i].min()\n maxval = arr[...,i].max()\n if minval != maxval:\n arr[...,i] -= minval\n arr[...,i] *= (255.0/(maxval-minval))\n return arr\n\ndef demo_normalize():\n img = Image.open(FILENAME).convert('RGBA')\n arr = np.array(img)\n new_img = Image.fromarray(normalize(arr).astype('uint8'),'RGBA')\n new_img.save('/tmp/normalized.png')\n\n", "See http://docs.scipy.org/doc/scipy/reference/generated/scipy.misc.fromimage.html#scipy.misc.fromimage\nYou can say\ndataback = scipy.misc.fromimage(pixback)\nrmax = numpy.max(databack[:,:,0])\ngmax = numpy.max(databack[:,:,1])\nbmax = numpy.max(databack[:,:,2])\n\nwhich should be much faster than looping over all (r,g,b) triplets of your image.\nThen you can do\ndataold = scip.misc.fromimage(pixold)\nr = dataold[:,:,0] / (pixback[:,:,0] * rmax )\ng = dataold[:,:,1] / (pixback[:,:,1] * gmax )\nb = dataold[:,:,2] / (pixback[:,:,2] * bmax )\n\ndatanew = numpy.array((r,g,b))\nimnew = scipy.misc.toimage(datanew)\n\nThe code is not tested, but should work somehow with minor modifications.\n", "This is partially from FolksTalk webpage:\nfrom PIL import Image\nimport numpy as np\n\n# Read image file\nin_file = \"my_image.png\"\n# convert('RGB') for PNG file type\nimage = Image.open(in_file).convert('RGB')\npixels = np.asarray(image)\n\n# Convert from integers to floats\npixels = pixels.astype('float32')\n\n# Normalize to the range 0-1\npixels /= 255.0\n\n" ]
[ 18, 2, 0 ]
[]
[]
[ "normalization", "python", "python_imaging_library" ]
stackoverflow_0007422204_normalization_python_python_imaging_library.txt
Q: How to replicate conda environment from windows desktop linux server not connected to internet? I have created a conda environment on my windows desktop. I am trying to move it to windows server and Linux server. I created specification file like below which has all internal URL Using this spec file I could create environment on windows server not connected to internet. For Linux server I created .yml file like below. When I try to create environment using this .yml file on Linux server (not connected to internet) I get error like below. Fetching package metadata ...Could not connect to https://repo.continuum.io/pkgs/free/noarch/ Could not connect to https://repo.continuum.io/pkgs/free/linux-64/ My understanding is conda is trying to install python and other packages as .yml file is not telling it internal path. Not certain what different to do with .yml file. A: The issue seems pretty simple: the .yml file is just a list of the packages that are contained in the conda environment. When you try to create an environment based on the .yml file, conda reads all the names and versions of the packages listed, and then fetches, downloads and installs them. Since your Linux server is not connected to the Internet it fails when it's trying to fetch them, since it isn't able to reach the repo in which those packages are contained. So it's not able to download and add them to the virtual environment.
How to replicate conda environment from windows desktop linux server not connected to internet?
I have created a conda environment on my windows desktop. I am trying to move it to windows server and Linux server. I created specification file like below which has all internal URL Using this spec file I could create environment on windows server not connected to internet. For Linux server I created .yml file like below. When I try to create environment using this .yml file on Linux server (not connected to internet) I get error like below. Fetching package metadata ...Could not connect to https://repo.continuum.io/pkgs/free/noarch/ Could not connect to https://repo.continuum.io/pkgs/free/linux-64/ My understanding is conda is trying to install python and other packages as .yml file is not telling it internal path. Not certain what different to do with .yml file.
[ "The issue seems pretty simple: the .yml file is just a list of the packages that are contained in the conda environment. When you try to create an environment based on the .yml file, conda reads all the names and versions of the packages listed, and then fetches, downloads and installs them.\nSince your Linux server is not connected to the Internet it fails when it's trying to fetch them, since it isn't able to reach the repo in which those packages are contained. So it's not able to download and add them to the virtual environment.\n" ]
[ 0 ]
[]
[]
[ "anaconda3", "conda", "linux", "python", "yaml" ]
stackoverflow_0074421643_anaconda3_conda_linux_python_yaml.txt
Q: Running Tkinter to produce a typing counter. Can't type into entry box while countdown timer is going I'm trying to put together my own script for a typing counter. # --------------------------------------------------Import Modules-----------------------------------------------------# from tkinter import * import time import random from threading import Thread # --------------------------------------------------Set CONSTANTS------------------------------------------------------# TIMER = 10 FONT_NAME = "Arial" FONT_HEIGHT = 12 FONT_TYPE = "bold" BACKGROUND_COLOR = "#A5C9CA" # --------------------------------------------------Random Text.-------------------------------------------------------# example_text = [ "This is going to be a lot of sample text.", "This should hopefully be even more sample text." ] def start_timer(): t = TIMER while t > 0: minutes, seconds = divmod(t, 60) timer = "{:02d}:{:02d}".format(minutes, seconds) print(timer, end=f"\r{timer}") canvas.itemconfig(timer_text, text=timer) time.sleep(1) t -= 1 # Add function to read the inputted text after timer runs out and print results to user for correct wpm. global generated_text inputted_user_text = typed_entry_box.get() check_wpm(generated_text, inputted_user_text) def generate_text(): random_text = random.choice(example_text) return random_text def check_wpm(initial_text, inputted_text): initial_list = initial_text.split() inputted_list = inputted_text.split() del initial_list[len(inputted_list):] compared_list = [i == j for i, j in zip(initial_list, inputted_list)] wpm = 0 for n in compared_list: if n: wpm += 1 wpm_label.config(text=f"You typed at {wpm} words per minute.") # Open Tk Window Box. window = Tk() window.title("WPM Test") window.config(pady=20, padx=20, bg=BACKGROUND_COLOR) # Create a canvas for the window. canvas = Canvas(width=500, height=500, bg=BACKGROUND_COLOR, highlightthickness=0) # Create a text object for the timer. timer_text = canvas.create_text(250, 25, text="60 Seconds", fill="black", font=(FONT_NAME, 18, FONT_TYPE)) canvas.grid(column=0, row=5) # Instructions Label at the top. label_1 = "Test Your Typing Speed. Click the button below and type out the text string shown below. Good Luck!\n" \ "---------------------------------------------------------------------------------------------------\n\n" instructions_label = Label(text=label_1, font=(FONT_NAME, 14, FONT_TYPE), bg=BACKGROUND_COLOR) instructions_label.config(padx=5, pady=5) instructions_label.grid(row=0, column=0) # Random text label. generated_text = generate_text() random_text_label = Label(text=generated_text, font=(FONT_NAME, 18, FONT_TYPE), bg=BACKGROUND_COLOR) random_text_label.config(padx=5, pady=5) random_text_label.grid(row=2, column=0) # Add label for entry box below. label_3 = "\n\n-----------------------------------------------------------------------\n\n\n" \ "Click the button below to start the timer and immediately start typing.\n" start_typing_label = Label(text=label_3, font=(FONT_NAME, 14, FONT_TYPE), bg=BACKGROUND_COLOR) start_typing_label.grid(row=3, column=0) # Add start timer button. start_timer_button = Button(text="Start Timer", command=start_timer, font=(FONT_NAME, 16, FONT_TYPE)) start_timer_button.config(padx=2, pady=2) start_timer_button.grid(row=4, column=0) # Add entry box for the typed text. 
typed_entry_box = Entry(width=100, font=(FONT_NAME, 16, FONT_TYPE), bd=5)
typed_entry_box.grid(row=5, column=0)

wpm_label = Label(text="", font=(FONT_NAME, 14, FONT_TYPE), bg=BACKGROUND_COLOR)
wpm_label.grid(row=6, column=0)

# Keeps window open.
window.mainloop()

Besides it looking ugly, I have everything running, but when I click the start timer button I can't type anything into the entry field I created. I'm assuming it's because it is running the timer function, but seeing if anyone has an idea for fixing it. Thanks.
A: Avoid using while loops with time.sleep() in a tkinter app, as it will block the main (UI) thread. Instead, look into tkinter's after() method, which is useful for situations like this!
t = TIMER
after_id = None

def start_timer():
    global t, after_id
    if t:
        minutes, seconds = divmod(t, 60)
        timer = f'{minutes:02d}:{seconds:02d}'
        canvas.itemconfig(timer_text, text=timer)
        t -= 1
        # call this function again after 1000 ms
        after_id = window.after(1000, start_timer)
    elif after_id is not None:
        window.after_cancel(after_id)  # cancel the countdown
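A minimal sketch of how the after()-based countdown from the answer could hand off to the existing check_wpm() once the timer reaches zero; all names (TIMER, canvas, timer_text, typed_entry_box, generated_text, window) are taken from the question's code.

t = TIMER

def start_timer():
    global t
    if t > 0:
        minutes, seconds = divmod(t, 60)
        canvas.itemconfig(timer_text, text=f"{minutes:02d}:{seconds:02d}")
        t -= 1
        window.after(1000, start_timer)   # re-schedule without blocking mainloop
    else:
        # timer finished: grade whatever is in the entry box
        inputted_user_text = typed_entry_box.get()
        check_wpm(generated_text, inputted_user_text)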
Running Tkinter to produce a typing counter. Can't type into entry box while countdown timer is going
I'm trying to put together my own script for a typing counter. # --------------------------------------------------Import Modules-----------------------------------------------------# from tkinter import * import time import random from threading import Thread # --------------------------------------------------Set CONSTANTS------------------------------------------------------# TIMER = 10 FONT_NAME = "Arial" FONT_HEIGHT = 12 FONT_TYPE = "bold" BACKGROUND_COLOR = "#A5C9CA" # --------------------------------------------------Random Text.-------------------------------------------------------# example_text = [ "This is going to be a lot of sample text.", "This should hopefully be even more sample text." ] def start_timer(): t = TIMER while t > 0: minutes, seconds = divmod(t, 60) timer = "{:02d}:{:02d}".format(minutes, seconds) print(timer, end=f"\r{timer}") canvas.itemconfig(timer_text, text=timer) time.sleep(1) t -= 1 # Add function to read the inputted text after timer runs out and print results to user for correct wpm. global generated_text inputted_user_text = typed_entry_box.get() check_wpm(generated_text, inputted_user_text) def generate_text(): random_text = random.choice(example_text) return random_text def check_wpm(initial_text, inputted_text): initial_list = initial_text.split() inputted_list = inputted_text.split() del initial_list[len(inputted_list):] compared_list = [i == j for i, j in zip(initial_list, inputted_list)] wpm = 0 for n in compared_list: if n: wpm += 1 wpm_label.config(text=f"You typed at {wpm} words per minute.") # Open Tk Window Box. window = Tk() window.title("WPM Test") window.config(pady=20, padx=20, bg=BACKGROUND_COLOR) # Create a canvas for the window. canvas = Canvas(width=500, height=500, bg=BACKGROUND_COLOR, highlightthickness=0) # Create a text object for the timer. timer_text = canvas.create_text(250, 25, text="60 Seconds", fill="black", font=(FONT_NAME, 18, FONT_TYPE)) canvas.grid(column=0, row=5) # Instructions Label at the top. label_1 = "Test Your Typing Speed. Click the button below and type out the text string shown below. Good Luck!\n" \ "---------------------------------------------------------------------------------------------------\n\n" instructions_label = Label(text=label_1, font=(FONT_NAME, 14, FONT_TYPE), bg=BACKGROUND_COLOR) instructions_label.config(padx=5, pady=5) instructions_label.grid(row=0, column=0) # Random text label. generated_text = generate_text() random_text_label = Label(text=generated_text, font=(FONT_NAME, 18, FONT_TYPE), bg=BACKGROUND_COLOR) random_text_label.config(padx=5, pady=5) random_text_label.grid(row=2, column=0) # Add label for entry box below. label_3 = "\n\n-----------------------------------------------------------------------\n\n\n" \ "Click the button below to start the timer and immediately start typing.\n" start_typing_label = Label(text=label_3, font=(FONT_NAME, 14, FONT_TYPE), bg=BACKGROUND_COLOR) start_typing_label.grid(row=3, column=0) # Add start timer button. start_timer_button = Button(text="Start Timer", command=start_timer, font=(FONT_NAME, 16, FONT_TYPE)) start_timer_button.config(padx=2, pady=2) start_timer_button.grid(row=4, column=0) # Add entry box for the typed text. typed_entry_box = Entry(width=100, font=(FONT_NAME, 16, FONT_TYPE), bd=5) typed_entry_box.grid(row=5, column=0) wpm_label = Label(text="", font=(FONT_NAME, 14, FONT_TYPE), bg=BACKGROUND_COLOR) wpm_label.grid(row=6, column=0) # Keeps window open. 
window.mainloop() Besides it looking ugly, I have everything running but when I click the start timer button, I can't type anything into the entry field I created. I'm assuming it's because it is running the timer function but seeing if anyone has an idea for fixing it. Thanks.
[ "Avoid using while loops with time.sleep() in a tkinter app, as it will block the main (UI) thread. Instead, look into the tkinter.after() method, which is useful for situations like this!\nt = TIMER\n\ndef start_timer():\n global t\n if t:\n minutes, seconds = divmod(t, 60)\n timer = f'{minutes:02d}:{seconds:02d}'\n canvas.itemconfig(timer_text, text=timer)\n t -= 1\n # call this function again aftre 1000mS\n after_id = window.after(1000, start_timer) \n else:\n window.after_cancel(after_id) # cancel the countdown\n\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter", "tkinter_entry" ]
stackoverflow_0074481108_python_tkinter_tkinter_entry.txt
Q: Returning value in new column based on other columns pandas I am trying to mirror vlookup function into python script: If value from GPN column in analysis_sheet is in GPN column in whitelist_sheet I want to return value from column SOURCE in whitelist_sheet DataFrame to column RCL in analysis_sheet. Here are some of my trials, but non worked: analysis_sheet['RCL'] = analysis_sheet['GPN'].isin(whitelist_sheet['GPN']) and analysis_sheet['RCL'] = ((analysis_sheet['GPN'].loc[analysis_sheet['GPN'].isin(whitelist_sheet['GPN']), analysis_sheet['RCL']]) = whitelist_sheet['SOURCE']) and analysis_sheet['RCL'] = analysis_sheet.merge(whitelist_sheet, right_on='SOURCE') and analysis_sheet['RCL'] = analysis_sheet.loc[analysis_sheet['GPN'].isin(whitelist_sheet['GPN']), whitelist_sheet['SOURCE']] Here is example how it should work: RESULT TABLE A: import pandas as pd data1 = {'GPN': [111, 222, 333, 444], 'col2': ['fsgd', 'sdg', 'sfgf', 'sfgf'], 'col3':['bgg', 'gd', 'gbg', 'gbg']} analysis_sheet = pd.DataFrame(data1) data2 = {'GPN': [111, 222, 333, 555], 'col2': ['as', 'df', 'dd', 'sd'], 'Source':['HH', 'BB', 'CD', 'GK']} whitelist_sheet = pd.DataFrame(data2).rename(columns={'col2': 'to_be_droped'}) analysis_sheet.merge(whitelist_sheet, on=['GPN'], how='left').rename(columns={'Source': 'RCL'}).drop('to_be_droped', axis=1)
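A minimal sketch applying the answer's merge idea directly to the column names from the question (GPN, SOURCE, RCL); it assumes GPN values are unique in whitelist_sheet so the merge does not duplicate rows.

# VLOOKUP-style left merge: unmatched GPNs get NaN in RCL
analysis_sheet = analysis_sheet.merge(
    whitelist_sheet[['GPN', 'SOURCE']], on='GPN', how='left'
).rename(columns={'SOURCE': 'RCL'})

# equivalent one-liner with map():
# analysis_sheet['RCL'] = analysis_sheet['GPN'].map(
#     whitelist_sheet.set_index('GPN')['SOURCE'])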
Returning value in new column based on other columns pandas
I am trying to mirror vlookup function into python script: If value from GPN column in analysis_sheet is in GPN column in whitelist_sheet I want to return value from column SOURCE in whitelist_sheet DataFrame to column RCL in analysis_sheet. Here are some of my trials, but non worked: analysis_sheet['RCL'] = analysis_sheet['GPN'].isin(whitelist_sheet['GPN']) and analysis_sheet['RCL'] = ((analysis_sheet['GPN'].loc[analysis_sheet['GPN'].isin(whitelist_sheet['GPN']), analysis_sheet['RCL']]) = whitelist_sheet['SOURCE']) and analysis_sheet['RCL'] = analysis_sheet.merge(whitelist_sheet, right_on='SOURCE') and analysis_sheet['RCL'] = analysis_sheet.loc[analysis_sheet['GPN'].isin(whitelist_sheet['GPN']), whitelist_sheet['SOURCE']] Here is example how it should work: RESULT TABLE
[ "import pandas as pd\ndata1 = {'GPN': [111, 222, 333, 444], 'col2': ['fsgd', 'sdg', 'sfgf', 'sfgf'], 'col3':['bgg', 'gd', 'gbg', 'gbg']}\nanalysis_sheet = pd.DataFrame(data1) \ndata2 = {'GPN': [111, 222, 333, 555], 'col2': ['as', 'df', 'dd', 'sd'], 'Source':['HH', 'BB', 'CD', 'GK']}\nwhitelist_sheet = pd.DataFrame(data2).rename(columns={'col2': 'to_be_droped'}) \nanalysis_sheet.merge(whitelist_sheet, on=['GPN'], how='left').rename(columns={'Source': 'RCL'}).drop('to_be_droped', axis=1)\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074481146_dataframe_pandas_python.txt
Q: How to make python click squares on memory game someone know how to make python click squares on memory game? EX: I have this puzzle to memorize(The red squares are random): https://i.imgur.com/IP54Qef.png How do i make python to click red squares after they dissapear? I managed to find if there is a red square on the screen. from pyautogui import * import pyautogui import time from playsound import playsound while 0: if pyautogui.locateOnScreen('model_square.png', confidence=1) != None: print("There is a red square") playsound('audio.mp3') time.sleep(1) else: print("No squares") time.sleep(1) A: pyautogui.locateOnScreen('model_square.png', confidence=1) will return (x,y) values of the given image if found on the screen. pyautogui.click(x,y) will click on the given x,y. So to code what you want to do we can simply declare a variable that will store x,y of the red squares found on the screen and then pass the variable in pyautogui.click(variable) to click the x,y coordinates of the red squares So your code for that would be: while 0: #This variable will return x,y of the image found on the screen red_square = pyautogui.locateOnScreen('model_square.png', confidence=1) if red_square != None: #click the x,y where the image is found on the screen pyautogui.click(red_square)
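A minimal sketch (untested) of the memory part of the task: remember where every red square is while it is visible, then click those spots once they disappear. Note the while 0: loop in the snippets above never runs, so the sketch is written as a single pass; the two-second wait is an assumption and would need tuning to the game.

import time
import pyautogui

# remember the centres of all red squares while they are shown
positions = [pyautogui.center(box) for box in
             pyautogui.locateAllOnScreen('model_square.png', confidence=0.9)]

time.sleep(2)   # wait for the squares to disappear

# click the remembered spots
for x, y in positions:
    pyautogui.click(x, y)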
How to make python click squares on memory game
someone know how to make python click squares on memory game? EX: I have this puzzle to memorize(The red squares are random): https://i.imgur.com/IP54Qef.png How do i make python to click red squares after they dissapear? I managed to find if there is a red square on the screen. from pyautogui import * import pyautogui import time from playsound import playsound while 0: if pyautogui.locateOnScreen('model_square.png', confidence=1) != None: print("There is a red square") playsound('audio.mp3') time.sleep(1) else: print("No squares") time.sleep(1)
[ "pyautogui.locateOnScreen('model_square.png', confidence=1) will return (x,y) values of the given image if found on the screen.\npyautogui.click(x,y) will click on the given x,y.\nSo to code what you want to do we can simply declare a variable that will store x,y of the red squares found on the screen and then pass the variable in pyautogui.click(variable) to click the x,y coordinates of the red squares\nSo your code for that would be:\nwhile 0:\n\n #This variable will return x,y of the image found on the screen\n red_square = pyautogui.locateOnScreen('model_square.png', confidence=1)\n if red_square != None:\n #click the x,y where the image is found on the screen\n pyautogui.click(red_square)\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074424681_python.txt
Q: how can I represent tuple as a 2D array in python?
Imagine an NxN chess board. I have a tuple t = (0,3,2,1) which represents the chess pieces' locations at each column (col = index), and each number represents the row, starting at 0 from the bottom. For this example, it has 4 columns: the first piece is at row=0 (bottom row), the second piece is on row=3 (fourth/highest row), the third piece is on row=2 (third row from bottom), and the fourth piece is on the second row from the bottom. I would like to represent it as a 2D array as follows:
[[0,1,0,0],
 [0,0,1,0],
 [0,0,0,1],
 [1,0,0,0]]
I was able to generate the 2D array using this code:
pieces_locations = (0,3,2,1)
pieces_locations = list(pieces_locations)

table_size = len(pieces_locations)

arr = [[0 for col in range(table_size)] for row in range(table_size)]
However, I was not able to assign the 1's in their correct locations. I was able to understand this: arr[row][col], but the rows are inverted (0 is top to N is bottom).
A: First create the 2-d list of zeroes.
arr = [[0] * table_size for _ in range(table_size)]
Then loop over the locations, replacing the appropriate elements with 1 (indexing from the end of arr so that row 0 means the bottom row).
for col, row in enumerate(pieces_locations):
    arr[-(row + 1)][col] = 1
A: Use this after you've made the list (a matrix of 0s)
** If the locations list is not as long as the number of rows, the program will crash (use try and except to counter)
for x, i in enumerate(range(1, len(arr) + 1)):
    arr[-i][pieces_locations[x]] = 1
This should give you your desired output, I hope this helps
A: I was able to figure it out, although I'm sure there is a more convenient way.
pieces_locations = (0,3,2,1)
pieces_locations = list(pieces_locations)

table_size = len(pieces_locations)

arr = [[0 for col in range(table_size)] for row in range(table_size)]

for row in range(0, table_size):
    arr[row][pieces_locations.index(row)] = 1

res = arr[::-1]
print(res)
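The same board can also be built without an explicit loop using NumPy; a minimal sketch, with the rows flipped at the end so that row 0 in the tuple means the bottom row, matching the desired output in the question.

import numpy as np

pieces_locations = (0, 3, 2, 1)
n = len(pieces_locations)

# one fancy-indexed assignment, then flip so row 0 is the bottom row
board = np.zeros((n, n), dtype=int)
board[list(pieces_locations), np.arange(n)] = 1
board = board[::-1]

print(board.tolist())
# [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]]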
how can I represent tuple as a 2D array in python?
Imagine a NxN chess board, I have a tuple t = (0,3,2,1) which represents chess pieces location at each column (col = index), and each number represents the row, starting at 0 from bottom. For this example, it has 4 columns, first piece is at row=0 (bottom row), second piece is on row=3 (fourth/highest row), third piece is on row=2 (third row from bottom), fourth piece is on second row from bottom. I would like to represent it as a 2D array as follows: [[0,1,0,0], [0,0,1,0], [0,0,0,1], [1,0,0,0]] I was able to generate the 2D array using this code pieces_locations = (0,3,2,1) pieces_locations = list(pieces_locations) table_size = len(pieces_locations) arr = [[0 for col in range(table_size)] for row in range(table_size)] However, I was not able to assign the 1's in their correct locations. I was able to understand this: arr[row][col], but the rows are inverted (0 is top to N is bottom).
[ "First create the 2-d list of zeroes.\narr = [[0] * table_size for _ in range(table_size)]\n\nThen loop over the locations, replacing the appropriate elements with 1.\nfor col, row in enumerate(pieces_location, 1):\n arr[-row][col] = 1\n\n", "Use this after you've made the list (A matrix of 0s)\n** If the locations list is not as long as the number of rows, the program will crash (use try and except to counter)\nfor x, i in enumerate(range(1, len(arr))):\n arr[-i][pieces_locations[x]] = 1\n\nThis should give you your desired output, I hope this helps\n", "I was able to figure it out, although I'm sure there is a move convenient way.\npieces_locations = (0,3,2,1)\npieces_locations = list(pieces_locations)\n\ntable_size = len(pieces_locations)\n\narr = [[0 for col in range(table_size)] for row in range(table_size)]\n\n\nfor row in range(0, table_size):\n arr[row][pieces_locations.index(row)] = 1\n\n\nres = arr[::-1]\nprint (res)\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "arrays", "chess", "list", "python", "tuples" ]
stackoverflow_0074481205_arrays_chess_list_python_tuples.txt
Q: How to get rid of the "\n" at the end of each line while writing to a variable? I have the following code to read data import sys data = sys.stdin.readlines() id = 0 while id < len(data) - 1: n = int(data[id]) id += 1 some_list = [] for _ in range(n): x1, y1, x2, y2 = map(str, data[id].split(" ")) some_list.append([x1, y1, x2, y2]) id += 1 print(some_list) Input: 2 0 3 1 2 2 1 3 1 4 3 1 1 0 0 0 2 1 1 1 2 0 3 0 3 1 Its output: [['0', '3', '1', '2\n'], ['2', '1', '3', '1\n']] [['3', '1', '1', '0\n'], ['0', '0', '2', '1\n'], ['1', '1', '2', '0\n'], ['3', '0', '3', '1']] You can see that "\n" is also written. How can I ignore "\n" (or remove it) without losing data read speed? I need numbers to remain in string format. The construction sys.stdin.readlines() is also needed since I don't know how many lines (how many m-s) will be in the input. A: You may use rstrip from the string package. For example here, just use: y2.rstrip() It will remove the \n at the end of y2 if there is one.
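A minimal sketch of the reading loop from the question with the trailing newline handled. rstrip() here is the built-in str method (not something from a separate string package) and it returns a new string, so its result has to be used directly or reassigned; alternatively, split() with no separator discards surrounding whitespace, including the "\n".

for _ in range(n):
    x1, y1, x2, y2 = data[id].rstrip("\n").split(" ")
    # equivalently: x1, y1, x2, y2 = data[id].split()
    some_list.append([x1, y1, x2, y2])
    id += 1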
How to get rid of the "\n" at the end of each line while writing to a variable?
I have the following code to read data import sys data = sys.stdin.readlines() id = 0 while id < len(data) - 1: n = int(data[id]) id += 1 some_list = [] for _ in range(n): x1, y1, x2, y2 = map(str, data[id].split(" ")) some_list.append([x1, y1, x2, y2]) id += 1 print(some_list) Input: 2 0 3 1 2 2 1 3 1 4 3 1 1 0 0 0 2 1 1 1 2 0 3 0 3 1 Its output: [['0', '3', '1', '2\n'], ['2', '1', '3', '1\n']] [['3', '1', '1', '0\n'], ['0', '0', '2', '1\n'], ['1', '1', '2', '0\n'], ['3', '0', '3', '1']] You can see that "\n" is also written. How can I ignore "\n" (or remove it) without losing data read speed? I need numbers to remain in string format. The construction sys.stdin.readlines() is also needed since I don't know how many lines (how many m-s) will be in the input.
[ "You may use rstrip from the string package.\nFor example here, just use:\ny2.rstrip()\n\nIt will remove the \\n at the end of y2 if there is one.\n" ]
[ 2 ]
[]
[]
[ "python", "stdin" ]
stackoverflow_0074481235_python_stdin.txt
Q: replace nested for loops combined with conditions to boost performance In order to speed up my code I want to exchange my for loops by vectorization or other recommended tools. I found plenty of examples with replacing simple for loops but nothing for replacing nested for loops in combination with conditions, which I was able to comprehend / would have helped me... With my code I want to check if points (X, Y coordinates) can be connected by lineaments (linear structures). I started pretty simple but over time the code outgrew itself and is now exhausting slow... Here is an working example of the part taking the most time: import numpy as np import matplotlib.pyplot as plt from shapely.geometry import MultiLineString, LineString, Point from shapely.affinity import rotate from math import sqrt from tqdm import tqdm import random as rng # creating random array of points xys = rng.sample(range(201 * 201), 100) points = [list(divmod(xy, 201)) for xy in xys] # plot points plt.scatter(*zip(*points)) # calculate length for rotating lines -> diagonal of bounds so all points able to be reached length = sqrt(2)*200 # calculate angles to rotate lines angles = [] for a in range(0, 360, 1): angle = np.deg2rad(a) angles.append(angle) # copy points array to helper array (points_list) so original array is not manipulated points_list = points.copy() # array to save final lines lines = [] # iterate over every point in points array to search for connecting lines for point in tqdm(points): # delete point from helper array to speed up iteration -> so points do not get # double, triple, ... checked if len(points_list) > 0: points_list.remove(point) else: break # create line from original point to point at end of line (x+length) - this line # gets rotated at calculated angles start = Point(point) end = Point(start.x+length, start.y) line = LineString([start,end]) # iterate over angle Array to rotate line by each angle for angle in angles: rot_line = rotate(line, angle, origin=start, use_radians=True) lst = list(rot_line.coords) # save starting point (a) and ending point(b) of rotated line for np.cross() # (cross product to check if points on/near rotated line) a = np.asarray(lst[0]) b = np.asarray(lst[1]) # counter to count number of points on/near line count = 0 line_list = [] # iterate manipulated points_list array (only points left for which there has # not been a line rotated yet) for poi in points_list: # check whether point (pio) is on/near rotated line by calculating cross # product (np.corss()) p = np.asarray(poi) cross = np.cross(p-a,b-a) # check if poi is inside accepted deviation from cross product if cross > -750 and cross < 750: # check if more than 5 points (poi) are on/near the rotated line if count < 5: line_list.append(poi) count += 1 # if 5 points are connected by the rotated line sort the coordinates # of the points and check if the length of the line meets the criteria else: line_list = sorted(line_list , key=lambda k: [k[1], k[0]]) line_length = LineString(line_list) if line_length.length >= 10 and line_length.length <= 150: lines.append(line_list) break # use shapeplys' MultiLineString to create lines from coordinates and plot them # afterwards multiLines = MultiLineString(lines) fig, ax = plt.subplots() ax.set_title("Lines") for multiLine in MultiLineString(multiLines).geoms: # print(multiLine) plt.plot(*multiLine.xy) As mentioned above it was thinking about using pandas or numpy vectorization and therefore build a pandas df for the points and lines (gdf) and one with the different 
angles (angles) to rotate the lines: Name Type Size Value gdf DataFrame (122689, 6) Column name: x, y, value, start, end, line angles DataFrame (360, 1) Column name: angle But I ran out of ideas to replace this nested for loops with conditions with pandas vectorization. I found this article on medium and halfway through the article there are conditions for vectorization mentioned and I was wondering if my code maybe is not suitbale for vectorization because of dependencies within the loops... If this is right, it does not necessarily needs to be vectoriation everything boosting the performance is welcome! A: You can quite easily vectorize the most computationally intensive part: the innermost loop. The idea is to compute the points_list all at once. np.cross can be applied on each lines, np.where can be used to filter the result (and get the IDs). Here is the (barely tested) modified main loop: for point in tqdm(points): if len(points_list) > 0: points_list.remove(point) else: break start = Point(point) end = Point(start.x+length, start.y) line = LineString([start,end]) # CHANGED PART if len(points_list) == 0: continue p = np.asarray(points_list) for angle in angles: rot_line = rotate(line, angle, origin=start, use_radians=True) a, b = np.asarray(rot_line.coords) cross = np.cross(p-a,b-a) foundIds = np.where((cross > -750) & (cross < 750))[0] if foundIds.size > 5: # Similar to the initial part, not efficient, but rarely executed line_list = p[foundIds][:5].tolist() line_list = sorted(line_list, key=lambda k: [k[1], k[0]]) line_length = LineString(line_list) if line_length.length >= 10 and line_length.length <= 150: lines.append(line_list) This is about 15 times faster on my machine. Most of the time is spent in the shapely module which is very inefficient (especially rotate and even np.asarray(rot_line.coords)). Indeed, each call to rotate takes about 50 microseconds which is simply insane: it should take no more than 50 nanoseconds, that is, 1000 time faster (actually, an optimized native code should be able to to that in less than 20 ns on my machine). If you want a faster code, then please consider not using this package (or improving its performance).
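The answer points at shapely's rotate() as the remaining bottleneck; a minimal sketch (untested) of computing the rotated end point directly with NumPy, so neither rotate() nor rot_line.coords is needed inside the angle loop. point, length, angles and p are the variables from the question and the answer above.

import numpy as np

a = np.asarray(point, dtype=float)   # start point of the line

for angle in angles:
    # end point of the horizontal segment rotated by `angle` around `a`
    b = a + length * np.array([np.cos(angle), np.sin(angle)])
    cross = np.cross(p - a, b - a)
    found_ids = np.where((cross > -750) & (cross < 750))[0]
    # ... same filtering / line-building as in the answer above ...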
replace nested for loops combined with conditions to boost performance
In order to speed up my code I want to exchange my for loops by vectorization or other recommended tools. I found plenty of examples with replacing simple for loops but nothing for replacing nested for loops in combination with conditions, which I was able to comprehend / would have helped me... With my code I want to check if points (X, Y coordinates) can be connected by lineaments (linear structures). I started pretty simple but over time the code outgrew itself and is now exhausting slow... Here is an working example of the part taking the most time: import numpy as np import matplotlib.pyplot as plt from shapely.geometry import MultiLineString, LineString, Point from shapely.affinity import rotate from math import sqrt from tqdm import tqdm import random as rng # creating random array of points xys = rng.sample(range(201 * 201), 100) points = [list(divmod(xy, 201)) for xy in xys] # plot points plt.scatter(*zip(*points)) # calculate length for rotating lines -> diagonal of bounds so all points able to be reached length = sqrt(2)*200 # calculate angles to rotate lines angles = [] for a in range(0, 360, 1): angle = np.deg2rad(a) angles.append(angle) # copy points array to helper array (points_list) so original array is not manipulated points_list = points.copy() # array to save final lines lines = [] # iterate over every point in points array to search for connecting lines for point in tqdm(points): # delete point from helper array to speed up iteration -> so points do not get # double, triple, ... checked if len(points_list) > 0: points_list.remove(point) else: break # create line from original point to point at end of line (x+length) - this line # gets rotated at calculated angles start = Point(point) end = Point(start.x+length, start.y) line = LineString([start,end]) # iterate over angle Array to rotate line by each angle for angle in angles: rot_line = rotate(line, angle, origin=start, use_radians=True) lst = list(rot_line.coords) # save starting point (a) and ending point(b) of rotated line for np.cross() # (cross product to check if points on/near rotated line) a = np.asarray(lst[0]) b = np.asarray(lst[1]) # counter to count number of points on/near line count = 0 line_list = [] # iterate manipulated points_list array (only points left for which there has # not been a line rotated yet) for poi in points_list: # check whether point (pio) is on/near rotated line by calculating cross # product (np.corss()) p = np.asarray(poi) cross = np.cross(p-a,b-a) # check if poi is inside accepted deviation from cross product if cross > -750 and cross < 750: # check if more than 5 points (poi) are on/near the rotated line if count < 5: line_list.append(poi) count += 1 # if 5 points are connected by the rotated line sort the coordinates # of the points and check if the length of the line meets the criteria else: line_list = sorted(line_list , key=lambda k: [k[1], k[0]]) line_length = LineString(line_list) if line_length.length >= 10 and line_length.length <= 150: lines.append(line_list) break # use shapeplys' MultiLineString to create lines from coordinates and plot them # afterwards multiLines = MultiLineString(lines) fig, ax = plt.subplots() ax.set_title("Lines") for multiLine in MultiLineString(multiLines).geoms: # print(multiLine) plt.plot(*multiLine.xy) As mentioned above it was thinking about using pandas or numpy vectorization and therefore build a pandas df for the points and lines (gdf) and one with the different angles (angles) to rotate the lines: Name Type Size Value gdf DataFrame 
(122689, 6) Column name: x, y, value, start, end, line angles DataFrame (360, 1) Column name: angle But I ran out of ideas to replace this nested for loops with conditions with pandas vectorization. I found this article on medium and halfway through the article there are conditions for vectorization mentioned and I was wondering if my code maybe is not suitbale for vectorization because of dependencies within the loops... If this is right, it does not necessarily needs to be vectoriation everything boosting the performance is welcome!
[ "You can quite easily vectorize the most computationally intensive part: the innermost loop. The idea is to compute the points_list all at once. np.cross can be applied on each lines, np.where can be used to filter the result (and get the IDs).\nHere is the (barely tested) modified main loop:\nfor point in tqdm(points):\n if len(points_list) > 0:\n points_list.remove(point)\n else:\n break\n\n start = Point(point)\n end = Point(start.x+length, start.y)\n line = LineString([start,end])\n\n # CHANGED PART\n\n if len(points_list) == 0:\n continue\n\n p = np.asarray(points_list)\n\n for angle in angles:\n rot_line = rotate(line, angle, origin=start, use_radians=True)\n a, b = np.asarray(rot_line.coords)\n cross = np.cross(p-a,b-a)\n foundIds = np.where((cross > -750) & (cross < 750))[0]\n\n if foundIds.size > 5:\n # Similar to the initial part, not efficient, but rarely executed\n line_list = p[foundIds][:5].tolist()\n line_list = sorted(line_list, key=lambda k: [k[1], k[0]])\n line_length = LineString(line_list)\n if line_length.length >= 10 and line_length.length <= 150:\n lines.append(line_list)\n\nThis is about 15 times faster on my machine.\nMost of the time is spent in the shapely module which is very inefficient (especially rotate and even np.asarray(rot_line.coords)). Indeed, each call to rotate takes about 50 microseconds which is simply insane: it should take no more than 50 nanoseconds, that is, 1000 time faster (actually, an optimized native code should be able to to that in less than 20 ns on my machine). If you want a faster code, then please consider not using this package (or improving its performance).\n" ]
[ 2 ]
[]
[]
[ "for_loop", "numpy", "pandas", "performance", "python" ]
stackoverflow_0074479770_for_loop_numpy_pandas_performance_python.txt
Q: How to change annotation features for Vision OCR? I'm trying to extract text from a local image with Python and Vision, based off Cloud Vision API: Detect text in images. This is the function to extract text: def detect_text(path):     """Detects text in the file."""     from google.cloud import vision     import io     client = vision.ImageAnnotatorClient() with io.open(path, 'rb') as image_file:         content = image_file.read() image = vision.Image(content=content) response = client.text_detection(image=image)     texts = response.text_annotations It works, but I'd like to specify the use of features like TEXT_DETECTION instead of the default DOCUMENT_TEXT_DETECTION feature, as well as specify language hints. How would I do that? The text_detection function doesn't seem to take such parameters. A: Alternatively you can request language hints by adding image_context object: response = client.text_detection(image=image, image_context={"language_hints": ["en"]}) A: The following article explains it, scroll down to the 'Creating the Application' section. You need to add a request object to your code request = { "image": { "source": { "image_uri": "IMAGE_URL" } }, "features": [ { "type": "TEXT_DETECTION" } ] "imageContext": { "languageHints": ["en-t-i0-handwrit"] } } Then past it in to the request. response = client.annotate_image(request)
How to change annotation features for Vision OCR?
I'm trying to extract text from a local image with Python and Vision, based off Cloud Vision API: Detect text in images. This is the function to extract text: def detect_text(path):     """Detects text in the file."""     from google.cloud import vision     import io     client = vision.ImageAnnotatorClient() with io.open(path, 'rb') as image_file:         content = image_file.read() image = vision.Image(content=content) response = client.text_detection(image=image)     texts = response.text_annotations It works, but I'd like to specify the use of features like TEXT_DETECTION instead of the default DOCUMENT_TEXT_DETECTION feature, as well as specify language hints. How would I do that? The text_detection function doesn't seem to take such parameters.
[ "Alternatively you can request language hints by adding image_context object:\nresponse = client.text_detection(image=image,\nimage_context={\"language_hints\": [\"en\"]})\n\n", "The following article explains it, scroll down to the 'Creating the Application' section.\nYou need to add a request object to your code\nrequest = {\n \"image\": {\n \"source\": {\n \"image_uri\": \"IMAGE_URL\"\n }\n }, \n \"features\": [\n {\n \"type\": \"TEXT_DETECTION\"\n }\n ]\n \"imageContext\": {\n \"languageHints\": [\"en-t-i0-handwrit\"]\n }\n}\n\nThen past it in to the request.\nresponse = client.annotate_image(request)\n\n" ]
[ 1, 0 ]
[]
[]
[ "google_cloud_platform", "google_vision", "python" ]
stackoverflow_0074480775_google_cloud_platform_google_vision_python.txt
Q: Closing RabbitMQ connection blocks thread, using Pika I'm connecting to RabbitMQ from a separate thread but want to allow the thread to be stopped from another thread. class JobListener(threading.Thread): """Listens for jobs""" connection = None channel = None consuming = False def run(self): try: """Start listening for jobs""" self.connection = pika.BlockingConnection( pika.ConnectionParameters(host=CONN_HOST, credentials=CONN_CREDENTIALS)) print("[JobListener] AMQP connection established.") self.channel = self.connection.channel() print("[JobListener] Channel opened.") self.channel.queue_declare(queue=QUEUE_NAME) print("[JobListener] Queue declared") self.channel.basic_consume(self.on_message, queue=QUEUE_NAME, no_ack=True) self.consuming = True print("[JobListener] Starting consumption...") self.channel.start_consuming() print("[JobListener] start_consuming() got interrupted externally.") finally: self.consuming = False print("[JobListener] JobListener thread finished.") def stop(self): """Stop listening for jobs""" self.channel.stop_consuming() print("[JobListener] Message consumption stopped.") self.channel.close() print("[JobListener] Channel closed.") self.connection.close() print("[JobListener] AMQP connection closed.") self.consuming = False def on_message(self, channel, method, properties, body): """Handle incoming message""" print("[x] Received %r " % body) From a different thread, I'm calling either job_listener.start() or job_listener.stop(). However, when I call job_listener.stop(), the inner call of self.connection.close() is blocked. Why? A: Pika is not thread safe (see the FAQ). Is Pika thread safe? Pika does not have any notion of threading in the code. If you want to use Pika with threading, make sure you have a Pika connection per thread, created in that thread. It is not safe to share one Pika connection across threads, with one exception: you may call the connection method add_callback_threadsafe from another thread to schedule a callback within an active pika connection. You need to use add_callback_threadsafe eg. class JobListener(threading.Thread): ... def stop(self): """Stop listening for jobs""" self.connection.add_callback_threadsafe(self._stop) self.join() def _stop(self): self.channel.stop_consuming() print("[JobListener] Message consumption stopped.") self.channel.close() print("[JobListener] Channel closed.") self.connection.close() print("[JobListener] AMQP connection closed.") self.consuming = False
Closing RabbitMQ connection blocks thread, using Pika
I'm connecting to RabbitMQ from a separate thread but want to allow the thread to be stopped from another thread. class JobListener(threading.Thread): """Listens for jobs""" connection = None channel = None consuming = False def run(self): try: """Start listening for jobs""" self.connection = pika.BlockingConnection( pika.ConnectionParameters(host=CONN_HOST, credentials=CONN_CREDENTIALS)) print("[JobListener] AMQP connection established.") self.channel = self.connection.channel() print("[JobListener] Channel opened.") self.channel.queue_declare(queue=QUEUE_NAME) print("[JobListener] Queue declared") self.channel.basic_consume(self.on_message, queue=QUEUE_NAME, no_ack=True) self.consuming = True print("[JobListener] Starting consumption...") self.channel.start_consuming() print("[JobListener] start_consuming() got interrupted externally.") finally: self.consuming = False print("[JobListener] JobListener thread finished.") def stop(self): """Stop listening for jobs""" self.channel.stop_consuming() print("[JobListener] Message consumption stopped.") self.channel.close() print("[JobListener] Channel closed.") self.connection.close() print("[JobListener] AMQP connection closed.") self.consuming = False def on_message(self, channel, method, properties, body): """Handle incoming message""" print("[x] Received %r " % body) From a different thread, I'm calling either job_listener.start() or job_listener.stop(). However, when I call job_listener.stop(), the inner call of self.connection.close() is blocked. Why?
[ "Pika is not thread safe (see the FAQ).\n\n\nIs Pika thread safe?\nPika does not have any notion of threading in the code. If you want to use Pika with threading, make sure you have a Pika connection per thread, created in that thread. It is not safe to share one Pika connection across threads, with one exception: you may call the connection method add_callback_threadsafe from another thread to schedule a callback within an active pika connection.\n\n\n\nYou need to use add_callback_threadsafe\neg.\nclass JobListener(threading.Thread):\n\n ...\n\n def stop(self):\n \"\"\"Stop listening for jobs\"\"\"\n self.connection.add_callback_threadsafe(self._stop)\n self.join()\n\n\n def _stop(self):\n self.channel.stop_consuming()\n print(\"[JobListener] Message consumption stopped.\")\n self.channel.close()\n print(\"[JobListener] Channel closed.\")\n self.connection.close()\n print(\"[JobListener] AMQP connection closed.\")\n self.consuming = False \n\n" ]
[ 0 ]
[]
[]
[ "multithreading", "pika", "python", "python_3.x", "rabbitmq" ]
stackoverflow_0043769873_multithreading_pika_python_python_3.x_rabbitmq.txt
Q: reading dataframe from csv and array problems The application I use generates data in a dataframe which I need to use upon request. It looks similar to this. <class 'pandas.core.frame.DataFrame'> E Gg gnx2 J chs lwave J_ID 0 27.572025 82.308581 7.078391 3.0 1 [0] 1 1 46.387728 77.029548 58.112338 3.0 1 [0] 1 2 75.007554 82.087407 0.535442 3.0 1 [0] 1 Everything worked perfectly while I didn't try to use dataframes saved in separate files before. Because when I am trying to use the data after loading - I got errors about data types for the columns which contain arrays. (lvawe for example) is an array and when saved in csv the information about data type is lost. #saving the data to csv csv_filename = "ladder.csv" ladder.to_csv(csv_filename) So when loading a dataframe next time to use the data I can't get access to array elements like it should. Because as I understand data in this column is loaded like string. After loading the data through load_csv I get this for the data types: Unnamed: 0 int64 E float64 Gg float64 gnx2 float64 J float64 chs int64 lwave object J_ID int64 dtype: object How can I resolve this issue? How can I correctly load the data with the correct data type or maybe explicitly assign a data type to a column after loading? A: In the read_csv function, you can manually assign data types to your new columns. Pass in a dictionary of column name --> preferred data type. data_type_mapping = {‘a’: np.float64, ‘b’: np.int32, ‘c’: ‘Int64’} my_df = pd.read_csv('myfile.csv', dtypes = data_type_mapping) From pandas documentation: Data type for data or columns. E.g. {‘a’: np.float64, ‘b’: np.int32, ‘c’: ‘Int64’} Use str or object together with suitable na_values settings to preserve and not interpret dtype. If converters are specified, they will be applied INSTEAD of dtype conversion. A: Question was resolved by the use of json.loads feature. #modifying the ladder using json modified = ladder_df.lwave.apply(json.loads) ladder_df['lwave'] = modified
reading dataframe from csv and array problems
The application I use generates data in a dataframe which I need to use upon request. It looks similar to this. <class 'pandas.core.frame.DataFrame'> E Gg gnx2 J chs lwave J_ID 0 27.572025 82.308581 7.078391 3.0 1 [0] 1 1 46.387728 77.029548 58.112338 3.0 1 [0] 1 2 75.007554 82.087407 0.535442 3.0 1 [0] 1 Everything worked perfectly while I didn't try to use dataframes saved in separate files before. Because when I am trying to use the data after loading - I got errors about data types for the columns which contain arrays. (lvawe for example) is an array and when saved in csv the information about data type is lost. #saving the data to csv csv_filename = "ladder.csv" ladder.to_csv(csv_filename) So when loading a dataframe next time to use the data I can't get access to array elements like it should. Because as I understand data in this column is loaded like string. After loading the data through load_csv I get this for the data types: Unnamed: 0 int64 E float64 Gg float64 gnx2 float64 J float64 chs int64 lwave object J_ID int64 dtype: object How can I resolve this issue? How can I correctly load the data with the correct data type or maybe explicitly assign a data type to a column after loading?
[ "In the read_csv function, you can manually assign data types to your new columns. Pass in a dictionary of column name --> preferred data type.\ndata_type_mapping = {‘a’: np.float64, ‘b’: np.int32, ‘c’: ‘Int64’}\nmy_df = pd.read_csv('myfile.csv', dtypes = data_type_mapping)\n\nFrom pandas documentation:\n\nData type for data or columns. E.g. {‘a’: np.float64, ‘b’: np.int32, ‘c’: ‘Int64’} Use str or object together with suitable na_values settings to preserve and not interpret dtype. If converters are specified, they will be applied INSTEAD of dtype conversion.\n\n", "Question was resolved by the use of json.loads feature.\n#modifying the ladder using json\n\nmodified = ladder_df.lwave.apply(json.loads)\nladder_df['lwave'] = modified\n\n" ]
[ 0, 0 ]
[]
[]
[ "arrays", "dataframe", "python" ]
stackoverflow_0074476182_arrays_dataframe_python.txt
Q: How do I make a bot say something when someone enters my discord server I'm trying to make a discord bot say a certain message when it first joins a Discord Server, so when the bot first joins a Discord Server, it will say something along the lines of "Hello everyone....". I looked at a lot of sources but none seem to work. Can anyone help me make a bot say a certain message when it first joins a server? A: Heyo. To make your bot say something in a channel, you just use a client event. Code example: @client.event async def on_guild_join(guild): await channel.send("Wassup!") Please keep in mind that you need to define the channel variable. Remember that this requires Intents.guilds to be enabled! You can check all of this in the discord.py docs: https://discordpy.readthedocs.io/en/stable/api.html?highlight=event#discord.on_guild_join
How do I make a bot say something when someone enters my discord server
I'm trying to make a discord bot say a certain message when it first joins a Discord Server, so when the bot first joins a Discord Server, it will say something along the lines of "Hello everyone....". I looked at a lot of sources but none seem to work. Can anyone help me make a bot say a certain message when it first joins a server?
[ "Heyo.\nTo make your bot say something in a channel, you just use a client event.\nCode example:\n@client.event\nasync def on_guild_join(guild):\n await channel.send(\"Wassup!\")\n\nPlease keep in mind that you need to define the channel variable.\nRemember that this requires Intents.guilds to be enabled!\nYou can check all of this on the discord.py docs. https://discordpy.readthedocs.io/en/stable/api.html?highlight=event#discord.on_guild_join\n" ]
[ 0 ]
[]
[]
[ "discord", "discord.py", "python", "python_3.8" ]
stackoverflow_0074481446_discord_discord.py_python_python_3.8.txt
Q: How to import 2 separate files with the same name in the same python script Let's say that I have the following files with given paths: /home/project/folder1/common.py /home/project/folder2/common.py So, these files have the same name but they are in different folders. And I need to import both of these files in the same python script that is located in a separate path, as following: /home/project/folder3/abc.py If it was only 1 file I needed to import, I could do the following: import sys sys.path.append(r'/home/project/folder1') import common as c And then I could access e.g. constants from /home/project/folder1/common.py as c.MY_CONSTANT. But how can I import both /home/project/folder1/common.py and /home/project/folder1/common.py in the file /home/project/folder3/abc.py ? Please note that constants that I would like to access from abc.py may have the same names. With other words, MY_CONSTANT may exist in both common.py files. What I would like to achieve is the following (though I know that this is wrong syntax in Python): import /home/project/folder1/common as c1 import /home/project/folder2/common as c2 ... so that I can access both files with c1. and c2.. So, how can I import both files in the same Python script? A: It's a bit odd calling both files common.py, by their naming and placement they're anything but common :). But here you go. You need to make your abc.py script "see" your top-level project directory. Since it is in /home, adding the path to your home directory achieves that. import sys sys.path.append(r"/home") from project.folder1 import common as c1 from project.folder2 import common as c2
How to import 2 separate files with the same name in the same python script
Let's say that I have the following files with given paths: /home/project/folder1/common.py /home/project/folder2/common.py So, these files have the same name but they are in different folders. And I need to import both of these files in the same python script that is located in a separate path, as follows: /home/project/folder3/abc.py If it was only 1 file I needed to import, I could do the following: import sys sys.path.append(r'/home/project/folder1') import common as c And then I could access e.g. constants from /home/project/folder1/common.py as c.MY_CONSTANT. But how can I import both /home/project/folder1/common.py and /home/project/folder2/common.py in the file /home/project/folder3/abc.py ? Please note that constants that I would like to access from abc.py may have the same names. In other words, MY_CONSTANT may exist in both common.py files. What I would like to achieve is the following (though I know that this is wrong syntax in Python): import /home/project/folder1/common as c1 import /home/project/folder2/common as c2 ... so that I can access both files with c1. and c2.. So, how can I import both files in the same Python script?
[ "It's a bit odd calling both files common.py, by their naming and placement they're anything but common :). But here you go. You need to make your abc.py script \"see\" your top-level project directory. Since it is in /home, adding the path to your home directory achieves that.\n import sys\n sys.path.append(r\"/home\")\n\n from project.folder1 import common as c1\n from project.folder2 import common as c2\n\n" ]
[ 2 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074480614_python_python_3.x.txt
Q: sentry sdk custom performance integration for python app Sentry can track performance for celery tasks and API endpoints https://docs.sentry.io/product/performance/ I have custom script that are lunching by crone and do set of similar tasks I want to incorporated sentry_sdk into my script to get performance tracing of my tasks Any advise how to do it with https://getsentry.github.io/sentry-python/api.html#sentry_sdk.capture_event A: You don't need use capture_event I would suggest to use sentry_sdk.start_transaction instead. It also allows track your function performance. Look at my example from time import sleep from sentry_sdk import Hub, init, start_transaction init( dsn="dsn", traces_sample_rate=1.0, ) def sentry_trace(func): def wrapper(*args, **kwargs): transaction = Hub.current.scope.transaction if transaction: with transaction.start_child(op=func.__name__): return func(*args, **kwargs) else: with start_transaction(op=func.__name__, name=func.__name__): return func(*args, **kwargs) return wrapper @sentry_trace def b(): for i in range(1000): print(i) @sentry_trace def c(): sleep(2) print(1) @sentry_trace def a(): sleep(1) b() c() if __name__ == '__main__': a() After starting this code you can see basic info of transaction a with childs b and c
sentry sdk custom performance integration for python app
Sentry can track performance for celery tasks and API endpoints: https://docs.sentry.io/product/performance/ I have a custom script that is launched by cron and does a set of similar tasks. I want to incorporate sentry_sdk into my script to get performance tracing of my tasks. Any advice on how to do it with https://getsentry.github.io/sentry-python/api.html#sentry_sdk.capture_event
[ "You don't need use capture_event\nI would suggest to use sentry_sdk.start_transaction instead. It also allows track your function performance.\nLook at my example\nfrom time import sleep\nfrom sentry_sdk import Hub, init, start_transaction\n\ninit(\n dsn=\"dsn\",\n traces_sample_rate=1.0,\n)\n\n\ndef sentry_trace(func):\n def wrapper(*args, **kwargs):\n transaction = Hub.current.scope.transaction\n if transaction:\n with transaction.start_child(op=func.__name__):\n return func(*args, **kwargs)\n else:\n with start_transaction(op=func.__name__, name=func.__name__):\n return func(*args, **kwargs)\n\n return wrapper\n\n\n@sentry_trace\ndef b():\n for i in range(1000):\n print(i)\n\n\n@sentry_trace\ndef c():\n sleep(2)\n print(1)\n\n\n@sentry_trace\ndef a():\n sleep(1)\n b()\n c()\n\n\nif __name__ == '__main__':\n a()\n\nAfter starting this code you can see basic info of transaction a with childs b and c\n\n" ]
[ 3 ]
[]
[]
[ "performance", "python", "sentry" ]
stackoverflow_0074454587_performance_python_sentry.txt
Q: Unable to allocate array with shape and data type I'm facing an issue with allocating huge arrays in numpy on Ubuntu 18 while not facing the same issue on MacOS. I am trying to allocate memory for a numpy array with shape (156816, 36, 53806) with np.zeros((156816, 36, 53806), dtype='uint8') and while I'm getting an error on Ubuntu OS >>> import numpy as np >>> np.zeros((156816, 36, 53806), dtype='uint8') Traceback (most recent call last): File "<stdin>", line 1, in <module> numpy.core._exceptions.MemoryError: Unable to allocate array with shape (156816, 36, 53806) and data type uint8 I'm not getting it on MacOS: >>> import numpy as np >>> np.zeros((156816, 36, 53806), dtype='uint8') array([[[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], ..., [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]]], dtype=uint8) I've read somewhere that np.zeros shouldn't be really allocating the whole memory needed for the array, but only for the non-zero elements. Even though the Ubuntu machine has 64gb of memory, while my MacBook Pro has only 16gb. versions: Ubuntu os -> ubuntu mate 18 python -> 3.6.8 numpy -> 1.17.0 mac os -> 10.14.6 python -> 3.6.4 numpy -> 1.17.0 PS: also failed on Google Colab A: This is likely due to your system's overcommit handling mode. In the default mode, 0, Heuristic overcommit handling. Obvious overcommits of address space are refused. Used for a typical system. It ensures a seriously wild allocation fails while allowing overcommit to reduce swap usage. The root is allowed to allocate slightly more memory in this mode. This is the default. The exact heuristic used is not well explained here, but this is discussed more on Linux over commit heuristic and on this page. You can check your current overcommit mode by running $ cat /proc/sys/vm/overcommit_memory 0 In this case, you're allocating >>> 156816 * 36 * 53806 / 1024.0**3 282.8939827680588 ~282 GB and the kernel is saying well obviously there's no way I'm going to be able to commit that many physical pages to this, and it refuses the allocation. If (as root) you run: $ echo 1 > /proc/sys/vm/overcommit_memory This will enable the "always overcommit" mode, and you'll find that indeed the system will allow you to make the allocation no matter how large it is (within 64-bit memory addressing at least). I tested this myself on a machine with 32 GB of RAM. With overcommit mode 0 I also got a MemoryError, but after changing it back to 1 it works: >>> import numpy as np >>> a = np.zeros((156816, 36, 53806), dtype='uint8') >>> a.nbytes 303755101056 You can then go ahead and write to any location within the array, and the system will only allocate physical pages when you explicitly write to that page. 
So you can use this, with care, for sparse arrays. A: I had this same problem on Window's and came across this solution. So if someone comes across this problem in Windows the solution for me was to increase the pagefile size, as it was a Memory overcommitment problem for me too. Windows 8 On the Keyboard Press the WindowsKey + X then click System in the popup menu Tap or click Advanced system settings. You might be asked for an admin password or to confirm your choice On the Advanced tab, under Performance, tap or click Settings. Tap or click the Advanced tab, and then, under Virtual memory, tap or click Change Clear the Automatically manage paging file size for all drives check box. Under Drive [Volume Label], tap or click the drive that contains the paging file you want to change Tap or click Custom size, enter a new size in megabytes in the initial size (MB) or Maximum size (MB) box, tap or click Set, and then tap or click OK Reboot your system Windows 10 Press the Windows key Type SystemPropertiesAdvanced Click Run as administrator Under Performance, click Settings Select the Advanced tab Select Change... Uncheck Automatically managing paging file size for all drives Then select Custom size and fill in the appropriate size Press Set then press OK then exit from the Virtual Memory, Performance Options, and System Properties Dialog Reboot your system Note: I did not have the enough memory on my system for the ~282GB in this example but for my particular case this worked. EDIT From here the suggested recommendations for page file size: There is a formula for calculating the correct pagefile size. Initial size is one and a half (1.5) x the amount of total system memory. Maximum size is three (3) x the initial size. So let's say you have 4 GB (1 GB = 1,024 MB x 4 = 4,096 MB) of memory. The initial size would be 1.5 x 4,096 = 6,144 MB and the maximum size would be 3 x 6,144 = 18,432 MB. Some things to keep in mind from here: However, this does not take into consideration other important factors and system settings that may be unique to your computer. Again, let Windows choose what to use instead of relying on some arbitrary formula that worked on a different computer. Also: Increasing page file size may help prevent instabilities and crashing in Windows. However, a hard drive read/write times are much slower than what they would be if the data were in your computer memory. Having a larger page file is going to add extra work for your hard drive, causing everything else to run slower. Page file size should only be increased when encountering out-of-memory errors, and only as a temporary fix. A better solution is to adding more memory to the computer. A: I came across this problem on Windows too. The solution for me was to switch from a 32-bit to a 64-bit version of Python. Indeed, a 32-bit software, like a 32-bit CPU, can adress a maximum of 4 GB of RAM (2^32). So if you have more than 4 GB of RAM, a 32-bit version cannot take advantage of it. With a 64-bit version of Python (the one labeled x86-64 in the download page), the issue disappears. You can check which version you have by entering the interpreter. I, with a 64-bit version, now have: Python 3.7.5rc1 (tags/v3.7.5rc1:4082f600a5, Oct 1 2019, 20:28:14) [MSC v.1916 64 bit (AMD64)], where [MSC v.1916 64 bit (AMD64)] means "64-bit Python". 
Sources : Quora - memory error generated by large numpy array Stackoverflow : 32 or 64-bit version of Python A: In my case, adding a dtype attribute changed dtype of the array to a smaller type(from float64 to uint8), decreasing array size enough to not throw MemoryError in Windows(64 bit). from mask = np.zeros(edges.shape) to mask = np.zeros(edges.shape,dtype='uint8') A: Sometimes, this error pops up because of the kernel has reached its limit. Try to restart the kernel redo the necessary steps. A: change the data type to another one which uses less memory works. For me, I change the data type to numpy.uint8: data['label'] = data['label'].astype(np.uint8) A: I faced the same issue running pandas in a docker contain on EC2. I tried the above solution of allowing overcommit memory allocation via sysctl -w vm.overcommit_memory=1 (more info on this here), however this still didn't solve the issue. Rather than digging deeper into the memory allocation internals of Ubuntu/EC2, I started looking at options to parallelise the DataFrame, and discovered that using dask worked in my case: import dask.dataframe as dd df = dd.read_csv('path_to_large_file.csv') ... Your mileage may vary, and note that the dask API is very similar but not a complete like to like for pandas/numpy (e.g. you may need to make some code changes in places depending on what you're doing with the data). A: I was having this issue with numpy by trying to have image sizes of 600x600 (360K), I decided to reduce to 224x224 (~50k), a reduction in memory usage by a factor of 7. X_set = np.array(X_set).reshape(-1 , 600 * 600 * 3) is now X_set = np.array(X_set).reshape(-1 , 224 * 224 * 3) hope this helps
Unable to allocate array with shape and data type
I'm facing an issue with allocating huge arrays in numpy on Ubuntu 18 while not facing the same issue on MacOS. I am trying to allocate memory for a numpy array with shape (156816, 36, 53806) with np.zeros((156816, 36, 53806), dtype='uint8') and while I'm getting an error on Ubuntu OS >>> import numpy as np >>> np.zeros((156816, 36, 53806), dtype='uint8') Traceback (most recent call last): File "<stdin>", line 1, in <module> numpy.core._exceptions.MemoryError: Unable to allocate array with shape (156816, 36, 53806) and data type uint8 I'm not getting it on MacOS: >>> import numpy as np >>> np.zeros((156816, 36, 53806), dtype='uint8') array([[[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], ..., [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]]], dtype=uint8) I've read somewhere that np.zeros shouldn't be really allocating the whole memory needed for the array, but only for the non-zero elements. Even though the Ubuntu machine has 64gb of memory, while my MacBook Pro has only 16gb. versions: Ubuntu os -> ubuntu mate 18 python -> 3.6.8 numpy -> 1.17.0 mac os -> 10.14.6 python -> 3.6.4 numpy -> 1.17.0 PS: also failed on Google Colab
[ "This is likely due to your system's overcommit handling mode.\nIn the default mode, 0,\n\nHeuristic overcommit handling. Obvious overcommits of address space are refused. Used for a typical system. It ensures a seriously wild allocation fails while allowing overcommit to reduce swap usage. The root is allowed to allocate slightly more memory in this mode. This is the default.\n\nThe exact heuristic used is not well explained here, but this is discussed more on Linux over commit heuristic and on this page.\nYou can check your current overcommit mode by running\n$ cat /proc/sys/vm/overcommit_memory\n0\n\nIn this case, you're allocating\n>>> 156816 * 36 * 53806 / 1024.0**3\n282.8939827680588\n\n~282 GB and the kernel is saying well obviously there's no way I'm going to be able to commit that many physical pages to this, and it refuses the allocation.\nIf (as root) you run:\n$ echo 1 > /proc/sys/vm/overcommit_memory\n\nThis will enable the \"always overcommit\" mode, and you'll find that indeed the system will allow you to make the allocation no matter how large it is (within 64-bit memory addressing at least).\nI tested this myself on a machine with 32 GB of RAM. With overcommit mode 0 I also got a MemoryError, but after changing it back to 1 it works:\n>>> import numpy as np\n>>> a = np.zeros((156816, 36, 53806), dtype='uint8')\n>>> a.nbytes\n303755101056\n\nYou can then go ahead and write to any location within the array, and the system will only allocate physical pages when you explicitly write to that page. So you can use this, with care, for sparse arrays.\n", "I had this same problem on Window's and came across this solution. So if someone comes across this problem in Windows the solution for me was to increase the pagefile size, as it was a Memory overcommitment problem for me too.\nWindows 8\n\nOn the Keyboard Press the WindowsKey + X then click System in the popup menu\nTap or click Advanced system settings. You might be asked for an admin password or to confirm your choice\nOn the Advanced tab, under Performance, tap or click Settings.\nTap or click the Advanced tab, and then, under Virtual memory, tap or click Change\nClear the Automatically manage paging file size for all drives check box.\nUnder Drive [Volume Label], tap or click the drive that contains the paging file you want to change\nTap or click Custom size, enter a new size in megabytes in the initial size (MB) or Maximum size (MB) box, tap or click Set, and then tap or click OK\nReboot your system\n\nWindows 10\n\nPress the Windows key\nType SystemPropertiesAdvanced\nClick Run as administrator\nUnder Performance, click Settings\nSelect the Advanced tab\nSelect Change...\nUncheck Automatically managing paging file size for all drives\nThen select Custom size and fill in the appropriate size\nPress Set then press OK then exit from the Virtual Memory, Performance Options, and System Properties Dialog\nReboot your system\n\nNote: I did not have the enough memory on my system for the ~282GB in this example but for my particular case this worked.\nEDIT\nFrom here the suggested recommendations for page file size:\n\nThere is a formula for calculating the correct pagefile size. Initial size is one and a half (1.5) x the amount of total system memory. Maximum size is three (3) x the initial size. So let's say you have 4 GB (1 GB = 1,024 MB x 4 = 4,096 MB) of memory. 
The initial size would be 1.5 x 4,096 = 6,144 MB and the maximum size would be 3 x 6,144 = 18,432 MB.\n\nSome things to keep in mind from here:\n\nHowever, this does not take into consideration other important factors and system settings that may be unique to your computer. Again, let Windows choose what to use instead of relying on some arbitrary formula that worked on a different computer.\n\nAlso:\n\nIncreasing page file size may help prevent instabilities and crashing in Windows. However, a hard drive read/write times are much slower than what they would be if the data were in your computer memory. Having a larger page file is going to add extra work for your hard drive, causing everything else to run slower. Page file size should only be increased when encountering out-of-memory errors, and only as a temporary fix. A better solution is to adding more memory to the computer.\n\n", "I came across this problem on Windows too. The solution for me was to switch from a 32-bit to a 64-bit version of Python. Indeed, a 32-bit software, like a 32-bit CPU, can adress a maximum of 4 GB of RAM (2^32). So if you have more than 4 GB of RAM, a 32-bit version cannot take advantage of it.\nWith a 64-bit version of Python (the one labeled x86-64 in the download page), the issue disappears.\nYou can check which version you have by entering the interpreter. I, with a 64-bit version, now have:\nPython 3.7.5rc1 (tags/v3.7.5rc1:4082f600a5, Oct 1 2019, 20:28:14) [MSC v.1916 64 bit (AMD64)], where [MSC v.1916 64 bit (AMD64)] means \"64-bit Python\".\nSources :\n\nQuora - memory error generated by large numpy array\n\nStackoverflow : 32 or 64-bit version of Python\n\n\n", "In my case, adding a dtype attribute changed dtype of the array to a smaller type(from float64 to uint8), decreasing array size enough to not throw MemoryError in Windows(64 bit).\nfrom \nmask = np.zeros(edges.shape)\n\nto\nmask = np.zeros(edges.shape,dtype='uint8')\n\n", "Sometimes, this error pops up because of the kernel has reached its limit. Try to restart the kernel redo the necessary steps.\n", "change the data type to another one which uses less memory works. For me, I change the data type to numpy.uint8:\ndata['label'] = data['label'].astype(np.uint8)\n\n", "I faced the same issue running pandas in a docker contain on EC2. I tried the above solution of allowing overcommit memory allocation via sysctl -w vm.overcommit_memory=1 (more info on this here), however this still didn't solve the issue.\nRather than digging deeper into the memory allocation internals of Ubuntu/EC2, I started looking at options to parallelise the DataFrame, and discovered that using dask worked in my case:\nimport dask.dataframe as dd\ndf = dd.read_csv('path_to_large_file.csv')\n...\n\nYour mileage may vary, and note that the dask API is very similar but not a complete like to like for pandas/numpy (e.g. you may need to make some code changes in places depending on what you're doing with the data).\n", "I was having this issue with numpy by trying to have image sizes of 600x600 (360K), I decided to reduce to 224x224 (~50k), a reduction in memory usage by a factor of 7.\nX_set = np.array(X_set).reshape(-1 , 600 * 600 * 3)\nis now\nX_set = np.array(X_set).reshape(-1 , 224 * 224 * 3)\nhope this helps\n" ]
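Beyond the overcommit and pagefile settings above, two hedged alternatives avoid committing the ~283 GB as a dense in-RAM array at all: a disk-backed memmap (which needs the disk space, or a filesystem with sparse-file support, instead of RAM) or a genuinely sparse matrix when most entries stay zero.

import numpy as np
from scipy import sparse

# 1) Disk-backed array: only the touched pages cost real memory.
a = np.memmap("big_array.dat", dtype="uint8", mode="w+",
              shape=(156816, 36, 53806))
a[0, 0, :10] = 1

# 2) Sparse 2-D matrix over the flattened trailing dimensions: memory is
#    proportional to the number of non-zero entries.
s = sparse.lil_matrix((156816, 36 * 53806), dtype=np.uint8)
s[0, 5] = 1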
[ 179, 121, 44, 11, 11, 5, 0, 0 ]
[]
[]
[ "data_science", "numpy", "python" ]
stackoverflow_0057507832_data_science_numpy_python.txt
Q: How to label groups conditionally? I'm new to pandas and would like to know how to do the following: Given specific conditions, I would like to mark the whole group with a specific label rather than just the rows that meet the conditions. For example, if I have a DataFrame like this: import numpy as np import pandas as pd df = pd.DataFrame({"id": [1, 2, 3, 4, 5, 6, 7, 8], "process": ["pending", "finished", "finished", "finished", "finished", "finished", "finished", "pending"], "working_group": ["a", "a", "c", "d", "d", "f", "g", "g"], "size": [2, 2, 1, 2, 2, 1, 2, 2]}) conditions = [(df['size'] >= 2) & (df['process'].isin(["pending"]))] choices = ["not_done"] df['state'] = df['state'] = np.select(conditions, choices, default = "something_else") df: id process working_group size state 0 1 pending a 2 not_done 1 2 finished a 2 something_else 2 3 finished c 1 something_else 3 4 finished d 2 something_else 4 5 finished d 2 something_else 5 6 finished f 1 something_else 6 7 finished g 2 something_else 7 8 pending g 2 not_done However I would like the whole working_group marked as not_done when a individual task is pending, so for instance a & g should be marked as not_done. id process working_group size state 0 1 pending a 2 not_done 1 2 finished a 2 not_done 2 3 finished c 1 something_else 3 4 finished d 2 something_else 4 5 finished d 2 something_else 5 6 finished f 1 something_else 6 7 finished g 2 not_done 7 8 pending g 2 not_done A: A simple solution would be after you use np.select and create your 'state' column, to forward fill / backward fill per group? df['state'] = df.groupby(['working_group'])['state'].transform(lambda x: x.bfill().ffill()) id process working_group size state 0 1 pending a 2 not_done 1 2 finished a 2 not_done 2 3 finished c 1 NaN 3 4 finished d 2 NaN 4 5 finished d 2 NaN 5 6 finished f 1 NaN 6 7 finished g 2 not_done 7 8 pending g 2 not_done A: You can use: condition = df['size'].ge(2) & df['process'].isin(["pending"]) df['state'] = np.where(condition.groupby(df['working_group']).transform('any'), 'not_done', 'something_else') Or: condition = df['size'].ge(2) & df['process'].isin(["pending"]) df['state'] = np.where(df['working_group'].isin(df.loc[condition, 'working_group']), 'not_done', 'something_else') Output: id process working_group size state 0 1 pending a 2 not_done 1 2 finished a 2 not_done 2 3 finished c 1 something_else 3 4 finished d 2 something_else 4 5 finished d 2 something_else 5 6 finished f 1 something_else 6 7 finished g 2 not_done 7 8 pending g 2 not_done
How to label groups conditionally?
I'm new to pandas and would like to know how to do the following: Given specific conditions, I would like to mark the whole group with a specific label rather than just the rows that meet the conditions. For example, if I have a DataFrame like this: import numpy as np import pandas as pd df = pd.DataFrame({"id": [1, 2, 3, 4, 5, 6, 7, 8], "process": ["pending", "finished", "finished", "finished", "finished", "finished", "finished", "pending"], "working_group": ["a", "a", "c", "d", "d", "f", "g", "g"], "size": [2, 2, 1, 2, 2, 1, 2, 2]}) conditions = [(df['size'] >= 2) & (df['process'].isin(["pending"]))] choices = ["not_done"] df['state'] = df['state'] = np.select(conditions, choices, default = "something_else") df: id process working_group size state 0 1 pending a 2 not_done 1 2 finished a 2 something_else 2 3 finished c 1 something_else 3 4 finished d 2 something_else 4 5 finished d 2 something_else 5 6 finished f 1 something_else 6 7 finished g 2 something_else 7 8 pending g 2 not_done However I would like the whole working_group marked as not_done when a individual task is pending, so for instance a & g should be marked as not_done. id process working_group size state 0 1 pending a 2 not_done 1 2 finished a 2 not_done 2 3 finished c 1 something_else 3 4 finished d 2 something_else 4 5 finished d 2 something_else 5 6 finished f 1 something_else 6 7 finished g 2 not_done 7 8 pending g 2 not_done
[ "A simple solution would be after you use np.select and create your 'state' column, to forward fill / backward fill per group?\ndf['state'] = df.groupby(['working_group'])['state'].transform(lambda x: x.bfill().ffill())\n\n id process working_group size state\n0 1 pending a 2 not_done\n1 2 finished a 2 not_done\n2 3 finished c 1 NaN\n3 4 finished d 2 NaN\n4 5 finished d 2 NaN\n5 6 finished f 1 NaN\n6 7 finished g 2 not_done\n7 8 pending g 2 not_done\n\n", "You can use:\ncondition = df['size'].ge(2) & df['process'].isin([\"pending\"])\n\ndf['state'] = np.where(condition.groupby(df['working_group']).transform('any'), 'not_done', 'something_else')\n\nOr:\ncondition = df['size'].ge(2) & df['process'].isin([\"pending\"])\n\ndf['state'] = np.where(df['working_group'].isin(df.loc[condition, 'working_group']), 'not_done', 'something_else')\n\nOutput:\n id process working_group size state\n0 1 pending a 2 not_done\n1 2 finished a 2 not_done\n2 3 finished c 1 something_else\n3 4 finished d 2 something_else\n4 5 finished d 2 something_else\n5 6 finished f 1 something_else\n6 7 finished g 2 not_done\n7 8 pending g 2 not_done\n\n" ]
[ 1, 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074481307_pandas_python.txt
Q: Best way to read and process parquet files stored in GCP using pyspark I am new to using GCS. I am using it to store some parquet data files. Previously before GCS, I was storing all of my parquet files locally on my machine to test some code to read all of the parquet files into a data frame using Spark. Here is an example of what I had setup to work locally in python: source_path = '/mylocal/directory/files' appName = "PySpark Parquet Example" master = "local" # Create Spark session spark = SparkSession.builder \ .appName(appName) \ .master(master) \ .getOrCreate() # Read parquet files df = spark.read.parquet( source_path) Now that I have moved to storing all of the source data into a bucket in GCS, I am a little lost as to where to start with an equivalent method to accessing the files that are now stored in a folder within my GCS bucket. I have looked into gsutil and other libraries but am open to any suggestions as to the easiest way to go about this. Any suggestions? A: In my understanding you are trying to access the parquet files stored in gcs bucket from your local spark. If that's the case, please follow the below sequence of steps Download the gcs-hadoop-connector.jar and place it inside your jars folder in your local spark. Note: Please download the correct matching version from the below link (https://mvnrepository.com/artifact/com.google.cloud.bigdataoss/gcs-connector). Create and download the service account json file with storage access to read/write the data into the gcs bucket. Update your hadoop configuration in spark code as below import pyspark from pyspark.sql import SparkSession spark = SparkSession.builder.master("local[1]") \ .appName('readParquetData') \ .getOrCreate() conf =spark.sparkContext._jsc.hadoopConfiguration() conf.set("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem") conf.set("fs.AbstractFileSystem.gs.impl","com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS") conf.set("fs.gs.project.id", projectId) conf.set("fs.gs.auth.service.account.enable", "true") conf.set("fs.gs.auth.service.account.json.keyfile", secretLocation) conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false") conf.set("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem") Now you can read the data in gcs using spark using the below code df=spark.read.option("header",True).parquet(location) Complete Code : import pyspark from pyspark.sql import SparkSession spark = SparkSession.builder.master("local[1]") \ .appName('readParquetData') \ .getOrCreate() conf =spark.sparkContext._jsc.hadoopConfiguration() conf.set("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem") conf.set("fs.AbstractFileSystem.gs.impl","com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS") conf.set("fs.gs.project.id", projectId) conf.set("fs.gs.auth.service.account.enable", "true") conf.set("fs.gs.auth.service.account.json.keyfile", secretLocation) conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false") conf.set("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem") df=spark.read.option("header",True).parquet("gs://bucketName/folderName") df.show() Please approve the answer, if it helps to resolve your issue. Thanks
Best way to read and process parquet files stored in GCP using pyspark
I am new to using GCS. I am using it to store some parquet data files. Previously before GCS, I was storing all of my parquet files locally on my machine to test some code to read all of the parquet files into a data frame using Spark. Here is an example of what I had setup to work locally in python: source_path = '/mylocal/directory/files' appName = "PySpark Parquet Example" master = "local" # Create Spark session spark = SparkSession.builder \ .appName(appName) \ .master(master) \ .getOrCreate() # Read parquet files df = spark.read.parquet( source_path) Now that I have moved to storing all of the source data into a bucket in GCS, I am a little lost as to where to start with an equivalent method to accessing the files that are now stored in a folder within my GCS bucket. I have looked into gsutil and other libraries but am open to any suggestions as to the easiest way to go about this. Any suggestions?
[ "In my understanding you are trying to access the parquet files stored in gcs bucket from your local spark. If that's the case, please follow the below sequence of steps\n\nDownload the gcs-hadoop-connector.jar and place it inside your jars folder in your local spark. Note: Please download the correct matching version from the below link (https://mvnrepository.com/artifact/com.google.cloud.bigdataoss/gcs-connector).\nCreate and download the service account json file with storage access to read/write the data into the gcs bucket.\nUpdate your hadoop configuration in spark code as below\n\nimport pyspark\nfrom pyspark.sql import SparkSession\nspark = SparkSession.builder.master(\"local[1]\") \\\n .appName('readParquetData') \\\n .getOrCreate()\nconf =spark.sparkContext._jsc.hadoopConfiguration()\nconf.set(\"fs.gs.impl\", \"com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem\")\nconf.set(\"fs.AbstractFileSystem.gs.impl\",\"com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS\")\nconf.set(\"fs.gs.project.id\", projectId)\nconf.set(\"fs.gs.auth.service.account.enable\", \"true\")\nconf.set(\"fs.gs.auth.service.account.json.keyfile\", secretLocation)\nconf.set(\"mapreduce.fileoutputcommitter.marksuccessfuljobs\", \"false\")\nconf.set(\"fs.gs.impl\", \"com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem\")\n\n\n\nNow you can read the data in gcs using spark using the below code\n\ndf=spark.read.option(\"header\",True).parquet(location)\n\nComplete Code :\nimport pyspark\nfrom pyspark.sql import SparkSession\nspark = SparkSession.builder.master(\"local[1]\") \\\n .appName('readParquetData') \\\n .getOrCreate()\nconf =spark.sparkContext._jsc.hadoopConfiguration()\nconf.set(\"fs.gs.impl\", \"com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem\")\nconf.set(\"fs.AbstractFileSystem.gs.impl\",\"com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS\")\nconf.set(\"fs.gs.project.id\", projectId)\nconf.set(\"fs.gs.auth.service.account.enable\", \"true\")\nconf.set(\"fs.gs.auth.service.account.json.keyfile\", secretLocation)\nconf.set(\"mapreduce.fileoutputcommitter.marksuccessfuljobs\", \"false\")\nconf.set(\"fs.gs.impl\", \"com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem\")\n\ndf=spark.read.option(\"header\",True).parquet(\"gs://bucketName/folderName\")\ndf.show()\n\nPlease approve the answer, if it helps to resolve your issue. Thanks\n" ]
[ 0 ]
[]
[]
[ "gcs", "pyspark", "python" ]
stackoverflow_0074189863_gcs_pyspark_python.txt
Q: Django per-model authorization permissions Im facing a problem in Django with authorization permissions (a bit new to Django). I have a teacher, student and manager models. When a teacher sends a request to my API they should get different permissions than a student (ie, a student will see all of his own test grades, while a teacher can see all of its own class's students, and a manager can see everything). My questions are as follows: How do I make all of my models valid system users? I've tried adding models.OneToOneField(User, on_delete=models.CASCADE) But this requires creating a user, and then assigning it to the teacher. What I want is for the actual teacher "instance" to be the used user. How do I check which "type" is my user ? if they are a teacher, student or manager? do I need to go over all 3 tables every time a user sends a request, and figure out which they belong to ? doesnt sound right. I thought about creating a global 'user' table with a "type" column, but then I wont be able to add specific columns to my models (ie a student should have an avg grade while a teacher shouldn't) . Would appreciate any pointers in the right direction. A: When you need multiple user types, for example, in your case multiple roles are needed like a student, teacher, manager, etc… then you need a different role for all the persons to categorize. To have these roles you need to extend AbstractUser(for simple case) in your models.py for your User model also You can specify permissions in your models. Attaching permissions is done on the model's class Meta using the permissions field. You will be able to specify as many permissions as you need, but it must be in a tuple like below: from django.db import models from django.contrib.auth.models import AbstractUser from django.db.models.fields.related import ForeignKey from django.utils.translation import gettext as _ class Role(models.Model): STUDENT = 1 TEACHER = 2 MANAGER = 3 ROLE_CHOICES = ( (STUDENT, 'student'), (TEACHER, 'teacher'), (MANAGER, 'manager'), ) id = models.PositiveSmallIntegerField(choices=ROLE_CHOICES, primary_key=True) def __str__(self): return self.get_id_display() class User(AbstractUser): roles = models.ManyToManyField(Role) username = models.CharField(max_length = 50, blank = True, null = True, unique = True) email = models.EmailField(_('email address'), unique = True) native_name = models.CharField(max_length = 5) phone_no = models.CharField(max_length = 10) USERNAME_FIELD = 'email' REQUIRED_FIELDS = ['username', 'first_name', 'last_name'] def __str__(self): return "{}".format(self.email) class Student(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE, primary_key=True, related_name='students') sample_field_name = models.CharField(max_length = 50, blank = True, null = True) class Meta: permissions = (("sample_permission", "can change sth of sth"),) class Teacher(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE, primary_key=True, related_name='teachers') sample_field_name = models.CharField(max_length = 50, blank = True, null = True) class Meta: permissions = (("sample_permission", "can change sth in sth"),) class Manager(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE, primary_key=True, related_name='managers') sample_field_name = models.CharField(max_length = 50, blank = True, null = True) class Meta: permissions = (("sample_permission", "can change sth in sth"),) After that you should have your permissions for your views and Adding permissions to 
restrict a function to only users that have that particular permission can be done by using a Django built-in decorator, permission_required for function-based views:: from django.contrib.auth.decorators import permission_required @permission_required('students.sample_permission') def student_sample_view(request): """Raise permission denied exception or redirect user""" And if you are using a class-based view, you just need to use a mixin, PermissionRequiredMixin: from django.contrib.auth.mixins import PermissionRequiredMixin from django.views.generic import ListView class SampleListView(PermissionRequiredMixin, ListView): permission_required = 'students.sample_permission' # Or multiple permissions permission_required = ('students.sample_permission', 'teachers.other_sample_permission') This was one way you can manage multiple roles in your Django project, you can also find more ways in below blogs and references: How to Implement Multiple User Types with Django Managing User Permissions in Django Supporting Multiple Roles Using Django’s User Model Django Roles, Groups and Permissions Introduction django-multiple-user-types-example GitHub repository
Django per-model authorization permissions
I'm facing a problem in Django with authorization permissions (a bit new to Django). I have teacher, student and manager models. When a teacher sends a request to my API they should get different permissions than a student (i.e., a student will see all of their own test grades, while a teacher can see all of their own class's students, and a manager can see everything). My questions are as follows: How do I make all of my models valid system users? I've tried adding models.OneToOneField(User, on_delete=models.CASCADE) But this requires creating a user, and then assigning it to the teacher. What I want is for the actual teacher "instance" to be the user itself. How do I check which "type" my user is? Whether they are a teacher, student or manager? Do I need to go over all 3 tables every time a user sends a request, and figure out which one they belong to? That doesn't sound right. I thought about creating a global 'user' table with a "type" column, but then I won't be able to add specific columns to my models (i.e. a student should have an avg grade while a teacher shouldn't). Would appreciate any pointers in the right direction.
[ "When you need multiple user types, for example, in your case multiple roles are needed like a student, teacher, manager, etc… then you need a different role for all the persons to categorize.\nTo have these roles you need to extend AbstractUser(for simple case) in your models.py for your User model also You can specify permissions in your models. Attaching permissions is done on the model's class Meta using the permissions field. You will be able to specify as many permissions as you need, but it must be in a tuple like below:\nfrom django.db import models\nfrom django.contrib.auth.models import AbstractUser\nfrom django.db.models.fields.related import ForeignKey\nfrom django.utils.translation import gettext as _\n\n\nclass Role(models.Model):\n STUDENT = 1\n TEACHER = 2\n MANAGER = 3\n ROLE_CHOICES = (\n (STUDENT, 'student'),\n (TEACHER, 'teacher'),\n (MANAGER, 'manager'),\n )\n\n id = models.PositiveSmallIntegerField(choices=ROLE_CHOICES, primary_key=True)\n\n def __str__(self):\n return self.get_id_display()\n\nclass User(AbstractUser):\n roles = models.ManyToManyField(Role)\n username = models.CharField(max_length = 50, blank = True, null = True, unique = True)\n email = models.EmailField(_('email address'), unique = True)\n native_name = models.CharField(max_length = 5)\n phone_no = models.CharField(max_length = 10)\n USERNAME_FIELD = 'email'\n REQUIRED_FIELDS = ['username', 'first_name', 'last_name']\n def __str__(self):\n return \"{}\".format(self.email)\n\nclass Student(models.Model):\n user = models.OneToOneField(User, on_delete=models.CASCADE, primary_key=True, related_name='students')\n sample_field_name = models.CharField(max_length = 50, blank = True, null = True)\n\n class Meta:\n permissions = ((\"sample_permission\", \"can change sth of sth\"),)\n\n\nclass Teacher(models.Model):\n user = models.OneToOneField(User, on_delete=models.CASCADE, primary_key=True, related_name='teachers')\n sample_field_name = models.CharField(max_length = 50, blank = True, null = True)\n\n class Meta:\n permissions = ((\"sample_permission\", \"can change sth in sth\"),)\n\nclass Manager(models.Model):\n user = models.OneToOneField(User, on_delete=models.CASCADE, primary_key=True, related_name='managers')\n sample_field_name = models.CharField(max_length = 50, blank = True, null = True)\n\n class Meta:\n permissions = ((\"sample_permission\", \"can change sth in sth\"),)\n\nAfter that you should have your permissions for your views and Adding permissions to restrict a function to only users that have that particular permission can be done by using a Django built-in decorator, permission_required for function-based views::\nfrom django.contrib.auth.decorators import permission_required\n\n@permission_required('students.sample_permission')\ndef student_sample_view(request):\n \"\"\"Raise permission denied exception or redirect user\"\"\"\n\nAnd if you are using a class-based view, you just need to use a mixin, PermissionRequiredMixin:\nfrom django.contrib.auth.mixins import PermissionRequiredMixin\nfrom django.views.generic import ListView\n\nclass SampleListView(PermissionRequiredMixin, ListView):\n permission_required = 'students.sample_permission'\n # Or multiple permissions\n permission_required = ('students.sample_permission', 'teachers.other_sample_permission')\n\n\nThis was one way you can manage multiple roles in your Django project,\nyou can also find more ways in below blogs and references:\n\nHow to Implement Multiple User Types with Django\n\nManaging User Permissions in 
Django\n\nSupporting Multiple Roles Using Django’s User Model\n\nDjango Roles, Groups and Permissions Introduction\n\ndjango-multiple-user-types-example GitHub repository\n\n\n" ]
[ 2 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074481519_django_python.txt
Q: Find elements between two tags in a list Language: Python 3.4 OS: Windows 8.1 I have some lists like the following: a = ['text1', 'text2', 'text3','text4','text5'] b = ['text1', 'text2', 'text3','text4','New_element', 'text5'] What is the simplest way to find the elements between two tags in a list? I want to be able to get it even if the lists and tags have variable number of elements or variable length. Ex: get elements between text1 and text4 or text1 or text5, etc. Or get the elements between text1 and text5 that has longer length. I tried using regular expressions like: re.findall(r'text1(.*?)text5', a) This will give me an error I guess because you can only use this in a string but not lists. A: To get the location of an element in a list use index(). Then use the discovered index to create a slice of the list like: Code: print(b[b.index('text3')+1:b.index('text5')]) Results: ['text4', 'New_element'] A: You can use the list.index method to find the first occurrence of each of your tags, then slice the list to get the value between the indexes. def find_between_tags(lst, start_tag, end_tag): start_index = lst.index(start_tag) end_index = lst.index(end_tag, start_index) return lst[start_index + 1: end_index] If either of the tags is not in the list (or if the end tag only occurs before the start tag), one of the index calls will raise a ValueError. You could suppress the exception if you want to do something else, but just letting the caller deal with it seems like a reasonable option to me, so I've left the exception uncaught. If the tags might occur in this list multiple times, you could extend the logic of the function above to find all of them. For this you'll want to use the start argument to list.index, which will tell it not to look at values before the previous end tag. def find_all_between_tags(lst, start_tag, end_tag): search_from = 0 try: while True: start_index = lst.index(start_tag, search_from) end_index = lst.index(end_tag, start_index + 1) yield lst[start_index + 1:end_index] search_from = end_index + 1 except ValueError: pass This generator does suppress the ValueError, since it keeps on searching until it can't find another pair of tags. If the tags don't exist anywhere in the list, the generator will be empty, but it won't raise any exception (other than StopIteration). A: You can get the items between the values by utilizing the index function to search for the index of both objects in the list. Be sure to add one to the index of the first object so it won't be included in the result. See my code below: def get_sublist_between(e1, e2, li): return li[li.index(e1) + 1:li.index(e2)]
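A short usage sketch of the two functions defined in the second answer above, using the example list from the question:
b = ['text1', 'text2', 'text3', 'text4', 'New_element', 'text5']

print(find_between_tags(b, 'text3', 'text5'))
# ['text4', 'New_element']

print(list(find_all_between_tags(b, 'text1', 'text5')))
# [['text2', 'text3', 'text4', 'New_element']]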
Find elements between two tags in a list
Language: Python 3.4 OS: Windows 8.1 I have some lists like the following: a = ['text1', 'text2', 'text3','text4','text5'] b = ['text1', 'text2', 'text3','text4','New_element', 'text5'] What is the simplest way to find the elements between two tags in a list? I want to be able to get it even if the lists and tags have variable number of elements or variable length. Ex: get elements between text1 and text4 or text1 or text5, etc. Or get the elements between text1 and text5 that has longer length. I tried using regular expressions like: re.findall(r'text1(.*?)text5', a) This will give me an error I guess because you can only use this in a string but not lists.
[ "To get the location of an element in a list use index(). Then use the discovered index to create a slice of the list like:\nCode:\nprint(b[b.index('text3')+1:b.index('text5')])\n\nResults:\n['text4', 'New_element']\n\n", "You can use the list.index method to find the first occurrence of each of your tags, then slice the list to get the value between the indexes.\ndef find_between_tags(lst, start_tag, end_tag):\n start_index = lst.index(start_tag)\n end_index = lst.index(end_tag, start_index)\n return lst[start_index + 1: end_index]\n\nIf either of the tags is not in the list (or if the end tag only occurs before the start tag), one of the index calls will raise a ValueError. You could suppress the exception if you want to do something else, but just letting the caller deal with it seems like a reasonable option to me, so I've left the exception uncaught.\nIf the tags might occur in this list multiple times, you could extend the logic of the function above to find all of them. For this you'll want to use the start argument to list.index, which will tell it not to look at values before the previous end tag.\ndef find_all_between_tags(lst, start_tag, end_tag):\n search_from = 0\n try:\n while True:\n start_index = lst.index(start_tag, search_from)\n end_index = lst.index(end_tag, start_index + 1)\n yield lst[start_index + 1:end_index]\n search_from = end_index + 1\n except ValueError:\n pass\n\nThis generator does suppress the ValueError, since it keeps on searching until it can't find another pair of tags. If the tags don't exist anywhere in the list, the generator will be empty, but it won't raise any exception (other than StopIteration).\n", "You can get the items between the values by utilizing the index function to search for the index of both objects in the list. Be sure to add one to the index of the first object so it won't be included in the result. See my code below:\ndef get_sublist_between(e1, e2, li): \n return li[li.index(e1) + 1:li.index(e2)]\n\n" ]
[ 4, 1, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0043422243_python_python_3.x.txt
Q: Python for loop to read a JSON file I am trying to understand a Python for loop that is implemented as below
samples = [(objectinstance.get('sample', record['token'])['timestamp'], record) for record in objectinstance.scene]

'scene' is a JSON file with a list of dictionaries, and each dictionary entry refers, through the value of its token, to another JSON file called 'sample' that contains a 'timestamp' key among other keys. Although I can roughly understand it at a high level, I am not able to decipher how 'record' is being used here together with the output of the object's get method. I am thinking this is some sort of list comprehension, but I am not sure. Can you help me understand this and also point me to any reference to understand it better? Thank you
A: In non-comprehension form it is as below
samples = []
for record in objectinstance.scene:
    data = (
        objectinstance.get('sample', record['token'])['timestamp'],
        record
    )
    samples.append(data)

objectinstance.get('sample', record['token']) looks like a method call that takes two arguments and returns a JSON/dictionary
{<key1>:<value1>, ... ,'timestamp':<somedata>, ...<keyn>:<valuen>}
and each record is being paired with the timestamp value from this call.
This objectinstance.get can be seen as
class Tmp:
    def __init__(self):
        self.scene = [{'token': 'a'}, {'token':'b'}, {'token':'c'}]
    def get(self, arg1, arg2):
        # some calculation that builds the dictionary for this token
        return result

objectinstance = Tmp()

samples = []

for record in objectinstance.scene:
    object_instance_data = objectinstance.get('sample', record['token'])
    data = object_instance_data['timestamp']
    samples.append((data, record))

So, as you can see, there is a method named get on the object's class which takes 2 arguments and uses them in a calculation to give you a dict/JSON result that has 'timestamp' as one of its keys.
A: Yes, you are right, it is a list comprehension. Schematically, it is something like this:
samples = [(timestamp, item) for item in list_of_dicts]

The result will be a list of tuples, where objectinstance.get('sample', record['token'])['timestamp'] is the first entry and record is the second.
Moreover, objectinstance.get('key', default) gets 'key' from a dict and returns the default value if the key is not present, cf. the documentation at python.org. The result of the get method here seems to be a dict as well, from which the value of the key ['timestamp'] is retrieved.
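A minimal, self-contained illustration of the same pattern with plain dictionaries standing in for the asker's object (all data here is made up for demonstration only):
sample_table = {'a': {'timestamp': 100}, 'b': {'timestamp': 200}}
scene = [{'token': 'a'}, {'token': 'b'}]

def get(table_name, token):
    # stands in for objectinstance.get; table_name is ignored in this toy version
    return sample_table[token]

samples = [(get('sample', record['token'])['timestamp'], record) for record in scene]
print(samples)  # [(100, {'token': 'a'}), (200, {'token': 'b'})]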
Python for loop to read a JSON file
I am trying to understand a Python for loop that is implemented as below samples= [(objectinstance.get('sample', record['token'])['timestamp'], record) for record in objectinstance.scene] 'scene' is a JSON file with list of dictionaries and each dictionary entry refers through values of the token to another JSON file called 'sample' containing 'timestamp' key among other keys. Although I can roughly understand at a high level, I am not able to decipher how the 'record' is being used here as the output of object's get method. I am thinking this is some sort of list comprehension, but not sure. Can you help understand this and also point me any reference to understand this better? thank you
[ "in non comprehension form it is as below\nsamples = []\nfor record in objectinstance.scene:\n data = (\n objectinstance.get('sample', record['token'])['timestamp'],\n record\n )\n samples.append(data)\n\nobjectinstance.get('sample', record['token']) this looks like a method, which took two arguments and return a json/dictionary\n{<key1>:<value1>, ... ,'timestmap':<somedata>, ...<keyn>:<valuen>}\nand you are saving record with the timestamp value of this call.\nit this objectinstance.get can be seen as\nclass Tmp:\n def __init__(self):\n self.scene = [{'token': 'a'}, {'token':'b'}, {'token':'c'}] \n def get(self, arg1, arg2):\n # calculation\n \n return result \n \nobjectinstance = Tmp()\n\nsamples =[]\n\nfor record in objectinstance.scene:\n object_instance_data = objectinstance.get('sample', record['token'])\n data = object_instance_data['timestamp']\n samples.append(data)\n\nso as you can see, there is method in the object class name get, which take 2 arguments, and use them calculation to provide you result in dict/json which as timestamp as key value\n", "Yes, you are right, it is a list comprehension. Schematically, it is something like this:\nsamples = [(timestamp, item) for item in list_of_dicts]\n\nThe result will be a list of touples, where (objectinstance.get('sample', record['token'])['timestamp'] is the first entry and record is the second.\nMoreover, objectinstance.get('key', default) gets 'key' from a dict, if not present returns the default value, cf. documentation at python.org. The result of the get method seems to be a dict as well, from which the value of key ['timestamp'] is retrieved.\n" ]
[ 0, 0 ]
[]
[]
[ "for_loop", "json", "list_comprehension", "python" ]
stackoverflow_0074481454_for_loop_json_list_comprehension_python.txt
Q: Pandas Dataframe : How to flatten nested dictionaries inside a list into new rows I am trying to flatten API response. This is the response data = [{ "id": 1, "status": "Public", "Options": [ { "id": 8, "pId": 9 }, { "id": 10, "pId": 11 } ] }, { "id": 2, "status": "Public", "Options": [ { "id": 12, "pId": 13 }, { "id": 14, "pId": 15 } ] } ] I am trying to do this(applying ast literal eval, df.pop and json normalize). And then i am concatinating the results def pop(child_df, column_value): child_df = child_df.dropna(subset=[column_value]) if isinstance(child_df[column_value][0], str): print("yes") child_df[column_value] = child_df[column_value].apply(ast.literal_eval) normalized_json = [json_normalize(x) for x in child_df.pop(column_value)] expanded_child_df = child_df.join(pd.concat(normalized_json, ignore_index=True, sort=False).add_prefix(column_value + '_')) expanded_child_df.columns = [str(col).replace('\r','') for col in expanded_child_df.columns] expanded_child_df.columns = map(str.lower, expanded_child_df.columns) return expanded_child_df df = pd.DataFrame.from_dict(data) df2 = pop(df,'Options') This is the output i am getting id status options_id options_pid 0 1 Public 8 9 1 2 Public 10 11 But the code is skipping some values inside the Options list. This is the expected output id status options_id options_pid 0 1 Public 8 9 1 1 Public 10 11 2 2 Public 12 13 3 2 Public 14 15 What am i missing here ? A: you can use: df=pd.json_normalize(data).explode('Options') df=df.join(df['Options'].apply(pd.Series).add_prefix('options_')).drop(['Options'],axis=1).drop_duplicates() print(df) ''' id status optionsid optionspId 0 1 Public 8 9 0 1 Public 10 11 1 2 Public 12 13 1 2 Public 14 15 ''' A: df = pd.json_normalize(data, record_path="Options", meta=['id','status'], record_prefix='options.') A: df = pd.json_normalize(data).explode('Options') tmp= df['Options'].apply(pd.Series) df = pd.concat([df[['id', 'status']], tmp], axis=1) print(df)
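Putting the record_path answer above into a complete, runnable form (the column names follow from record_prefix and meta):
import pandas as pd

data = [{"id": 1, "status": "Public", "Options": [{"id": 8, "pId": 9}, {"id": 10, "pId": 11}]},
        {"id": 2, "status": "Public", "Options": [{"id": 12, "pId": 13}, {"id": 14, "pId": 15}]}]

df = pd.json_normalize(data, record_path="Options", meta=["id", "status"], record_prefix="options_")
print(df)
# one row per option, with columns: options_id, options_pId, id, status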
Pandas Dataframe : How to flatten nested dictionaries inside a list into new rows
I am trying to flatten API response. This is the response data = [{ "id": 1, "status": "Public", "Options": [ { "id": 8, "pId": 9 }, { "id": 10, "pId": 11 } ] }, { "id": 2, "status": "Public", "Options": [ { "id": 12, "pId": 13 }, { "id": 14, "pId": 15 } ] } ] I am trying to do this(applying ast literal eval, df.pop and json normalize). And then i am concatinating the results def pop(child_df, column_value): child_df = child_df.dropna(subset=[column_value]) if isinstance(child_df[column_value][0], str): print("yes") child_df[column_value] = child_df[column_value].apply(ast.literal_eval) normalized_json = [json_normalize(x) for x in child_df.pop(column_value)] expanded_child_df = child_df.join(pd.concat(normalized_json, ignore_index=True, sort=False).add_prefix(column_value + '_')) expanded_child_df.columns = [str(col).replace('\r','') for col in expanded_child_df.columns] expanded_child_df.columns = map(str.lower, expanded_child_df.columns) return expanded_child_df df = pd.DataFrame.from_dict(data) df2 = pop(df,'Options') This is the output i am getting id status options_id options_pid 0 1 Public 8 9 1 2 Public 10 11 But the code is skipping some values inside the Options list. This is the expected output id status options_id options_pid 0 1 Public 8 9 1 1 Public 10 11 2 2 Public 12 13 3 2 Public 14 15 What am i missing here ?
[ "you can use:\ndf=pd.json_normalize(data).explode('Options')\ndf=df.join(df['Options'].apply(pd.Series).add_prefix('options_')).drop(['Options'],axis=1).drop_duplicates()\nprint(df)\n'''\n id status optionsid optionspId\n0 1 Public 8 9\n0 1 Public 10 11\n1 2 Public 12 13\n1 2 Public 14 15\n'''\n\n", "df = pd.json_normalize(data, record_path=\"Options\", meta=['id','status'], record_prefix='options.')\n\n", "df = pd.json_normalize(data).explode('Options')\ntmp= df['Options'].apply(pd.Series)\ndf = pd.concat([df[['id', 'status']], tmp], axis=1)\nprint(df)\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074481315_pandas_python.txt
Q: Python Pymoo - get an import error when copy and paste tutorial code I am a Python beginner. Trying to follow a getting started tutorial of a multi-objective optimization algoritm https://pymoo.org/getting_started/part_2.html I have installed pymoo according to the installation instructions: pip install -U pymoo Everything works fine up to the paragraph Define a Termination Criterion I imput the code: from pymoo import get_termination ERROR ImportError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_10384/2370239780.py in <module> 1 #Define a Termination Criterion 2 ----> 3 from pymoo import get_termination 4 ImportError: cannot import name 'get_termination' from 'pymoo' (C:\Users\musae\anaconda3\lib\site-packages\pymoo\__init__.py) The same things happens when I imput from pymoo.problems import get_problem from that example of NSGA2 algorithm https://pymoo.org/algorithms/moo/nsga2.html#nb-nsga2 ERROR ImportError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_24508/113033208.py in <module> 1 from pymoo.algorithms.moo.nsga2 import NSGA2 ----> 2 from pymoo.problems import get_problem 3 from pymoo.optimize import minimize 4 from pymoo.visualization.scatter import Scatter 5 ImportError: cannot import name 'get_problem' from 'pymoo.problems' (C:\Users\musae\anaconda3\lib\site-packages\pymoo\problems\__init__.py) Have I installed it wrong? Why do I get those errors? Thank you! A: Instead of from pymoo.problems import get_problem use from pymoo.problems.multi import * . And for get_problem use problem instead. As an example: get_problem("zdt1").pareto_front() Should be converted to: ZDT1().pareto_front()
Python Pymoo - get an import error when copy and paste tutorial code
I am a Python beginner. Trying to follow a getting started tutorial of a multi-objective optimization algoritm https://pymoo.org/getting_started/part_2.html I have installed pymoo according to the installation instructions: pip install -U pymoo Everything works fine up to the paragraph Define a Termination Criterion I imput the code: from pymoo import get_termination ERROR ImportError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_10384/2370239780.py in <module> 1 #Define a Termination Criterion 2 ----> 3 from pymoo import get_termination 4 ImportError: cannot import name 'get_termination' from 'pymoo' (C:\Users\musae\anaconda3\lib\site-packages\pymoo\__init__.py) The same things happens when I imput from pymoo.problems import get_problem from that example of NSGA2 algorithm https://pymoo.org/algorithms/moo/nsga2.html#nb-nsga2 ERROR ImportError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_24508/113033208.py in <module> 1 from pymoo.algorithms.moo.nsga2 import NSGA2 ----> 2 from pymoo.problems import get_problem 3 from pymoo.optimize import minimize 4 from pymoo.visualization.scatter import Scatter 5 ImportError: cannot import name 'get_problem' from 'pymoo.problems' (C:\Users\musae\anaconda3\lib\site-packages\pymoo\problems\__init__.py) Have I installed it wrong? Why do I get those errors? Thank you!
[ "Instead of from pymoo.problems import get_problem use from pymoo.problems.multi import * .\nAnd for get_problem use problem instead. As an example:\nget_problem(\"zdt1\").pareto_front()\nShould be converted to:\nZDT1().pareto_front()\n" ]
[ 0 ]
[]
[]
[ "optimization", "pymoo", "python" ]
stackoverflow_0074064643_optimization_pymoo_python.txt
Q: xpath error, XPath position >= 1 expected I am trying to parse a xml from a string. Below is the xml in the string. <xc:Application class="bril::lumistore::Application" id="111" instance="0" logpolicy="inherit" network="local" service="lumistore"> <ns4:properties xsi:type="soapenc:Struct"> <ns4:datasources soapenc:arrayType="xsd:ur-type[1]" xsi:type="soapenc:Array"> <ns4:item soapenc:position="[0]" xsi:type="soapenc:Struct"> <ns4:properties xsi:type="soapenc:Struct"> <ns4:bus xsi:type="xsd:string">brildata</ns4:bus> <ns4:topics xsi:type="xsd:string">tcds,beam,bestlumi,bcm1fagghist,bcm1flumi,bcm1fbkg,pltaggzero,pltlumizero,hfoclumi,hfOcc1Agg,bunchmask,ScopeData,atlasbeam,hfetlumi,hfEtSumAgg,hfafterglowfrac,hfEtPedestal,dtlumi,bunchlength,radmonraw,radmonflux,radmonlumi,pltslinklumi,bcm1futca_bkg12,bcm1futca_background,bcm1futcalumi,remuslumi,remuslumi_5514,remuslumi_5515,remuslumi_5516,remuslumi_5517,bcm1futca_agg_hist</ns4:topics> </ns4:properties> </ns4:item> </ns4:datasources> <ns4:maxstalesec xsi:type="xsd:unsignedInt">30</ns4:maxstalesec> <ns4:checkagesec xsi:type="xsd:unsignedInt">10</ns4:checkagesec> <ns4:maxsizeMB xsi:type="xsd:unsignedInt">120</ns4:maxsizeMB> <ns4:fileformat xsi:type="xsd:string">hd5</ns4:fileformat> <ns4:filepath xsi:type="xsd:string">/scratch/central_current</ns4:filepath> <ns4:nrowperwbuf xsi:type="xsd:unsignedInt">102</ns4:nrowperwbuf> <ns4:workinterval xsi:type="xsd:unsignedInt">50000</ns4:workinterval> </ns4:properties> </xc:Application> This is the code i use to parse the string import xml.etree.ElementTree as ET root = ET.fromstring(xml) node = root.find(field['xpath'], ns) where field['xpath'] = ".//xc:Application[@class='bril::lumistore::Application']/lst:properties/lst:datasources/lst:item[0]/lst:properties/lst:topics" and ns = {'xc': 'http://path/XMLConfiguration-30', 'lst': 'urn:application-urn:bril::lumistore::Application'} I get the following error XPath position >= 1 expected Traceback (most recent call last): File "/usr/lib64/python3.6/xml/etree/ElementPath.py", line 263, in iterfind selector = _cache[cache_key] KeyError: (".//xc:Application[@class='bril::lumistore::Application']/lst:properties/lst:datasources/lst:item[0]/lst:properties/lst:topics", (('lst', 'urn:application-urn:bril::lumistore::Application'), ('xc', 'http://path/XMLConfiguration-30'))) During handling of the above exception, another exception occurred: SyntaxError: XPath position >= 1 expected Any help is much appreciated Note: Usin Python 2.7 seems to work fine, not with Python 3.6 though. A: Positions in XPath start at 1; not 0. So the positional predicate [0] in: lst:item[0] isn't going to select anything. If you want to select the first lst:item child of lst:datasources, use: lst:item[1]
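A minimal sketch of the corrected lookup, reusing the root and ns objects from the question and only changing the predicate from [0] to [1]:
xpath = (".//xc:Application[@class='bril::lumistore::Application']"
         "/lst:properties/lst:datasources/lst:item[1]"
         "/lst:properties/lst:topics")
node = root.find(xpath, ns)
if node is not None:
    print(node.text)  # the comma-separated topics string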
xpath error, XPath position >= 1 expected
I am trying to parse a xml from a string. Below is the xml in the string. <xc:Application class="bril::lumistore::Application" id="111" instance="0" logpolicy="inherit" network="local" service="lumistore"> <ns4:properties xsi:type="soapenc:Struct"> <ns4:datasources soapenc:arrayType="xsd:ur-type[1]" xsi:type="soapenc:Array"> <ns4:item soapenc:position="[0]" xsi:type="soapenc:Struct"> <ns4:properties xsi:type="soapenc:Struct"> <ns4:bus xsi:type="xsd:string">brildata</ns4:bus> <ns4:topics xsi:type="xsd:string">tcds,beam,bestlumi,bcm1fagghist,bcm1flumi,bcm1fbkg,pltaggzero,pltlumizero,hfoclumi,hfOcc1Agg,bunchmask,ScopeData,atlasbeam,hfetlumi,hfEtSumAgg,hfafterglowfrac,hfEtPedestal,dtlumi,bunchlength,radmonraw,radmonflux,radmonlumi,pltslinklumi,bcm1futca_bkg12,bcm1futca_background,bcm1futcalumi,remuslumi,remuslumi_5514,remuslumi_5515,remuslumi_5516,remuslumi_5517,bcm1futca_agg_hist</ns4:topics> </ns4:properties> </ns4:item> </ns4:datasources> <ns4:maxstalesec xsi:type="xsd:unsignedInt">30</ns4:maxstalesec> <ns4:checkagesec xsi:type="xsd:unsignedInt">10</ns4:checkagesec> <ns4:maxsizeMB xsi:type="xsd:unsignedInt">120</ns4:maxsizeMB> <ns4:fileformat xsi:type="xsd:string">hd5</ns4:fileformat> <ns4:filepath xsi:type="xsd:string">/scratch/central_current</ns4:filepath> <ns4:nrowperwbuf xsi:type="xsd:unsignedInt">102</ns4:nrowperwbuf> <ns4:workinterval xsi:type="xsd:unsignedInt">50000</ns4:workinterval> </ns4:properties> </xc:Application> This is the code i use to parse the string import xml.etree.ElementTree as ET root = ET.fromstring(xml) node = root.find(field['xpath'], ns) where field['xpath'] = ".//xc:Application[@class='bril::lumistore::Application']/lst:properties/lst:datasources/lst:item[0]/lst:properties/lst:topics" and ns = {'xc': 'http://path/XMLConfiguration-30', 'lst': 'urn:application-urn:bril::lumistore::Application'} I get the following error XPath position >= 1 expected Traceback (most recent call last): File "/usr/lib64/python3.6/xml/etree/ElementPath.py", line 263, in iterfind selector = _cache[cache_key] KeyError: (".//xc:Application[@class='bril::lumistore::Application']/lst:properties/lst:datasources/lst:item[0]/lst:properties/lst:topics", (('lst', 'urn:application-urn:bril::lumistore::Application'), ('xc', 'http://path/XMLConfiguration-30'))) During handling of the above exception, another exception occurred: SyntaxError: XPath position >= 1 expected Any help is much appreciated Note: Usin Python 2.7 seems to work fine, not with Python 3.6 though.
[ "Positions in XPath start at 1; not 0.\nSo the positional predicate [0] in:\nlst:item[0]\n\nisn't going to select anything.\nIf you want to select the first lst:item child of lst:datasources, use:\nlst:item[1]\n\n" ]
[ 1 ]
[]
[]
[ "elementtree", "python", "python_3.x", "xml", "xml_parsing" ]
stackoverflow_0074479578_elementtree_python_python_3.x_xml_xml_parsing.txt
Q: unsort a list to get it back the way it was I want to unsort a list. For you to understand:
list1 = ["Hi", "Whats up this Morning", "Hello", "Good Morning"]
new_list = sorted(list1, key=len, reverse=True)
["Whats up this Morning", "Good Morning", "Hello", "Hi"]

And now it should go back exactly to the way it was in the beginning
["Hi", "Whats up this Morning", "Hello", "Good Morning"]

Does anyone know the answer?
A: Getting to it right out of the gate, list1 never changes, so as long as you retain this object in memory you will always have a way to refer to the original list, since it's unchanged. new_list is a new object, and the absolute simplest thing you can do is keep both these objects and refer to them at will.
Taking your question from a conceptual standpoint: is there a way to keep track of the original order based on the new object? If you absolutely needed to revert to the original order based only on the new_list, you could first index the list using enumerate(). This tracks the order of your object like so: (0, 'Hi') (1, 'Whats up this Morning') (2, 'Hello') (3, 'Good Morning'). Then, you can sort by the length of the second object in each tuple (the original string) with new_list = sorted(enumerate(list1), key=lambda x: len(x[1]), reverse=True). Finally, you can extract the sorted strings and their original indices (as tuples) with indices, words = zip(*new_list). Then reversing back to the original order is quite simple, see the example below:
list1 = ["Hi", "Whats up this Morning", "Hello", "Good Morning"]
# Pack indices and sort
new_list = sorted(enumerate(list1), key=lambda x: len(x[1]), reverse=True)

# Unpack sorted strings and old indices
indices, words = zip(*new_list)
print(words) # ('Whats up this Morning', 'Good Morning', 'Hello', 'Hi')
print(indices) # (1, 3, 2, 0)

# Reverse the sorting based on the old indices
original_list_packed = sorted(zip(indices, words))
print(original_list_packed) # [(0, 'Hi'), (1, 'Whats up this Morning'), (2, 'Hello'), (3, 'Good Morning')]
_, original_list = zip(*original_list_packed)
print(original_list) # ('Hi', 'Whats up this Morning', 'Hello', 'Good Morning')

A: Our list is:
list1 = ["Hi", "Whats up this Morning", "Hello", "Good Morning"]

And the new list is:
new_list = sorted(list1, key=len, reverse=True)

We sort the indices of list1 by the same key and store the original positions of the sorted items:
list_keys = sorted(range(len(list1)), key=lambda k: len(list1[k]), reverse=True)

>>> [1, 3, 2, 0]

And get back the first list:
list3 = [None] * len(list1)
for indx in range(len(list_keys)):
    list3[list_keys[indx]] = new_list[indx]

Output
>>> list3
>>> ['Hi', 'Whats up this Morning', 'Hello', 'Good Morning']
unsort a list to get it back the way it was
I want to unsort a list. For you to understand: list1 = ["Hi", "Whats up this Morning" "Hello", "Good Morning"] new_list = sorted(list1, key=len, reverse=True) ["Whats up this Morning", "Good Morning", "Hello", "Hi"] And know it should go back exactly in the same way it was in the beginning ["Hi", "Whats up this Morning", "Hello", "Good Morning"] Anyone knows the answer?
[ "Getting to it right out of the gate, list1 never changes, so as long as you retain this object in memory you will always have a way to refer to the original list, since it's unchanged. new_list is a new object, and the absolute simplest thing you can do is keep both these objects and refer to them at will.\nTaking your question from a conceptual standpoint: is there a way to keep track of the original order based on the new object? If you absolutely needed to revert to the original order based only on the new_list, you could first index the list using enumerate(). This tracks the order of your object like so: (0, 'Hi') (1, 'Whats up this Morning') (2, 'Hello') (3, 'Good Morning'). Then, you can sort by the length of the second object in each tuple (the original string) with new_list = sorted(enumerate(list1), key=lambda x: len(x[1]), reverse=True). Finally, you can extract your original list (as a tuple) with words, indices = zip(*new_list). Then reversing back to the string is quite simple, see the below example:\nlist1 = [\"Hi\", \"Whats up this Morning\", \"Hello\", \"Good Morning\"]\n# Pack indices and sort\nnew_list = sorted(enumerate(list1), key=lambda x: len(x[1]), reverse=True)\n\n# Unpack sorted strings and old indices\nindices, words = zip(*new_list)\nprint(words) # ('Whats up this Morning', 'Good Morning', 'Hello', 'Hi')\nprint(indices) # (1, 3, 2, 0)\n\n# Reverse the sorting based on the old indices\noriginal_list_packed = sorted(zip(indices, words))\nprint(original_list_packed) # [(0, 'Hi'), (1, 'Whats up this Morning'), (2, 'Hello'), (3, 'Good Morning')]\n_, original_list = zip(*original_list_packed)\nprint(original_list) # ('Hi', 'Whats up this Morning', 'Hello', 'Good Morning')\n\n", "Our list is:\nlist1 = [\"Hi\", \"Whats up this Morning\" \"Hello\", \"Good Morning\"]\n\nAnd the new list is:\nnew_list = sorted(list1, key=len, reverse=True)\n\nWe sort list1and store original keys of new_list\nlist_keys = sorted(range(len(list1)), key=lambda k: list1[k], reverse=True)\n\n>>> [1, 0, 2]\n\nAnt get back the fist list:\nlist3 = [None] * len(list1)\nfor indx in range(len(list_keys)):\n list3[indx] = list1[indx]\n\nOutput\n>>> list3\n>>> ['Hi', 'Whats up this MorningHello', 'Good Morning']\n\n" ]
[ 1, 1 ]
[]
[]
[ "list", "python", "sortedlist", "sorting" ]
stackoverflow_0074481253_list_python_sortedlist_sorting.txt
Q: How do i create a semicolon separated excel or csv file from the values of a column in PANDAS? I have an Excel file with one column, "emails"... I need to get the email values from the column and add them to a single row separated by semicolons, then export or save the result to either an Excel or CSV file. Is this possible in pandas?
A: Not really sure what your email 'values' are particularly. Once you get them into a pandas DataFrame, you can output to .csv with a semicolon delimiter using this code
df.to_csv(sep=';', index=False)
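If the goal is literally one row of semicolon-separated addresses rather than one address per row, a small sketch (the file and column names are assumptions):
import pandas as pd

df = pd.read_excel("emails.xlsx")              # source workbook, name assumed
one_row = ";".join(df["emails"].astype(str))   # all emails in a single semicolon-separated row

with open("emails.csv", "w") as f:
    f.write(one_row + "\n")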
How do i create a semicolon separated excel or csv file from the values of a column in PANDAS?
i have an excel file, it has one column, "emails"... i need to get the email values from the column and add them to a row separated by a semicolon. export or save to either an excel or csv. is this possible in pandas?
[ "Not really sure what your email 'values' are particularly. Once getting them into a pandas df, you can output to .csv with a semi colon delimiter using this code\ndf.to_csv(sep=';', index=False)\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python", "python_3.x" ]
stackoverflow_0074481323_dataframe_pandas_python_python_3.x.txt
Q: How can I set max line length in vscode for python? For JavaScript formatter works fine but not for Python. I have installed autopep8 but it seems that I can't set max line length. I tried this: "python.formatting.autopep8Args": [ "--max-line-length", "79", "--experimental" ] and my settings.json looks like this: { "workbench.colorTheme": "One Dark Pro", "git.autofetch": true, "workbench.iconTheme": "material-icon-theme", "git.enableSmartCommit": true, "terminal.integrated.shell.windows": "C:\\WINDOWS\\System32\\cmd.exe", "[javascript]": { "editor.defaultFormatter": "esbenp.prettier-vscode" }, "[html]": { "editor.defaultFormatter": "vscode.html-language-features" }, "javascript.updateImportsOnFileMove.enabled": "always", "[javascriptreact]": { "editor.defaultFormatter": "esbenp.prettier-vscode" }, "liveServer.settings.donotShowInfoMsg": true, "editor.formatOnSave": true, "window.zoomLevel": 1, "vscode-w3cvalidation.validator-token": "Fri, 07 Aug 2020 07:35:05 GMT", "python.formatting.provider": "autopep8", "python.formatting.autopep8Args": [ "--max-line-length", "79", "--experimental" ], "python.autoComplete.addBrackets": true, "python.autoComplete.extraPaths": [] } Any ideas how to fix that? A: From autopep8-usage, the default value of max-line-length is 79, so you can change to other value and have a try. About the effect of autopep8 in vscode, I made a test with the same settings as yours, like the following screenshot shows: every print sentence line-length is over 79, the first and the second print() parameters are expressions, and the setting works for the first one, not for the second. This is because setting applicable rules are provided by python extension and it has own caculation mechanism. When it comes to print strings, the setting doesn't work, so if you mean this in your question, you can add the following code in user settings.json. "editor.wordWrap": "wordWrapColumn", "editor.wordWrapColumn": 79 A: "python.formatting.autopep8Args": [ "--line-length", "119" ] works for me
How can I set max line length in vscode for python?
For JavaScript formatter works fine but not for Python. I have installed autopep8 but it seems that I can't set max line length. I tried this: "python.formatting.autopep8Args": [ "--max-line-length", "79", "--experimental" ] and my settings.json looks like this: { "workbench.colorTheme": "One Dark Pro", "git.autofetch": true, "workbench.iconTheme": "material-icon-theme", "git.enableSmartCommit": true, "terminal.integrated.shell.windows": "C:\\WINDOWS\\System32\\cmd.exe", "[javascript]": { "editor.defaultFormatter": "esbenp.prettier-vscode" }, "[html]": { "editor.defaultFormatter": "vscode.html-language-features" }, "javascript.updateImportsOnFileMove.enabled": "always", "[javascriptreact]": { "editor.defaultFormatter": "esbenp.prettier-vscode" }, "liveServer.settings.donotShowInfoMsg": true, "editor.formatOnSave": true, "window.zoomLevel": 1, "vscode-w3cvalidation.validator-token": "Fri, 07 Aug 2020 07:35:05 GMT", "python.formatting.provider": "autopep8", "python.formatting.autopep8Args": [ "--max-line-length", "79", "--experimental" ], "python.autoComplete.addBrackets": true, "python.autoComplete.extraPaths": [] } Any ideas how to fix that?
[ "From autopep8-usage, the default value of max-line-length is 79, so you can change to other value and have a try.\nAbout the effect of autopep8 in vscode, I made a test with the same settings as yours, like the following screenshot shows:\n\nevery print sentence line-length is over 79, the first and the second print() parameters are expressions, and the setting works for the first one, not for the second. This is because setting applicable rules are provided by python extension and it has own caculation mechanism.\nWhen it comes to print strings, the setting doesn't work, so if you mean this in your question, you can add the following code in user settings.json.\n\"editor.wordWrap\": \"wordWrapColumn\",\n\"editor.wordWrapColumn\": 79\n\n", "\"python.formatting.autopep8Args\": [\n \"--line-length\",\n \"119\"\n]\n\nworks for me\n" ]
[ 3, 0 ]
[]
[]
[ "python", "visual_studio_code", "vscode_settings" ]
stackoverflow_0063570108_python_visual_studio_code_vscode_settings.txt
Q: Export to CSV in Python from JSON for loop How do I fix my formatting. I know how to get the header and can also get the data exported in json format out to file. My problem is each column needs to have the item index for each line. data = json.loads(response.text) f = open("export-results.csv", "a", newline="") writer = csv.writer(f) header = 'Device Name', 'Operating System', 'IP Address' for item in data: writer.writerow(header) writer.writerow(item['name']) writer.writerow(item['os_version_and_architecture']) writer.writerow(item['last_ip_address']) f.close() Each column to have the value in full formatting and chosen from the ITEM list. A: You can use enumerate() to get index of the row. For example: with open("export-results.csv", "w", newline="") as f: writer = csv.writer(f) header = "Index", "Device Name", "Operating System", "IP Address" # write header writer.writerow(header) # write rows with index (starting from 1) for idx, item in enumerate(data, 1): writer.writerow( [ idx, item["name"], item["os_version_and_architecture"], item["last_ip_address"], ] ) EDIT: Without the index: with open("export-results.csv", "w", newline="") as f: writer = csv.writer(f) header = "Device Name", "Operating System", "IP Address" # write header writer.writerow(header) # write rows for item in data: writer.writerow( [ item["name"], item["os_version_and_architecture"], item["last_ip_address"], ] )
Export to CSV in Python from JSON for loop
How do I fix my formatting. I know how to get the header and can also get the data exported in json format out to file. My problem is each column needs to have the item index for each line. data = json.loads(response.text) f = open("export-results.csv", "a", newline="") writer = csv.writer(f) header = 'Device Name', 'Operating System', 'IP Address' for item in data: writer.writerow(header) writer.writerow(item['name']) writer.writerow(item['os_version_and_architecture']) writer.writerow(item['last_ip_address']) f.close() Each column to have the value in full formatting and chosen from the ITEM list.
[ "You can use enumerate() to get index of the row. For example:\nwith open(\"export-results.csv\", \"w\", newline=\"\") as f:\n writer = csv.writer(f)\n header = \"Index\", \"Device Name\", \"Operating System\", \"IP Address\"\n\n # write header\n writer.writerow(header)\n\n # write rows with index (starting from 1)\n for idx, item in enumerate(data, 1):\n writer.writerow(\n [\n idx,\n item[\"name\"],\n item[\"os_version_and_architecture\"],\n item[\"last_ip_address\"],\n ]\n )\n\n\nEDIT: Without the index:\nwith open(\"export-results.csv\", \"w\", newline=\"\") as f:\n writer = csv.writer(f)\n header = \"Device Name\", \"Operating System\", \"IP Address\"\n\n # write header\n writer.writerow(header)\n\n # write rows\n for item in data:\n writer.writerow(\n [\n item[\"name\"],\n item[\"os_version_and_architecture\"],\n item[\"last_ip_address\"],\n ]\n )\n\n" ]
[ 0 ]
[]
[]
[ "csv", "export", "json", "python" ]
stackoverflow_0074481515_csv_export_json_python.txt
Q: Does the TensorFlow save function automatically overwrite old models? If not, how does the save/load system work? I've tried finding information regarding this online, but the word overwrite does not show up at all in the official TensorFlow documentation, and all the Stack Overflow questions are related to changing the number of copies saved by the model. I would just like to know whether or not the save function overwrites at all. If I re-train a model and would like to re-run the save function, will the newer model load in when I use the load_model function? Or will it be a model that is trained on the same data twice? Do older iterations get stored somewhere?
A: According to the tensorflow documentation, model.save() is an alias for tensorflow.keras.models.save_model(), which has the default parameter "overwrite" set to "True". From this I assume that by calling model.save('model.h5') you automatically overwrite your previous save.
Source: https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model
A: You can use
model.save('./model.h5')
which will save the model to a file
and
model = tf.keras.models.load_model('./model.h5')
to load the model
A: I think Eyal's answer is a good point to start. However, if you want to be sure, you can let your program delete the previous model or change its name on the fly. I also observed different results when deleting a model versus not, but this could also be an effect of the training process itself, due to random initialization and weight updates.
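A small sketch illustrating the behaviour described above: re-saving to the same path replaces the previous file, so load_model returns whatever was saved last (the model itself is just a placeholder):
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

model.save("model.h5")    # first save
# ... retrain the model ...
model.save("model.h5")    # silently overwrites the first file

restored = tf.keras.models.load_model("model.h5")  # loads the most recent save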
Does the TensorFlow save function automatically overwrite old models? If not, how does the save/load system work?
I've tried finding information regarding this online but the word overwrite does not show up at all in the official Tensorflow documentation and all the Stack Overflow questions are related to changing the number of copies saved by the model. I would just like to know whether or not the save function overwrites at all. If I re-train a model and would like to re-run the save function will the newer model load in when I use the load_model function? Or will it be a model that is trained on the same data twice? Do older iterations get stored somewhere?
[ "According to the tensorflow documentation, model.save() is an alias for tensorflow.keras.models.save_model(), which has default parameter \"overwrite\" set to \"True\". From this I assume that by calling model.save('model.h5') you automatically overwrite your previous save.\nSource: https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model\n", "You can use\nmodel.save('./model.h5')\nwhich will save the model to a file\nand\nmodel = tf.keras.models.load_model('./model.h5')\nto load the model\n", "I think Eyal's answer is a good point to start. However, if you want to be sure you can let your program delete the previous model or change it's name on the fly. I also observed different results when deleting a model and not, but this could also be effects of the different training process, due to random initialization and updating the weights.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python", "tensorflow" ]
stackoverflow_0072985903_python_tensorflow.txt
Q: How to open an excel file with multiple sheets in pandas? I have an Excel file composed of several sheets. I need to load them individually as separate dataframes. What would be a similar function to pd.read_csv("") for this kind of task? P.S. due to the size I cannot copy and paste individual sheets in Excel
A: Use the pandas read_excel() method, which accepts a sheet_name parameter:
import pandas as pd

df = pd.read_excel(excel_file_path, sheet_name="sheet_name")

Multiple data frames can be loaded by passing in a list. For a more in-depth explanation of how read_excel() works see: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html
A: If you can't type out each sheet name and want to read the whole workbook, try this:
full_path = 'C://full_path.xlsx'
dfname = pd.ExcelFile(full_path)
print(dfname.sheet_names)
df = pd.read_excel(full_path)
for items in dfname.sheet_names[1:]:
    dfnew = pd.read_excel(full_path, sheet_name=items)
    df = pd.concat([df, dfnew])

The thing is that pd.read_excel() on its own reads only the very first sheet and the rest are left unread, so you can use this.
A: import pandas
# setting sheet_name = None, reads all sheets into a dict
sheets = pandas.read_excel(filepath, sheet_name=None)
# i will be the keys in a dictionary object
# the values are the dataframes of each sheet
for i in sheets:
    print(f"sheet[{i}]")
    print(f"sheet[{i}].columns={sheets[i].columns}")
    for index, row in sheets[i].iterrows():
        print(f"index={index} row={row}")

A: from pandas import ExcelFile
exFile = ExcelFile(f)   # load file f
data = exFile.parse()   # this creates a dataframe out of the first sheet in the file
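For completeness, a compact runnable form of the sheet_name=None approach from the third answer (the file name is a placeholder):
import pandas as pd

sheets = pd.read_excel("workbook.xlsx", sheet_name=None)  # dict of {sheet name: DataFrame}

for name, frame in sheets.items():
    print(name, frame.shape)

first_sheet = sheets[list(sheets)[0]]  # each value is already its own DataFrame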
How to open an excel file with multiple sheets in pandas?
I have an excel file composed of several sheets. I need to load them as separate dataframes individually. What would be a similar function as pd.read_csv("") for this kind of task? P.S. due to the size I cannot copy and paste individual sheets in excel
[ "Use pandas read_excel() method that accepts a sheet_name parameter:\nimport pandas as pd\n\ndf = pd.read_excel(excel_file_path, sheet_name=\"sheet_name\")\n\nMultiple data frames can be loaded by passing in a list. For a more in-depth explanation of how read_excel() works see: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html\n", "If you can't type out each sheet name and want to read whole worksheet try this:\n dfname=pd.ExcelFile('C://full_path.xlsx')\n print(dfname.sheet_names)\n df=pd.read_excel('C://fullpath.xlsx')\n for items in dfname.sheet_names[1:]:\n dfnew=pd.read_excel(full_path,sheet_name=items)\n df=pd.concat([df,dfnew])\n\nThe thing is that pd.read_excel() can read the very first sheet and rest are unread.So you can use this\n", "import pandas\n# setting sheet_name = None, reads all sheets into a dict\nsheets = pandas.read_excel(filepath, sheet_name=None) \n# i will be the keys in a dictionary object\n# the values are the dataframes of each sheet\nfor i in sheets: \n print(f\"sheet[{i}]\")\n print(f\"sheet[{i}].columns={sheets[i].columns}\")\n for index, row in sheets[i].iterrows():\n print(f\"index={index} row={row}\")\n\n", "exFile = ExcelFile(f) #load file f\ndata = ExcelFile.parse(exFile) #this creates a dataframe out of the first sheet in file\n" ]
[ 11, 5, 1, 0 ]
[]
[]
[ "excel", "import", "python" ]
stackoverflow_0031582821_excel_import_python.txt
Q: Local data cannot be read in a Dataproc cluster, when using SparkNLP I am trying to build a Dataproc cluster, with Spark NLP installed in it, then quick test it by reading some CoNLL 2003 data. First, I used this codelab as inspiration, to build my own smaller cluster (project name has been edited for safety purposes): gcloud dataproc clusters create s17-sparknlp-experiments \ --enable-component-gateway \ --region us-west1 \ --metadata 'PIP_PACKAGES=google-cloud-storage spark-nlp==2.5.5' \ --zone us-west1-a \ --single-node \ --master-machine-type n1-standard-4 \ --master-boot-disk-size 35 \ --image-version 1.5-debian10 \ --initialization-actions gs://dataproc-initialization-actions/python/pip-install.sh \ --optional-components JUPYTER,ANACONDA \ --project my-project I started the previous cluster via JupyterLab, then downloaded these CoNLL 2003 files in ~/original directory, existing in root . If done correctly, when you run these commands: cd / && head -n 5 original/eng.train The following result should obtained: -DOCSTART- -X- -X- O EU NNP B-NP B-ORG rejects VBZ B-VP O German JJ B-NP B-MISC This means these files should be able to be read in the following Python code, existing in a single-celled Jupyter Notebook: from pyspark.ml import Pipeline from pyspark.sql import SparkSession from sparknlp.annotator import * from sparknlp.base import * from sparknlp.common import * from sparknlp.training import CoNLL import sparknlp spark = sparknlp.start() print("Spark NLP version: ", sparknlp.version()) # 2.4.4 print("Apache Spark version: ", spark.version) # 2.4.8 # Other info of possible interest: # Python 3.6.13 :: Anaconda, Inc. # openjdk version "1.8.0_312" # OpenJDK Runtime Environment (Temurin)(build 1.8.0_312-b07) # OpenJDK 64-Bit Server VM (Temurin)(build 25.312-b07, mixed mode) training_data = CoNLL().readDataset(spark, 'original/eng.train') # The exact same path used before... training_data.show() Instead, the following error gets triggered: --------------------------------------------------------------------------- Py4JJavaError Traceback (most recent call last) <ipython-input-4-2b145ab3b733> in <module> ----> 1 training_data = CoNLL().readDataset(spark, 'original/eng.train') 2 training_data.show() /opt/conda/anaconda/lib/python3.6/site-packages/sparknlp/training.py in readDataset(self, spark, path, read_as) 32 jSession = spark._jsparkSession 33 ---> 34 jdf = self._java_obj.readDataset(jSession, path, read_as) 35 return DataFrame(jdf, spark._wrapped) 36 /opt/conda/anaconda/lib/python3.6/site-packages/py4j/java_gateway.py in __call__(self, *args) 1255 answer = self.gateway_client.send_command(command) 1256 return_value = get_return_value( -> 1257 answer, self.gateway_client, self.target_id, self.name) 1258 1259 for temp_arg in temp_args: /usr/lib/spark/python/pyspark/sql/utils.py in deco(*a, **kw) 61 def deco(*a, **kw): 62 try: ---> 63 return f(*a, **kw) 64 except py4j.protocol.Py4JJavaError as e: 65 s = e.java_exception.toString() /opt/conda/anaconda/lib/python3.6/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name) 326 raise Py4JJavaError( 327 "An error occurred while calling {0}{1}{2}.\n". --> 328 format(target_id, ".", name), value) 329 else: 330 raise Py4JError( Py4JJavaError: An error occurred while calling o87.readDataset. 
: java.io.FileNotFoundException: file or folder: original/eng.train not found at com.johnsnowlabs.nlp.util.io.ResourceHelper$SourceStream.<init>(ResourceHelper.scala:44) at com.johnsnowlabs.nlp.util.io.ResourceHelper$.parseLines(ResourceHelper.scala:215) at com.johnsnowlabs.nlp.training.CoNLL.readDocs(CoNLL.scala:31) at com.johnsnowlabs.nlp.training.CoNLL.readDataset(CoNLL.scala:198) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Thread.java:748) QUESTION: What could be possibly going wrong here? A: I think the problem is related to the fact that as you can see in the library source code (1 2) CoNLL().readDataset() read the information from HDFS. You downloaded the required files and uncompressed them in your cluster master node file system, but you need to make that content accessible through HDFS. Please, try copying it to HDFS and then repeat the test. A: @jccampanero led me in the right direction, however with some tweaks. In specific, you must store the files you want to import, in some Google Cloud Storage bucket; then use that file URI in readDataset: training_data = CoNLL().readDataset(spark, 'gs://my-bucket/subfolders/eng.train') This is not the only valid option to achieve what I am looking for, there are more.
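A hedged sketch of the two usual fixes described in the answers above: push the files into HDFS or into a GCS bucket and point readDataset at that URI (bucket name and paths are placeholders):
# on the cluster master node, either copy the local files into HDFS...
hdfs dfs -mkdir -p /user/conll
hdfs dfs -put /original/eng.train /user/conll/

# ...or copy them to a Cloud Storage bucket
gsutil cp /original/eng.train gs://my-bucket/conll/

Then, in the notebook:
training_data = CoNLL().readDataset(spark, 'hdfs:///user/conll/eng.train')
# or
training_data = CoNLL().readDataset(spark, 'gs://my-bucket/conll/eng.train')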
Local data cannot be read in a Dataproc cluster, when using SparkNLP
I am trying to build a Dataproc cluster, with Spark NLP installed in it, then quick test it by reading some CoNLL 2003 data. First, I used this codelab as inspiration, to build my own smaller cluster (project name has been edited for safety purposes): gcloud dataproc clusters create s17-sparknlp-experiments \ --enable-component-gateway \ --region us-west1 \ --metadata 'PIP_PACKAGES=google-cloud-storage spark-nlp==2.5.5' \ --zone us-west1-a \ --single-node \ --master-machine-type n1-standard-4 \ --master-boot-disk-size 35 \ --image-version 1.5-debian10 \ --initialization-actions gs://dataproc-initialization-actions/python/pip-install.sh \ --optional-components JUPYTER,ANACONDA \ --project my-project I started the previous cluster via JupyterLab, then downloaded these CoNLL 2003 files in ~/original directory, existing in root . If done correctly, when you run these commands: cd / && head -n 5 original/eng.train The following result should obtained: -DOCSTART- -X- -X- O EU NNP B-NP B-ORG rejects VBZ B-VP O German JJ B-NP B-MISC This means these files should be able to be read in the following Python code, existing in a single-celled Jupyter Notebook: from pyspark.ml import Pipeline from pyspark.sql import SparkSession from sparknlp.annotator import * from sparknlp.base import * from sparknlp.common import * from sparknlp.training import CoNLL import sparknlp spark = sparknlp.start() print("Spark NLP version: ", sparknlp.version()) # 2.4.4 print("Apache Spark version: ", spark.version) # 2.4.8 # Other info of possible interest: # Python 3.6.13 :: Anaconda, Inc. # openjdk version "1.8.0_312" # OpenJDK Runtime Environment (Temurin)(build 1.8.0_312-b07) # OpenJDK 64-Bit Server VM (Temurin)(build 25.312-b07, mixed mode) training_data = CoNLL().readDataset(spark, 'original/eng.train') # The exact same path used before... training_data.show() Instead, the following error gets triggered: --------------------------------------------------------------------------- Py4JJavaError Traceback (most recent call last) <ipython-input-4-2b145ab3b733> in <module> ----> 1 training_data = CoNLL().readDataset(spark, 'original/eng.train') 2 training_data.show() /opt/conda/anaconda/lib/python3.6/site-packages/sparknlp/training.py in readDataset(self, spark, path, read_as) 32 jSession = spark._jsparkSession 33 ---> 34 jdf = self._java_obj.readDataset(jSession, path, read_as) 35 return DataFrame(jdf, spark._wrapped) 36 /opt/conda/anaconda/lib/python3.6/site-packages/py4j/java_gateway.py in __call__(self, *args) 1255 answer = self.gateway_client.send_command(command) 1256 return_value = get_return_value( -> 1257 answer, self.gateway_client, self.target_id, self.name) 1258 1259 for temp_arg in temp_args: /usr/lib/spark/python/pyspark/sql/utils.py in deco(*a, **kw) 61 def deco(*a, **kw): 62 try: ---> 63 return f(*a, **kw) 64 except py4j.protocol.Py4JJavaError as e: 65 s = e.java_exception.toString() /opt/conda/anaconda/lib/python3.6/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name) 326 raise Py4JJavaError( 327 "An error occurred while calling {0}{1}{2}.\n". --> 328 format(target_id, ".", name), value) 329 else: 330 raise Py4JError( Py4JJavaError: An error occurred while calling o87.readDataset. 
: java.io.FileNotFoundException: file or folder: original/eng.train not found at com.johnsnowlabs.nlp.util.io.ResourceHelper$SourceStream.<init>(ResourceHelper.scala:44) at com.johnsnowlabs.nlp.util.io.ResourceHelper$.parseLines(ResourceHelper.scala:215) at com.johnsnowlabs.nlp.training.CoNLL.readDocs(CoNLL.scala:31) at com.johnsnowlabs.nlp.training.CoNLL.readDataset(CoNLL.scala:198) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Thread.java:748) QUESTION: What could be possibly going wrong here?
[ "I think the problem is related to the fact that as you can see in the library source code (1 2) CoNLL().readDataset() read the information from HDFS.\nYou downloaded the required files and uncompressed them in your cluster master node file system, but you need to make that content accessible through HDFS.\nPlease, try copying it to HDFS and then repeat the test.\n", "@jccampanero led me in the right direction, however with some tweaks. In specific, you must store the files you want to import, in some Google Cloud Storage bucket; then use that file URI in readDataset:\ntraining_data = CoNLL().readDataset(spark, 'gs://my-bucket/subfolders/eng.train')\n\nThis is not the only valid option to achieve what I am looking for, there are more.\n" ]
[ 1, 1 ]
[]
[]
[ "apache_spark", "google_cloud_dataproc", "google_cloud_platform", "johnsnowlabs_spark_nlp", "python" ]
stackoverflow_0074468280_apache_spark_google_cloud_dataproc_google_cloud_platform_johnsnowlabs_spark_nlp_python.txt
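A minimal sketch of the fix described in the answers above: copy the locally downloaded CoNLL file into a Cloud Storage bucket and read it back through its gs:// URI. The bucket name, object path and local path below are placeholders, not values from the original post; the google-cloud-storage client is already installed on the cluster through the PIP_PACKAGES metadata.

from google.cloud import storage
from sparknlp.training import CoNLL
import sparknlp

BUCKET = "my-bucket"                 # placeholder bucket, must already exist
LOCAL_PATH = "/original/eng.train"   # file downloaded on the master node
BLOB_PATH = "conll/eng.train"        # destination object inside the bucket

# Copy the master-local file into the bucket so the Spark job can reach it.
client = storage.Client()
client.bucket(BUCKET).blob(BLOB_PATH).upload_from_filename(LOCAL_PATH)

# Read the dataset through the gs:// URI instead of a master-local path.
spark = sparknlp.start()
training_data = CoNLL().readDataset(spark, f"gs://{BUCKET}/{BLOB_PATH}")
training_data.show(5)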
Q: How to conditionally split and extend inside a list comprehension? How do I convert this input: values = ['v1,v2', 'v3'] to this output: ['v1', 'v2', 'v3'] Attempt without list comprehension that works: values = ['v1,v2', 'v3'] parsed_values = [] for v in values: if ',' in v: parsed_values.extend(v.split(',')) else: parsed_values.append(v) print(parsed_values) # ['v1', 'v2', 'v3'] Attempt with list comprehension that does not work: parsed_values = [_ for _ in [v.split(',') if ',' in v else v for v in values]] # [['v1', 'v2'], 'v3'] A: You don't care if there is a comma or not, splitting on it will always give a list you can iterate on values = ['v1,v2', 'v3'] parsed_values = [word for value in values for word in value.split(",")] print(parsed_values) # ['v1', 'v2', 'v3'] A: Try: values = ["v1,v2", "v3"] values = ",".join(values).split(",") print(values) Prints: ['v1', 'v2', 'v3']
How to conditionally split and extend inside a list comprehension?
How do I convert this input: values = ['v1,v2', 'v3'] to this output: ['v1', 'v2', 'v3'] Attempt without list comprehension that works: values = ['v1,v2', 'v3'] parsed_values = [] for v in values: if ',' in v: parsed_values.extend(v.split(',')) else: parsed_values.append(v) print(parsed_values) # ['v1', 'v2', 'v3'] Attempt with list comprehension that does not work: parsed_values = [_ for _ in [v.split(',') if ',' in v else v for v in values]] # [['v1', 'v2'], 'v3']
[ "You don't care if there is a comma or not, splitting on it will always give a list you can iterate on\nvalues = ['v1,v2', 'v3']\nparsed_values = [word for value in values for word in value.split(\",\")]\nprint(parsed_values)\n# ['v1', 'v2', 'v3']\n\n", "Try:\nvalues = [\"v1,v2\", \"v3\"]\n\nvalues = \",\".join(values).split(\",\")\nprint(values)\n\nPrints:\n['v1', 'v2', 'v3']\n\n" ]
[ 5, 2 ]
[]
[]
[ "list", "list_comprehension", "python", "python_3.x" ]
stackoverflow_0074481703_list_list_comprehension_python_python_3.x.txt
Q: Simplest way to change which of two sections of Python script code should be run I wrote two Python functions for converting RGB colors of an image representing tuples to single integer values using two different approaches. In order to test if both the approaches deliver the same results it was necessary to frequently switch between the two code sections choosing which one should be run. Finally I decided to use only one of the approaches, but decided to keep the other one in the script code as it better demonstrates what the code does. In order to 'switch off' one block of code and 'switch on' another one I have used two different methods: an if code block (see one of the functions in the code below) and a triple quoted string. The first approach (with if) makes it necessary to introduce additional indentation to the code and the other one required to move a line with triple quotes from the bottom to the top of the code block with an intermediate triple quotes. Both methods work ok, but ... Is there a better and more easy way of such switching? Best if it would require to press a key on the keyboard only once in order to switch between the two code blocks? Here my code: # ====================================================================== ''' Conversion functions for single RGB-color values ''' def rgb2int(rgb_tuple): if 1: # <<< change to 0 to switch to the else: part of code from sys import byteorder as endian # endianiness = sys.byteorder # 'little' int_rgb = int.from_bytes(bytearray(rgb_tuple), endian) # ,signed=False) else: if len(rgb_tuple) == 4: # RGBA tuple R,G,B,A = rgb_tuple else: R,G,B = rgb_tuple A = None if A is not None: int_rgb =( 0 ) + A else: int_rgb = 0 int_rgb = (int_rgb<<8) + B int_rgb = (int_rgb<<8) + G # ! int_rgb<<8 + G == int_rgb<<(8+G) ! int_rgb = (int_rgb<<8) + R return int_rgb def int2rgb(int_rgb, alpha=False): from sys import byteorder as endian tplsize = 4 if alpha else 3 rgb_tuple = tuple(int_rgb.to_bytes(tplsize, endian)) # ,signed=False)) """ if not alpha: rgb_tuple = ( int_rgb & 0xff, ( int_rgb >> 8 ) & 0xff, ( int_rgb >> 16 ) & 0xff ) else: # with alpha channel: rgb_tuple = ( int_rgb & 0xff, ( int_rgb >> 8 ) & 0xff, ( int_rgb >> 16 ) & 0xff, ( int_rgb >> 24 ) & 0xff ) """ # <<< move to top to switch to the code block above return rgb_tuple rgb = (32,253,200,100) int_rgb = rgb2int(rgb) rgb_ = int2rgb(int_rgb, alpha=True) print(rgb, int_rgb, rgb_, sep='\n') assert rgb == rgb_ rgb = (32,253,200) int_rgb = rgb2int(rgb) rgb_ = int2rgb(int_rgb) assert rgb == rgb_ # --- if __name__ == "__main__": print(' --- ') print(rgb) print(int_rgb) print(rgb_) #This gives: [32, 253, 200] 13172000 [32, 253, 200] UPDATE because of response: Responding to a comment an explanation why I haven't choose to use two different functions to separate the pieces of code: Two separate functions would separate parts of code which belong together as code of one function and makes it necessary to explain in the code that both functions are doing exactly the same in spite of the fact they have different names. The use case is to test if two parts of code actually do exactly the same after editing their code in order to decide later which version to use. In the provided case the second code block can be used as an explanation what the other does, so it makes sense to keep it in the function in spite of the fact it won't be used. A: Don't write one function that does two different things. 
Write two functions, each of which does one thing: def rgb2int_v1(rgb_tuple): from sys import byteorder as endian # endianiness = sys.byteorder # 'little' int_rgb = int.from_bytes(bytearray(rgb_tuple), endian) # ,signed=False) return int_rgb def rgb2int_v2(rgb_tuple): if len(rgb_tuple) == 4: # RGBA tuple R,G,B,A = rgb_tuple else: R,G,B = rgb_tuple A = None if A is not None: int_rgb =( 0 ) + A else: int_rgb = 0 int_rgb = (int_rgb<<8) + B int_rgb = (int_rgb<<8) + G # ! int_rgb<<8 + G == int_rgb<<(8+G) ! int_rgb = (int_rgb<<8) + R return int_rgb Then select which version to use at the beginning of your script: rgb2int = rgb2int_v1 if use_v1 else rgb2int_v2 where use_v1 is a variable you set either by editing the script, or preferably by parsing a command-line option so that you can switch between runs without editing your script each time.
Simplest way to change which of two sections of Python script code should be run
I wrote two Python functions for converting RGB colors of an image representing tuples to single integer values using two different approaches. In order to test if both the approaches deliver the same results it was necessary to frequently switch between the two code sections choosing which one should be run. Finally I decided to use only one of the approaches, but decided to keep the other one in the script code as it better demonstrates what the code does. In order to 'switch off' one block of code and 'switch on' another one I have used two different methods: an if code block (see one of the functions in the code below) and a triple quoted string. The first approach (with if) makes it necessary to introduce additional indentation to the code and the other one required to move a line with triple quotes from the bottom to the top of the code block with an intermediate triple quotes. Both methods work ok, but ... Is there a better and more easy way of such switching? Best if it would require to press a key on the keyboard only once in order to switch between the two code blocks? Here my code: # ====================================================================== ''' Conversion functions for single RGB-color values ''' def rgb2int(rgb_tuple): if 1: # <<< change to 0 to switch to the else: part of code from sys import byteorder as endian # endianiness = sys.byteorder # 'little' int_rgb = int.from_bytes(bytearray(rgb_tuple), endian) # ,signed=False) else: if len(rgb_tuple) == 4: # RGBA tuple R,G,B,A = rgb_tuple else: R,G,B = rgb_tuple A = None if A is not None: int_rgb =( 0 ) + A else: int_rgb = 0 int_rgb = (int_rgb<<8) + B int_rgb = (int_rgb<<8) + G # ! int_rgb<<8 + G == int_rgb<<(8+G) ! int_rgb = (int_rgb<<8) + R return int_rgb def int2rgb(int_rgb, alpha=False): from sys import byteorder as endian tplsize = 4 if alpha else 3 rgb_tuple = tuple(int_rgb.to_bytes(tplsize, endian)) # ,signed=False)) """ if not alpha: rgb_tuple = ( int_rgb & 0xff, ( int_rgb >> 8 ) & 0xff, ( int_rgb >> 16 ) & 0xff ) else: # with alpha channel: rgb_tuple = ( int_rgb & 0xff, ( int_rgb >> 8 ) & 0xff, ( int_rgb >> 16 ) & 0xff, ( int_rgb >> 24 ) & 0xff ) """ # <<< move to top to switch to the code block above return rgb_tuple rgb = (32,253,200,100) int_rgb = rgb2int(rgb) rgb_ = int2rgb(int_rgb, alpha=True) print(rgb, int_rgb, rgb_, sep='\n') assert rgb == rgb_ rgb = (32,253,200) int_rgb = rgb2int(rgb) rgb_ = int2rgb(int_rgb) assert rgb == rgb_ # --- if __name__ == "__main__": print(' --- ') print(rgb) print(int_rgb) print(rgb_) #This gives: [32, 253, 200] 13172000 [32, 253, 200] UPDATE because of response: Responding to a comment an explanation why I haven't choose to use two different functions to separate the pieces of code: Two separate functions would separate parts of code which belong together as code of one function and makes it necessary to explain in the code that both functions are doing exactly the same in spite of the fact they have different names. The use case is to test if two parts of code actually do exactly the same after editing their code in order to decide later which version to use. In the provided case the second code block can be used as an explanation what the other does, so it makes sense to keep it in the function in spite of the fact it won't be used.
[ "Don't write one function that does two different things. Write two functions, each of which does one thing:\ndef rgb2int_v1(rgb_tuple):\n from sys import byteorder as endian\n # endianiness = sys.byteorder # 'little'\n int_rgb = int.from_bytes(bytearray(rgb_tuple), endian) # ,signed=False)\n return int_rgb\n\n\ndef rgb2int_v2(rgb_tuple):\n if len(rgb_tuple) == 4: # RGBA tuple\n R,G,B,A = rgb_tuple\n else:\n R,G,B = rgb_tuple\n A = None\n if A is not None: \n int_rgb =( 0 ) + A \n else:\n int_rgb = 0\n int_rgb = (int_rgb<<8) + B\n int_rgb = (int_rgb<<8) + G # ! int_rgb<<8 + G == int_rgb<<(8+G) !\n int_rgb = (int_rgb<<8) + R\n return int_rgb\n\nThen select which version to use at the beginning of your script:\nrgb2int = rgb2int_v1 if use_v1 else rgb2int_v2\n\nwhere use_v1 is a variable you set either by editing the script, or preferably by parsing a command-line option so that you can switch between runs without editing your script each time.\n" ]
[ 1 ]
[ "Using a smart combination of a line comment '#' character and triple quotes \"\"\" it is possible in Python to switch between two code blocks like magic pressing [Del] or [#] on the keyboard.\nSee the code below how it is done and enjoy the 'magic'.\n# ======================================================================\n''' Conversion functions for single RGB-color values '''\ndef rgb2int(rgb_tuple):\n #\"\"\" <<< remove or add '#' to 'switch' between the two code blocks\n from sys import byteorder as endian\n # endianiness = sys.byteorder # 'little'\n int_rgb = int.from_bytes(bytearray(rgb_tuple), endian) # ,signed=False)\n \"\"\"\n if len(rgb_tuple) == 4: # RGBA tuple\n R,G,B,A = rgb_tuple\n else:\n R,G,B = rgb_tuple\n A = None\n if A is not None: \n int_rgb =( 0 ) + A \n else:\n int_rgb = 0\n int_rgb = (int_rgb<<8) + B\n int_rgb = (int_rgb<<8) + G # ! int_rgb<<8 + G == int_rgb<<(8+G) !\n int_rgb = (int_rgb<<8) + R\n # \"\"\" # the TRICK is to use a line comment before triple quotes\n return int_rgb\n\ndef int2rgb(int_rgb, alpha=False):\n #\"\"\" <<< remove or add '#' to 'switch' between the two code blocks\n from sys import byteorder as endian\n tplsize = 4 if alpha else 3\n rgb_tuple = tuple(int_rgb.to_bytes(tplsize, endian)) # ,signed=False)) \n \"\"\"\n if not alpha: \n rgb_tuple = (\n int_rgb & 0xff,\n ( int_rgb >> 8 ) & 0xff,\n ( int_rgb >> 16 ) & 0xff )\n else: # with alpha channel:\n rgb_tuple = (\n int_rgb & 0xff,\n ( int_rgb >> 8 ) & 0xff,\n ( int_rgb >> 16 ) & 0xff,\n ( int_rgb >> 24 ) & 0xff )\n # \"\"\" # the TRICK is to use a line comment before triple quotes\n return rgb_tuple\n\n\nrgb = (32,253,200,100)\nint_rgb = rgb2int(rgb)\nrgb_ = int2rgb(int_rgb, alpha=True)\nprint(rgb, int_rgb, rgb_, sep='\\n')\nassert rgb == rgb_\n\nrgb = (32,253,200)\nint_rgb = rgb2int(rgb)\nrgb_ = int2rgb(int_rgb)\nassert rgb == rgb_\n\n# ---\nif __name__ == \"__main__\":\n print(' --- ') \n\n print(rgb)\n print(int_rgb)\n print(rgb_)\n #This gives:\n\n [32, 253, 200]\n 13172000\n [32, 253, 200]\n\nBye the way: you should avoid using it in your coding eventually except for testing purposes.\nTriple quotes in Python are there to make multiline strings and docstrings possible and using them as a kind of preprocessing switch isn't 'Pythonic'.\n" ]
[ -1 ]
[ "ide", "python", "workflow" ]
stackoverflow_0074481682_ide_python_workflow.txt
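A short sketch of the command-line switch suggested in the accepted answer above. The flag name --use-v1 is an assumption, and the two converter functions are condensed re-statements of the ones in the answer so the snippet runs on its own.

import argparse
import sys

def rgb2int_v1(rgb_tuple):
    # int.from_bytes version
    return int.from_bytes(bytearray(rgb_tuple), sys.byteorder)

def rgb2int_v2(rgb_tuple):
    # shift-based version; alpha defaults to 0 for 3-tuples
    r, g, b = rgb_tuple[:3]
    a = rgb_tuple[3] if len(rgb_tuple) == 4 else 0
    return (((a << 8 | b) << 8 | g) << 8) | r

parser = argparse.ArgumentParser(description="RGB <-> int conversion demo")
parser.add_argument("--use-v1", action="store_true",
                    help="use the int.from_bytes implementation instead of the shift-based one")
args = parser.parse_args()

# Pick the implementation once at startup; the rest of the script just calls rgb2int().
rgb2int = rgb2int_v1 if args.use_v1 else rgb2int_v2
print(rgb2int((32, 253, 200)))   # same value from either implementation

Running the script with and without --use-v1 should print the same number, which doubles as a quick check that the two implementations agree.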
Q: Assigning new values to rows with iloc and loc produce different results. How do I avoid the SettingToCopyWarning same as iloc? I currently have a DataFrame with a shape of (16280, 13). I want to assign values to specific rows in a single column. I was originally doing so with: for idx, row in enumerate(df.to_dict('records')): instances = row['instances'] labels = row['labels'].split('|') for instance in instances: if instance not in relevant_labels: labels = ['O' if instance in l else l for l in labels] df.iloc[idx]['labels'] = '|'.join(labels) But this kept returning the SettingWithCopyWarning due to the last line. I tried changing this to df.loc[idx, 'labels'] = '|'.join(labels) which doesn't return the warning anymore but caused errors in the latter parts of my code. I noticed that the sizes of the DataFrames were (16280, 13) when using iloc and (16751, 13) when using loc. How can I prevent the warning from printing and get the same functionality as using iloc? A: You have multiple things we can improve here. First, try not as possible to loop over a dataframe but use some tools provided by the pandas package. However, if not avoidable, looping on dataframe's rows are better done with the .iterrows() methods instead of .to_dict(). Keep in mind, if using iterrows, you should not modify your dataframe while iterating over. Then, for the iloc/loc uses. Loc is using the key names (like a dictionary) although iloc is using the key index (like an array). Here idx is an index, not the name of the key, then df.loc[idx, 'labels'] will lead to some errors if the name of the key is not the same as its index. We can easily use both of them like the following : df.iloc[idx, : ].loc['labels']. To illustrate the difference between loc and iloc : df_example = pd.DataFrame({"a": [1, 2, 3, 4], "b": ['a', 'b', 'a', 'b']}, index=[0, 1, 3, 5]) print(df_example.loc[0] == df_example.iloc[0]) # 0 is the first key, loc and iloc same results print(df_example.loc[1] == df_example.iloc[1]) # 1 is the second key, loc and iloc same results try: print(df_example.loc[2] == df_example.iloc[2]) # 2 is not a key, then it will crash on loc (Keyerror) except KeyError: pass print(df_example.loc[3] == df_example.iloc[3]) # 3 the third key, then iloc and loc will lead different results try: print(df_example.loc[5] == df_example.iloc[5]) # 5 is the last key but there is no 6th key so it will crash on iloc (indexerror) except IndexError: pass Remember that chaining your dataframe will return a copy of your data instead of a slice : doc. That's why both df.iloc[idx]['labels'] and df.iloc[idx, : ].loc['labels'] will trigger the warning. If labels is your ith columns, df.iloc[idx, i ] won't trigger the warning. A: Please take note that in your case, SettingWithCopyWarning is a valid warning as the chained assigment is not working as expected. df.iloc[idx] returns a copy of the slice instead of a slice into the original object. Therefore, df.iloc[idx]['labels'] = '|'.join(labels) makes modification on a copy of the row instead of the row of the original df. It seems to happen when the dataframe has mixed datatypes. Regarding the different results by .loc and .iloc, it is because your row label is different with row integer locations (probably due to a train test split). When a row label does not exist, .loc cannot find it in existing rows, so it generate new row (.loc gets row (and/or col) with row (and/or col) label, while .iloc gets row (and/or col) with integer locations.) Please find the examples after the solutions. 
Solutions Basic idea: You should avoid chained assignments and use the correct labels/integer locations. Solution 1: reset_index and .loc If you don't need to keep the row index, a solution is to do reset_index before your code, and use your df.loc[idx, 'labels'] = '|'.join(labels). import pandas as pd df = pd.DataFrame({'instances': ["a", "b", "c", "d"], 'labels': [1, 2, 3, 4]}, index=[0, 2, 4, 5]) df instances labels 0 a 1 2 b 2 4 c 3 5 d 4 df = df.reset_index(drop=True) df instances labels 0 a 1 1 b 2 2 c 3 3 d 4 This will make the dataframe row labels same as the row integer locations. So .loc[n, 'labels'] refers to the same thing as .iloc[n, 'labels']. Solution 2: Use column integer locations of 'labels' and .iloc Example: Update labels of the 4th row to 100 col_idx = df.columns.get_loc("labels") # get the column integer locations of 'labels' df.iloc[3, col_idx] = 100 df instances labels 0 a 1 2 b 2 4 c 3 5 d 100 More Examples Example of Valid SettingWithCopyWarning import pandas as pd df = pd.DataFrame({'instances': ["a", "b", "c", "d"], 'labels': [1, 2, 3, 4]}, index=[0, 2, 4, 5]) df instances labels 0 a 1 2 b 2 4 c 3 5 d 4 Assume I want to update the labels of first row to 100. df.iloc[0]['labels'] = 100 df It returned the warning and failed to update the value. /usr/local/lib/python3.7/dist-packages/pandas/core/series.py:1056: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy cacher_needs_updating = self._check_is_chained_assignment_possible() instances labels 0 a 1 2 b 2 4 c 3 5 d 4 If all columns have the same datatype (eg: all str, all int), iloc will work and won't return SettingWithCopyWarning. Apparently, pandas handles mixed-type and single-type dataframes differently when it comes to chained assignments. Referring to this post which points to this Github issue. You can also read this post or pandas documentation to gain a better understanding on chained assignment. Example of Additional Row by .loc df instances labels 0 a 1 2 b 2 4 c 3 5 d 4 The row labels in our example are (0, 2, 4, 5), while row integer locations are (0, 1, 2, 3). When you use .loc with a label that does not exist, it will create a new row. df.loc[1, 'labels'] = 100 df instances labels 0 a 1 2 b 2 4 c 3 5 d 4 1 NaN 100
Assigning new values to rows with iloc and loc produce different results. How do I avoid the SettingToCopyWarning same as iloc?
I currently have a DataFrame with a shape of (16280, 13). I want to assign values to specific rows in a single column. I was originally doing so with: for idx, row in enumerate(df.to_dict('records')): instances = row['instances'] labels = row['labels'].split('|') for instance in instances: if instance not in relevant_labels: labels = ['O' if instance in l else l for l in labels] df.iloc[idx]['labels'] = '|'.join(labels) But this kept returning the SettingWithCopyWarning due to the last line. I tried changing this to df.loc[idx, 'labels'] = '|'.join(labels) which doesn't return the warning anymore but caused errors in the latter parts of my code. I noticed that the sizes of the DataFrames were (16280, 13) when using iloc and (16751, 13) when using loc. How can I prevent the warning from printing and get the same functionality as using iloc?
[ "You have multiple things we can improve here.\nFirst, try not as possible to loop over a dataframe but use some tools provided by the pandas package.\nHowever, if not avoidable, looping on dataframe's rows are better done with the .iterrows() methods instead of .to_dict(). Keep in mind, if using iterrows, you should not modify your dataframe while iterating over.\nThen, for the iloc/loc uses. Loc is using the key names (like a dictionary) although iloc is using the key index (like an array). Here idx is an index, not the name of the key, then df.loc[idx, 'labels'] will lead to some errors if the name of the key is not the same as its index. We can easily use both of them like the following : df.iloc[idx, : ].loc['labels'].\nTo illustrate the difference between loc and iloc :\ndf_example = pd.DataFrame({\"a\": [1, 2, 3, 4],\n \"b\": ['a', 'b', 'a', 'b']},\n index=[0, 1, 3, 5])\n\nprint(df_example.loc[0] == df_example.iloc[0]) # 0 is the first key, loc and iloc same results\nprint(df_example.loc[1] == df_example.iloc[1]) # 1 is the second key, loc and iloc same results\ntry:\n print(df_example.loc[2] == df_example.iloc[2]) # 2 is not a key, then it will crash on loc (Keyerror)\nexcept KeyError:\n pass\nprint(df_example.loc[3] == df_example.iloc[3]) # 3 the third key, then iloc and loc will lead different results\ntry:\n print(df_example.loc[5] == df_example.iloc[5]) # 5 is the last key but there is no 6th key so it will crash on iloc (indexerror)\nexcept IndexError:\n pass\n\nRemember that chaining your dataframe will return a copy of your data instead of a slice : doc. That's why both df.iloc[idx]['labels'] and df.iloc[idx, : ].loc['labels'] will trigger the warning. If labels is your ith columns, df.iloc[idx, i ] won't trigger the warning.\n", "Please take note that in your case, SettingWithCopyWarning is a valid warning as the chained assigment is not working as expected. df.iloc[idx] returns a copy of the slice instead of a slice into the original object. Therefore, df.iloc[idx]['labels'] = '|'.join(labels) makes modification on a copy of the row instead of the row of the original df. It seems to happen when the dataframe has mixed datatypes.\nRegarding the different results by .loc and .iloc, it is because your row label is different with row integer locations (probably due to a train test split). When a row label does not exist, .loc cannot find it in existing rows, so it generate new row (.loc gets row (and/or col) with row (and/or col) label, while .iloc gets row (and/or col) with integer locations.)\nPlease find the examples after the solutions.\nSolutions\nBasic idea: You should avoid chained assignments and use the correct labels/integer locations.\nSolution 1: reset_index and .loc\nIf you don't need to keep the row index, a solution is to do reset_index before your code, and use your df.loc[idx, 'labels'] = '|'.join(labels).\nimport pandas as pd\n\ndf = pd.DataFrame({'instances': [\"a\", \"b\", \"c\", \"d\"],\n 'labels': [1, 2, 3, 4]},\n index=[0, 2, 4, 5])\ndf\n\n instances labels\n0 a 1\n2 b 2\n4 c 3\n5 d 4\n\ndf = df.reset_index(drop=True)\ndf\n\n instances labels\n0 a 1\n1 b 2\n2 c 3\n3 d 4\n\nThis will make the dataframe row labels same as the row integer locations. 
So .loc[n, 'labels'] refers to the same thing as .iloc[n, 'labels'].\nSolution 2: Use column integer locations of 'labels' and .iloc\nExample: Update labels of the 4th row to 100\ncol_idx = df.columns.get_loc(\"labels\") # get the column integer locations of 'labels'\ndf.iloc[3, col_idx] = 100\ndf\n\n instances labels\n0 a 1\n2 b 2\n4 c 3\n5 d 100\n\nMore Examples\nExample of Valid SettingWithCopyWarning\nimport pandas as pd\n\ndf = pd.DataFrame({'instances': [\"a\", \"b\", \"c\", \"d\"],\n 'labels': [1, 2, 3, 4]},\n index=[0, 2, 4, 5])\ndf\n\n instances labels\n0 a 1\n2 b 2\n4 c 3\n5 d 4\n\nAssume I want to update the labels of first row to 100.\ndf.iloc[0]['labels'] = 100\ndf\n\nIt returned the warning and failed to update the value.\n/usr/local/lib/python3.7/dist-packages/pandas/core/series.py:1056: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n cacher_needs_updating = self._check_is_chained_assignment_possible()\n\n instances labels\n0 a 1\n2 b 2\n4 c 3\n5 d 4\n\nIf all columns have the same datatype (eg: all str, all int), iloc will work and won't return SettingWithCopyWarning. Apparently, pandas handles mixed-type and single-type dataframes differently when it comes to chained assignments. Referring to this post which points to this Github issue.\nYou can also read this post or pandas documentation to gain a better understanding on chained assignment.\nExample of Additional Row by .loc\ndf\n\n instances labels\n0 a 1\n2 b 2\n4 c 3\n5 d 4\n\nThe row labels in our example are (0, 2, 4, 5), while row integer locations are (0, 1, 2, 3). When you use .loc with a label that does not exist, it will create a new row.\ndf.loc[1, 'labels'] = 100\ndf\n\n instances labels\n0 a 1\n2 b 2\n4 c 3\n5 d 4\n1 NaN 100\n\n" ]
[ 2, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074383862_pandas_python.txt
Q: Save ForeignKey on post in django I am having trouble with saving a fk in Infringer table on post. I am trying to save the customer ID when I add a record. For troubleshoot purposes I added a few print lines and this the out put. As you can see below the correct customer ID is present but the customer is None so its not being saved into the record. The other fields save fine. PLEASE HELP! I am a beginner. customer in forms.py is 2 forms.py instance was saved with the customer None customer in views.py is 2 Successfully saved the infringer in views.py with its customer None views.py @login_required(login_url='login') def createInfringer(request): customer=request.user.customer.id form = InfringerForm(customer=customer) if request.method == 'POST': form = InfringerForm(customer, request.POST) if form.is_valid(): saved_instance = form.save(customer) print (f'customer in views.py is {customer}') print (f'Successfully saved the infringer in views.py with its customer {saved_instance.customer}') return redirect('infringer-list') context ={'form': form} return render (request, 'base/infringement_form.html', context) forms.py class InfringerForm(ModelForm): class Meta: model = Infringer fields = ['name', 'brand_name','status'] def __init__(self, customer, *args, **kwargs): super(InfringerForm,self).__init__(*args, **kwargs) self.fields['status'].queryset = Status.objects.filter(customer=customer) def save(self, customer, *args, **kwargs): instance = super(InfringerForm, self).save( *args, **kwargs) if customer: print (f'customer in forms.py is {customer}') self.customer = customer instance.save() print (f' forms.py instance was saved with the customer {instance.customer}') return instance models.py class Infringer (models.Model): name = models.CharField(max_length=200) brand_name = models.CharField(max_length=200, null=True) updated = models.DateTimeField(auto_now=True) created = models.DateTimeField(auto_now_add=True) status = models.ForeignKey(Status, on_delete=models.SET_NULL,null=True) customer = models.ForeignKey(Customer, on_delete=models.SET_NULL,null=True) class Meta: ordering = ['-updated', '-created'] def __str__(self): return self.name A: It might help to simplify your form, for example with: class InfringerForm(ModelForm): class Meta: model = Infringer fields = ['name', 'brand_name', 'status'] def __init__(self, customer, *args, **kwargs): super().__init__(*args, **kwargs) self.customer = customer self.fields['status'].queryset = Status.objects.filter(customer=customer) def save(self, *args, **kwargs): self.instance.customer = self.customer return super().save( *args, **kwargs) With that done, we can also simplify the view logic to: @login_required(login_url='login') def createInfringer(request): customer = request.user.customer form = InfringerForm(customer=customer) if request.method == 'POST': form = InfringerForm(customer, request.POST, request.FILES) if form.is_valid(): saved_instance = form.save() print (f'customer in views.py is {customer}') print (f'Successfully saved the infringer in views.py with its customer {saved_instance.customer}') return redirect('infringer-list') return render (request, 'base/infringement_form.html', {'form': form}) So we use the customer, not its primary key, and we do not have to pass the customer in the .save() method anymore.
Save ForeignKey on post in django
I am having trouble with saving a fk in Infringer table on post. I am trying to save the customer ID when I add a record. For troubleshoot purposes I added a few print lines and this the out put. As you can see below the correct customer ID is present but the customer is None so its not being saved into the record. The other fields save fine. PLEASE HELP! I am a beginner. customer in forms.py is 2 forms.py instance was saved with the customer None customer in views.py is 2 Successfully saved the infringer in views.py with its customer None views.py @login_required(login_url='login') def createInfringer(request): customer=request.user.customer.id form = InfringerForm(customer=customer) if request.method == 'POST': form = InfringerForm(customer, request.POST) if form.is_valid(): saved_instance = form.save(customer) print (f'customer in views.py is {customer}') print (f'Successfully saved the infringer in views.py with its customer {saved_instance.customer}') return redirect('infringer-list') context ={'form': form} return render (request, 'base/infringement_form.html', context) forms.py class InfringerForm(ModelForm): class Meta: model = Infringer fields = ['name', 'brand_name','status'] def __init__(self, customer, *args, **kwargs): super(InfringerForm,self).__init__(*args, **kwargs) self.fields['status'].queryset = Status.objects.filter(customer=customer) def save(self, customer, *args, **kwargs): instance = super(InfringerForm, self).save( *args, **kwargs) if customer: print (f'customer in forms.py is {customer}') self.customer = customer instance.save() print (f' forms.py instance was saved with the customer {instance.customer}') return instance models.py class Infringer (models.Model): name = models.CharField(max_length=200) brand_name = models.CharField(max_length=200, null=True) updated = models.DateTimeField(auto_now=True) created = models.DateTimeField(auto_now_add=True) status = models.ForeignKey(Status, on_delete=models.SET_NULL,null=True) customer = models.ForeignKey(Customer, on_delete=models.SET_NULL,null=True) class Meta: ordering = ['-updated', '-created'] def __str__(self): return self.name
[ "It might help to simplify your form, for example with:\nclass InfringerForm(ModelForm):\n class Meta:\n model = Infringer\n fields = ['name', 'brand_name', 'status'] \n\n def __init__(self, customer, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.customer = customer\n self.fields['status'].queryset = Status.objects.filter(customer=customer)\n\n def save(self, *args, **kwargs):\n self.instance.customer = self.customer\n return super().save( *args, **kwargs)\n\nWith that done, we can also simplify the view logic to:\n@login_required(login_url='login')\ndef createInfringer(request):\n customer = request.user.customer\n form = InfringerForm(customer=customer)\n if request.method == 'POST':\n form = InfringerForm(customer, request.POST, request.FILES)\n if form.is_valid(): \n saved_instance = form.save()\n print (f'customer in views.py is {customer}')\n print (f'Successfully saved the infringer in views.py with its customer {saved_instance.customer}')\n return redirect('infringer-list')\n\n return render (request, 'base/infringement_form.html', {'form': form})\n\nSo we use the customer, not its primary key, and we do not have to pass the customer in the .save() method anymore.\n" ]
[ 2 ]
[]
[]
[ "django", "foreign_keys", "forms", "python" ]
stackoverflow_0074480931_django_foreign_keys_forms_python.txt
Q: How to read KiCad page settings values from a python BOM generation script I am using KiCad V6 and have modified the bill of materials generation script bom_csv_grouped_by_value.py to produce BOM's only containing the information I am interested in, and formatted how I like. These currently have the filename matching the KiCad project name, e.g. for a project called "valve-tester" it would be valve-tester.xlsx. I would like to be able to read the "Title" and "Revision" fields from the schematic Page Settings to name the BOM something more meaningful, e.g. BOM for Valve Tester revC 17-11-22.xlsx. "Title" and "Revision" fields Does anyone know how I can extract this information from a python script, or somehow automate passing it in as an argument? Any help would be greatly appreciated! So far I am thinking one option would be to have the user manually enter the desired filename each time you run the script, although this is sub-optimal and I am aiming to automate it. A: Turns out you can just read the .kicad_sch schematic file as a text file and the information is all there, e.g with open ("valve-tester.kicad_sch", "r") as myfile: data = myfile.read().splitlines() title_line = data[7] revision_line = data[9]
How to read KiCad page settings values from a python BOM generation script
I am using KiCad V6 and have modified the bill of materials generation script bom_csv_grouped_by_value.py to produce BOM's only containing the information I am interested in, and formatted how I like. These currently have the filename matching the KiCad project name, e.g. for a project called "valve-tester" it would be valve-tester.xlsx. I would like to be able to read the "Title" and "Revision" fields from the schematic Page Settings to name the BOM something more meaningful, e.g. BOM for Valve Tester revC 17-11-22.xlsx. "Title" and "Revision" fields Does anyone know how I can extract this information from a python script, or somehow automate passing it in as an argument? Any help would be greatly appreciated! So far I am thinking one option would be to have the user manually enter the desired filename each time you run the script, although this is sub-optimal and I am aiming to automate it.
[ "Turns out you can just read the .kicad_sch schematic file as a text file and the information is all there, e.g\nwith open (\"valve-tester.kicad_sch\", \"r\") as myfile:\n data = myfile.read().splitlines()\n title_line = data[7]\n revision_line = data[9]\n\n" ]
[ 0 ]
[]
[]
[ "kicad", "python" ]
stackoverflow_0074469833_kicad_python.txt
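A sketch of how the title and revision could be pulled out by field name instead of by fixed line number, and then used to build the BOM filename from the question. The field names title and rev match what KiCad 6 writes inside the schematic's title_block, but treat that as an assumption and check against your own .kicad_sch file; the schematic filename and output pattern are placeholders.

import re
from datetime import date

def title_block_field(text, field):
    # e.g. matches (title "Valve Tester") or (rev "C") anywhere in the file
    m = re.search(r'\(' + field + r'\s+"([^"]*)"\)', text)
    return m.group(1) if m else ""

with open("valve-tester.kicad_sch", "r") as f:
    sch = f.read()

title = title_block_field(sch, "title") or "untitled"
rev = title_block_field(sch, "rev")
today = date.today().strftime("%d-%m-%y")

bom_name = f"BOM for {title} rev{rev} {today}.xlsx"
print(bom_name)   # e.g. BOM for Valve Tester revC 17-11-22.xlsx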
Q: Python Authlib: 'View' object has no attribute 'get_absolute_uri' I am adding OAuth 2.0 to a new Django-DRF API via Auth0 using Authlib. Everything has always worked fine using a function-based views however when I try to apply the authlib ResourceProtector decorator to a class-based view it keeps returning an error 'ViewSet' object has no attribute 'build_absolute_uri'. How can I use the Authlib resource protector decorator to add OAuth to a class-based view? Views.py from api.permissions import auth0_validator from authlib.integrations.django_oauth2 import ResourceProtector from django.http import JsonResponse require_oauth = ResourceProtector() validator = auth0_validator.Auth0JWTBearerTokenValidator( os.environ['AUTH0_DOMAIN'], os.environ['AUTH0_IDENTIFIER'] ) require_oauth.register_token_validator(validator) #Resource protector decorator works here @require_oauth() def index(request): return Response('Access granted') class Users(ModelViewSet): #Resource protector decorator does not work and invokes error below @require_oauth() def list(self, request): return Response('access granted') stack trace Internal Server Error: /v2/statistics Traceback (most recent call last): File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/django/core/handlers/exception.py", line 55, in inner response = get_response(request) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/django/core/handlers/base.py", line 197, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/sentry_sdk/integrations/django/views.py", line 68, in sentry_wrapped_callback return callback(request, *args, **kwargs) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view return view_func(*args, **kwargs) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/rest_framework/viewsets.py", line 125, in view return self.dispatch(request, *args, **kwargs) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/rest_framework/views.py", line 509, in dispatch response = self.handle_exception(exc) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/rest_framework/views.py", line 469, in handle_exception self.raise_uncaught_exception(exc) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception raise exc File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/rest_framework/views.py", line 506, in dispatch response = handler(request, *args, **kwargs) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/authlib/integrations/django_oauth2/resource_protector.py", line 39, in decorated token = self.acquire_token(request, scopes) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/authlib/integrations/django_oauth2/resource_protector.py", line 25, in acquire_token url = request.build_absolute_uri() AttributeError: 'StatisticsViewSet' object has no attribute 'build_absolute_uri' Internal Server Error: /v2/statistics Traceback (most recent call last): File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/django/core/handlers/exception.py", line 55, in inner response = get_response(request) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/django/core/handlers/base.py", line 197, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File 
"/Users/td/Desktop/test-api/lib/python3.8/site-packages/sentry_sdk/integrations/django/views.py", line 68, in sentry_wrapped_callback return callback(request, *args, **kwargs) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view return view_func(*args, **kwargs) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/rest_framework/viewsets.py", line 125, in view return self.dispatch(request, *args, **kwargs) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/rest_framework/views.py", line 509, in dispatch response = self.handle_exception(exc) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/rest_framework/views.py", line 469, in handle_exception self.raise_uncaught_exception(exc) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception raise exc File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/rest_framework/views.py", line 506, in dispatch response = handler(request, *args, **kwargs) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/authlib/integrations/django_oauth2/resource_protector.py", line 39, in decorated token = self.acquire_token(request, scopes) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/authlib/integrations/django_oauth2/resource_protector.py", line 25, in acquire_token url = request.build_absolute_uri() AttributeError: 'StatisticsViewSet' object has no attribute 'build_absolute_uri' A: After digging through Authlib, it turns out its Django integration doesn't support class based views. This is because the first parameter in the ResourceProtectors decorator function, will be the view object instead of the request since it's being called on a class method. To fix this I simply extended the ResourceProtector class and added an extra 'view' parameter so that it can be applied to class methods. class CustomResourceProtector(ResourceProtector): def __call__(self, scopes=None, optional=False): def wrapper(f): @functools.wraps(f) def decorated(view, request, *args, **kwargs): #Added view as the first argument so it works with class based view methods try: token = self.acquire_token(request, scopes) request.oauth_token = token except MissingAuthorizationError as error: if optional: request.oauth_token = None return f(request, *args, **kwargs) return return_error_response(error) except OAuth2Error as error: return return_error_response(error) return f(request, *args, **kwargs) return decorated return wrapper To make it even more python and prevent having to decorate every single method. I turned the decorator into a DRF permission class by further extending the ResourceProtector class to make it return a boolean instead of a decorator permissions.py from auth0 import CustomResourceProtector class OAuthPermission(permissions.BasePermission): """ Ensures request has a valid OAuth token to access the endpoint. """ message = 'Permission denied, invalid access token.' 
def has_permission(self, request, view): oauth_protector = CustomResourceProtector() validator = Auth0JWTBearerTokenValidator( os.environ['AUTH0_DOMAIN'], os.environ['AUTH0_IDENTIFIER'] ) oauth_protector.register_token_validator(validator) if oauth_protector.is_token_valid(request): return True return False auth0.py import os import json import functools from django.http import JsonResponse from rest_framework import permissions from authlib.integrations.django_oauth2 import ResourceProtector from authlib.oauth2.rfc6749.errors import * from urllib.request import urlopen from authlib.oauth2.rfc7523 import JWTBearerTokenValidator from authlib.jose.rfc7517.jwk import JsonWebKey class CustomResourceProtector(ResourceProtector): def is_token_valid(self, request): try: scopes = None token = self.acquire_token(request, scopes) #request.oauth_token = token return token except Exception as e: return False #Auth0 Authlib token validator - validates Auth0 access tokens class Auth0JWTBearerTokenValidator(JWTBearerTokenValidator): def __init__(self, domain, audience): issuer = f"https://{domain}/" jsonurl = urlopen(f"{issuer}.well-known/jwks.json") public_key = JsonWebKey.import_key_set( json.loads(jsonurl.read()) ) super(Auth0JWTBearerTokenValidator, self).__init__( public_key ) self.claims_options = { "exp": {"essential": True}, "aud": {"essential": True, "value": audience}, "iss": {"essential": True, "value": issuer}, }
Python Authlib: 'View' object has no attribute 'get_absolute_uri'
I am adding OAuth 2.0 to a new Django-DRF API via Auth0 using Authlib. Everything has always worked fine using a function-based views however when I try to apply the authlib ResourceProtector decorator to a class-based view it keeps returning an error 'ViewSet' object has no attribute 'build_absolute_uri'. How can I use the Authlib resource protector decorator to add OAuth to a class-based view? Views.py from api.permissions import auth0_validator from authlib.integrations.django_oauth2 import ResourceProtector from django.http import JsonResponse require_oauth = ResourceProtector() validator = auth0_validator.Auth0JWTBearerTokenValidator( os.environ['AUTH0_DOMAIN'], os.environ['AUTH0_IDENTIFIER'] ) require_oauth.register_token_validator(validator) #Resource protector decorator works here @require_oauth() def index(request): return Response('Access granted') class Users(ModelViewSet): #Resource protector decorator does not work and invokes error below @require_oauth() def list(self, request): return Response('access granted') stack trace Internal Server Error: /v2/statistics Traceback (most recent call last): File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/django/core/handlers/exception.py", line 55, in inner response = get_response(request) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/django/core/handlers/base.py", line 197, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/sentry_sdk/integrations/django/views.py", line 68, in sentry_wrapped_callback return callback(request, *args, **kwargs) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view return view_func(*args, **kwargs) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/rest_framework/viewsets.py", line 125, in view return self.dispatch(request, *args, **kwargs) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/rest_framework/views.py", line 509, in dispatch response = self.handle_exception(exc) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/rest_framework/views.py", line 469, in handle_exception self.raise_uncaught_exception(exc) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception raise exc File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/rest_framework/views.py", line 506, in dispatch response = handler(request, *args, **kwargs) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/authlib/integrations/django_oauth2/resource_protector.py", line 39, in decorated token = self.acquire_token(request, scopes) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/authlib/integrations/django_oauth2/resource_protector.py", line 25, in acquire_token url = request.build_absolute_uri() AttributeError: 'StatisticsViewSet' object has no attribute 'build_absolute_uri' Internal Server Error: /v2/statistics Traceback (most recent call last): File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/django/core/handlers/exception.py", line 55, in inner response = get_response(request) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/django/core/handlers/base.py", line 197, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/sentry_sdk/integrations/django/views.py", line 68, in sentry_wrapped_callback return 
callback(request, *args, **kwargs) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view return view_func(*args, **kwargs) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/rest_framework/viewsets.py", line 125, in view return self.dispatch(request, *args, **kwargs) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/rest_framework/views.py", line 509, in dispatch response = self.handle_exception(exc) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/rest_framework/views.py", line 469, in handle_exception self.raise_uncaught_exception(exc) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception raise exc File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/rest_framework/views.py", line 506, in dispatch response = handler(request, *args, **kwargs) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/authlib/integrations/django_oauth2/resource_protector.py", line 39, in decorated token = self.acquire_token(request, scopes) File "/Users/td/Desktop/test-api/lib/python3.8/site-packages/authlib/integrations/django_oauth2/resource_protector.py", line 25, in acquire_token url = request.build_absolute_uri() AttributeError: 'StatisticsViewSet' object has no attribute 'build_absolute_uri'
[ "After digging through Authlib, it turns out its Django integration doesn't support class based views. This is because the first parameter in the ResourceProtectors decorator function, will be the view object instead of the request since it's being called on a class method. To fix this I simply extended the ResourceProtector class\nand added an extra 'view' parameter so that it can be applied to class methods.\nclass CustomResourceProtector(ResourceProtector):\n\n def __call__(self, scopes=None, optional=False):\n def wrapper(f):\n @functools.wraps(f)\n def decorated(view, request, *args, **kwargs): #Added view as the first argument so it works with class based view methods\n try:\n token = self.acquire_token(request, scopes)\n request.oauth_token = token\n except MissingAuthorizationError as error:\n if optional:\n request.oauth_token = None\n return f(request, *args, **kwargs)\n return return_error_response(error)\n except OAuth2Error as error:\n return return_error_response(error)\n return f(request, *args, **kwargs)\n return decorated\n return wrapper\n\nTo make it even more python and prevent having to decorate every single method. I turned the decorator into a DRF permission class by further extending the ResourceProtector class to make it return a boolean instead of a decorator\npermissions.py\nfrom auth0 import CustomResourceProtector\n\nclass OAuthPermission(permissions.BasePermission):\n \"\"\"\n Ensures request has a valid OAuth token to access the endpoint.\n \"\"\"\n message = 'Permission denied, invalid access token.'\n\n def has_permission(self, request, view):\n oauth_protector = CustomResourceProtector()\n validator = Auth0JWTBearerTokenValidator(\n os.environ['AUTH0_DOMAIN'],\n os.environ['AUTH0_IDENTIFIER']\n )\n oauth_protector.register_token_validator(validator)\n if oauth_protector.is_token_valid(request):\n return True\n\n return False\n\nauth0.py\nimport os\nimport json\nimport functools\nfrom django.http import JsonResponse\nfrom rest_framework import permissions\nfrom authlib.integrations.django_oauth2 import ResourceProtector\nfrom authlib.oauth2.rfc6749.errors import *\nfrom urllib.request import urlopen\nfrom authlib.oauth2.rfc7523 import JWTBearerTokenValidator\nfrom authlib.jose.rfc7517.jwk import JsonWebKey\n\nclass CustomResourceProtector(ResourceProtector):\n\n def is_token_valid(self, request):\n try:\n scopes = None\n token = self.acquire_token(request, scopes)\n #request.oauth_token = token\n return token\n except Exception as e:\n return False\n\n#Auth0 Authlib token validator - validates Auth0 access tokens \nclass Auth0JWTBearerTokenValidator(JWTBearerTokenValidator):\n def __init__(self, domain, audience):\n issuer = f\"https://{domain}/\"\n jsonurl = urlopen(f\"{issuer}.well-known/jwks.json\")\n public_key = JsonWebKey.import_key_set(\n json.loads(jsonurl.read())\n )\n super(Auth0JWTBearerTokenValidator, self).__init__(\n public_key\n )\n self.claims_options = {\n \"exp\": {\"essential\": True},\n \"aud\": {\"essential\": True, \"value\": audience},\n \"iss\": {\"essential\": True, \"value\": issuer},\n }\n\n" ]
[ 0 ]
[]
[]
[ "authlib", "django", "python", "python_3.x" ]
stackoverflow_0074466731_authlib_django_python_python_3.x.txt
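A short usage sketch showing how the OAuthPermission class from the answer above would be attached to a class-based view; the import path and the plain ViewSet base class are assumptions made to keep the example self-contained.

from rest_framework.response import Response
from rest_framework.viewsets import ViewSet

from permissions import OAuthPermission   # the class defined in permissions.py above

class Users(ViewSet):
    # DRF evaluates permission_classes for every request before calling the handler,
    # so no per-method decorator is needed.
    permission_classes = [OAuthPermission]

    def list(self, request):
        return Response('access granted')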
Q: Line split is not functioning as intended I am trying to get this code to split one at a time, but it is not functioning as expected: for line in text_line: one_line = line.split(' ',1) if len(one_line) > 1: acro = one_line[0].strip() meaning = one_line[1].strip() if acro in acronyms_dict: acronyms_dict[acro] = acronyms_dict[acro] + ', ' + meaning else: acronyms_dict[acro] = meaning A: Remove the ' ' from the str.split. The file is using tabs to delimit the acronyms: import requests data_site = requests.get( "https://raw.githubusercontent.com/priscian/nlp/master/OpenNLP/models/coref/acronyms.txt" ) text_line = data_site.text.split("\n") acronyms_dict = {} for line in text_line: one_line = line.split(maxsplit=1) # <-- remove the ' ' if len(one_line) > 1: acro = one_line[0].strip() meaning = one_line[1].strip() if acro in acronyms_dict: acronyms_dict[acro] = acronyms_dict[acro] + ", " + meaning else: acronyms_dict[acro] = meaning print(acronyms_dict) Prints: { '24KHGE': '24 Karat Heavy Gold Electroplate', '2B1Q': '2 Binary 1 Quaternary', '2D': '2-Dimensional', ...
Line split is not functioning as intended
I am trying to get this code to split one at a time, but it is not functioning as expected: for line in text_line: one_line = line.split(' ',1) if len(one_line) > 1: acro = one_line[0].strip() meaning = one_line[1].strip() if acro in acronyms_dict: acronyms_dict[acro] = acronyms_dict[acro] + ', ' + meaning else: acronyms_dict[acro] = meaning
[ "Remove the ' ' from the str.split. The file is using tabs to delimit the acronyms:\nimport requests\n\ndata_site = requests.get(\n \"https://raw.githubusercontent.com/priscian/nlp/master/OpenNLP/models/coref/acronyms.txt\"\n)\ntext_line = data_site.text.split(\"\\n\")\nacronyms_dict = {}\n\nfor line in text_line:\n one_line = line.split(maxsplit=1) # <-- remove the ' '\n if len(one_line) > 1:\n acro = one_line[0].strip()\n meaning = one_line[1].strip()\n\n if acro in acronyms_dict:\n acronyms_dict[acro] = acronyms_dict[acro] + \", \" + meaning\n else:\n acronyms_dict[acro] = meaning\n\nprint(acronyms_dict)\n\nPrints:\n{\n '24KHGE': '24 Karat Heavy Gold Electroplate', \n '2B1Q': '2 Binary 1 Quaternary', \n '2D': '2-Dimensional', \n\n...\n\n" ]
[ 0 ]
[]
[]
[ "dictionary", "for_loop", "if_statement", "python", "python_requests" ]
stackoverflow_0074481702_dictionary_for_loop_if_statement_python_python_requests.txt
Q: Python Tkinter sync canvas image loading I have the following script which creates 2 windows (Main, Image). The main window contains a button called Write and the image window contains a canvas with no image in it. When the write button is clicked it moves a "motor" connected to my raspberry pi and then updates the image on the canavas. This process is repeated twice. Unfortunately this is not the case. The motor moves twice before the image is refreshed on the canvas. How do I make it work synchronously? import tkinter as tk from tkinter import * from PIL import Image, ImageTk # root window root = tk.Tk() root.geometry("500x500") root.title("Main window") images = [ImageTk.PhotoImage(Image.open("1.png")), ImageTk.PhotoImage(Image.open("2.png"))] def move_motor(): motor.init() motor.SetMicroStep('hardward','fullstep') motor.TurnStep("forward", steps=50, stepdelay = 1000) motor.Stop() ctr = 0 def update_image(): global ctr ctr += 1 print(f"Loading: {ctr} image") canvas.itemconfig(image_container, image = images[ctr]) # --Problematic function-- # def write_operations(): for i in range(1, 3): move_motor() # operation 1 update_image() # operation 2 # image window containing a canvas imageWin = Toplevel(root) imageWin.title("Image window") imageWin.geometry("768x768") canvas = Canvas(imageWin, width=768, height=768) image_container = canvas.create_image(0, 0, anchor = NW, image = None) canvas.pack() btn_write = Button(root, text ="Write", command = write_operations).place(x = 130, y = 280) root.mainloop() A: I simulated your program by putting a slight delay on the motor firing. I replaced the for loop with after, everything works, if the after delay is greater than sleep time. import time from tkinter import * # root window root = tk.Tk() root.geometry("500x500") root.title("Main window") def move_motor(): time.sleep(0.5) ctr = 0 txt = None def update_image(fr): global ctr, txt ctr += 1 print(f"Loading: {ctr} image") if txt: canvas.delete(txt) txt = canvas.create_text(100, 100, text=ctr, justify=CENTER, font="Verdana 34") if ctr == 2: root.after_cancel(fr) ctr = 0 def write_operations(): fr = root.after(1000, write_operations) move_motor() # operation 1 update_image(fr) # operation 2 # image window containing a canvas imageWin = Toplevel(root) imageWin.title("Image window") imageWin.geometry("768x768") canvas = Canvas(imageWin, width=768, height=768) image_container = canvas.create_image(0, 0, anchor=NW, image=None) canvas.pack() btn_write = Button(root, text="Write", command=write_operations).place(x=130, y=280) root.mainloop()
Python Tkinter sync canvas image loading
I have the following script which creates 2 windows (Main, Image). The main window contains a button called Write and the image window contains a canvas with no image in it. When the write button is clicked it moves a "motor" connected to my raspberry pi and then updates the image on the canavas. This process is repeated twice. Unfortunately this is not the case. The motor moves twice before the image is refreshed on the canvas. How do I make it work synchronously? import tkinter as tk from tkinter import * from PIL import Image, ImageTk # root window root = tk.Tk() root.geometry("500x500") root.title("Main window") images = [ImageTk.PhotoImage(Image.open("1.png")), ImageTk.PhotoImage(Image.open("2.png"))] def move_motor(): motor.init() motor.SetMicroStep('hardward','fullstep') motor.TurnStep("forward", steps=50, stepdelay = 1000) motor.Stop() ctr = 0 def update_image(): global ctr ctr += 1 print(f"Loading: {ctr} image") canvas.itemconfig(image_container, image = images[ctr]) # --Problematic function-- # def write_operations(): for i in range(1, 3): move_motor() # operation 1 update_image() # operation 2 # image window containing a canvas imageWin = Toplevel(root) imageWin.title("Image window") imageWin.geometry("768x768") canvas = Canvas(imageWin, width=768, height=768) image_container = canvas.create_image(0, 0, anchor = NW, image = None) canvas.pack() btn_write = Button(root, text ="Write", command = write_operations).place(x = 130, y = 280) root.mainloop()
[ "I simulated your program by putting a slight delay on the motor firing. I replaced the for loop with after, everything works, if the after delay is greater than sleep time.\nimport time\nfrom tkinter import *\n\n\n# root window\nroot = tk.Tk()\nroot.geometry(\"500x500\")\nroot.title(\"Main window\")\n\n\ndef move_motor():\n time.sleep(0.5)\n\n\nctr = 0\ntxt = None\n\n\ndef update_image(fr):\n global ctr, txt\n ctr += 1\n print(f\"Loading: {ctr} image\")\n if txt:\n canvas.delete(txt)\n txt = canvas.create_text(100, 100, text=ctr, justify=CENTER, font=\"Verdana 34\")\n if ctr == 2:\n root.after_cancel(fr)\n ctr = 0\n\ndef write_operations():\n fr = root.after(1000, write_operations)\n move_motor() # operation 1\n update_image(fr) # operation 2\n\n\n\n# image window containing a canvas\nimageWin = Toplevel(root)\nimageWin.title(\"Image window\")\nimageWin.geometry(\"768x768\")\ncanvas = Canvas(imageWin, width=768, height=768)\nimage_container = canvas.create_image(0, 0, anchor=NW, image=None)\ncanvas.pack()\n\nbtn_write = Button(root, text=\"Write\", command=write_operations).place(x=130, y=280)\n\nroot.mainloop()\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "tkinter", "tkinter_canvas" ]
stackoverflow_0074479091_python_python_3.x_tkinter_tkinter_canvas.txt
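A note on the Tkinter question above: the root cause is that the for loop in write_operations never returns control to the Tk event loop, so the canvas is only repainted after both motor moves have finished. Besides rescheduling with after as in the answer, a blunter fix is to force a repaint inside the loop. The sketch below is a drop-in replacement for write_operations only and assumes the rest of the question's script (root, canvas, image_container, images, move_motor) is unchanged; it also indexes images from 0, which avoids the off-by-one in the original update_image.

def write_operations():
    for i in range(2):
        move_motor()                                          # operation 1: blocking motor call
        canvas.itemconfig(image_container, image=images[i])   # operation 2: swap the picture
        canvas.update_idletasks()   # flush pending redraws now, so the new image is
                                    # visible before the next blocking motor move starts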
Q: Is there a way to split a string into length n but also accounting for its permutations? permutations might not be exactly the right word. say x = "123456". I want my code to output ['12','23','34','45','56']. Right now, I know how to split it into ['12','34','56'] A: You just need a range that increments by 1 def split_into(values, n): return [values[i:i + n] for i in range(len(values) - n + 1)] x = "123456789" print(split_into(x, 2)) # ['12', '23', '34', '45', '56', '67', '78', '89'] print(split_into(x, 3)) # ['123', '234', '345', '456', '567', '678', '789'] print(split_into(x, 4)) # ['1234', '2345', '3456', '4567', '5678', '6789'] print(split_into(x, 5)) # ['12345', '23456', '34567', '45678', '56789'] A: In Python 3.10, it looks like itertools.pairwise() will do what you want: >>> from itertools import pairwise >>> print(*map(''.join, pairwise("123456"))) The above is just a simulation as I don't have 3.10 yet ;-) Until then, the documenation for pairwise() provides an alternative: from itertools import tee def pairwise(iterable): a, b = tee(iterable) next(b, None) return zip(a, b) print(*map(''.join, pairwise("123456")))
Is there a way to split a string into length n but also accounting for its permutations?
permutations might not be exactly the right word. say x = "123456". I want my code to output ['12','23','34','45','56']. Right now, I know how to split it into ['12','34','56']
[ "You just need a range that increments by 1\ndef split_into(values, n):\n return [values[i:i + n] for i in range(len(values) - n + 1)]\n\n\nx = \"123456789\"\nprint(split_into(x, 2)) # ['12', '23', '34', '45', '56', '67', '78', '89']\nprint(split_into(x, 3)) # ['123', '234', '345', '456', '567', '678', '789']\nprint(split_into(x, 4)) # ['1234', '2345', '3456', '4567', '5678', '6789']\nprint(split_into(x, 5)) # ['12345', '23456', '34567', '45678', '56789']\n\n", "In Python 3.10, it looks like itertools.pairwise() will do what you want:\n>>> from itertools import pairwise\n>>> print(*map(''.join, pairwise(\"123456\")))\n\nThe above is just a simulation as I don't have 3.10 yet ;-) Until then, the documenation for pairwise() provides an alternative:\nfrom itertools import tee\n\ndef pairwise(iterable):\n a, b = tee(iterable)\n next(b, None)\n return zip(a, b)\n\nprint(*map(''.join, pairwise(\"123456\")))\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "slice" ]
stackoverflow_0074481748_python_slice.txt
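If the input is not a plain string but an arbitrary iterable, the list-comprehension answer above cannot slice it directly. A lazy sliding-window generator (essentially the recipe from the itertools documentation) yields the same overlapping windows for any window length; this sketch assumes string elements, since the question joins characters back together.

from collections import deque
from itertools import islice

def sliding_window(iterable, n):
    # yields overlapping windows of length n: '123456', 2 -> '12', '23', ...
    it = iter(iterable)
    window = deque(islice(it, n), maxlen=n)
    if len(window) == n:
        yield ''.join(window)
    for ch in it:
        window.append(ch)
        yield ''.join(window)

print(list(sliding_window("123456", 2)))   # ['12', '23', '34', '45', '56']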
Q: Testing multiple conditions with a Python if statement I am trying to get into coding and this is kinda part of the assignments that i need to do to get into the classes. In this task, you will implement a check using the if… else structure you learned earlier.You are required to create a program that uses this conditional. At your school, the front gate is locked at night for safety. You often need to study late on campus. There is sometimes a night guard on duty who can let you in. You want to be able to check if you can access the school campus at a particular time. The current hour of the day is given in the range 0, 1, 2 … 23 and the guard’s presence is indicated by with a True/False boolean. If the hour is from 7 to 17, you do not need the guard to be there as the gate is open If the hour is before 7 or after 17, the guard must be there to let you in Using predefined variables for the hour of the day and whether the guard is present or not, write an if statement to print out whether you can get in. Example start: hour = 4 guard = True Example output: 'You're in!' Make use of the if statement structure to implement the program. One of my ideas was: Time = int(input("Time of getting in: ")) open = 7 closed = 17 if Time > open and Time < closed: print("You can not enter") A: cap O will solve Time = int(input("Time of getting in: ")) Open = 7 closed = 17 if Time > Open and Time < closed: print("You can not enter")
Testing multiple conditions with a Python if statement
I am trying to get into coding and this is kinda part of the assignments that i need to do to get into the classes. In this task, you will implement a check using the if… else structure you learned earlier.You are required to create a program that uses this conditional. At your school, the front gate is locked at night for safety. You often need to study late on campus. There is sometimes a night guard on duty who can let you in. You want to be able to check if you can access the school campus at a particular time. The current hour of the day is given in the range 0, 1, 2 … 23 and the guard’s presence is indicated by with a True/False boolean. If the hour is from 7 to 17, you do not need the guard to be there as the gate is open If the hour is before 7 or after 17, the guard must be there to let you in Using predefined variables for the hour of the day and whether the guard is present or not, write an if statement to print out whether you can get in. Example start: hour = 4 guard = True Example output: 'You're in!' Make use of the if statement structure to implement the program. One of my ideas was: Time = int(input("Time of getting in: ")) open = 7 closed = 17 if Time > open and Time < closed: print("You can not enter")
[ "cap O will solve\nTime = int(input(\"Time of getting in: \"))\nOpen = 7\nclosed = 17\nif Time > Open and Time < closed:\n print(\"You can not enter\")\n\n" ]
[ 1 ]
[ "It's not too difficult, you can do a simple function like that :\ndef go_to_study(hour, start_day = 7, end_day = 17):\n if (hour >= start_day and hour <= end_day):\n return True\n else:\n return False\n\n // on one line, uncomment if you want.\n // return (hour >= start_day and hour <= end_day)\n\n", "hour=int(input(\"Enter the Hour\"))\n\nif hour>=7 and hour<=17:\n print(\"You can Go\") \nelse:\n print(\"You need Guard to let you in\")\n\n" ]
[ -1, -1 ]
[ "if_statement", "python" ]
stackoverflow_0073936772_if_statement_python.txt
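The accepted fix above only renames the variable; the assignment as quoted also wants the guard flag taken into account. A minimal sketch of the full check, using the predefined variables from the assignment's example (hour and guard are placeholders you would set yourself):

hour = 4
guard = True

if 7 <= hour <= 17:
    print("You're in!")           # gate is open during the day, no guard needed
elif guard:
    print("You're in!")           # outside 7-17, but the guard can let you in
else:
    print("You can not enter")    # locked gate and nobody on duty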
Q: Python Script To Loop Through All of the Switches and Interfaces and Pull Info and Output into a CSV? I am pretty new to Python and such so please bear with me. I am tasked with creating a python script that loops through all of the switches and all of the interfaces and pulls the interface stats and outputs them into a CSV. Switch, interface, state, giants, crc, input errors, output errors, input packets, input bytes, output packets, output bytes? I know as of right now there is nothing in script for the giants and crc and such, which I need to figure out how to do, but this is just of where I'm at now trying to get this script to run as is. I think I have found a script similar online, but am struggling with the troubleshooting. I get several errors, such as: "AttributeError" 'NoneType' object has no attribute 'group'" @ Line "IndexError: list index out of range" "NameError: name 'regex_memory' is not defined. Code ---> import netmiko from netmiko import ConnectHandler from netmiko.exceptions import NetMikoTimeoutException from netmiko.exceptions import SSHException from netmiko.exceptions import AuthenticationException import re #here is list of infrastructure switch ip addresses ip_list = ['xx.xx.xx.xx', 'xx.xx.xx.xx'.....] #list where informations will be stored devices = [] #clearing the old data from the CSV file and writing the headers f = open("IOS.csv", "w+") f.write("IP Address, Hostname, Uptime, Current_Version, Current_Image, Serial_Number, Device_Model, Device_Memory") f.write("\n") f.close() #clearing the old data from the CSV file and writing the headers f = open("login_issues.csv", "w+") f.write("IP Address, Status") f.write("\n") f.close() #loop all ip addresses in ip_list for ip in ip_list: cisco = { 'device_type': 'cisco_ios', 'ip': ip, 'username': 'xxx', #ssh username 'password': 'xxx', #ssh password 'secret': 'xxx', #ssh_enable_password 'ssh_strict': False, 'fast_cli': False, } #handling exceptions errors try: net_connect = ConnectHandler(**cisco) except NetMikoTimeoutException: f = open("login_issues.csv", "a") f.write(ip + "," + "Device Unreachable/SSH not enabled") f.write("\n") f.close() continue except AuthenticationException: f = open("login_issues.csv", "a") f.write(ip + "," + "Authentication Failure") f.write("\n") f.close() continue except SSHException: f = open("login_issues.csv", "a") f.write(ip + "," + "SSH not enabled") f.write("\n") f.close() continue try: net_connect.enable() #handling exceptions errors except ValueError: f = open("login_issues.csv", "a") f.write(ip + "," + "Could be SSH Enable Password issue") f.write("\n") f.close() continue #execute show version on router and save output to output object sh_ver_output = net_connect.send_command('show version') ###Below are show int, MAC address-table, and ip arp for task### #sh_int_output = net_connect.send_command('show int') #sh_macTable_output = net_connect.send_command('show mac address-table') #sh_arpTable_output = net_connect.send_command('show ip arp') #finding hostname in output using regular expressions regex_hostname = re.compile(r'(\S+)\suptime') hostname = regex_hostname.findall(sh_ver_output) #finding uptime in output using regular expressions regex_uptime = re.compile(r'\S+\suptime\sis\s(.+)') uptime = regex_uptime.findall(sh_ver_output) uptime = str(uptime).replace(',', '').replace("'", "") uptime = str(uptime)[1: -1] #finding version in output using regular expressions regex_version = re.compile(r'Cisco\sIOS\sSoftware.+Version\s([^ ,]+)') version = 
regex_version.findall(sh_ver_output) #finding serial in output using regular expressions regex_serial = re.compile(r'Processor\sboard\sID\s(\S+)') serial = regex_serial.findall(sh_ver_output) #finding ios image in output using regular expressions regex_ios = re.compile(r'System\simage\sfile\sis\s"([^ "]+)') ios = regex_ios.findall(sh_ver_output) #finding model in output using regular expressions regex_model = re.compile(r'[Cc]isco\s(\S+).*memory.') model = regex_model.findall(sh_ver_output) #finding the router's memory using regular expressions regex_memory = re.search(r'with (.*?) bytes of memory', sh_ver_output).group(1) memory = regex_memory #append results to table[hostname, uptime, version, serial, ios, model] devices.append([ip, hostname[0], uptime, version[0], ios[0], serial[0], model[0], memory]) #print all results(for all routers) on screen for i in devices: i = ", ".join(i) f = open("IOS.csv", "a") f.write(i) f.write("\n") f.close() I have tried to comment out blocks of the regex functions but issues persist with each one I don't comment out. Tried to figure out the AttributeException error with 'group', kinda lost on that one. Overall, I took an intro course to Java 3-4 years ago and would love and greatly appreciate some assistance A: so with multiple errors, you need to go at it step by step. Fix one issue, go to next issue, fix that, etc The code you found is likely quite old, as the file handling is quite horrible. Since I don't have a cisco router, I can only provide partial assistance, but the code below is fixed at least for the file handling. It will print out the response that the "show_version" command outputs. You'll then need to make sure that the regex you have matches what you actually expect. Then dump the output in something like regex101 (in the test string section) and find a regex that finds the parts you need. Note that simple python string splitting is likely easier. I advise you to update your question with the output of a single sh_ver_output = net_connect.send_command('show version') so the community actually has some data to work with... 
from netmiko import ConnectHandler from netmiko.exceptions import NetMikoTimeoutException from netmiko.exceptions import SSHException from netmiko.exceptions import AuthenticationException import re #here is list of infrastructure switch ip addresses ip_list = ['xx.xx.xx.xx', 'xx.xx.xx.xx'] #list where informations will be stored devices = [] #clearing the old data from the CSV file and writing the headers with open("IOS.csv", "w+") as f: f.write("IP Address,Hostname,Uptime,Current_Version,Current_Image,Serial_Number,Device_Model,Device_Memory\n") #clearing the old data from the CSV file and writing the headers with open("login_issues.csv", "w+") as f: f.write("IP Address,Status\n") #loop all ip addresses in ip_list for ip in ip_list: cisco = { 'device_type': 'cisco_ios', 'ip': ip, 'username': 'xxx', #ssh username 'password': 'xxx', #ssh password 'secret': 'xxx', #ssh_enable_password 'ssh_strict': False, 'fast_cli': False, } #handling exceptions errors try: net_connect = ConnectHandler(**cisco) except NetMikoTimeoutException: with open("login_issues.csv", "a") as f: f.write(f"{ip},Device Unreachable/SSH not enabled\n") continue except AuthenticationException: with open("login_issues.csv", "a") as f: f.write(f"{ip},Authentication Failure\n") continue except SSHException: with open("login_issues.csv", "a") as f: f.write(f"{ip},SSH not enabled\n") continue try: net_connect.enable() #handling exceptions errors except ValueError: with open("login_issues.csv", "a") as f: f.write(f"{ip},Could be SSH Enable Password issue\n") continue #execute show version on router and save output to output object sh_ver_output = net_connect.send_command('show version') print(f"-----\n{sh_ver_output}\n")
Python Script To Loop Through All of the Switches and Interfaces and Pull Info and Output into a CSV?
I am pretty new to Python and such so please bear with me. I am tasked with creating a python script that loops through all of the switches and all of the interfaces and pulls the interface stats and outputs them into a CSV. Switch, interface, state, giants, crc, input errors, output errors, input packets, input bytes, output packets, output bytes? I know as of right now there is nothing in script for the giants and crc and such, which I need to figure out how to do, but this is just of where I'm at now trying to get this script to run as is. I think I have found a script similar online, but am struggling with the troubleshooting. I get several errors, such as: "AttributeError" 'NoneType' object has no attribute 'group'" @ Line "IndexError: list index out of range" "NameError: name 'regex_memory' is not defined. Code ---> import netmiko from netmiko import ConnectHandler from netmiko.exceptions import NetMikoTimeoutException from netmiko.exceptions import SSHException from netmiko.exceptions import AuthenticationException import re #here is list of infrastructure switch ip addresses ip_list = ['xx.xx.xx.xx', 'xx.xx.xx.xx'.....] #list where informations will be stored devices = [] #clearing the old data from the CSV file and writing the headers f = open("IOS.csv", "w+") f.write("IP Address, Hostname, Uptime, Current_Version, Current_Image, Serial_Number, Device_Model, Device_Memory") f.write("\n") f.close() #clearing the old data from the CSV file and writing the headers f = open("login_issues.csv", "w+") f.write("IP Address, Status") f.write("\n") f.close() #loop all ip addresses in ip_list for ip in ip_list: cisco = { 'device_type': 'cisco_ios', 'ip': ip, 'username': 'xxx', #ssh username 'password': 'xxx', #ssh password 'secret': 'xxx', #ssh_enable_password 'ssh_strict': False, 'fast_cli': False, } #handling exceptions errors try: net_connect = ConnectHandler(**cisco) except NetMikoTimeoutException: f = open("login_issues.csv", "a") f.write(ip + "," + "Device Unreachable/SSH not enabled") f.write("\n") f.close() continue except AuthenticationException: f = open("login_issues.csv", "a") f.write(ip + "," + "Authentication Failure") f.write("\n") f.close() continue except SSHException: f = open("login_issues.csv", "a") f.write(ip + "," + "SSH not enabled") f.write("\n") f.close() continue try: net_connect.enable() #handling exceptions errors except ValueError: f = open("login_issues.csv", "a") f.write(ip + "," + "Could be SSH Enable Password issue") f.write("\n") f.close() continue #execute show version on router and save output to output object sh_ver_output = net_connect.send_command('show version') ###Below are show int, MAC address-table, and ip arp for task### #sh_int_output = net_connect.send_command('show int') #sh_macTable_output = net_connect.send_command('show mac address-table') #sh_arpTable_output = net_connect.send_command('show ip arp') #finding hostname in output using regular expressions regex_hostname = re.compile(r'(\S+)\suptime') hostname = regex_hostname.findall(sh_ver_output) #finding uptime in output using regular expressions regex_uptime = re.compile(r'\S+\suptime\sis\s(.+)') uptime = regex_uptime.findall(sh_ver_output) uptime = str(uptime).replace(',', '').replace("'", "") uptime = str(uptime)[1: -1] #finding version in output using regular expressions regex_version = re.compile(r'Cisco\sIOS\sSoftware.+Version\s([^ ,]+)') version = regex_version.findall(sh_ver_output) #finding serial in output using regular expressions regex_serial = 
re.compile(r'Processor\sboard\sID\s(\S+)') serial = regex_serial.findall(sh_ver_output) #finding ios image in output using regular expressions regex_ios = re.compile(r'System\simage\sfile\sis\s"([^ "]+)') ios = regex_ios.findall(sh_ver_output) #finding model in output using regular expressions regex_model = re.compile(r'[Cc]isco\s(\S+).*memory.') model = regex_model.findall(sh_ver_output) #finding the router's memory using regular expressions regex_memory = re.search(r'with (.*?) bytes of memory', sh_ver_output).group(1) memory = regex_memory #append results to table[hostname, uptime, version, serial, ios, model] devices.append([ip, hostname[0], uptime, version[0], ios[0], serial[0], model[0], memory]) #print all results(for all routers) on screen for i in devices: i = ", ".join(i) f = open("IOS.csv", "a") f.write(i) f.write("\n") f.close() I have tried to comment out blocks of the regex functions but issues persist with each one I don't comment out. Tried to figure out the AttributeException error with 'group', kinda lost on that one. Overall, I took an intro course to Java 3-4 years ago and would love and greatly appreciate some assistance
[ "so with multiple errors, you need to go at it step by step.\nFix one issue, go to next issue, fix that, etc\nThe code you found is likely quite old, as the file handling is quite horrible. Since I don't have a cisco router, I can only provide partial assistance, but the code below is fixed at least for the file handling.\nIt will print out the response that the \"show_version\" command outputs.\nYou'll then need to make sure that the regex you have matches what you actually expect. Then dump the output in something like regex101 (in the test string section) and find a regex that finds the parts you need. Note that simple python string splitting is likely easier.\nI advise you to update your question with the output of a single sh_ver_output = net_connect.send_command('show version') so the community actually has some data to work with...\nfrom netmiko import ConnectHandler\nfrom netmiko.exceptions import NetMikoTimeoutException\nfrom netmiko.exceptions import SSHException\nfrom netmiko.exceptions import AuthenticationException\nimport re\n\n\n#here is list of infrastructure switch ip addresses\nip_list = ['xx.xx.xx.xx', 'xx.xx.xx.xx'] \n\n#list where informations will be stored\ndevices = []\n\n#clearing the old data from the CSV file and writing the headers\nwith open(\"IOS.csv\", \"w+\") as f:\n f.write(\"IP Address,Hostname,Uptime,Current_Version,Current_Image,Serial_Number,Device_Model,Device_Memory\\n\")\n\n#clearing the old data from the CSV file and writing the headers\nwith open(\"login_issues.csv\", \"w+\") as f:\n f.write(\"IP Address,Status\\n\")\n\n\n\n#loop all ip addresses in ip_list\nfor ip in ip_list:\n cisco = {\n 'device_type': 'cisco_ios',\n 'ip': ip,\n 'username': 'xxx', #ssh username\n 'password': 'xxx', #ssh password\n 'secret': 'xxx', #ssh_enable_password\n 'ssh_strict': False,\n 'fast_cli': False,\n }\n\n #handling exceptions errors\n\n try:\n net_connect = ConnectHandler(**cisco)\n except NetMikoTimeoutException:\n with open(\"login_issues.csv\", \"a\") as f:\n f.write(f\"{ip},Device Unreachable/SSH not enabled\\n\")\n continue\n except AuthenticationException:\n with open(\"login_issues.csv\", \"a\") as f:\n f.write(f\"{ip},Authentication Failure\\n\")\n continue\n except SSHException:\n with open(\"login_issues.csv\", \"a\") as f:\n f.write(f\"{ip},SSH not enabled\\n\")\n continue\n try:\n net_connect.enable()\n #handling exceptions errors \n except ValueError:\n with open(\"login_issues.csv\", \"a\") as f:\n f.write(f\"{ip},Could be SSH Enable Password issue\\n\")\n continue\n \n\n #execute show version on router and save output to output object\n sh_ver_output = net_connect.send_command('show version')\n print(f\"-----\\n{sh_ver_output}\\n\")\n\n" ]
[ 0 ]
[]
[]
[ "automation", "python", "scripting" ]
stackoverflow_0074480924_automation_python_scripting.txt
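One concrete way to address the "file handling is quite horrible" point in the answer above is to let the csv module build the rows instead of concatenating strings by hand. The sketch below only shows the output side; the device row is a made-up example, and in the real script the values would come from whatever the regexes (or string splitting) extract.

import csv

# one made-up row, in the same order as the script's header
devices = [
    ["10.0.0.1", "sw-core-1", "1 year, 2 weeks", "15.2(4)E7",
     "flash:/image.bin", "FOC0000X0AB", "WS-C2960X", "524288K"],
]

with open("IOS.csv", "w", newline="", encoding="utf8") as f:
    writer = csv.writer(f)
    writer.writerow(["IP Address", "Hostname", "Uptime", "Current_Version",
                     "Current_Image", "Serial_Number", "Device_Model", "Device_Memory"])
    writer.writerows(devices)   # csv.writer handles commas and quoting inside fields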
Q: Count how many occurrences of a value in a column I have a dataset in Python and a column which lists the type of loan applicant (individual, couple, business etc.) and I am trying to find out how many applicants of each type there are. I am new to Python and this is probably a very basic question; any feedback is appreciated. I tried: df['applicant_type'].count() = only provided the total number of values in the column df['applicant_type'].head() df['applicant_type'].info() df['applicant_type'].dict() None of the above worked. A: Try: df['applicant_type'].value_counts()
Count how many occurrences of a value in a column
I have a dataset in Python and a column which lists the type of loan applicant (individual, couple, business etc.) and I am trying to find out how many applicants of each type there are. I am new to Python and this is probably a very basic question; any feedback is appreciated. I tried: df['applicant_type'].count() = only provided the total number of values in the column df['applicant_type'].head() df['applicant_type'].info() df['applicant_type'].dict() None of the above worked.
[ "Try:\ndf['applicant_type'].value_counts()\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "numpy", "pandas", "python" ]
stackoverflow_0074481879_dataframe_numpy_pandas_python.txt
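For a quick feel of what the accepted one-liner returns, here is a self-contained sketch with a made-up frame that mimics the question's column; groupby().size() is shown as an equivalent spelling.

import pandas as pd

df = pd.DataFrame({"applicant_type": ["individual", "couple", "individual",
                                      "business", "individual", "couple"]})

print(df["applicant_type"].value_counts())
# individual    3
# couple        2
# business      1

print(df.groupby("applicant_type").size())   # same counts, indexed alphabetically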
Q: How to add langdetect's language probability vector to a Keras Sequential Model? I'm currently studying the singing language identification problem (and the basics of machine learning). I found lots of works about this on the internet, but some of them don't provide any code (or even pseudocode) and that's why I'm trying to reproduce them using their machine learning model description. A good example is LISTEN, READ, AND IDENTIFY: MULTIMODAL SINGING LANGUAGE IDENTIFICATION OF MUSIC written by Keunwoo Choi and Yuxuan Wang. To sum up, they are concatenating two layers: audio layer (in form of spectrogram), text layer (language probability vector on metadata using langdetect, 56-dimensional vector). The text branch is a 3-layer MLP where each layer consists of a 128-unit fully-connected layer, a batch normalization layer, and a ReLU activation [22]. For text model I got something like this: text_model = Sequential() text_model.add(Input((56,), name='input')) text_model.add(BatchNormalization()) text_model.add(Dense(128, activation='relu')) langdetect.detect_langs(metadata) returns [de:0.8571399874707945, en:0.14285867860989504]. I m not sure I've described my model correctly and I cannot understand how to put it properly (langdetect probability vector) into keras model. A: First, you need to transform the langdetect output into vector of a constant length. There are 55 languages in the library, therefore we need to create vector of length 55, where i-th element represents the probability of text coming from the i-th language. You could do this like this: import tensorflow as tf import numpy as np import langdetect langdetect.detector_factory.init_factory() LANGUAGES_LIST = langdetect.detector_factory._factory.langlist def get_probabilities_vector(text): predictions = langdetect.detect_langs(text) output = np.zeros(len(LANGUAGES_LIST)) for p in predictions: output[LANGUAGES_LIST.index(p.lang)] = p.prob return tf.constant(output) Then you need to create a model with multiple inputs. This can be done using functional API, e.g. 
like this (change your inputs according to your use case): def create_model(): audio_input = tf.keras.Input(shape=(256,)) langdetect_input = tf.keras.Input(shape=(55,)) x = tf.keras.layers.concatenate([audio_input, langdetect_input]) x = tf.keras.layers.Dense(128, activation='relu')(x) output = tf.keras.layers.Dense(55)(x) model = tf.keras.Model( inputs={ 'audio': audio_input, 'text': langdetect_input }, outputs=output) return model Testing the model on some input: model = create_model() audio_input = tf.constant(np.random.rand(256)) langdetect_input = get_probabilities_vector('This is just a test input') model({ 'audio': tf.expand_dims(audio_input, 0), 'text': tf.expand_dims(langdetect_input, 0) }) >>> <tf.Tensor: shape=(1, 55), dtype=float32, numpy= array([[ 0.23361185, 0.19011918, -0.45230836, -0.0602392 , -0.20067683, 0.9698535 , -1.0724173 , 0.08978442, 0.052798 , -0.16554174, 0.9238764 , 1.0331644 , 0.4508734 , -0.2450786 , -1.0605856 , 0.3239496 , -1.0073977 , -0.2129285 , -0.6817296 , 0.05288622, 0.9089616 , -0.11521344, 0.25696573, -0.07688305, -0.36123943, -0.0317415 , -0.18303779, 0.13786468, 0.88620317, 0.11393422, -0.5215691 , -0.28585738, 0.54988045, -0.02300271, -0.4347821 , -0.57744324, 0.14031887, 0.8255624 , -0.13157232, -1.1060234 , -0.24097277, 0.12950295, 0.4586677 , 0.37702668, 0.7558856 , -0.05933011, 0.53903174, 0.27433476, -0.18464057, 1.0673125 , -0.05723387, -0.03429477, 0.4431308 , -0.14510366, -0.28087378]], dtype=float32)> I am expanding the dimensions of the inputs using expand_dims function so that the inputs have shapes (1, 256) and (1, 55) (which is similar to inputs (batch_size, 256) and (batch_size, 55) that the model expects during training). This is just a draft, but this is roughly how your problem could be solved.
How to add langdetect's language probability vector to a Keras Sequential Model?
I'm currently studying the singing language identification problem (and the basics of machine learning). I found lots of works about this on the internet, but some of them don't provide any code (or even pseudocode) and that's why I'm trying to reproduce them using their machine learning model description. A good example is LISTEN, READ, AND IDENTIFY: MULTIMODAL SINGING LANGUAGE IDENTIFICATION OF MUSIC written by Keunwoo Choi and Yuxuan Wang. To sum up, they are concatenating two layers: audio layer (in form of spectrogram), text layer (language probability vector on metadata using langdetect, 56-dimensional vector). The text branch is a 3-layer MLP where each layer consists of a 128-unit fully-connected layer, a batch normalization layer, and a ReLU activation [22]. For text model I got something like this: text_model = Sequential() text_model.add(Input((56,), name='input')) text_model.add(BatchNormalization()) text_model.add(Dense(128, activation='relu')) langdetect.detect_langs(metadata) returns [de:0.8571399874707945, en:0.14285867860989504]. I m not sure I've described my model correctly and I cannot understand how to put it properly (langdetect probability vector) into keras model.
[ "First, you need to transform the langdetect output into vector of a constant length. There are 55 languages in the library, therefore we need to create vector of length 55, where i-th element represents the probability of text coming from the i-th language. You could do this like this:\nimport tensorflow as tf\n\nimport numpy as np\nimport langdetect\n\nlangdetect.detector_factory.init_factory()\nLANGUAGES_LIST = langdetect.detector_factory._factory.langlist\n\ndef get_probabilities_vector(text):\n \n predictions = langdetect.detect_langs(text)\n output = np.zeros(len(LANGUAGES_LIST))\n \n for p in predictions:\n output[LANGUAGES_LIST.index(p.lang)] = p.prob\n \n return tf.constant(output)\n\nThen you need to create a model with multiple inputs. This can be done using functional API, e.g. like this (change your inputs according to your use case):\ndef create_model():\n \n audio_input = tf.keras.Input(shape=(256,))\n langdetect_input = tf.keras.Input(shape=(55,))\n \n x = tf.keras.layers.concatenate([audio_input, langdetect_input])\n x = tf.keras.layers.Dense(128, activation='relu')(x)\n output = tf.keras.layers.Dense(55)(x)\n \n model = tf.keras.Model(\n inputs={\n 'audio': audio_input,\n 'text': langdetect_input\n },\n outputs=output)\n \n return model\n\nTesting the model on some input:\nmodel = create_model()\n\naudio_input = tf.constant(np.random.rand(256))\nlangdetect_input = get_probabilities_vector('This is just a test input')\n\nmodel({\n 'audio': tf.expand_dims(audio_input, 0),\n 'text': tf.expand_dims(langdetect_input, 0)\n})\n\n>>> <tf.Tensor: shape=(1, 55), dtype=float32, numpy=\narray([[ 0.23361185, 0.19011918, -0.45230836, -0.0602392 , -0.20067683,\n 0.9698535 , -1.0724173 , 0.08978442, 0.052798 , -0.16554174,\n 0.9238764 , 1.0331644 , 0.4508734 , -0.2450786 , -1.0605856 ,\n 0.3239496 , -1.0073977 , -0.2129285 , -0.6817296 , 0.05288622,\n 0.9089616 , -0.11521344, 0.25696573, -0.07688305, -0.36123943,\n -0.0317415 , -0.18303779, 0.13786468, 0.88620317, 0.11393422,\n -0.5215691 , -0.28585738, 0.54988045, -0.02300271, -0.4347821 ,\n -0.57744324, 0.14031887, 0.8255624 , -0.13157232, -1.1060234 ,\n -0.24097277, 0.12950295, 0.4586677 , 0.37702668, 0.7558856 ,\n -0.05933011, 0.53903174, 0.27433476, -0.18464057, 1.0673125 ,\n -0.05723387, -0.03429477, 0.4431308 , -0.14510366, -0.28087378]],\n dtype=float32)>\n\nI am expanding the dimensions of the inputs using expand_dims function so that the inputs have shapes (1, 256) and (1, 55) (which is similar to inputs (batch_size, 256) and (batch_size, 55) that the model expects during training).\nThis is just a draft, but this is roughly how your problem could be solved.\n" ]
[ 2 ]
[]
[]
[ "keras", "machine_learning", "python", "tensorflow", "tf.keras" ]
stackoverflow_0074481279_keras_machine_learning_python_tensorflow_tf.keras.txt
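On the text branch itself: the paper quoted in the question describes three blocks of Dense(128) -> BatchNorm -> ReLU, while the asker's Sequential snippet only has one block with the normalization before the dense layer. Below is a sketch of the branch as quoted, written with the functional API so it can later be concatenated with the audio branch as in the answer above; the input size (56 in the question, 55 languages in the answer) is an assumption you should match to however the langdetect vector is built.

import tensorflow as tf

def build_text_branch(input_dim=56):
    inputs = tf.keras.Input(shape=(input_dim,), name="langdetect_probs")
    x = inputs
    for _ in range(3):
        x = tf.keras.layers.Dense(128)(x)            # 128-unit fully-connected layer
        x = tf.keras.layers.BatchNormalization()(x)  # batch normalization layer
        x = tf.keras.layers.ReLU()(x)                # ReLU activation
    return tf.keras.Model(inputs, x, name="text_branch")

build_text_branch().summary()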
Q: Is there a hash of a class instance in Python? Let's suppose I have a class like this: class MyClass: def __init__(self, a): self._a = a And I construct such instances: obj1 = MyClass(5) obj2 = MyClass(12) obj3 = MyClass(5) Is there a general way to hash my objects such that objects constructed with same values have equal hashes? In this case: myhash(obj1) != myhash(obj2) myhash(obj1) == myhash(obj3) By general I mean a Python function that can work with objects created by any class I can define. For different classes and same values the hash function must return different results, of course; otherwise this question would be about hashing of several arguments instead. A: def myhash(obj): items = sorted(obj.__dict__.items(), key=lambda it: it[0]) return hash((type(obj),) + tuple(items)) This solution obviously has limitations: It assumes that all fields in __dict__ are important. It assumes that __dict__ is present, e.g. this won't work with __slots__. It assumes that all values are hashable It breaks the Liskov substitution principle. A: The question is badly formed for a couple reasons: Hashes don't test eqaulity, just inequality. That is, they guarantee that hash(a) != hash(b) implies a != b, but the reverse does not hold true. For example, checking "aKey" in myDict will do a linear search through all keys in myDict that have the same hash as "aKey". You seem to wanting to do something with storage. Note that the hash of "aKey" will change between runs, so don't write it to a file. See the bottom of __hash__ for more information. In general, you need to think carefully about subclasses, hashes, and equality. There is a pit here, so even the official documentation quietly sidesteps what the hash of instance means. Do note that each instance has a __dict__ for local variables and the __class__ with more information. Hope this helps those who come after you.
Is there a hash of a class instance in Python?
Let's suppose I have a class like this: class MyClass: def __init__(self, a): self._a = a And I construct such instances: obj1 = MyClass(5) obj2 = MyClass(12) obj3 = MyClass(5) Is there a general way to hash my objects such that objects constructed with same values have equal hashes? In this case: myhash(obj1) != myhash(obj2) myhash(obj1) == myhash(obj3) By general I mean a Python function that can work with objects created by any class I can define. For different classes and same values the hash function must return different results, of course; otherwise this question would be about hashing of several arguments instead.
[ "def myhash(obj):\n items = sorted(obj.__dict__.items(), key=lambda it: it[0])\n return hash((type(obj),) + tuple(items))\n\nThis solution obviously has limitations:\n\nIt assumes that all fields in __dict__ are important.\nIt assumes that __dict__ is present, e.g. this won't work with __slots__.\nIt assumes that all values are hashable\nIt breaks the Liskov substitution principle.\n\n", "The question is badly formed for a couple reasons:\n\nHashes don't test eqaulity, just inequality. That is, they guarantee that hash(a) != hash(b) implies a != b, but the reverse does not hold true. For example, checking \"aKey\" in myDict will do a linear search through all keys in myDict that have the same hash as \"aKey\".\nYou seem to wanting to do something with storage. Note that the hash of \"aKey\" will change between runs, so don't write it to a file. See the bottom of __hash__ for more information.\nIn general, you need to think carefully about subclasses, hashes, and equality. There is a pit here, so even the official documentation quietly sidesteps what the hash of instance means. Do note that each instance has a __dict__ for local variables and the __class__ with more information.\n\nHope this helps those who come after you.\n" ]
[ 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0060094137_python.txt
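A related point for the question above: if you control the class, the idiomatic route is to define __eq__ and __hash__ on the class itself rather than a free-standing myhash function. This is a minimal sketch for the single-attribute example from the question.

class MyClass:
    def __init__(self, a):
        self._a = a

    def __eq__(self, other):
        return type(other) is type(self) and other._a == self._a

    def __hash__(self):
        # include the type so instances of different classes holding the
        # same value are distinguished, as the question requires
        return hash((type(self), self._a))

obj1, obj2, obj3 = MyClass(5), MyClass(12), MyClass(5)
print(hash(obj1) == hash(obj3))   # True - equal objects must hash equally
print(obj1 == obj2)               # False

If the class is really just a value holder, @dataclasses.dataclass(frozen=True) generates an equivalent __eq__/__hash__ pair automatically.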
Q: Adding a Line with File.write I am trying to add a new line each time I append to a file with file.write. I am using with open('CI.txt', 'a+', encoding='utf8') as file: file.write(str('CINV')) and obtaining this: [['PO: CRZ229728', 'Invoice #: 2561047778']][['PO: CRZ229728', 'Invoice #: 2561047778']] I want the result below: ['PO: CRZ229728', 'Invoice #: 2561047778'] ['PO: CRZ229728', 'Invoice #: 2561047778']
Adding a Line with File.write
I am trying to add a new line each time I append to a file with file.write. I am using with open('CI.txt', 'a+', encoding='utf8') as file: file.write(str('CINV')) and obtaining this: [['PO: CRZ229728', 'Invoice #: 2561047778']][['PO: CRZ229728', 'Invoice #: 2561047778']] I want the result below: ['PO: CRZ229728', 'Invoice #: 2561047778'] ['PO: CRZ229728', 'Invoice #: 2561047778']
[]
[]
[ "I think , I figured it out.\nSee below:\nwith open('CI.txt', 'a+', encoding='utf8') as file:\n file.write('\\n')\n file.write(str('CINV')) \n\n" ]
[ -1 ]
[ "file", "python" ]
stackoverflow_0074478046_file_python.txt
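Assuming the bracketed lists shown in the question are already available as Python lists, writing the newline together with each record (rather than before the next one, as in the workaround above) keeps a fresh file from starting with a blank line:

records = [
    ['PO: CRZ229728', 'Invoice #: 2561047778'],
    ['PO: CRZ229728', 'Invoice #: 2561047778'],
]

with open('CI.txt', 'a+', encoding='utf8') as file:
    for record in records:
        file.write(str(record) + '\n')   # one record per line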
Q: What is the result of this recursive function What does this recursive function return? def fun(a,b): if(b==0): return a else: return fun(b, a%b) I tried checking on some numbers for example it returns 3 for 15,6 A: This calculates the greatest common divisor between a and b. See this question for the proof: https://math.stackexchange.com/questions/59147/why-gcda-b-gcdb-a-bmod-b-understanding-euclidean-algorithm The greatest common divisor (gcd) of two numbers a and b is the largest number that divides both a and b. Note: f(6, 15) should return 3, as 3 is the largest number that divides both 6 and 15
What is the result of this recursive function
What does this recursive function return? def fun(a,b): if(b==0): return a else: return fun(b, a%b) I tried checking it on some numbers; for example, it returns 3 for 15, 6.
[ "This calculates the greatest common divisor between a and b.\nSee this question for the proof: https://math.stackexchange.com/questions/59147/why-gcda-b-gcdb-a-bmod-b-understanding-euclidean-algorithm\nThe greatest common divisor (gcd) of two numbers a and b is the largest number that divides both a and b.\nNote: f(6, 15) should return 3, as 3 is the largest number that divides both 6 and 15\n" ]
[ 3 ]
[]
[]
[ "python", "recursion" ]
stackoverflow_0074481908_python_recursion.txt
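A short trace makes the accepted answer concrete; this is just the question's function with a print added, plus the standard-library equivalent for comparison.

import math

def fun(a, b):
    print(f"fun({a}, {b})")
    if b == 0:
        return a
    return fun(b, a % b)

print(fun(15, 6))
# fun(15, 6) -> fun(6, 3)   because 15 % 6 == 3
# fun(6, 3)  -> fun(3, 0)   because 6 % 3 == 0
# fun(3, 0)  -> b == 0, so it returns 3, the gcd of 15 and 6

print(math.gcd(15, 6))   # 3, same result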
Q: Deploying Mkdocs to Azure web apps Can't seem to deploy Mkdocs (material) site to Azure Web Apps. We built an Mkdocs site for our collateral and documentation, I have tried several time to host it using Azure (web app, static app and DevOps) but nothing seems to work. Prefer not to use Git pages or 3rd party hosting apps If anyone has done it please could you share a step-by-step guide of how this could be done Below is what my GitHub repository looks like: A: You could follow one of the static site generator tutorials available like the one for hugo for example. There are two main steps really which would be part of your build pipeline like GitHub Actions or an ADO pipeline Generate the static assets For MkDocs, this is done by running mkdocs builds Deploy to Azure For this, depending on the what your build solution is, use the appropriate plugin for deploying. For GitHub actions, it would be something like this - name: Build And Deploy id: builddeploy uses: Azure/static-web-apps-deploy@v1 with: azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }} repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for GitHub integrations (i.e. PR comments) action: "upload" ###### Repository/Build Configurations - These values can be configured to match you app requirements. ###### # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig app_location: "/" # App source code path output_location: "site" # Built app content directory - optional ###### End of Repository/Build Configurations ######
Deploying Mkdocs to Azure web apps
Can't seem to deploy an MkDocs (Material) site to Azure Web Apps. We built an MkDocs site for our collateral and documentation, and I have tried several times to host it using Azure (web app, static app and DevOps) but nothing seems to work. We would prefer not to use GitHub Pages or 3rd-party hosting apps. If anyone has done it, could you please share a step-by-step guide of how this could be done? Below is what my GitHub repository looks like:
[ "You could follow one of the static site generator tutorials available like the one for hugo for example.\nThere are two main steps really which would be part of your build pipeline like GitHub Actions or an ADO pipeline\n\nGenerate the static assets\n\nFor MkDocs, this is done by running mkdocs builds\n\nDeploy to Azure\n\nFor this, depending on the what your build solution is, use the appropriate plugin for deploying. For GitHub actions, it would be something like this\n- name: Build And Deploy\n id: builddeploy\n uses: Azure/static-web-apps-deploy@v1\n with:\n azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}\n repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for GitHub integrations (i.e. PR comments)\n action: \"upload\"\n ###### Repository/Build Configurations - These values can be configured to match you app requirements. ######\n # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig\n app_location: \"/\" # App source code path\n output_location: \"site\" # Built app content directory - optional\n ###### End of Repository/Build Configurations ######\n\n" ]
[ 0 ]
[]
[]
[ "azure", "hosting", "markdown", "mkdocs", "python" ]
stackoverflow_0073084470_azure_hosting_markdown_mkdocs_python.txt
Q: getting sheet names from openpyxl I have a moderately large xlsx file (around 14 MB) and OpenOffice hangs trying to open it. I was trying to use openpyxl to read the content, following this tutorial. The code snippet is as follows: from openpyxl import load_workbook wb = load_workbook(filename = 'large_file.xlsx', use_iterators = True) ws = wb.get_sheet_by_name(name = 'big_data') The problem is, I don't know the sheet name, and Sheet1/Sheet2.. etc. didn't work (returned NoneType object). I could not find a documentation telling me How to get the sheet names for an xlsx files using openpyxl. Can anyone help me? A: Use the sheetnames property: sheetnames Returns the list of the names of worksheets in this workbook. Names are returned in the worksheets order. Type: list of strings print (wb.sheetnames) You can also get worksheet objects from wb.worksheets: ws = wb.worksheets[0] A: python 3.x for get sheet name you must use attribute g_sheet=wb.sheetnames return by list for i in g_sheet: print(i) **shoose any name ** ws=wb[g_sheet[0]] or ws=wb[any name] suppose name sheet is paster ws=wb["paster"] A: As a complement to the other answers, for a particular worksheet, you can also use cf documentation in the constructor parameters: ws.title A: As mentioned the earlier answer you can get the list of sheet names by using the ws.sheetnames But if you know the sheet names you can get that worksheet object by ws.get_sheet_by_name("YOUR_SHEET_NAME") Another way of doing this is as mentioned in earlier answer ws['YOUR_SHEET_NAME'] A: for worksheet in workbook: print(worksheet.name)
getting sheet names from openpyxl
I have a moderately large xlsx file (around 14 MB) and OpenOffice hangs trying to open it. I was trying to use openpyxl to read the content, following this tutorial. The code snippet is as follows: from openpyxl import load_workbook wb = load_workbook(filename = 'large_file.xlsx', use_iterators = True) ws = wb.get_sheet_by_name(name = 'big_data') The problem is, I don't know the sheet name, and Sheet1/Sheet2.. etc. didn't work (returned NoneType object). I could not find a documentation telling me How to get the sheet names for an xlsx files using openpyxl. Can anyone help me?
[ "Use the sheetnames property:\n\nsheetnames\nReturns the list of the names of worksheets in this workbook.\nNames are returned in the worksheets order.\nType: list of strings\n\nprint (wb.sheetnames)\n\nYou can also get worksheet objects from wb.worksheets:\nws = wb.worksheets[0]\n\n", "python 3.x\nfor get sheet name you must use attribute \ng_sheet=wb.sheetnames\n\nreturn by list\nfor i in g_sheet:\n print(i)\n\n**shoose any name **\nws=wb[g_sheet[0]]\n\nor ws=wb[any name]\nsuppose name sheet is paster\nws=wb[\"paster\"]\n\n", "As a complement to the other answers, for a particular worksheet, you can also use cf documentation in the constructor parameters:\nws.title\n\n", "As mentioned the earlier answer\nyou can get the list of sheet names \nby using the ws.sheetnames\nBut if you know the sheet names you can get that worksheet object by\nws.get_sheet_by_name(\"YOUR_SHEET_NAME\")\n\nAnother way of doing this is as mentioned in earlier answer\nws['YOUR_SHEET_NAME']\n\n", "for worksheet in workbook:\n print(worksheet.name)\n\n" ]
[ 129, 5, 4, 2, 0 ]
[]
[]
[ "excel", "openpyxl", "python" ]
stackoverflow_0023527887_excel_openpyxl_python.txt
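Tying this back to the original 14 MB concern: in current openpyxl the use_iterators flag from the question is gone, and read_only=True is the way to stream a large workbook. A sketch (the sheet names shown in comments are only examples):

from openpyxl import load_workbook

wb = load_workbook(filename='large_file.xlsx', read_only=True)

print(wb.sheetnames)        # e.g. ['big_data', 'Sheet2']
ws = wb[wb.sheetnames[0]]   # index the workbook by name instead of guessing it

for row in ws.iter_rows(min_row=1, max_row=5, values_only=True):
    print(row)              # stream a few rows without loading the whole sheet

wb.close()                  # read-only workbooks should be closed explicitly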
Q: web scraping python beautifulsoup, javascriot I want to get the product names from this web address:'https://telenor.se/handla/mobiler/' I am using python and beautifulsoup I tried this but it couldnt catch the product lists, it seems products that are in the list are not capturing by beautifulsoup mobile_page_url='https://telenor.se/handla/mobiler/' mobile_page_data=requests.get(mobile_page_url) mobile_page_soup=BeautifulSoup(mobile_page_data.text) mobile_page_soup=mobile_page_soup.select('div',{'class':'grid-items__item'}) A: The data you see on the page is loaded from external URL via JavaScript. You can simulate this call with requests/json modules: import re import json import requests from bs4 import BeautifulSoup url = "https://telenor.se/handla/mobiler/" items_url = "https://telenor.se/service/product-grid/get-component-data/{}" soup = BeautifulSoup(requests.get(url).content, "html.parser") data = soup.select_one("#ProductGridPage")[":data"] data = json.loads(re.search(r"\{.*\}", data).group(0)) currentPageId = data["currentPageId"] data = requests.get(items_url.format(currentPageId)).json() # uncomment to print all data: # print(json.dumps(data, indent=4)) for i in data["productGridPageJsonViewModel"]["gridItems"]: print(i.get("name")) Prints: iPhone 14 Galaxy S22 Ultra iPhone 14 Plus iPhone 14 Pro 12T Pro Phone (1) 12 Pro ...
web scraping python beautifulsoup, javascript
I want to get the product names from this web address: 'https://telenor.se/handla/mobiler/'. I am using Python and BeautifulSoup. I tried this, but it couldn't catch the product list; it seems the products in the list are not being captured by BeautifulSoup. mobile_page_url='https://telenor.se/handla/mobiler/' mobile_page_data=requests.get(mobile_page_url) mobile_page_soup=BeautifulSoup(mobile_page_data.text) mobile_page_soup=mobile_page_soup.select('div',{'class':'grid-items__item'})
[ "The data you see on the page is loaded from external URL via JavaScript. You can simulate this call with requests/json modules:\nimport re\nimport json\nimport requests\nfrom bs4 import BeautifulSoup\n\nurl = \"https://telenor.se/handla/mobiler/\"\nitems_url = \"https://telenor.se/service/product-grid/get-component-data/{}\"\n\nsoup = BeautifulSoup(requests.get(url).content, \"html.parser\")\n\ndata = soup.select_one(\"#ProductGridPage\")[\":data\"]\ndata = json.loads(re.search(r\"\\{.*\\}\", data).group(0))\n\ncurrentPageId = data[\"currentPageId\"]\ndata = requests.get(items_url.format(currentPageId)).json()\n\n# uncomment to print all data:\n# print(json.dumps(data, indent=4))\n\nfor i in data[\"productGridPageJsonViewModel\"][\"gridItems\"]:\n print(i.get(\"name\"))\n\nPrints:\niPhone 14\nGalaxy S22 Ultra\niPhone 14 Plus\niPhone 14 Pro\n12T Pro\nPhone (1)\n12 Pro\n\n...\n\n" ]
[ 1 ]
[]
[]
[ "beautifulsoup", "python", "web_scraping" ]
stackoverflow_0074481787_beautifulsoup_python_web_scraping.txt
Q: Learning Python - len() returns 2n+2 I'm sorry if this is a duplicate post but search seemed to yield no useful results...or maybe I'm such a noob that I'm not understanding what is being said in the answers. I wrote this small code for practice (following "learning Python the hard way"). I tried to make a shorter version of a code which was already given to me. from sys import argv script, from_file, to_file = argv # here is the part where I tried to simplify the commands and see if I still get the same result, # Turns out it's the same 2n+2 trial = open(from_file) trial_data = trial.read() print(len(trial_data)) trial.close() # actual code after defining the argumentative variables in_file = open(from_file).read() input(f"Transfering {len(in_file)} characters from {from_file} to {to_file}, hit RETURN to continue, CRTL-C to abort.") #'in_data = in_file.read() out_file = open(to_file, 'w').write(in_file) When using len() it always seems to return 2n+2 value instead of n, where n is the actual number of characters in the text file. I also made sure there are no extra lines in the text file. Can someone kindly explain? TIA I was expecting the exact number of characters found in the txt file to be returned. Turns out it's too much to ask. Edit: since so many are asking for a practical example....here it goes: The poem dedicated to Puxijn The Chonk one What i get is ÿþT h e p o e m d e d i c a t e d t o P u x i j n T h e C h o n k o n e I think it is an encoding problem. I'm using the latest python if that is of any help. A: Based on your updated question, you're definitely reading from UTF-16 encoded text files using the locale default encoding (probably latin-1 or cp1252, both of which would decode the UTF-16 BOM to ÿþ; Windows often uses cp1252 as the default, and latin-1, while largely eclipsed by UTF-8 in the present day, was a popular locale on older UNIX-likes for a long time). Those encodings will read any old bytes without error, even if the encoding is wrong (they map one to one from all 256 bytes to a matching 256 characters), producing gibberish (for bytes outside the ASCII range), and weird gaps (for the null bytes before each ASCII character in UTF-16). Change all your open calls to add an extra argument, encoding='utf-16', e.g.: trial = open(from_file, encoding='utf-16') and Python will use the correct text encoding to decode the raw bytes to a str, and all your lengths will match up. Alternatively, when saving the files in a reasonable editor, make sure to tweak the encoding to make it an encoding Python will use by default (in modern Python, you can force UTF-8 mode regardless of locale settings, and UTF-8 is probably the most popular portable encoding, in part because for pure ASCII text, it's identical to ASCII, wasting no disk space).
Learning Python - len() returns 2n+2
I'm sorry if this is a duplicate post but search seemed to yield no useful results...or maybe I'm such a noob that I'm not understanding what is being said in the answers. I wrote this small code for practice (following "learning Python the hard way"). I tried to make a shorter version of a code which was already given to me. from sys import argv script, from_file, to_file = argv # here is the part where I tried to simplify the commands and see if I still get the same result, # Turns out it's the same 2n+2 trial = open(from_file) trial_data = trial.read() print(len(trial_data)) trial.close() # actual code after defining the argumentative variables in_file = open(from_file).read() input(f"Transfering {len(in_file)} characters from {from_file} to {to_file}, hit RETURN to continue, CRTL-C to abort.") #'in_data = in_file.read() out_file = open(to_file, 'w').write(in_file) When using len() it always seems to return 2n+2 value instead of n, where n is the actual number of characters in the text file. I also made sure there are no extra lines in the text file. Can someone kindly explain? TIA I was expecting the exact number of characters found in the txt file to be returned. Turns out it's too much to ask. Edit: since so many are asking for a practical example....here it goes: The poem dedicated to Puxijn The Chonk one What i get is ÿþT h e p o e m d e d i c a t e d t o P u x i j n T h e C h o n k o n e I think it is an encoding problem. I'm using the latest python if that is of any help.
[ "Based on your updated question, you're definitely reading from UTF-16 encoded text files using the locale default encoding (probably latin-1 or cp1252, both of which would decode the UTF-16 BOM to ÿþ; Windows often uses cp1252 as the default, and latin-1, while largely eclipsed by UTF-8 in the present day, was a popular locale on older UNIX-likes for a long time). Those encodings will read any old bytes without error, even if the encoding is wrong (they map one to one from all 256 bytes to a matching 256 characters), producing gibberish (for bytes outside the ASCII range), and weird gaps (for the null bytes before each ASCII character in UTF-16).\nChange all your open calls to add an extra argument, encoding='utf-16', e.g.:\ntrial = open(from_file, encoding='utf-16')\n\nand Python will use the correct text encoding to decode the raw bytes to a str, and all your lengths will match up.\nAlternatively, when saving the files in a reasonable editor, make sure to tweak the encoding to make it an encoding Python will use by default (in modern Python, you can force UTF-8 mode regardless of locale settings, and UTF-8 is probably the most popular portable encoding, in part because for pure ASCII text, it's identical to ASCII, wasting no disk space).\n" ]
[ 1 ]
[ "Possibly the extra characters are the new line character or some other invisible to-your-text-editor character?\nTry to make a simple test file with only one character.\neg run\necho \"a\" > test_file\n\nAlso there is a dedicated bash command to count such stuff\nwc -m\n\n", "The observed behaviour is consistent with opening the file in binary mode and the file being encoded in utf-16 with a BOM.\nIf you then call len on the contents of that file it will count the bytes in that file.\nThe amount of bytes will depend on the specific encoding.\nThat would explain both the 2n cause every utf-16 char has 2 bytes as well as the + 2 the BOM newline.\n" ]
[ -2, -4 ]
[ "python", "string_length" ]
stackoverflow_0074452431_python_string_length.txt
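Putting the accepted fix into the question's copy script: only the open calls change, and the character count then matches the visible text. The output encoding below is an assumption (UTF-8); keep encoding='utf-16' on the write side instead if the consumer of the copy expects it.

from sys import argv

script, from_file, to_file = argv

with open(from_file, encoding='utf-16') as src:   # decode the BOM'd UTF-16 source properly
    text = src.read()

print(f"Transferring {len(text)} characters from {from_file} to {to_file}")

with open(to_file, 'w', encoding='utf-8') as dst:
    dst.write(text)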
Q: Debug a c++ python 3.10 extension, venvlauncher.pdb missing I followed Microsoft excellent tutorial to create a Python extension in c++. Everything works fine, I can compile, run and debug the code (both the Python and the C++) in Visual Studio 2022. However, the issue is that I want do this within a venv, this was possible with Python 3.7.0 but now when I create a venv with Python3.10-64 I can't debug the C++ part. I have included the include and libs of the "global Python" in the Visual Studio 2022. Do I miss something when I create the venv with c:\python310-64\python -m venv venv? There seems to be a lot less in the Script folder now with Python3.10 compared to with Python3.7. is there anything that I can change within Visual Studio 2022 to hit the C++ breakpoints when I run Python from venv created with Python3.10? When I look at what modules that are loaded when I start the debugging from my venv, and right click to find the symbols for Python.exe it looks like this: . Compared to this long list when I start the debugging with the "global" Python installation: A: Woho! I finally figured it out. The venv needs to be created with --symlinks like this C:\Python310-64\python.exe -m venv venv --symlinks. You need to run the command as administrator to get it to work!
Debug a c++ python 3.10 extension, venvlauncher.pdb missing
I followed Microsoft excellent tutorial to create a Python extension in c++. Everything works fine, I can compile, run and debug the code (both the Python and the C++) in Visual Studio 2022. However, the issue is that I want do this within a venv, this was possible with Python 3.7.0 but now when I create a venv with Python3.10-64 I can't debug the C++ part. I have included the include and libs of the "global Python" in the Visual Studio 2022. Do I miss something when I create the venv with c:\python310-64\python -m venv venv? There seems to be a lot less in the Script folder now with Python3.10 compared to with Python3.7. is there anything that I can change within Visual Studio 2022 to hit the C++ breakpoints when I run Python from venv created with Python3.10? When I look at what modules that are loaded when I start the debugging from my venv, and right click to find the symbols for Python.exe it looks like this: . Compared to this long list when I start the debugging with the "global" Python installation:
[ "Woho! I finally figured it out. The venv needs to be created with --symlinks like this C:\\Python310-64\\python.exe -m venv venv --symlinks. You need to run the command as administrator to get it to work!\n" ]
[ 0 ]
[]
[]
[ "c++", "python", "visual_studio", "visual_studio_2022" ]
stackoverflow_0074421151_c++_python_visual_studio_visual_studio_2022.txt
Q: Save customer in the background on django forms Hi I am trying to automatically save the customer on post without having to list it in the forms. It currently shows the drop down and saves correctly but if I remove customer from forms.py it doesn't save anymore. views.py @login_required(login_url='login') def createInfringer(request): customer=request.user.customer form = InfringerForm(customer=customer) if request.method == 'POST': form = InfringerForm(customer, request.POST) if form.is_valid(): form.save() return redirect('infringer-list') context ={'form': form} return render (request, 'base/infringement_form.html', context) forms.py class InfringerForm(ModelForm): def __init__(self, customer, *args, **kwargs): super(InfringerForm,self).__init__(*args, **kwargs) self.fields['customer'].queryset = Customer.objects.filter(name=customer) self.fields['status'].queryset = Status.objects.filter(customer=customer) class Meta: model = Infringer fields = ['name', 'brand_name','status','customer'] UPDATE suggestion below was added but it still doesn't save customer. A: If I am understanding your problem correctly, you'd like to save the customer in your model but do not wish to show the customer field on your form as the customer is the logged-in user. If that assumption is correct, you need to first remove the customer field from your form fields and its __init__ method. Then, you'd need to pass the customer to your save method during the post request as your model probably requires that field: @login_required(login_url='login') def createInfringer(request): customer=request.user.customer form = InfringerForm(customer=customer) if request.method == 'POST': form = InfringerForm(customer, request.POST) if form.is_valid(): saved_instance = form.save(customer) print (f'Successfully saved the infringer with its customer {saved_instance.customer}') ## Insert this and see what it says return redirect('infringer-list') context ={'form': form} return render (request, 'base/infringement_form.html', context) class InfringerForm(ModelForm): class Meta: model = Infringer # fields = ['name', 'brand_name','status','customer'] fields = ['name', 'brand_name','status'] # Notice the above commented line. 
Also, Add this instead def __init__(self, customer, *args, **kwargs): super(InfringerForm,self).__init__(*args, **kwargs) # self.fields['customer'].queryset = Customer.objects.filter(name=customer) self.fields['status'].queryset = Status.objects.filter(customer=customer) def save(self, customer, *args, **kwargs): instance = super(InfringerForm, self).save( *args, **kwargs) if customer: print (f'customer is {customer}') self.customer = customer instance.save() print (f'instance was saved with the customer {instance.customer}') return instance I have not tested the above code, but it should work A: forms.py class InfringerForm(ModelForm): class Meta: model = Infringer fields = ['name', 'brand_name', 'status'] def __init__(self, customer, *args, **kwargs): super().__init__(*args, **kwargs) self.customer = customer self.fields['status'].queryset = Status.objects.filter(customer=customer) def save(self, *args, **kwargs): self.instance.customer = self.customer return super().save( *args, **kwargs) views.py @login_required(login_url='login') def createInfringer(request): customer = request.user.customer form = InfringerForm(customer=customer) if request.method == 'POST': form = InfringerForm(customer, request.POST, request.FILES) if form.is_valid(): saved_instance = form.save() print (f'customer in views.py is {customer}') print (f'Successfully saved the infringer in views.py with its customer {saved_instance.customer}') return redirect('infringer-list') return render (request, 'base/infringement_form.html', {'form': form})
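A related, widely used pattern (not taken from the answers above, just a hedged sketch reusing the question's model and field names) is ModelForm's commit=False, which builds the instance without saving so the view can attach the logged-in customer before the single save():

    # views.py -- sketch; assumes Infringer.customer is a required ForeignKey
    if form.is_valid():
        infringer = form.save(commit=False)         # build the instance, skip the DB write
        infringer.customer = request.user.customer  # attach the logged-in customer
        infringer.save()                            # persist once, with customer set
        return redirect('infringer-list')

Either way, the key point is that the customer must end up on the model instance (instance.customer), not on the form object itself, before the instance is saved.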
Save customer in the background on django forms
Hi I am trying to automatically save the customer on post without having to list it in the forms. It currently shows the drop down and saves correctly but if I remove customer from forms.py it doesn't save anymore. views.py @login_required(login_url='login') def createInfringer(request): customer=request.user.customer form = InfringerForm(customer=customer) if request.method == 'POST': form = InfringerForm(customer, request.POST) if form.is_valid(): form.save() return redirect('infringer-list') context ={'form': form} return render (request, 'base/infringement_form.html', context) forms.py class InfringerForm(ModelForm): def __init__(self, customer, *args, **kwargs): super(InfringerForm,self).__init__(*args, **kwargs) self.fields['customer'].queryset = Customer.objects.filter(name=customer) self.fields['status'].queryset = Status.objects.filter(customer=customer) class Meta: model = Infringer fields = ['name', 'brand_name','status','customer'] UPDATE suggestion below was added but it still doesn't save customer.
[ "If I am understanding your problem correctly, you'd like to save the customer in your model but do not wish to show the customer field on your form as the customer is the logged-in user. If that assumption is correct, you need to first remove the customer field from your form fields and its __init__ method. Then, you'd need to pass the customer to your save method during the post request as your model probably requires that field:\n@login_required(login_url='login')\ndef createInfringer(request):\n customer=request.user.customer\n form = InfringerForm(customer=customer)\n if request.method == 'POST':\n form = InfringerForm(customer, request.POST)\n if form.is_valid(): \n saved_instance = form.save(customer)\n print (f'Successfully saved the infringer with its customer {saved_instance.customer}') ## Insert this and see what it says\n\n return redirect('infringer-list')\n \n context ={'form': form}\n return render (request, 'base/infringement_form.html', context)\n\nclass InfringerForm(ModelForm):\n class Meta:\n model = Infringer\n # fields = ['name', 'brand_name','status','customer']\n fields = ['name', 'brand_name','status'] # Notice the above commented line. Also, Add this instead\n \n def __init__(self, customer, *args, **kwargs):\n super(InfringerForm,self).__init__(*args, **kwargs)\n # self.fields['customer'].queryset = Customer.objects.filter(name=customer)\n self.fields['status'].queryset = Status.objects.filter(customer=customer)\n \n def save(self, customer, *args, **kwargs):\n instance = super(InfringerForm, self).save( *args, **kwargs) \n if customer:\n print (f'customer is {customer}')\n self.customer = customer\n \n instance.save()\n print (f'instance was saved with the customer {instance.customer}')\n return instance\n \n\nI have not tested the above code, but it should work\n", "forms.py\n\nclass InfringerForm(ModelForm): \n class Meta:\n model = Infringer\n fields = ['name', 'brand_name', 'status'] \n\n def __init__(self, customer, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.customer = customer\n self.fields['status'].queryset = Status.objects.filter(customer=customer)\n\n def save(self, *args, **kwargs):\n self.instance.customer = self.customer\n return super().save( *args, **kwargs)\n\n\nviews.py\n\n@login_required(login_url='login') def createInfringer(request):\ncustomer = request.user.customer\nform = InfringerForm(customer=customer)\nif request.method == 'POST':\n form = InfringerForm(customer, request.POST, request.FILES)\n if form.is_valid(): \n saved_instance = form.save()\n print (f'customer in views.py is {customer}')\n print (f'Successfully saved the infringer in views.py with its customer {saved_instance.customer}')\n return redirect('infringer-list')\n\nreturn render (request, 'base/infringement_form.html', {'form': form})\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "django", "forms", "python" ]
stackoverflow_0074438303_django_forms_python.txt
Q: Why is "Image" not defined after PIL import First time Python user, so apologies if I am misunderstanding something basic like how libraries are accessed (I am an R user). Using a colleague's code (which works on his end) and trying to load the the following: from reportlab.lib import colors results in the following error: Traceback (most recent call last): File "Box\Py\Python3\Py3_StaticMain.py", line 32, in <module> from reportlab.lib import colors File "C:\Program Files\Python310\lib\site-packages\reportlab\lib\colors.py", line 44, in <module> from reportlab.lib.utils import asNative, isStr, rl_safe_eval File "C:\Program Files\Python310\lib\site-packages\reportlab\lib\utils.py", line 389, in <module> haveImages = Image is not None NameError: name 'Image' is not defined Pillow and reportlab are installed. After searching online I found similar error reports and the solution was to add this line prior to the previous library call (again, my colleague does need this on his end): from PIL import Image However, this did not fix the problem, the error persists. Also of note both these lines get greyed out by PyCharm which apparently means that these libraries are already loaded so these lines are unnecessary? It is counter intuitive that an unnecessary command would cause an error. The other libraries imported are os.path, sys, datetime, and tkinter. Also, even if I just put these two lines in a new py file I get the same behavior: greyed out and error. A: Not a very satisfying answer but after uninstalling and reinstalling both Python and the IDE everything worked.
Why is "Image" not defined after PIL import
First time Python user, so apologies if I am misunderstanding something basic like how libraries are accessed (I am an R user). Using a colleague's code (which works on his end) and trying to load the the following: from reportlab.lib import colors results in the following error: Traceback (most recent call last): File "Box\Py\Python3\Py3_StaticMain.py", line 32, in <module> from reportlab.lib import colors File "C:\Program Files\Python310\lib\site-packages\reportlab\lib\colors.py", line 44, in <module> from reportlab.lib.utils import asNative, isStr, rl_safe_eval File "C:\Program Files\Python310\lib\site-packages\reportlab\lib\utils.py", line 389, in <module> haveImages = Image is not None NameError: name 'Image' is not defined Pillow and reportlab are installed. After searching online I found similar error reports and the solution was to add this line prior to the previous library call (again, my colleague does need this on his end): from PIL import Image However, this did not fix the problem, the error persists. Also of note both these lines get greyed out by PyCharm which apparently means that these libraries are already loaded so these lines are unnecessary? It is counter intuitive that an unnecessary command would cause an error. The other libraries imported are os.path, sys, datetime, and tkinter. Also, even if I just put these two lines in a new py file I get the same behavior: greyed out and error.
[ "Not a very satisfying answer but after uninstalling and reinstalling both Python and the IDE everything worked.\n" ]
[ 0 ]
[]
[]
[ "python", "python_imaging_library" ]
stackoverflow_0074452121_python_python_imaging_library.txt
Q: Equivalent of "points_to_xy" in GeoPandas to generate LineStrings faster? I have a list of lines defined by start and end points. The size is on the order of 100,000s to possibly low 1,000,000. For making a list of points I use points_from_xy in GeoPandas, which is highly optimized, but is there a similar and fast way to make LineStrings in GeoPandas/Shapely? My current method is as follows, but I can't think of another way that can bypass the use of an explicit loop. [((start_x[i], start_y[i]), (end_x[i], end_y[i])) for i in range(n_pts)] A: You can use points_from_xy to build two sets of GeometryArrays, then use some sneaky geometric set operations and constructive methods to get the result. Specifically, the convex_hull of two points is a line :) # setup import numpy as np, geopandas as gpd, shapely.geometry N = int(1e7) x1, x2, y1, y2 = (np.random.random(size=N) for _ in range(4)) Running the following with 10 million points finishes in a manageable amount of time: In [3]: %%time ...: ...: points1 = gpd.points_from_xy(x1, y1) ...: points2 = gpd.points_from_xy(x2, y2) ...: lines = points1.union(points2).convex_hull ...: ...: CPU times: user 18 s, sys: 4.93 s, total: 22.9 s Wall time: 25 s The result is a GeometryArray of LineString objects: In [4]: lines Out[4]: <GeometryArray> [<shapely.geometry.linestring.LineString object at 0x186e78880>, <shapely.geometry.linestring.LineString object at 0x186e78d60>, <shapely.geometry.linestring.LineString object at 0x186e78880>, <shapely.geometry.linestring.LineString object at 0x186e78d60>, <shapely.geometry.linestring.LineString object at 0x186e78880>, <shapely.geometry.linestring.LineString object at 0x186e78d60>, <shapely.geometry.linestring.LineString object at 0x186e78880>, <shapely.geometry.linestring.LineString object at 0x186e78d60>, <shapely.geometry.linestring.LineString object at 0x186e78880>, <shapely.geometry.linestring.LineString object at 0x186e78d60>, ... <shapely.geometry.linestring.LineString object at 0x186e79e70>, <shapely.geometry.linestring.LineString object at 0x186e7bac0>, <shapely.geometry.linestring.LineString object at 0x186e79e70>, <shapely.geometry.linestring.LineString object at 0x186e7bac0>, <shapely.geometry.linestring.LineString object at 0x186e79e70>, <shapely.geometry.linestring.LineString object at 0x186e7bac0>, <shapely.geometry.linestring.LineString object at 0x186e79e70>, <shapely.geometry.linestring.LineString object at 0x186e7bac0>, <shapely.geometry.linestring.LineString object at 0x186e79e70>, <shapely.geometry.linestring.LineString object at 0x186e7bac0>] Length: 10000000, dtype: geometry I tried this using shapely.geometry.LineString with 1/10 the points (1e6) in a list comprehension and it took 23.8 seconds. I got bored waiting for this with 1e7 points...
Equivalent of "points_to_xy" in GeoPandas to generate LineStrings faster?
I have a list of lines defined by start and end points. The size is on the order of 100,000s to possibly low 1,000,000. For making a list of points I use points_from_xy in GeoPandas, which is highly optimized, but is there a similar and fast way to make LineStrings in GeoPandas/Shapely? My current method is as follows, but I can't think of another way that can bypass the use of an explicit loop. [((start_x[i], start_y[i]), (end_x[i], end_y[i])) for i in range(n_pts)]
[ "You can use points_from_xy to build two sets of GeometryArrays, then use some sneaky geometric set operations and constructive methods to get the result. Specifically, the convex_hull of two points is a line :)\n# setup \nimport numpy as np, geopandas as gpd, shapely.geometry\n\nN = int(1e7)\nx1, x2, y1, y2 = (np.random.random(size=N) for _ in range(4))\n\nRunning the following with 10 million points finishes in a manageable amount of time:\nIn [3]: %%time\n ...:\n ...: points1 = gpd.points_from_xy(x1, y1)\n ...: points2 = gpd.points_from_xy(x2, y2)\n ...: lines = points1.union(points2).convex_hull\n ...:\n ...:\nCPU times: user 18 s, sys: 4.93 s, total: 22.9 s\nWall time: 25 s\n\nThe result is a GeometryArray of LineString objects:\nIn [4]: lines\nOut[4]:\n<GeometryArray>\n[<shapely.geometry.linestring.LineString object at 0x186e78880>,\n <shapely.geometry.linestring.LineString object at 0x186e78d60>,\n <shapely.geometry.linestring.LineString object at 0x186e78880>,\n <shapely.geometry.linestring.LineString object at 0x186e78d60>,\n <shapely.geometry.linestring.LineString object at 0x186e78880>,\n <shapely.geometry.linestring.LineString object at 0x186e78d60>,\n <shapely.geometry.linestring.LineString object at 0x186e78880>,\n <shapely.geometry.linestring.LineString object at 0x186e78d60>,\n <shapely.geometry.linestring.LineString object at 0x186e78880>,\n <shapely.geometry.linestring.LineString object at 0x186e78d60>,\n ...\n <shapely.geometry.linestring.LineString object at 0x186e79e70>,\n <shapely.geometry.linestring.LineString object at 0x186e7bac0>,\n <shapely.geometry.linestring.LineString object at 0x186e79e70>,\n <shapely.geometry.linestring.LineString object at 0x186e7bac0>,\n <shapely.geometry.linestring.LineString object at 0x186e79e70>,\n <shapely.geometry.linestring.LineString object at 0x186e7bac0>,\n <shapely.geometry.linestring.LineString object at 0x186e79e70>,\n <shapely.geometry.linestring.LineString object at 0x186e7bac0>,\n <shapely.geometry.linestring.LineString object at 0x186e79e70>,\n <shapely.geometry.linestring.LineString object at 0x186e7bac0>]\nLength: 10000000, dtype: geometry\n\nI tried this using shapely.geometry.LineString with 1/10 the points (1e6) in a list comprehension and it took 23.8 seconds. I got bored waiting for this with 1e7 points...\n" ]
[ 1 ]
[]
[]
[ "geopandas", "geospatial", "python", "shapely" ]
stackoverflow_0074480794_geopandas_geospatial_python_shapely.txt
Q: For loop in pandas dataframe column For example I have 2 data frames with 3 columns and I want to do a = df[x].isin(df2[x]) b = df[x].isin(df2[y]) c = df[y].isin(df2[x]) d = df[y].isin(df2[x]) x and y are column names of my two dataframes. How can I do it in a loop and save each result, so it can be elegant? The result I expect is more or less: a; True = ddd False = eee b; True = rrr False = fff c; and so d; and so thanks A: You need two loops: columns = (x, y) for a in columns: for b in columns: df[a].isin(df2[b])
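To actually keep each result, as the question asks, one small sketch (column names x and y are taken from the question) is to store the boolean masks in a dict keyed by the column pair:

    results = {}
    for a in ('x', 'y'):
        for b in ('x', 'y'):
            results[(a, b)] = df[a].isin(df2[b])

    # e.g. results[('x', 'y')] is the mask for df['x'].isin(df2['y'])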
For loop in pandas dataframe column
For example I have 2 data frames with 3 columns and I want to do a = df[x].isin(df2[x]) b = df[x].isin(df2[y]) c = df[y].isin(df2[x]) d = df[y].isin(df2[x]) x and y is a column name of my two dataframes. How can I do it in loop and save each result ? So it can be elegant. The result I expected more or less : a; True = ddd False = eee b; True = rrr False = fff c; and so d; and so thanks
[ "You need two loops:\ncolumns = (x, y)\nfor a in columns:\n for b in columns: \n df[a].isin(df2[b])\n\n" ]
[ 0 ]
[]
[]
[ "for_loop", "loops", "pandas", "python" ]
stackoverflow_0074482098_for_loop_loops_pandas_python.txt
Q: Understanding session with fastApi dependency I am new to Python and was studying FastApi and SQL model. Reference link: https://sqlmodel.tiangolo.com/tutorial/fastapi/session-with-dependency/#the-with-block Here, they have something like this def create_hero(*, session: Session = Depends(get_session), hero: HeroCreate): db_hero = Hero.from_orm(hero) session.add(db_hero) session.commit() session.refresh(db_hero) return db_hero Here I am unable to understand this part session.add(db_hero) session.commit() session.refresh(db_hero) What is it doing and how is it working? Couldn't understand this In fact, you could think that all that block of code inside of the create_hero() function is still inside a with block for the session, because this is more or less what's happening behind the scenes. But now, the with block is not explicitly in the function, but in the dependency above: A: It's an explanation from docs what is a session In the most general sense, the Session establishes all conversations with the database and represents a “holding zone” for all the objects which you’ve loaded or associated with it during its lifespan. It provides the interface where SELECT and other queries are made that will return and modify ORM-mapped objects. The ORM objects themselves are maintained inside the Session, inside a structure called the identity map - a data structure that maintains unique copies of each object, where “unique” means “only one object with a particular primary key”. So # This line just simply create a python object # that sqlalchemy would "understand". db_hero = Hero.from_orm(hero) # This line add the object `db_hero` to a “holding zone” session.add(db_hero) # This line take all objects from a “holding zone” and put them in a database # In our case we have only one object in this zone, # but it is possible to have several session.commit() # This line gets created row from the database and put it to the object. # It means it could have new attributes. For example id, # that database would set for this new row session.refresh(db_hero)
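For context, the get_session dependency referenced in the question is, in the SQLModel tutorial, roughly a generator that opens the session for the request and closes it when the response is done; a sketch (the engine is assumed to be created elsewhere):

    from sqlmodel import Session

    def get_session():
        # the "with" block lives here, so the session is closed
        # after the path operation (and the response) has finished
        with Session(engine) as session:
            yield session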
Understanding session with fastApi dependency
I am new to Python and was studying FastApi and SQL model. Reference link: https://sqlmodel.tiangolo.com/tutorial/fastapi/session-with-dependency/#the-with-block Here, they have something like this def create_hero(*, session: Session = Depends(get_session), hero: HeroCreate): db_hero = Hero.from_orm(hero) session.add(db_hero) session.commit() session.refresh(db_hero) return db_hero Here I am unable to understand this part session.add(db_hero) session.commit() session.refresh(db_hero) What is it doing and how is it working? Couldn't understand this In fact, you could think that all that block of code inside of the create_hero() function is still inside a with block for the session, because this is more or less what's happening behind the scenes. But now, the with block is not explicitly in the function, but in the dependency above:
[ "It's an explanation from docs what is a session\n\nIn the most general sense, the Session establishes all conversations\nwith the database and represents a “holding zone” for all the objects\nwhich you’ve loaded or associated with it during its lifespan. It\nprovides the interface where SELECT and other queries are made that\nwill return and modify ORM-mapped objects. The ORM objects themselves\nare maintained inside the Session, inside a structure called the\nidentity map - a data structure that maintains unique copies of each\nobject, where “unique” means “only one object with a particular\nprimary key”.\n\nSo\n# This line just simply create a python object\n# that sqlalchemy would \"understand\".\ndb_hero = Hero.from_orm(hero)\n\n# This line add the object `db_hero` to a “holding zone”\nsession.add(db_hero)\n\n# This line take all objects from a “holding zone” and put them in a database\n# In our case we have only one object in this zone, \n# but it is possible to have several\nsession.commit()\n\n# This line gets created row from the database and put it to the object. \n# It means it could have new attributes. For example id, \n# that database would set for this new row\nsession.refresh(db_hero)\n\n" ]
[ 0 ]
[]
[]
[ "django", "fastapi", "python" ]
stackoverflow_0074481604_django_fastapi_python.txt
Q: Rearrange values in dataframe based on condition in Pandas I have a dataset, where when the sum of Q1 24 - Q4 24 is between the number 1 - 2.5, I would like to place the number 2 in that row under Q4 24. Data ID type Q1 24 Q2 24 Q3 24 Q4 24 AA hi 2.0 1.2 0.5 0.6 AA hello 0.7 2.0 0.6 0.6 AA bye 0.6 0.6 0.6 0.4 AA ok 0.3 0.4 0.2 0.2 Desired ID type Q1 24 Q2 24 Q3 24 Q4 24 AA hi 2.0 1.2 0.5 0.6 AA hello 0.7 2.0 0.6 0.6 AA bye 0.0 0.0 0.0 2.0 AA ok 0.0 0.0 0.0 2.0 Doing df.loc[df.iloc[:,2:].sum(axis=1)>1<2.5, ['Q1 24','Q2 24','Q3 24','Q4 24']]= 2 A SO member helped with the above script, but how would I only target that row under Q4 24. I am thinking I can utilize iloc again for this. Any suggestion is appreciated. A: You were on the right track with boolean indexing. I would use: df.loc[df.filter(regex=r'^Q\d').sum(axis=1).between(1, 2.5), 'Q4 24'] = 2 Output: ID type Q1 24 Q2 24 Q3 24 Q4 24 0 AA hi 2.0 1.2 0.5 0.6 1 AA hello 0.7 2.0 0.6 0.6 2 AA bye 0.6 0.6 0.6 2.0 3 AA ok 0.3 0.4 0.2 2.0 adding 2 to last column and setting all the others to 0 sel = df.filter(regex=r'^Q\d') df.loc[sel.sum(axis=1).between(1, 2.5), sel.columns] = [0]*(sel.shape[1]-1)+[2] Output: ID type Q1 24 Q2 24 Q3 24 Q4 24 0 AA hi 2.0 1.2 0.5 0.6 1 AA hello 0.7 2.0 0.6 0.6 2 AA bye 0.0 0.0 0.0 2.0 3 AA ok 0.0 0.0 0.0 2.0 A: As an alternative: import numpy as np df['Q4 24']=np.where((df[df.columns[2:]].sum(axis=1)<=2.5) & (df[df.columns[2:]].sum(axis=1)>=1),2,df['Q4 24']) print(df) ''' ID type Q1 24 Q2 24 Q3 24 Q4 24 0 AA hi 2.0 1.2 0.5 0.6 1 AA hello 0.7 2.0 0.6 0.6 2 AA bye 0.6 0.6 0.6 2.0 3 AA ok 0.3 0.4 0.2 2.0 ''' A: This will do what you are after including zeroing the other columns: df.loc[df.filter(regex=r'^Q\d').sum(axis=1).between(1, 2.5), ['Q1 24','Q2 24','Q3 24','Q4 24']] = 0, 0, 0, 2 ID type Q1 24 Q2 24 Q3 24 Q4 24 0 AA hi 2.0 1.2 0.5 0.6 1 AA hello 0.7 2.0 0.6 0.6 2 AA bye 0 0 0 2.0 3 AA ok 0 0 0 2.0
Rearrange values in dataframe based on condition in Pandas
I have a dataset, where when the sum of Q1 24 - Q4 24 is between the number 1 - 2.5, I would like to place the number 2 in that row under Q4 24. Data ID type Q1 24 Q2 24 Q3 24 Q4 24 AA hi 2.0 1.2 0.5 0.6 AA hello 0.7 2.0 0.6 0.6 AA bye 0.6 0.6 0.6 0.4 AA ok 0.3 0.4 0.2 0.2 Desired ID type Q1 24 Q2 24 Q3 24 Q4 24 AA hi 2.0 1.2 0.5 0.6 AA hello 0.7 2.0 0.6 0.6 AA bye 0.0 0.0 0.0 2.0 AA ok 0.0 0.0 0.0 2.0 Doing df.loc[df.iloc[:,2:].sum(axis=1)>1<2.5, ['Q1 24','Q2 24','Q3 24','Q4 24']]= 2 A SO member helped with the above script, but how would I only target that row under Q4 24. I am thinking I can utilize iloc again for this. Any suggestion is appreciated.
[ "You were on the right track with boolean indexing.\nI would use:\ndf.loc[df.filter(regex=r'^Q\\d').sum(axis=1).between(1, 2.5), 'Q4 24'] = 2\n\nOutput:\n ID type Q1 24 Q2 24 Q3 24 Q4 24\n0 AA hi 2.0 1.2 0.5 0.6\n1 AA hello 0.7 2.0 0.6 0.6\n2 AA bye 0.6 0.6 0.6 2.0\n3 AA ok 0.3 0.4 0.2 2.0\n\nadding 2 to last column and setting all the others to 0\nsel = df.filter(regex=r'^Q\\d')\n\ndf.loc[sel.sum(axis=1).between(1, 2.5), sel.columns] = [0]*(sel.shape[1]-1)+[2]\n\nOutput:\n ID type Q1 24 Q2 24 Q3 24 Q4 24\n0 AA hi 2.0 1.2 0.5 0.6\n1 AA hello 0.7 2.0 0.6 0.6\n2 AA bye 0.0 0.0 0.0 2.0\n3 AA ok 0.0 0.0 0.0 2.0\n\n", "As an alternative:\nimport numpy as np\ndf['Q4 24']=np.where((df[df.columns[2:]].sum(axis=1)<=2.5) & (df[df.columns[2:]].sum(axis=1)>=1),2,df['Q4 24'])\nprint(df)\n'''\n ID type Q1 24 Q2 24 Q3 24 Q4 24\n0 AA hi 2.0 1.2 0.5 0.6\n1 AA hello 0.7 2.0 0.6 0.6\n2 AA bye 0.6 0.6 0.6 2.0\n3 AA ok 0.3 0.4 0.2 2.0\n'''\n\n\n", "This will do what you are after including zeroing the other columns:\ndf.loc[df.filter(regex=r'^Q\\d').sum(axis=1).between(1, 2.5), ['Q1 24','Q2 24','Q3 24','Q4 24']] = 0, 0, 0, 2\n\n ID type Q1 24 Q2 24 Q3 24 Q4 24\n0 AA hi 2.0 1.2 0.5 0.6\n1 AA hello 0.7 2.0 0.6 0.6\n2 AA bye 0 0 0 2.0\n3 AA ok 0 0 0 2.0\n\n" ]
[ 1, 1, 1 ]
[]
[]
[ "numpy", "pandas", "python" ]
stackoverflow_0074482083_numpy_pandas_python.txt
Q: for loop is not iterating correctly I tried to iterate through this list and append the indexes of the parentheses, but it gave the wrong ones back. Code: t = "(= 2 (+ 4 5))" a = [] for each in t: if (each == '(') or (each == ')'): a.append(t.index(each)) else: pass print(t) print(a) Result: (= 2 (+ 4 5)) [0, 0, 11, 11] It should be: (= 2 (+ 4 5)) [0, 5, 11, 12]
for loop is not iterating correctly
I tried to iterate through this list and append the indexes of the parentheses, but it gave the wrong ones back. Code: t = "(= 2 (+ 4 5))" a = [] for each in t: if (each == '(') or (each == ')'): a.append(t.index(each)) else: pass print(t) print(a) Result: (= 2 (+ 4 5)) [0, 0, 11, 11] It should be: (= 2 (+ 4 5)) [0, 5, 11, 12]
[ "You can avoid making python search back through a list (You have t.index(each)) by using enumerate() to get the index directly:\nt = \"(= 2 (+ 4 5))\"\na = []\nfor index,each in enumerate(t):\n if (each == '(') or (each == ')'):\n a.append(index)\n else:\n pass\nprint(t)\nprint(a)\n\nOutput as requested\n" ]
[ 1 ]
[]
[]
[ "append", "for_loop", "list", "python", "string" ]
stackoverflow_0074482165_append_for_loop_list_python_string.txt
Q: ModuleNotFoundError: No module named 'cmake', even though cmake is installed I am trying to install the Python lib 'Mapping', but when it tries to install 'osqp' I get the following error: ModuleNotFoundError: No module named 'cmake'. But 'cmake' is installed and when I run 'pip freeze' I find it, and I am also able to use 'import cmake' without any errors. What could be the issue? Thanks. I tried to reinstall cmake and reboot the laptop, but it didn't work.
ModuleNotFoundError: No module named 'cmake', even though cmake is installed
I am trying to install the Python lib 'Mapping', but when it tries to install 'osqp' I get the following error: ModuleNotFoundError: No module named 'cmake'. But 'cmake' is installed and when I run 'pip freeze' I find it, and I am also able to use 'import cmake' without any errors. What could be the issue? Thanks. I tried to reinstall cmake and reboot the laptop, but it didn't work.
[]
[]
[ "unfortunatelly, this problem is not very often. May you can try to reinstall it and clean the ide. All the best\n" ]
[ -1 ]
[ "cmake", "python" ]
stackoverflow_0074476006_cmake_python.txt
Q: Gurobi: get LHS (left-hand side) of a constraint As written HERE (or HERE), one can get the sense (<, =, >) and the RHS (right-hand side) of a constraint like this: for cnstr in model.getConstrs(): print(cnstr.sense, cnstr.rhs) How can one get the coefficients in a constraint? I checked the attributes of variables and models, but found nothing of the sort. A: Okay, it seems that one way is using the Model.getCoeff() function: for cnstr in pre.getConstrs(): for var in pre.getVars(): print(pre.getCoeff(cnstr, var), end=" ") A: The best way to do this is to walk the object from the LHS. Assuming your model consists of only linear constraints, this looks like the following: for cnstr in model.getConstrs(): print("Constraint %s: sense %s, RHS=%f" % (cnstr.ConstrName, cnstr.Sense, cnstr.RHS)) row = model.getRow(cnstr) for k in range(row.size()): print("Variable %s, coefficient %f" % (row.getVar(k).VarName, row.getCoeff(k))) You can also adapt this for quadratic constraints. A: After searching, I ended up with this example, based on the attribute table: m = gp.Model(....) for c in m.getConstrs(): lhs = c.rhs - c.slack
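Another option, if a recent Gurobi (9.0+) with SciPy is available, is to pull the whole coefficient matrix at once instead of walking each row; a hedged sketch using the same model object as the question:

    A = model.getA()                 # scipy.sparse matrix of all LHS coefficients
    constrs = model.getConstrs()
    dvars = model.getVars()

    row = A.getrow(0)                # coefficients of the first constraint
    coeffs = {dvars[j].VarName: v for j, v in zip(row.indices, row.data)}
    print(constrs[0].ConstrName, coeffs)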
Gurobi: get LHS (left-hand side) of a constraint
As written HERE (or HERE), one can get the sense (<, =, >) and the RHS (right-hand side) of a constraint like this: for cnstr in model.getConstrs(): print(cnstr.sense, cnstr.rhs) How can one get the coefficients in a constraint? I checked the attributes of variables and models, but found nothing of the sort.
[ "Okay, it seems that one way is using the Model.getCoeff() function:\nfor cnstr in pre.getConstrs():\n for var in pre.getVars():\n print(pre.getCoeff(cnstr, var), end=\" \")\n\n", "The best way to do this is to walk the object from the LHS. Assuming your model consists of only linear constraints, this looks like the following:\nfor cnstr in model.getConstrs():\n print(\"Constraint %s: sense %s, RHS=%f\" % (cnstr.ConstrName, cnstr.Sense, cnstr.RHS))\n row = model.getRow(cnstr)\n for k in range(row.size()):\n print(\"Variable %s, coefficient %f\" % (row.getVar(k).VarName, row.getCoeff(k))\n\nYou can also adapt this for quadratic constraints.\n", "After searching, i been looking for the example based in the table\n\n m = gp.Model(....)\n for c in m.getConstrs():\n lhs = c.rhs - c.slack \n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "constraints", "gurobi", "linear_programming", "python" ]
stackoverflow_0068776358_constraints_gurobi_linear_programming_python.txt
Q: How to merge 2 columns containing string dates and None into one column I've got this sample data frame. Each user has 2 rows. They have an arrival and a departure date, and one of them is always None. The dates are strings. This is what my data currently looks like: traveller_id arrival departure 282840560712311 2022-10-20 None 282840560712311 None 2022-10-23 439863739170884 2022-12-22 None 439863739170884 None 2022-12-25 import pandas as pd import numpy as np df = pd.DataFrame(data = {'traveller_id': [712311, 712311, 170884, 170884] , 'arrival': ['2022-10-20', None, '2022-12-22', None] , 'departure': [None, '2022-10-23', None, '2022-12-25'] }) The goal is to have only 1 row per user, with dates in the other columns (and no None). It should look like this: traveller_id arrival departure 282840560712311 2022-10-20 2022-10-23 439863739170884 2022-12-22 2022-12-25 A: Replace None with pd.NaT and then do an agg with max after groupby traveller_id: df.replace({None:pd.NaT}).groupby('traveller_id', as_index=False).agg(max) output on your example from constructor: traveller_id arrival departure 0 170884 2022-12-22 2022-12-25 1 712311 2022-10-20 2022-10-23 I assumed the strings as dates. If they are strings, yes you can convert them as dates first and no need to replace None with pd.NaT. So with that: df['arrival'] = pd.to_datetime(df['arrival']) df['departure'] = pd.to_datetime(df['departure']) df = df.groupby('traveller_id', as_index=False).agg(max)
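A closely related sketch: because each traveller has exactly one non-null value per column, groupby(...).first() (which skips nulls) collapses the pairs as well; parsing the strings to dates first is optional but keeps the dtypes tidy:

    df['arrival'] = pd.to_datetime(df['arrival'])
    df['departure'] = pd.to_datetime(df['departure'])
    out = df.groupby('traveller_id', as_index=False).first()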
How to merge 2 columns containing string dates and None into one column
I've got this sample data frame. Each user has 2 rows. They have an arrival and a departure date, and one of them is always None. The dates are strings. This is what my data currently looks like: traveller_id arrival departure 282840560712311 2022-10-20 None 282840560712311 None 2022-10-23 439863739170884 2022-12-22 None 439863739170884 None 2022-12-25 import pandas as pd import numpy as np df = pd.DataFrame(data = {'traveller_id': [712311, 712311, 170884, 170884] , 'arrival': ['2022-10-20', None, '2022-12-22', None] , 'departure': [None, '2022-10-23', None, '2022-12-25'] }) The goal is to have only 1 row per user, with dates in the other columns (and no None). It should look like this: traveller_id arrival departure 282840560712311 2022-10-20 2022-10-23 439863739170884 2022-12-22 2022-12-25
[ "Replace None with pd.NaT and then do an agg with max after groupby traveller_id:\ndf.replace({None:pd.NaT}).groupby('traveller_id', as_index=False).agg(max)\n\noutput on your example from constructor:\n traveller_id arrival departure\n0 170884 2022-12-22 2022-12-25\n1 712311 2022-10-20 2022-10-23\n\nI assumed the strings as dates. If they are strings, yes you can convert them as dates first and no need to replace None with pd.NaT.\nSo with that:\ndf['arrival'] = pd.to_datetime(df['arrival'])\ndf['departure'] = pd.to_datetime(df['departure'])\ndf = df.groupby('traveller_id', as_index=False).agg(max)\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "python" ]
stackoverflow_0074482180_dataframe_python.txt
Q: Peculiar pandas 'is' vs '==' behaviour with functions referencing data frame elements In writing a function that returns the exact (row, column) position of a known element in a data frame (is there an efficient built-in function already?), I came across the following strange behaviour. It is easiest to describe with an example. Use the following data frame: In [0] df = pd.DataFrame({'A': ['one', 'two', 'three'] , 'B': ['foo', 'bar', 'foo'], 'C':[1,2,3], 'D':[4,5,6]}, index = [0,1,2]) In [1] df Out [1]: A B C D 0 one foo 1 4 1 two bar 2 5 2 three foo 3 6 My original function to return an exact (row, col) tuple used "is" as I wanted to ensure I was referring to the correct object, rather than the first occurring object in the data frame that held the same numeric value so if I wanted the index of the number 4 in (0,'D'), I wanted to make sure I wasn't referencing a number 4 that happened be in (0,'A') for example. My original data frame was all floats, but I've used the simplified one above with strings and ints to highlight some of the strange behaviour, as well as written a simplified function to show the quirky behaviour. I create this function to return the element at a particular (row,col) location in the data frame. In [2] def testr(datframe,row,col): return datframe[col][row] Now using this function to test object reference equality (pointing to the same thing): In [3] df.loc[0,'B'] is testr(df,0,'B') Out [3] True All good. However, trying a numeric entry: In [4] df.loc[0,'C'] is testr(df,0,'C') Out [4] False This is confusing to me. I thought that my function was returning a reference to a particular element in the data frame and thus 'is' should return True, as in the case of a string element. Something is going on behind the scenes with the return from my function, and it appears that what is being returned is not the same object that is in the data frame, but a copy, when that element is a numeric. Note that substituting '==' for 'is' works fine for numeric elements (as one would expect). Can anyone assist me in understanding more deeply what is happening here? Many thanks. A: I thought that my function was returning a reference to a particular element in the data frame and thus 'is' should return True, as in the case of a string element. No. A new python object is created each time you retrieve the item, because it isn't stored as a python object (e.g. with an object dtype) it's stored in a primitive buffer of primitive, 64-bit (or possibly 32 bit) integers. This is similar to "automatic boxing" in OOP languages with primitive types (as opposed to reference types- note, Python itself has no such distinction, everything is always an object). So, consider: >>> import numpy as np >>> import sys >>> arr = np.array([1,2,3], dtype=np.int64) >>> arr.nbytes 24 >>> arr.nbytes == 3*8 True >>> e1 = arr[0] >>> sys.getsizeof(e1) # not 64 bits (8 bytes), it's actually a big python object 32 >>> e2 = arr[0] >>> e1 1 >>> type(e1) <class 'numpy.int64'> >>> e1 is e2 False
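A small self-contained illustration of that boxing behaviour (assuming CPython and default pandas dtypes):

    import pandas as pd

    df = pd.DataFrame({'B': ['foo'], 'C': [1]})

    print(df.loc[0, 'B'] is df.loc[0, 'B'])  # True: object dtype hands back the stored str object
    print(df.loc[0, 'C'] is df.loc[0, 'C'])  # False: int64 dtype boxes a fresh numpy scalar per access
    print(df.loc[0, 'C'] == df.loc[0, 'C'])  # True: the values still compare equal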
Peculiar pandas 'is' vs '==' behaviour with functions referencing data frame elements
In writing a function that returns the exact (row, column) position of a known element in a data frame (is there an efficient built-in function already?), I came across the following strange behaviour. It is easiest to describe with an example. Use the following data frame: In [0] df = pd.DataFrame({'A': ['one', 'two', 'three'] , 'B': ['foo', 'bar', 'foo'], 'C':[1,2,3], 'D':[4,5,6]}, index = [0,1,2]) In [1] df Out [1]: A B C D 0 one foo 1 4 1 two bar 2 5 2 three foo 3 6 My original function to return an exact (row, col) tuple used "is" as I wanted to ensure I was referring to the correct object, rather than the first occurring object in the data frame that held the same numeric value so if I wanted the index of the number 4 in (0,'D'), I wanted to make sure I wasn't referencing a number 4 that happened be in (0,'A') for example. My original data frame was all floats, but I've used the simplified one above with strings and ints to highlight some of the strange behaviour, as well as written a simplified function to show the quirky behaviour. I create this function to return the element at a particular (row,col) location in the data frame. In [2] def testr(datframe,row,col): return datframe[col][row] Now using this function to test object reference equality (pointing to the same thing): In [3] df.loc[0,'B'] is testr(df,0,'B') Out [3] True All good. However, trying a numeric entry: In [4] df.loc[0,'C'] is testr(df,0,'C') Out [4] False This is confusing to me. I thought that my function was returning a reference to a particular element in the data frame and thus 'is' should return True, as in the case of a string element. Something is going on behind the scenes with the return from my function, and it appears that what is being returned is not the same object that is in the data frame, but a copy, when that element is a numeric. Note that substituting '==' for 'is' works fine for numeric elements (as one would expect). Can anyone assist me in understanding more deeply what is happening here? Many thanks.
[ "\nI thought that my function was returning a reference to a particular\nelement in the data frame and thus 'is' should return True, as in the\ncase of a string element.\n\nNo. A new python object is created each time you retrieve the item, because it isn't stored as a python object (e.g. with an object dtype) it's stored in a primitive buffer of primitive, 64-bit (or possibly 32 bit) integers. This is similar to \"automatic boxing\" in OOP languages with primitive types (as opposed to reference types- note, Python itself has no such distinction, everything is always an object).\nSo, consider:\n>>> import numpy as np\n>>> import sys\n>>> arr = np.array([1,2,3], dtype=np.int64)\n>>> arr.nbytes\n24\n>>> arr.nbytes == 3*8\nTrue\n>>> e1 = arr[0]\n>>> sys.getsizeof(e1) # not 64 bits (8 bytes), it's actually a big python object\n32\n>>> e2 = arr[0]\n>>> e1\n1\n>>> type(e1)\n<class 'numpy.int64'>\n>>> e1 is e2\nFalse\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "function", "pandas", "python" ]
stackoverflow_0074482200_dataframe_function_pandas_python.txt
Q: Get xml value of ElementTree Element I would like to get the xml value of an element in ElementTree. For example, if I had the code: <?xml version="1.0" encoding="UTF-8"?> <item> <child>asd</child> hello world <ch>jkl</ch> </item> It would get me <child>asd</child> hello world <ch>jkl</ch> Here's what I tried so far: import xml.etree.ElementTree as ET root = ET.fromstring("""<?xml version="1.0" encoding="UTF-8"?> <item> <child>asd</child> hello world <ch>jkl</ch> </item>""") print(root.text) A: Try print(ET.tostring(root.find('.//child')).decode(),ET.tostring(root.find('.//ch')).decode()) Or, more readable: elems = ['child','ch'] for elem in elems: print(ET.tostring(root.find(f'.//{elem}')).decode()) The output, based on the xml in your question, should be what you're looking for. A: Building on Jack Fleeting's answer, I created a solution I feel is more general, not just relating to the xml I inserted. import xml.etree.ElementTree as ET root = ET.fromstring("""<?xml version="1.0" encoding="UTF-8"?> <item> <child>asd</child> hello world <ch>jkl</ch> </item>""") for elem in root: print(ET.tostring(root.find(f'.//{elem.tag}')).decode())
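A further simplification (a sketch building on the same idea and the same root variable): iterating the children directly and serializing each one avoids the extra find() calls and also keeps the text that follows each child (the tail, e.g. "hello world"), which the find-based loops drop:

    for child in root:
        # tostring() serializes the element plus its tail text
        print(ET.tostring(child, encoding='unicode'), end='')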
Get xml value of ElementTree Element
I would like to get the xml value of an element in ElementTree. For example, if I had the code: <?xml version="1.0" encoding="UTF-8"?> <item> <child>asd</child> hello world <ch>jkl</ch> </item> It would get me <child>asd</child> hello world <ch>jkl</ch> Here's what I tried so far: import xml.etree.ElementTree as ET root = ET.fromstring("""<?xml version="1.0" encoding="UTF-8"?> <item> <child>asd</child> hello world <ch>jkl</ch> </item>""") print(root.text)
[ "Try\nprint(ET.tostring(root.find('.//child')).decode(),ET.tostring(root.find('.//ch')).decode())\n\nOr, more readable:\nelems = ['child','ch']\nfor elem in elems:\n print(ET.tostring(doc.find(f'.//{elem}')).decode())\n\nThe output, based on the xml in your question, should be what you're looking for.\n", "Building on Jack Fleeting's answer, I created a solution I feel is more general, not just relating to the xml I inserted.\nimport xml.etree.ElementTree as ET\nroot = ET.fromstring(\"\"\"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<item>\n<child>asd</child>\nhello world\n<ch>jkl</ch>\n</item>\"\"\")\nfor elem in root:\n print(ET.tostring(root.find(f'.//{elem.tag}')).decode())\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "xml" ]
stackoverflow_0074468730_python_xml.txt
Q: How to make a column header value into a date value and make the original value into its own column named value I'm using Python/Pandas. I'm receiving output that is coming in this format, where the actual date value is in the column header of the csv (see screenshot, not included). I need it to be in this format, where there is a column "date" and "value" that hold the data (see screenshot, not included). I was trying to use Pandas but I'm not sure exactly how to transpose this csv A: Actually you can use the melt method of a DataFrame, by choosing which columns will remain, and which ones have to be set as values import pandas as pd df = pd.DataFrame.from_dict({'name': ['Profit', 'Loss'], 'Account Code': ['ABC', 'DEF'], 'Level Name': ['Winner', 'Loser'], '01/2022': [100, 200], '02/2022': [300, 400], '03/2022': [500, 600]}) df2 = df.melt(id_vars=['name', 'Account Code', 'Level Name',], var_name="Date", value_name="Value").sort_values(by=['name', 'Account Code', 'Level Name',]) Hope this helps A: This should produce your desired outcome. Please let me know if you need more clarification import pandas as pd from datetime import datetime df = pd.DataFrame({'Name':['Profit', 'Loss'], 'Account Code':['ABC', 'DEF'], 'Level Name':['Winner', 'Loser'], '01/2022':['100', '200'], '02/2022':['300', '400']}) new_df_dict = {'Name':[], 'Account Code' :[], 'Level Name':[], 'Value' : [], 'Date':[]} for i in range(len(df['Name'])): for date_ in df.columns.values[list(df.columns.values).index('Level Name')+1:]: new_df_dict['Name'].append(df['Name'][i]) new_df_dict['Account Code'].append(df['Account Code'][i]) new_df_dict['Level Name'].append(df['Level Name'][i]) new_df_dict['Value'].append(df[date_][i]) new_df_dict['Date'].append(date_) for dt in range(len(new_df_dict['Date'])): new_df_dict['Date'][dt] = datetime.strptime(new_df_dict['Date'][dt], '%m/%Y') new_df = pd.DataFrame(new_df_dict) **You can use df = pd.read_csv(filepath) to read in your data
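As a small follow-up to the melt-based answer (same df2 and column names as there), the resulting Date column can then be parsed into real dates:

    df2['Date'] = pd.to_datetime(df2['Date'], format='%m/%Y')
    df2 = df2.sort_values(['name', 'Date'])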
How to make a column header value into a date value and make the original value into its own column named value
I'm using Python/Pandas. I'm receiving output that is coming in this format, where the actual date value is in the column header of the csv (see screenshot, not included). I need it to be in this format, where there is a column "date" and "value" that hold the data (see screenshot, not included). I was trying to use Pandas but I'm not sure exactly how to transpose this csv
[ "Actually you can use the melt method of a DataFrame, by choosing which columns will remain, and which one have to be set as values\nimport pandas as pd\n\ndf = pd.DataFrame.from_dict({'name': ['Profit', 'Loss'],\n 'Account Code': ['ABC', 'DEF'],\n 'Level Name': ['Winner', 'Loser'],\n '01/2022': [100, 200],\n '02/2022': [300, 400],\n '03/2022': [500, 600]})\n\ndf2 = df.melt(id_vars=['name', 'Account Code', 'Level Name',],\n var_name=\"Date\",\n value_name=\"Value\").sort_values(by=['name', 'Account Code', 'Level Name',])\n\nHope this helps\n", "This should produce your desired outcome. Please let me know if you need more clarification\nimport pandas as pd\nfrom datetime import datetime\n\ndf = pd.DataFrame({'Name':['Profit', 'Loss'],\n 'Account Code':['ABC', 'DEF'],\n 'Level Name':['Winner', 'Loser'],\n '01/2022':['100', '200'],\n '02/2022':['300', '400']})\n\nnew_df_dict = {'Name':[],\n 'Account Code' :[],\n 'Level Name':[],\n 'Value' : [],\n 'Date':[]}\n\nfor i in range(len(df['Name'])):\n for date_ in df.columns.values[list(df.columns.values).index('Level Name')+1:]:\n new_df_dict['Name'].append(df['Name'][i])\n new_df_dict['Account Code'].append(df['Account Code'][i])\n new_df_dict['Level Name'].append(df['Name'][i])\n new_df_dict['Value'].append(df[date_][i])\n new_df_dict['Date'].append(date_)\n\n\nfor dt in range(len(new_df_dict['Date'])):\n new_df_dict['Date'][dt] = datetime.strptime(new_df_dict['Date'][dt], '%m/%Y')\n\nnew_df = pd.DataFrame(new_df_dict)\n\n**You can use\ndf = pd.read_csv(filepath)\n\nto read in your data\n" ]
[ 1, 0 ]
[]
[]
[ "csv", "dataframe", "pandas", "python" ]
stackoverflow_0074481783_csv_dataframe_pandas_python.txt
Q: can only concatenate str (not "NoneType") to str BeautifulSoup hi everybody I make in my project a search on google with beautifulsoup and I received this message can only concatenate str (not "NoneType") to str when I try to search this is search.py from django.shortcuts import render, redirect import requests from bs4 import BeautifulSoup # done def google(s): USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Safari/537.36' headers = {"user-agent": USER_AGENT} r=None links = [] text = [] r = requests.get("https://www.google.com/search?q=" + s, headers=headers) soup = BeautifulSoup(r.content, "html.parser") for g in soup.find_all('div', class_='yuRUbf'): a = g.find('a') t = g.find('h3') links.append(a.get('href')) text.append(t.text) return links, text and this is views.py from django.shortcuts import render, redirect from netsurfers.search import google from bs4 import BeautifulSoup def home(request): return render(request,'home.html') def results(request): if request.method == "POST": result = request.POST.get('search') google_link,google_text = google(result) google_data = zip(google_link,google_text) if result == '': return redirect('home') else: return render(request,'results.html',{'google': google_data}) and this is urls.py from django.contrib import admin from django.urls import path,include from . import views urlpatterns = [ path('admin/', admin.site.urls), path('', views.home,name='home'), path('results/',views.results,name='Result') ] and this is a template home <form method='post' action="{% url 'Result' %}" class="d-flex" role="search"> {% csrf_token %} <input class="form-control me-2 " type="search" placeholder="ابحث وشارك بحثك مع الاخرين" aria-label="Search" style="width:22rem;"> <input type="submit" class="btn btn-outline-success" value="ابحث" > </form> and this is the template results {% for i,j in google %} <a href="{{ i }}" class="btn mt-3 w-100">{{ j }}</a><br> {% endfor %} I try to search with google with BeautifulSoup library but I got this message instead can only concatenate str (not "NoneType") to str A: your input template should have name property <input class="form-control me-2 " type="search" placeholder="ابحث وشارك بحثك مع الاخرين" aria-label="Search" style="width:22rem;" name="search">
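In addition to adding the name attribute, a defensive sketch for the view (not from the answer, just a common guard) keeps a missing field from ever reaching the string concatenation in search.py:

    result = request.POST.get('search', '')  # '' instead of None when the input has no name attribute
    if not result:
        return redirect('home')
    google_link, google_text = google(result)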
can only concatenate str (not "NoneType") to str BeautifulSoup
hi everybody I make in my project a search on google with beautifulsoup and I received this message can only concatenate str (not "NoneType") to str when I try to search this is search.py from django.shortcuts import render, redirect import requests from bs4 import BeautifulSoup # done def google(s): USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Safari/537.36' headers = {"user-agent": USER_AGENT} r=None links = [] text = [] r = requests.get("https://www.google.com/search?q=" + s, headers=headers) soup = BeautifulSoup(r.content, "html.parser") for g in soup.find_all('div', class_='yuRUbf'): a = g.find('a') t = g.find('h3') links.append(a.get('href')) text.append(t.text) return links, text and this is views.py from django.shortcuts import render, redirect from netsurfers.search import google from bs4 import BeautifulSoup def home(request): return render(request,'home.html') def results(request): if request.method == "POST": result = request.POST.get('search') google_link,google_text = google(result) google_data = zip(google_link,google_text) if result == '': return redirect('home') else: return render(request,'results.html',{'google': google_data}) and this is urls.py from django.contrib import admin from django.urls import path,include from . import views urlpatterns = [ path('admin/', admin.site.urls), path('', views.home,name='home'), path('results/',views.results,name='Result') ] and this is a template home <form method='post' action="{% url 'Result' %}" class="d-flex" role="search"> {% csrf_token %} <input class="form-control me-2 " type="search" placeholder="ابحث وشارك بحثك مع الاخرين" aria-label="Search" style="width:22rem;"> <input type="submit" class="btn btn-outline-success" value="ابحث" > </form> and this is the template results {% for i,j in google %} <a href="{{ i }}" class="btn mt-3 w-100">{{ j }}</a><br> {% endfor %} I try to search with google with BeautifulSoup library but I got this message instead can only concatenate str (not "NoneType") to str
[ "your input template should have name property\n <input class=\"form-control me-2 \" type=\"search\" placeholder=\"ابحث وشارك بحثك مع الاخرين\" aria-label=\"Search\" style=\"width:22rem;\" name=\"search\">\n\n \n\n" ]
[ 1 ]
[]
[]
[ "beautifulsoup", "django", "html", "javascript", "python" ]
stackoverflow_0074482128_beautifulsoup_django_html_javascript_python.txt
Q: Log exception with traceback in Python How can I log my Python exceptions? try: do_something() except: # How can I log my exception here, complete with its traceback? A: Use logging.exception from within the except: handler/block to log the current exception along with the trace information, prepended with a message. import logging LOG_FILENAME = '/tmp/logging_example.out' logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG) logging.debug('This message should go to the log file') try: run_my_stuff() except: logging.exception('Got exception on main handler') raise Now looking at the log file, /tmp/logging_example.out: DEBUG:root:This message should go to the log file ERROR:root:Got exception on main handler Traceback (most recent call last): File "/tmp/teste.py", line 9, in <module> run_my_stuff() NameError: name 'run_my_stuff' is not defined A: Use exc_info options may be better, remains warning or error title: try: # coode in here except Exception as e: logging.error(e, exc_info=True) A: My job recently tasked me with logging all the tracebacks/exceptions from our application. I tried numerous techniques that others had posted online such as the one above but settled on a different approach. Overriding traceback.print_exception. I have a write up at http://www.bbarrows.com/ That would be much easier to read but Ill paste it in here as well. When tasked with logging all the exceptions that our software might encounter in the wild I tried a number of different techniques to log our python exception tracebacks. At first I thought that the python system exception hook, sys.excepthook would be the perfect place to insert the logging code. I was trying something similar to: import traceback import StringIO import logging import os, sys def my_excepthook(excType, excValue, traceback, logger=logger): logger.error("Logging an uncaught exception", exc_info=(excType, excValue, traceback)) sys.excepthook = my_excepthook This worked for the main thread but I soon found that the my sys.excepthook would not exist across any new threads my process started. This is a huge issue because most everything happens in threads in this project. After googling and reading plenty of documentation the most helpful information I found was from the Python Issue tracker. The first post on the thread shows a working example of the sys.excepthook NOT persisting across threads (as shown below). Apparently this is expected behavior. import sys, threading def log_exception(*args): print 'got exception %s' % (args,) sys.excepthook = log_exception def foo(): a = 1 / 0 threading.Thread(target=foo).start() The messages on this Python Issue thread really result in 2 suggested hacks. Either subclass Thread and wrap the run method in our own try except block in order to catch and log exceptions or monkey patch threading.Thread.run to run in your own try except block and log the exceptions. The first method of subclassing Thread seems to me to be less elegant in your code as you would have to import and use your custom Thread class EVERYWHERE you wanted to have a logging thread. This ended up being a hassle because I had to search our entire code base and replace all normal Threads with this custom Thread. However, it was clear as to what this Thread was doing and would be easier for someone to diagnose and debug if something went wrong with the custom logging code. 
A custom logging thread might look like this: class TracebackLoggingThread(threading.Thread): def run(self): try: super(TracebackLoggingThread, self).run() except (KeyboardInterrupt, SystemExit): raise except Exception, e: logger = logging.getLogger('') logger.exception("Logging an uncaught exception") The second method of monkey patching threading.Thread.run is nice because I could just run it once right after __main__ and instrument my logging code in all exceptions. Monkey patching can be annoying to debug though as it changes the expected functionality of something. The suggested patch from the Python Issue tracker was: def installThreadExcepthook(): """ Workaround for sys.excepthook thread bug From http://spyced.blogspot.com/2007/06/workaround-for-sysexcepthook-bug.html (https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1230540&group_id=5470). Call once from __main__ before creating any threads. If using psyco, call psyco.cannotcompile(threading.Thread.run) since this replaces a new-style class method. """ init_old = threading.Thread.__init__ def init(self, *args, **kwargs): init_old(self, *args, **kwargs) run_old = self.run def run_with_except_hook(*args, **kw): try: run_old(*args, **kw) except (KeyboardInterrupt, SystemExit): raise except: sys.excepthook(*sys.exc_info()) self.run = run_with_except_hook threading.Thread.__init__ = init It was not until I started testing my exception logging that I realized I was going about it all wrong. To test, I had placed a raise Exception("Test") somewhere in my code. However, a method that called this method was wrapped in a try/except block that printed out the traceback and swallowed the exception. This was very frustrating because I saw the traceback being printed to STDOUT but not being logged. It was then that I decided that a much easier method of logging the tracebacks was just to monkey patch the method that all python code uses to print the tracebacks themselves, traceback.print_exception. I ended up with something similar to the following: def add_custom_print_exception(): old_print_exception = traceback.print_exception def custom_print_exception(etype, value, tb, limit=None, file=None): tb_output = StringIO.StringIO() traceback.print_tb(tb, limit, tb_output) logger = logging.getLogger('customLogger') logger.error(tb_output.getvalue()) tb_output.close() old_print_exception(etype, value, tb, limit=None, file=None) traceback.print_exception = custom_print_exception This code writes the traceback to a String Buffer and logs it to logging ERROR. I have a custom logging handler set up for the 'customLogger' logger which takes the ERROR level logs and sends them home for analysis.
Some hacks have been suggested there to work around this limitation, like monkey-patching Thread.__init__ to overwrite self.run with an alternative run method that wraps the original in a try block and calls sys.excepthook from inside the except block. Alternatively, you could just manually wrap the entry point for each of your threads in try/except yourself. A: What I was looking for: import sys import traceback exc_type, exc_value, exc_traceback = sys.exc_info() traceback_in_var = traceback.format_tb(exc_traceback) See: https://docs.python.org/3/library/traceback.html A: You can get the traceback using a logger, at any level (DEBUG, INFO, ...). Note that using logging.exception, the level is ERROR. # test_app.py import sys import logging logging.basicConfig(level="DEBUG") def do_something(): raise ValueError(":(") try: do_something() except Exception: logging.debug("Something went wrong", exc_info=sys.exc_info()) DEBUG:root:Something went wrong Traceback (most recent call last): File "test_app.py", line 10, in <module> do_something() File "test_app.py", line 7, in do_something raise ValueError(":(") ValueError: :( EDIT: This works too (using python 3.6) logging.debug("Something went wrong", exc_info=True) A: Uncaught exception messages go to STDERR, so instead of implementing your logging in Python itself you could send STDERR to a file using whatever shell you're using to run your Python script. In a Bash script, you can do this with output redirection, as described in the BASH guide. Examples Append errors to file, other output to the terminal: ./test.py 2>> mylog.log Overwrite file with interleaved STDOUT and STDERR output: ./test.py &> mylog.log A: Here is a version that uses sys.excepthook import traceback import sys logger = logging.getLogger() def handle_excepthook(type, message, stack): logger.error(f'An unhandled exception occured: {message}. Traceback: {traceback.format_tb(stack)}') sys.excepthook = handle_excepthook A: This is how I do it. try: do_something() except: # How can I log my exception here, complete with its traceback? import traceback traceback.format_exc() # this will print a complete trace to stout. A: maybe not as stylish, but easier: #!/bin/bash log="/var/log/yourlog" /path/to/your/script.py 2>&1 | (while read; do echo "$REPLY" >> $log; done) A: To key off of others that may be getting lost in here, the way that works best with capturing it in logs is to use the traceback.format_exc() call and then split this string for each line in order to capture in the generated log file: import logging import sys import traceback try: ... except Exception as ex: # could be done differently, just showing you can split it apart to capture everything individually ex_t = type(ex).__name__ err = str(ex) err_msg = f'[{ex_t}] - {err}' logging.error(err_msg) # go through the trackback lines and individually add those to the log as an error for l in traceback.format_exc().splitlines(): logging.error(l)
Log exception with traceback in Python
How can I log my Python exceptions? try: do_something() except: # How can I log my exception here, complete with its traceback?
[ "Use logging.exception from within the except: handler/block to log the current exception along with the trace information, prepended with a message.\nimport logging\nLOG_FILENAME = '/tmp/logging_example.out'\nlogging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)\n\nlogging.debug('This message should go to the log file')\n\ntry:\n run_my_stuff()\nexcept:\n logging.exception('Got exception on main handler')\n raise\n\nNow looking at the log file, /tmp/logging_example.out:\nDEBUG:root:This message should go to the log file\nERROR:root:Got exception on main handler\nTraceback (most recent call last):\n File \"/tmp/teste.py\", line 9, in <module>\n run_my_stuff()\nNameError: name 'run_my_stuff' is not defined\n\n", "Use exc_info options may be better, remains warning or error title: \ntry:\n # coode in here\nexcept Exception as e:\n logging.error(e, exc_info=True)\n\n", "My job recently tasked me with logging all the tracebacks/exceptions from our application. I tried numerous techniques that others had posted online such as the one above but settled on a different approach. Overriding traceback.print_exception. \nI have a write up at http://www.bbarrows.com/ That would be much easier to read but Ill paste it in here as well.\nWhen tasked with logging all the exceptions that our software might encounter in the wild I tried a number of different techniques to log our python exception tracebacks. At first I thought that the python system exception hook, sys.excepthook would be the perfect place to insert the logging code. I was trying something similar to:\nimport traceback\nimport StringIO\nimport logging\nimport os, sys\n\ndef my_excepthook(excType, excValue, traceback, logger=logger):\n logger.error(\"Logging an uncaught exception\",\n exc_info=(excType, excValue, traceback))\n\nsys.excepthook = my_excepthook \n\nThis worked for the main thread but I soon found that the my sys.excepthook would not exist across any new threads my process started. This is a huge issue because most everything happens in threads in this project.\nAfter googling and reading plenty of documentation the most helpful information I found was from the Python Issue tracker.\nThe first post on the thread shows a working example of the sys.excepthook NOT persisting across threads (as shown below). Apparently this is expected behavior.\nimport sys, threading\n\ndef log_exception(*args):\n print 'got exception %s' % (args,)\nsys.excepthook = log_exception\n\ndef foo():\n a = 1 / 0\n\nthreading.Thread(target=foo).start()\n\nThe messages on this Python Issue thread really result in 2 suggested hacks. Either subclass Thread and wrap the run method in our own try except block in order to catch and log exceptions or monkey patch threading.Thread.run to run in your own try except block and log the exceptions.\nThe first method of subclassing Thread seems to me to be less elegant in your code as you would have to import and use your custom Thread class EVERYWHERE you wanted to have a logging thread. This ended up being a hassle because I had to search our entire code base and replace all normal Threads with this custom Thread. However, it was clear as to what this Thread was doing and would be easier for someone to diagnose and debug if something went wrong with the custom logging code. 
A custome logging thread might look like this:\nclass TracebackLoggingThread(threading.Thread):\n def run(self):\n try:\n super(TracebackLoggingThread, self).run()\n except (KeyboardInterrupt, SystemExit):\n raise\n except Exception, e:\n logger = logging.getLogger('')\n logger.exception(\"Logging an uncaught exception\")\n\nThe second method of monkey patching threading.Thread.run is nice because I could just run it once right after __main__ and instrument my logging code in all exceptions. Monkey patching can be annoying to debug though as it changes the expected functionality of something. The suggested patch from the Python Issue tracker was:\ndef installThreadExcepthook():\n \"\"\"\n Workaround for sys.excepthook thread bug\n From\nhttp://spyced.blogspot.com/2007/06/workaround-for-sysexcepthook-bug.html\n\n(https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1230540&group_id=5470).\n Call once from __main__ before creating any threads.\n If using psyco, call psyco.cannotcompile(threading.Thread.run)\n since this replaces a new-style class method.\n \"\"\"\n init_old = threading.Thread.__init__\n def init(self, *args, **kwargs):\n init_old(self, *args, **kwargs)\n run_old = self.run\n def run_with_except_hook(*args, **kw):\n try:\n run_old(*args, **kw)\n except (KeyboardInterrupt, SystemExit):\n raise\n except:\n sys.excepthook(*sys.exc_info())\n self.run = run_with_except_hook\n threading.Thread.__init__ = init\n\nIt was not until I started testing my exception logging I realized that I was going about it all wrong.\nTo test I had placed a\nraise Exception(\"Test\")\n\nsomewhere in my code. However, wrapping a a method that called this method was a try except block that printed out the traceback and swallowed the exception. This was very frustrating because I saw the traceback bring printed to STDOUT but not being logged. It was I then decided that a much easier method of logging the tracebacks was just to monkey patch the method that all python code uses to print the tracebacks themselves, traceback.print_exception.\nI ended up with something similar to the following:\ndef add_custom_print_exception():\n old_print_exception = traceback.print_exception\n def custom_print_exception(etype, value, tb, limit=None, file=None):\n tb_output = StringIO.StringIO()\n traceback.print_tb(tb, limit, tb_output)\n logger = logging.getLogger('customLogger')\n logger.error(tb_output.getvalue())\n tb_output.close()\n old_print_exception(etype, value, tb, limit=None, file=None)\n traceback.print_exception = custom_print_exception\n\nThis code writes the traceback to a String Buffer and logs it to logging ERROR. I have a custom logging handler set up the 'customLogger' logger which takes the ERROR level logs and send them home for analysis.\n", "You can log all uncaught exceptions on the main thread by assigning a handler to sys.excepthook, perhaps using the exc_info parameter of Python's logging functions:\nimport sys\nimport logging\n\nlogging.basicConfig(filename='/tmp/foobar.log')\n\ndef exception_hook(exc_type, exc_value, exc_traceback):\n logging.error(\n \"Uncaught exception\",\n exc_info=(exc_type, exc_value, exc_traceback)\n )\n\nsys.excepthook = exception_hook\n\nraise Exception('Boom')\n\nIf your program uses threads, however, then note that threads created using threading.Thread will not trigger sys.excepthook when an uncaught exception occurs inside them, as noted in Issue 1230540 on Python's issue tracker. 
Some hacks have been suggested there to work around this limitation, like monkey-patching Thread.__init__ to overwrite self.run with an alternative run method that wraps the original in a try block and calls sys.excepthook from inside the except block. Alternatively, you could just manually wrap the entry point for each of your threads in try/except yourself.\n", "What I was looking for:\nimport sys\nimport traceback\n\nexc_type, exc_value, exc_traceback = sys.exc_info()\ntraceback_in_var = traceback.format_tb(exc_traceback)\n\nSee: \n\nhttps://docs.python.org/3/library/traceback.html\n\n", "You can get the traceback using a logger, at any level (DEBUG, INFO, ...). Note that using logging.exception, the level is ERROR.\n# test_app.py\nimport sys\nimport logging\n\nlogging.basicConfig(level=\"DEBUG\")\n\ndef do_something():\n raise ValueError(\":(\")\n\ntry:\n do_something()\nexcept Exception:\n logging.debug(\"Something went wrong\", exc_info=sys.exc_info())\n\nDEBUG:root:Something went wrong\nTraceback (most recent call last):\n File \"test_app.py\", line 10, in <module>\n do_something()\n File \"test_app.py\", line 7, in do_something\n raise ValueError(\":(\")\nValueError: :(\n\nEDIT:\nThis works too (using python 3.6)\nlogging.debug(\"Something went wrong\", exc_info=True)\n\n", "Uncaught exception messages go to STDERR, so instead of implementing your logging in Python itself you could send STDERR to a file using whatever shell you're using to run your Python script. In a Bash script, you can do this with output redirection, as described in the BASH guide.\nExamples\nAppend errors to file, other output to the terminal:\n./test.py 2>> mylog.log\n\nOverwrite file with interleaved STDOUT and STDERR output:\n./test.py &> mylog.log\n\n", "Here is a version that uses sys.excepthook\nimport traceback\nimport sys\n\nlogger = logging.getLogger()\n\ndef handle_excepthook(type, message, stack):\n logger.error(f'An unhandled exception occured: {message}. Traceback: {traceback.format_tb(stack)}')\n\nsys.excepthook = handle_excepthook\n\n", "This is how I do it.\ntry:\n do_something()\nexcept:\n # How can I log my exception here, complete with its traceback?\n import traceback\n traceback.format_exc() # this will print a complete trace to stout.\n\n", "maybe not as stylish, but easier:\n#!/bin/bash\nlog=\"/var/log/yourlog\"\n/path/to/your/script.py 2>&1 | (while read; do echo \"$REPLY\" >> $log; done)\n\n", "To key off of others that may be getting lost in here, the way that works best with capturing it in logs is to use the traceback.format_exc() call and then split this string for each line in order to capture in the generated log file:\nimport logging\nimport sys\nimport traceback\n\ntry:\n ...\nexcept Exception as ex:\n # could be done differently, just showing you can split it apart to capture everything individually\n ex_t = type(ex).__name__\n err = str(ex)\n err_msg = f'[{ex_t}] - {err}'\n logging.error(err_msg)\n\n # go through the trackback lines and individually add those to the log as an error\n for l in traceback.format_exc().splitlines():\n logging.error(l)\n\n" ]
[ 297, 218, 74, 15, 10, 10, 3, 3, 2, 0, 0 ]
[ "Heres a simple example taken from the python 2.6 documentation:\nimport logging\nLOG_FILENAME = '/tmp/logging_example.out'\nlogging.basicConfig(filename=LOG_FILENAME,level=logging.DEBUG,)\n\nlogging.debug('This message should go to the log file')\n\n" ]
[ -3 ]
[ "error_handling", "exception", "logging", "python" ]
stackoverflow_0001508467_error_handling_exception_logging_python.txt
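Since Python 3.8 there is a supported alternative to the thread-patching hacks discussed in the thread above: threading.excepthook, which is called for uncaught exceptions in threads started via threading.Thread. The snippet below is only a minimal sketch of that idea; the log file path, logger configuration, and the worker function and thread names are placeholders rather than anything taken from the answers.

import logging
import threading

logging.basicConfig(filename="/tmp/app.log", level=logging.DEBUG)

def log_thread_exception(args):
    # Python 3.8+: args carries exc_type, exc_value, exc_traceback and the thread object
    logging.error(
        "Uncaught exception in thread %s",
        args.thread.name if args.thread else "<unknown>",
        exc_info=(args.exc_type, args.exc_value, args.exc_traceback),
    )

threading.excepthook = log_thread_exception  # covers the threads that sys.excepthook misses

def worker():
    raise ValueError("boom inside a thread")  # lands in the log file with a full traceback

t = threading.Thread(target=worker, name="worker-1")
t.start()
t.join()

For the main thread you would still assign sys.excepthook as shown in the answers above; the two hooks together cover both cases.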
Q: Get Kubernetes node status using Python Client API Looking for some advice around how to get the status of a node using the Kubernetes client API for Python. I have the following: print("| Node Status | Node Name |") ret = v1.list_pod_for_all_namespaces(watch=False) for a in ret.items: ret2 = v1.read_node_status(a.spec.node_name) rawData = (ret2.status.conditions) However, ret2.status.conditions returns a malformed list/json object so it's proving difficult to search inside .conditions and retrieve the status and condition type. Has anyone written anything to retrieve the node status? A: I have a solution to my own question! Funny how the solution always comes when you think you're out of options! nodeStatus = (node.status.conditions) for i in nodeStatus: status = i.status type = i.type A: Thank you; this gave me a hint today to write the following, as I was looking for a complete solution. from kubernetes import client, config config.load_kube_config() kube_client = client.CoreV1Api() node_list = kube_client.list_node(watch=False, pretty=True, limit=1000) if len(node_list.items) > 0: print("NODE\t\t\t\t\t\tSTATUS") for node in node_list.items: node_name = node.metadata.name node_status = "Not Ready" # Unknown, not ready, unhealthy, etc. node_scheduling = node.spec.unschedulable for condition in node.status.conditions: if condition.type == "Ready" and condition.status == "True": node_status = "Ready" break if node_scheduling is None or not node_scheduling: print(f"{node_name} {node_status}") else: print(f"{node_name} {node_status},SchedulingDisabled") else: print("No nodes available in the cluster") and the output is one line per node showing its name and status. References: K8S Node Conditions Python Kube Client Object
Get Kubernetes node status using Python Client API
Looking for some advice around how to get the status of a node using the Kubernetes client API for Python. I have the following: print("| Node Status | Node Name |") ret = v1.list_pod_for_all_namespaces(watch=False) for a in ret.items: ret2 = v1.read_node_status(a.spec.node_name) rawData = (ret2.status.conditions) However, ret2.status.conditions returns a malformed list/json object so it's proving difficult to search inside .conditions and retrieve the status and condition type. Has anyone written anything to retrieve the node status?
[ "I have a solution to my own question! Funny how the solution always comes when you think you're out of options!\nnodeStatus = (node.status.conditions)\n\n for i in nodeStatus:\n status = i.status\n type = i.type\n\n", "Thank you and this gave me a hint today to write the following as I was looking for a complete solution.\nconfig.load_kube_config()\nkube_client = client.CoreV1Api()\n\nnode_list = kube_client.list_node(watch=False, pretty=True, limit=1000)\nif len(node_list.items) > 0:\n print(\"NODE\\t\\t\\t\\t\\t\\tSTATUS\")\n for node in node_list.items:\n node_name = node.metadata.name\n node_status = \"Not Ready\" # Unknown, not ready, unhealthy, etc.\n node_scheduling = node.spec.unschedulable\n for condition in node.status.conditions:\n if condition.type == \"Ready\" and condition.status:\n node_status = \"Ready\"\n break\n if node_scheduling is None or not node_scheduling:\n print(f\"{node_name} {node_status}\")\n else:\n print(f\"{node_name} {node_status},SchedulingDisabled\")\nelse:\n print(\"No nodes available in the cluster\")\n\nand the output is,\n\nReferences:\nK8S Node Conditions\nPython Kube Client Object\n" ]
[ 1, 0 ]
[]
[]
[ "pytest", "python" ]
stackoverflow_0060186766_pytest_python.txt
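A minimal, self-contained sketch of the node-status listing discussed above, for illustration. It assumes a kubeconfig is available to config.load_kube_config(); the key detail is that condition.status in the Kubernetes API is the string "True", "False" or "Unknown", so it has to be compared against "True" rather than used as a boolean.

from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

for node in v1.list_node(watch=False).items:
    # each condition has .type ("Ready", "MemoryPressure", ...) and .status ("True"/"False"/"Unknown")
    ready = any(
        c.type == "Ready" and c.status == "True"
        for c in (node.status.conditions or [])
    )
    suffix = ",SchedulingDisabled" if node.spec.unschedulable else ""
    print(f"{node.metadata.name} {'Ready' if ready else 'NotReady'}{suffix}")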
Q: Python intercept stdout, listen to write on stream, capture stdout live I want to capture stdout as it comes, to react every time it is written to. I've not been able to find anything like "io stream on-write listener" etc. How can I redirect stdout live? at the moment I have import sys import time from io import IOBase, StringIO class Tee: def __init__(self, target: IOBase): self._stdout = sys.stdout self.target = target def __enter__(self): sys.stdout = self.target def __exit__(self, *args, **kwargs): sys.stdout = self._stdout copy_here.seek(0) for line in copy_here.readlines(): print(line, end='') copy_here.seek(0) if __name__ == '__main__': copy_here = StringIO() with Tee(copy_here): print('one') print('two') time.sleep(1) print('three') print(copy_here.getvalue()) But this causes all the print outputs to be buffered until the context is exited, finally they are printed. Rather I want the output to be printed to stdout as it comes, at the same time as being copied to the stream. A: Eventually I came up with the idea of making a wrapper stream around the actual target stream, that passes method calls on after intercepting them and printing them to stdout first. This seems to work. import sys import time from io import IOBase, StringIO from types import SimpleNamespace class Tee: def __init__(self, target: IOBase): self._stdout = sys.stdout self.target = target self.wrapped_target = SimpleNamespace() for method in filter(lambda x: not x.startswith('_'), dir(sys.stdout)): setattr(self.wrapped_target, method, self._wrapped_method(method)) def _wrapped_method(self, stdout_method): def wrapped_method(*args, **kwargs): getattr(self.target, stdout_method)(*args, **kwargs) return getattr(self._stdout, stdout_method)(*args, **kwargs) return wrapped_method def __enter__(self): sys.stdout = self.wrapped_target def __exit__(self, *args, **kwargs): sys.stdout = self._stdout if __name__ == '__main__': copy_here = StringIO() with Tee(copy_here): print('one') print('two') time.sleep(1) print('three') print(copy_here.getvalue()) But it seems such overkill for a pretty simple problem.
Python intercept stdout, listen to write on stream, capture stdout live
I want to capture stdout as it comes, to react every time it is written to. I've not been able to find anything like "io stream on-write listener" etc. How can I redirect stdout live? at the moment I have import sys import time from io import IOBase, StringIO class Tee: def __init__(self, target: IOBase): self._stdout = sys.stdout self.target = target def __enter__(self): sys.stdout = self.target def __exit__(self, *args, **kwargs): sys.stdout = self._stdout copy_here.seek(0) for line in copy_here.readlines(): print(line, end='') copy_here.seek(0) if __name__ == '__main__': copy_here = StringIO() with Tee(copy_here): print('one') print('two') time.sleep(1) print('three') print(copy_here.getvalue()) But this causes all the print outputs to be buffered until the context is exited, finally they are printed. Rather I want the output to be printed to stdout as it comes, at the same time as being copied to the stream.
[ "Eventually I came up with the idea of making a wrapper stream around the actual target stream, that passes method calls on after intercepting them and printing them to stdout first.\nThis seems to work.\nimport sys\nimport time\n\nfrom io import IOBase, StringIO\nfrom types import SimpleNamespace\n\n\nclass Tee:\n def __init__(self, target: IOBase):\n self._stdout = sys.stdout\n self.target = target\n self.wrapped_target = SimpleNamespace()\n for method in filter(lambda x: not x.startswith('_'), dir(sys.stdout)):\n setattr(self.wrapped_target, method, self._wrapped_method(method))\n\n def _wrapped_method(self, stdout_method):\n def wrapped_method(*args, **kwargs):\n getattr(self.target, stdout_method)(*args, **kwargs)\n return getattr(self._stdout, stdout_method)(*args, **kwargs)\n return wrapped_method\n\n\n def __enter__(self):\n sys.stdout = self.wrapped_target\n\n def __exit__(self, *args, **kwargs):\n sys.stdout = self._stdout\n\n\nif __name__ == '__main__':\n copy_here = StringIO()\n with Tee(copy_here):\n print('one')\n print('two')\n time.sleep(1)\n print('three')\n print(copy_here.getvalue())\n\nBut it seems such overkill for a pretty simple problem.\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "stream", "tee" ]
stackoverflow_0074481204_python_python_3.x_stream_tee.txt
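A lighter-weight variant of the same idea, for comparison: instead of mirroring every attribute of sys.stdout, it wraps only the methods print() actually uses (write and flush), so output reaches the terminal immediately while a copy accumulates in the buffer. Class and variable names here are illustrative, not taken from the original post.

import sys
from io import StringIO

class Tee:
    # forwards every write to the real stdout immediately and keeps a copy
    def __init__(self, copy):
        self._stdout = sys.stdout
        self._copy = copy

    def write(self, data):
        self._copy.write(data)
        return self._stdout.write(data)  # printed live, not buffered until the context exits

    def flush(self):
        self._copy.flush()
        self._stdout.flush()

    def __enter__(self):
        sys.stdout = self
        return self

    def __exit__(self, *exc):
        sys.stdout = self._stdout

if __name__ == "__main__":
    copy_here = StringIO()
    with Tee(copy_here):
        print("one")
        print("two")
    print("captured:", copy_here.getvalue(), end="")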
Q: How a for in loop in python ends when there is no update statement in it? For example: #1 val = 5 for i in range(val) : print(i) When the range is exhausted, i.e. the last value is reached, how does Python know the for-in loop ends? As in other languages: #2 for(i=0;i<=5;i++){ print(i) } As in this example, when i's value becomes larger than 5, the false condition leads to termination of the loop. I tried reading the Python docs and browsing Google but found no satisfying answer, so I am unable to get a picture of this. A: So this is actually a complicated question, but the very rough version of the answer is "the compiler/interpreter can do what it wants". It isn't actually running the human-readable text you write at all - instead it goes through a whole pipeline of transformations. At minimum, a lexer converts the text to a sequence of symbols, and then a parser turns that into a tree of language constructs; that may then be compiled into machine code or interpreted by a virtual machine. So, the python interpreter creates a structure that handles the underlying logic. Depending on the optimizations performed (those are really a black box, it's hard to say what they do), this may be producing structures logically equivalent to what a Java-like for loop would make, or it could actually create a data structure of numbers (that's what the range() function does on its own) and then iterate over them. Editing to give some more foundation for what this construct even means: Python iteration-style loops are different in how they're defined from C-style i++ sorts of loops. Python loops are intended to iterate on each element of a list or other sequence data structure - you can say, for instance, for name in listOfNames, and then use name in the following block. When you say for i in range(x), this is the pythonic way of doing something like the C-style loop. Think of it as the reverse of for(int i = 0; i < arr.length(); i++){ foo(arr[i]) } In that code block you're accessing each element of an indexable sequence arr by going through each valid index. You don't actually care about i - it's just a means to an end, a way to make sure you visit each element. Python assumes that's what you're trying to do: the python variant is for elem in arr: foo(elem) Which most people would agree is simpler, clearer and more elegant. However, there are times when you actually do want to explicitly go number by number. To do that with a python style, you create a list of all the numbers you'll want to visit - that's what the range function does. You'll mostly see it as part of a loop statement, but it can exist independently - you can say x = range(10), and x will hold a range object covering the numbers 0-9 inclusive. So, where before you were incrementing a number to visit each item of a list, now you're taking a list of numbers to get incrementing values. "How it does this" is still the explanation I gave above - the parser and interpreter know how to create the nitty-gritty logic that actually creates this sequence and steps through it, or possibly transform it into some logically equivalent steps.
How a for in loop in python ends when there is no update statement in it?
For example: #1 val = 5 for i in range(val) : print(i) When the range is exhausted i.e. last value reached how python knows for in loop ends . As in other languages #2 for(i=0;i<=5;i++){ print(i) } As in this exp. when i's values becomes larger than 5 false condition leads to termination of loop . I tried reading docs of python and browsed over google but no satisfying answer. So unable to get a picture of this .
[ "So this is actually a complicated question, but the very rough version of the answer is \"the compiler/interpreter can do what it wants\".\nIt isn't actually running the human-readable text you write at all - instead it goes through a whole pipeline of transformations. At minimum, a lexer converts the text to a sequence of symbols, and then a parser turns that into a tree of language constructs; that may then be compiled into machine code or interpreted by a virtual machine.\nSo, the python interpreter creates a structure that handles the underlying logic. Depending on the optimizations performed (those are really a black box, it's hard to say what they do), this may be producing structures logically equivalent to what a Java-like for loop would make, or it could actually create a data structure of numbers (that's what the range() function does on its own) and then iterate over them.\nEditing to give some more foundation for what this construct even means:\nPython iteration-style loops are different in how they're defined from C-style i++ sorts of loops. Python loops are intended to iterate on each element of a list or other sequence data structure - you can say, for instance, for name in listOfNames, and then use name in the following block.\nWhen you say for i in range(x), this is the pythonic way of doing something like the C-style loop. Think of it as the reverse of\nfor(int i = 0; i < arr.length(); i++){\n foo(arr[i[)\n}\n\nIn that code block you're accessing each element of an indexible sequence arr by going through each valid index. You don't actually care about i - it's just a means to an end, a way to make sure you visit each element.\nPython assumes that's what you're trying to do: the python variant is\nfor elem in arr:\n foo(elem)\n\nWhich most people would agree is simpler, clearer and more elegant.\nHowever, there are times when you actually do want to explicitly go number by number. To do that with a python style, you create a list of all the numbers you'll want to visit - that's what the range function does. You'll mostly see it as part of a loop statement, but it can exist independently - you can say x = range(10), and x will hold a list that consists of the numbers 0-9 inclusive.\nSo, where before you were incrementing a number to visit each item of a list, now you're taking a list of numbers to get incrementing values.\n\"How it does this\" is still explanation I gave above - the parser and interpreter know how to create the nitty-gritty logic that actually creates this sequence and step through it, or possibly transform it into some logically equivalent steps.\n" ]
[ 0 ]
[]
[]
[ "for_loop", "increment", "loops", "python", "variables" ]
stackoverflow_0074482367_for_loop_increment_loops_python_variables.txt
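A short, concrete illustration of the mechanism behind the answer above: the for statement calls iter() on the object being looped over and then calls next() repeatedly, and the loop ends when the iterator raises StopIteration. This snippet is illustrative only.

it = iter(range(3))   # what 'for i in range(3):' does behind the scenes
print(next(it))       # 0
print(next(it))       # 1
print(next(it))       # 2
try:
    next(it)
except StopIteration:
    print("iterator exhausted -> this is where the for loop stops")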
Q: What is the easiest way to use a "real web server" with flask on windows, to replace the default one? From the flask documentation: While lightweight and easy to use, Flask’s built-in server is not suitable for production as it doesn’t scale well. Some of the options available for properly running Flask in production are documented here. I am currently using a small web app I wrote, that I only use on localhost, for my own personal use. It doesn't allow queries outside localhost, so it is not destined to be used online. It also uses SQLite, which I think puts some limitations on threading when using Flask. I am having issues loading several video files at the same time (I wrote a gallery app) using this method: @app.route('/content/<path:path>') def send_http_file(path): return send_from_directory(os.path.dirname(path), os.path.basename(path), cache_timeout=0) Firefox uses HTTP 206 partial content (range requests) to "preload videos" (I think?), and most videos load fine, but at some point, the transfer of the video files seems to stall without any apparent reason. Opening the video link in another tab works perfectly fine, but not in the current tab. I was told the default Flask server is inadequate (I am also curious to understand why this is, maybe because it's the Python native http server?), which might result in some threading or deadlock issue. I am either looking for a solution to fix that inadequate server (seems unlikely, I guess?), or the easiest-to-install/lightweight server I can use and configure on Windows. I've looked at the documentation, but all of them seem to require a third-party server installation, which might not run well on Windows, or seems too complex to manage. Are there "real web servers" that come as a simple Python module? I tried using my scripts with WSL, but file access was a bit slow when using os.listdir() and other file operations. I am curious if the "inadequate default Flask web server" also has the same issues, or if it's an issue specific to Windows. I cannot use file:/// because of the same-origin policy. A: I'm not sure about easiest, but have you looked at the flask documentation here? https://flask.palletsprojects.com/en/2.0.x/deploying/ I haven't tried it myself, but waitress appears to be a name that comes up quite a bit as well. https://github.com/Pylons/waitress edit: Just tried it myself and it was SUPER simple. pip install waitress Your app should not be using the app.run() function. Just a function to return the app after config. waitress-serve --host 127.0.0.1 --call app:createApp
What is the easiest way to use a "real web server" with flask on windows, to replace the default one?
From the flask documentation: While lightweight and easy to use, Flask’s built-in server is not suitable for production as it doesn’t scale well. Some of the options available for properly running Flask in production are documented here. I currently am using a small web app I wrote, that I only use on localhost, from my own personal use. It doesn't allow queries outside localhost, so it is not destined to be used online. It also use sqlite which I think put some limitations on threading when using flask. I am having issues loading several video files at the same time (I wrote a gallery app) using this method: @app.route('/content/<path:path>') def send_http_file(path): return send_from_directory(os.path.dirname(path), os.path.basename(path), cache_timeout=0) Firefox sends http 206 to "preload videos" (I think?), and most videos load fine, but at some point, the transfer of the video files seems to stall without any apparent reason. Opening the video link in another tab works perfectly fine, but not in the current tab. I was told the default flask server is inadequate (I also am curious to understand why this is, maybe because it's the python native http server?), which might result in some threading or deadlock issue. I am either looking for a solution to fix that inadequate server (seems unlikely, I guess?), or the easiest-to-install/lightweight server I can use and configure on Windows. I've looked at the documentation, but all of them seems to require third-party server installation, which might not run well on windows, or seem to be too complex to manage. Are there "real web servers" that come as a simple python module? I tried using my scripts with WSL, but file access was a bit slow when using os.listdir() and other file things. I am curious if the "inadequate default flask web server" also has the same issues, or if it's a issue specific to windows. I cannot use file:/// because of same origin policy.
[ "I'm not sure about easiest, but have you looked at the flask documentation here?\nhttps://flask.palletsprojects.com/en/2.0.x/deploying/\nI havent tried it myself, but waitress appears to be a name that comes up quite a bit as well.\nhttps://github.com/Pylons/waitress\nedit: Just tried it myself and was SUPER simple.\npip install waitress\nYour app should not be using the app.run() function. Just a function to return the app after config.\nwaitress-serve --host 127.0.0.1 --call app:createApp\n" ]
[ 1 ]
[]
[]
[ "flask", "python", "webserver", "windows", "windows_subsystem_for_linux" ]
stackoverflow_0074482371_flask_python_webserver_windows_windows_subsystem_for_linux.txt
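A minimal sketch of the application-factory pattern the answer refers to, served with Waitress, a pure-Python WSGI server that installs with pip and runs on Windows. The module name app.py, the factory name create_app, and the port are placeholders chosen for illustration, not taken from the question.

# app.py
from flask import Flask

def create_app():
    app = Flask(__name__)

    @app.route("/")
    def index():
        return "served by waitress"

    return app

if __name__ == "__main__":
    # programmatic equivalent of: waitress-serve --host 127.0.0.1 --port 8080 --call app:create_app
    from waitress import serve
    serve(create_app(), host="127.0.0.1", port=8080, threads=8)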