Dataset columns:
  content             string (length 85 to 101k)
  title               string (length 0 to 150)
  question            string (length 15 to 48k)
  answers             list
  answers_scores      list
  non_answers         list
  non_answers_scores  list
  tags                list
  name                string (length 35 to 137)
Q: Convert list of elements into list of tuples to match the structure of another list of tuples Say that I have the following lists L = [("a0","a1"),("b0",),("b1","a1","b0"),("a0","a1"),("b0",)] M = ["u0", "u1", "u2", "u3", "u4", "u5", "u6", "u7" , "u8"] and I want to group the elements of M into a list of tuples N such that N has the same structure of L, i.e. N = [("u0", "u1"), ("u2",), ("u3", "u4", "u5"), ("u6", "u7") , ("u8",)] or, to be more precise, such that [len(L[ii]) == len(N[ii]) for ii, t in enumerate(L)] has all True elements and M == Q, where Q = [item for t in N for item in t] How to do that? A: it = iter(M) followed by res = [tuple(itertools.islice(it, len(t))) for t in L] should do the trick A: using for loop >>> L = [("a0","a1"),("b0",),("b1","a1","b0"),("a0","a1"),("b0",)] >>> M = ["u0", "u1", "u2", "u3", "u4", "u5", "u6", "u7" , "u8"] >>> R =[] >>> idx = 0 >>> for i in [len(j) for j in L]: ... R.append(tuple(M[idx:idx+i])) ... idx+=i ... >>> R [('u0', 'u1'), ('u2',), ('u3', 'u4', 'u5'), ('u6', 'u7'), ('u8',)] A: L = [("a0","a1"),("b0",),("b1","a1","b0"),("a0","a1"),("b0",)] M = ["u0", "u1", "u2", "u3", "u4", "u5", "u6", "u7" , "u8"] len_L_elements = [] for i in L: len_L_elements.append(len(i)) print(len_L_elements) res = [] c = 0 # It will handle element of M d = 0 # It will handle element of len_L_elemenst while c <= len(M)-1 and d <= len(len_L_elements)-1: temp_lis = [] # this will convert int tuple on time of append cnt = 0 # Initialize with 0 on one tuple creation while cnt <= len_L_elements[d]-1: temp_lis.append(M[c]) c+=1 cnt+=1 # Convert List into tuple temp_tuple = tuple(temp_lis) res.append(temp_tuple) d+=1 print(res)
Convert list of elements into list of tuples to match the structure of another list of tuples
Say that I have the following lists L = [("a0","a1"),("b0",),("b1","a1","b0"),("a0","a1"),("b0",)] M = ["u0", "u1", "u2", "u3", "u4", "u5", "u6", "u7" , "u8"] and I want to group the elements of M into a list of tuples N such that N has the same structure of L, i.e. N = [("u0", "u1"), ("u2",), ("u3", "u4", "u5"), ("u6", "u7") , ("u8",)] or, to be more precise, such that [len(L[ii]) == len(N[ii]) for ii, t in enumerate(L)] has all True elements and M == Q, where Q = [item for t in N for item in t] How to do that?
[ "it = iter(M)\n\nfollowed by\nres = [tuple(itertools.islice(it, len(t))) for t in L]\n\nshould do the trick\n", "using for loop\n>>> L = [(\"a0\",\"a1\"),(\"b0\",),(\"b1\",\"a1\",\"b0\"),(\"a0\",\"a1\"),(\"b0\",)]\n>>> M = [\"u0\", \"u1\", \"u2\", \"u3\", \"u4\", \"u5\", \"u6\", \"u7\" , \"u8\"]\n>>> R =[]\n>>> idx = 0\n>>> for i in [len(j) for j in L]:\n... R.append(tuple(M[idx:idx+i]))\n... idx+=i\n... \n>>> R\n[('u0', 'u1'), ('u2',), ('u3', 'u4', 'u5'), ('u6', 'u7'), ('u8',)]\n\n", "L = [(\"a0\",\"a1\"),(\"b0\",),(\"b1\",\"a1\",\"b0\"),(\"a0\",\"a1\"),(\"b0\",)]\nM = [\"u0\", \"u1\", \"u2\", \"u3\", \"u4\", \"u5\", \"u6\", \"u7\" , \"u8\"]\n\nlen_L_elements = []\n\nfor i in L:\n len_L_elements.append(len(i))\n \nprint(len_L_elements)\n\nres = []\nc = 0 # It will handle element of M\nd = 0 # It will handle element of len_L_elemenst\n\nwhile c <= len(M)-1 and d <= len(len_L_elements)-1:\n temp_lis = [] # this will convert int tuple on time of append\n cnt = 0 # Initialize with 0 on one tuple creation\n while cnt <= len_L_elements[d]-1:\n temp_lis.append(M[c])\n c+=1\n cnt+=1\n \n # Convert List into tuple\n temp_tuple = tuple(temp_lis)\n res.append(temp_tuple)\n d+=1\nprint(res)\n \n \n\n\n" ]
[ 8, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074487572_python.txt
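For completeness, a runnable version of the itertools.islice approach from the first answer (note that it needs an explicit import itertools, which the answer leaves implicit):

import itertools

L = [("a0", "a1"), ("b0",), ("b1", "a1", "b0"), ("a0", "a1"), ("b0",)]
M = ["u0", "u1", "u2", "u3", "u4", "u5", "u6", "u7", "u8"]

it = iter(M)  # one shared iterator, so each slice resumes where the previous one stopped
N = [tuple(itertools.islice(it, len(t))) for t in L]

print(N)  # [('u0', 'u1'), ('u2',), ('u3', 'u4', 'u5'), ('u6', 'u7'), ('u8',)]
assert all(len(a) == len(b) for a, b in zip(L, N))  # N has the same structure as L
assert M == [item for t in N for item in t]         # flattening N recovers M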
Q: Comparing keys: list of nested dictionaries I want to write a function that compares the keys of dict1 (the base dict) to the keys of dict2 (a list of one or more nested dictionaries), checking the mandatory key and then any optional keys that are present, and returns the differences as a list. dict1 = {"name": str, #mandatory "details" : { #optional "class" : str, #optional "subjects" : { #optional "english" : bool, #optional "maths" : bool #optional } }} dict2 = [{"name": "SK", "details" : { "class" : "A"} }, {"name": "SK", "details" : { "class" : "A", "subjects" :{ "english" : True, "science" : False } }}] After comparing dict2 with dict1, the expected output is: pass #no difference in keys in 1st dictionary ["science"] #the different key in second dictionary of dict2 A: Try out this recursive check function: def compare_dict_keys(d1, d2, diff: list): if isinstance(d2, dict): for key, expected_value in d2.items(): try: actual_value = d1[key] compare_dict_keys(actual_value, expected_value, diff) except KeyError: diff.append(key) else: pass dict1 vs dict2 difference = [] compare_dict_keys(dict1, dict2, difference) print(difference) # Output: ['science'] dict2 vs dict1 difference = [] compare_dict_keys(dict2, dict1, difference) print(difference) # Output: ['maths']
Comparing keys: list of nested dictionaries
I want to write a function that checks keys of dict1 (base dict) and compare it to keys of dict2 (list of nested dictionaries, can be one or multiple), such that it checks for the mandatory key and then optional keys(if and whatever are present) and returns the difference as a list. dict1 = {"name": str, #mandatory "details" : { #optional "class" : str, #optional "subjects" : { #optional "english" : bool, #optional "maths" : bool #optional } }} dict2 = [{"name": "SK", "details" : { "class" : "A"} }, {"name": "SK", "details" : { "class" : "A", "subjects" :{ "english" : True, "science" : False } }}] After comparing dict2 with dict1,The expected output is:- pass #no difference in keys in 1st dictionary ["science"] #the different key in second dictionary of dict2
[ "Try out this recursive check function:\ndef compare_dict_keys(d1, d2, diff: list):\n if isinstance(d2, dict):\n for key, expected_value in d2.items():\n try:\n actual_value = d1[key]\n compare_dict_keys(actual_value, expected_value, diff)\n except KeyError:\n diff.append(key)\n else:\n pass\n\ndict1 vs dict2\ndifference = []\ncompare_dict_keys(dict1, dict2, difference)\nprint(difference)\n\n# Output: ['science']\n\ndict2 vs dict1\ndifference = []\ncompare_dict_keys(dict2, dict1, difference)\nprint(difference)\n\n# Output: ['maths']\n\n" ]
[ 0 ]
[]
[]
[ "comparison", "dictionary", "list", "nested", "python" ]
stackoverflow_0074488014_comparison_dictionary_list_nested_python.txt
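A note on reproducing the expected output (pass for the first dictionary, ['science'] for the second): since the top level of dict2 is a list rather than a dict, the recursive checker has to be applied to each element of the list separately. A minimal runnable sketch:

def compare_dict_keys(d1, d2, diff):
    # collect keys present (recursively) in d2 but missing from d1
    if isinstance(d2, dict):
        for key, expected_value in d2.items():
            try:
                compare_dict_keys(d1[key], expected_value, diff)
            except KeyError:
                diff.append(key)

dict1 = {"name": str, "details": {"class": str, "subjects": {"english": bool, "maths": bool}}}
dict2 = [
    {"name": "SK", "details": {"class": "A"}},
    {"name": "SK", "details": {"class": "A", "subjects": {"english": True, "science": False}}},
]

for entry in dict2:
    diff = []
    compare_dict_keys(dict1, entry, diff)
    print(diff or "pass")  # prints: pass, then ['science']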
Q: Scikit-Learn Linear Regression using Datetime Values and forecasting Below is a sample of the dataset. row_id datetime energy 1 2008-03-01 00:00:00 1259.985563 2 2008-03-01 01:00:00 1095.541500 3 2008-03-01 02:00:00 1056.247500 4 2008-03-01 03:00:00 1034.742000 5 2008-03-01 04:00:00 1026.334500 The dataset has datetime values and energy consumption for that hour in object and float64 dtypes. I want to predict the energy using the datetime column as the single feature. I used the following code train['datetime'] = pd.to_datetime(train['datetime']) X = train.iloc[:,0] y = train.iloc[:,-1] I could not pass the single feature as Series to the fit object as I got the following error. ValueError: Expected 2D array, got 1D array instead: array=['2008-03-01T00:00:00.000000000' '2008-03-01T01:00:00.000000000' '2008-03-01T02:00:00.000000000' ... '2018-12-31T21:00:00.000000000' '2018-12-31T22:00:00.000000000' '2018-12-31T23:00:00.000000000']. Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample. So I converted their shapes as suggested. X = np.array(X).reshape(-1,1) y = np.array(y).reshape(-1,1) from sklearn.linear_model import LinearRegression model_1 = LinearRegression() model_1.fit(X,y) test = pd.to_datetime(test['datetime']) test = np.array(test).reshape(-1,1) predictions = model_1.predict(test) The LinearRegression object fitted the feature X and target y without raising any error. But when I passed the test data to the predict method, it threw the following error. TypeError: The DType <class 'numpy.dtype[datetime64]'> could not be promoted by <class 'numpy.dtype[float64]'>. This means that no common DType exists for the given inputs. For example they cannot be stored in a single array unless the dtype is `object`. The full list of DTypes is: (<class 'numpy.dtype[datetime64]'>, <class 'numpy.dtype[float64]'>) I can't wrap my head around this error. How can I use the datetime values as a single feature and apply simple linear regression to predict the target value and do TimeSeries forecasting? Where am I doing wrong? A: You can not train on a datetime format. If you want the model to learn datetime features then consider splitting it into day, month, weekday, weekofyear, hour etc to learn patterns with seasonality: from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error, r2_score df = pd.DataFrame(data=[["2008-03-01 00:00:00",1259.985563],["2008-03-01 01:00:00",1095.541500],["2008-03-01 02:00:00",1056.247500],["2008-03-01 03:00:00",1034.742000],["2008-03-01 04:00:00",1026.334500]], columns=["datetime","energy"]) df["datetime"] = pd.to_datetime(df["datetime"]) features = ["year", "month", "day", "hour", "weekday", "weekofyear", "quarter"] df[features] = df.apply(lambda row: pd.Series({"year":row.datetime.year, "month":row.datetime.month, "day":row.datetime.day, "hour":row.datetime.hour, "weekday":row.datetime.weekday(), "weekofyear":row.datetime.weekofyear, "quarter":row.datetime.quarter }), axis=1) X = df[features] y = df[["energy"]] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) model = LinearRegression() model.fit(X_train, y_train) y_pred = model.predict(X_test) print(mean_squared_error(y_test, y_pred))
Scikit-Learn Linear Regression using Datetime Values and forecasting
Below is a sample of the dataset. row_id datetime energy 1 2008-03-01 00:00:00 1259.985563 2 2008-03-01 01:00:00 1095.541500 3 2008-03-01 02:00:00 1056.247500 4 2008-03-01 03:00:00 1034.742000 5 2008-03-01 04:00:00 1026.334500 The dataset has datetime values and energy consumption for that hour in object and float64 dtypes. I want to predict the energy using the datetime column as the single feature. I used the following code train['datetime'] = pd.to_datetime(train['datetime']) X = train.iloc[:,0] y = train.iloc[:,-1] I could not pass the single feature as Series to the fit object as I got the following error. ValueError: Expected 2D array, got 1D array instead: array=['2008-03-01T00:00:00.000000000' '2008-03-01T01:00:00.000000000' '2008-03-01T02:00:00.000000000' ... '2018-12-31T21:00:00.000000000' '2018-12-31T22:00:00.000000000' '2018-12-31T23:00:00.000000000']. Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample. So I converted their shapes as suggested. X = np.array(X).reshape(-1,1) y = np.array(y).reshape(-1,1) from sklearn.linear_model import LinearRegression model_1 = LinearRegression() model_1.fit(X,y) test = pd.to_datetime(test['datetime']) test = np.array(test).reshape(-1,1) predictions = model_1.predict(test) The LinearRegression object fitted the feature X and target y without raising any error. But when I passed the test data to the predict method, it threw the following error. TypeError: The DType <class 'numpy.dtype[datetime64]'> could not be promoted by <class 'numpy.dtype[float64]'>. This means that no common DType exists for the given inputs. For example they cannot be stored in a single array unless the dtype is `object`. The full list of DTypes is: (<class 'numpy.dtype[datetime64]'>, <class 'numpy.dtype[float64]'>) I can't wrap my head around this error. How can I use the datetime values as a single feature and apply simple linear regression to predict the target value and do TimeSeries forecasting? Where am I doing wrong?
[ "You can not train on a datetime format. If you want the model to learn datetime features then consider splitting it into day, month, weekday, weekofyear, hour etc to learn patterns with seasonality:\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.metrics import mean_squared_error, r2_score\n\ndf = pd.DataFrame(data=[[\"2008-03-01 00:00:00\",1259.985563],[\"2008-03-01 01:00:00\",1095.541500],[\"2008-03-01 02:00:00\",1056.247500],[\"2008-03-01 03:00:00\",1034.742000],[\"2008-03-01 04:00:00\",1026.334500]], columns=[\"datetime\",\"energy\"])\ndf[\"datetime\"] = pd.to_datetime(df[\"datetime\"])\nfeatures = [\"year\", \"month\", \"day\", \"hour\", \"weekday\", \"weekofyear\", \"quarter\"]\ndf[features] = df.apply(lambda row: pd.Series({\"year\":row.datetime.year, \"month\":row.datetime.month, \"day\":row.datetime.day, \"hour\":row.datetime.hour, \"weekday\":row.datetime.weekday(), \"weekofyear\":row.datetime.weekofyear, \"quarter\":row.datetime.quarter }), axis=1)\n\nX = df[features]\ny = df[[\"energy\"]]\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)\n\nmodel = LinearRegression()\nmodel.fit(X_train, y_train)\n\ny_pred = model.predict(X_test)\n\nprint(mean_squared_error(y_test, y_pred))\n\n" ]
[ 1 ]
[]
[]
[ "datetime", "forecasting", "pandas", "python", "scikit_learn" ]
stackoverflow_0074485762_datetime_forecasting_pandas_python_scikit_learn.txt
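Two cautions on the answer's feature extraction, plus a sketch: Timestamp.weekofyear is deprecated in recent pandas (isocalendar().week is the replacement), and the row-wise apply can be avoided with the vectorized .dt accessor. A minimal equivalent using the sample rows from the question:

import pandas as pd

df = pd.DataFrame(
    data=[["2008-03-01 00:00:00", 1259.985563], ["2008-03-01 01:00:00", 1095.541500]],
    columns=["datetime", "energy"],
)
df["datetime"] = pd.to_datetime(df["datetime"])

dt = df["datetime"].dt  # vectorized datetime accessor, no per-row apply needed
df["year"], df["month"], df["day"] = dt.year, dt.month, dt.day
df["hour"], df["weekday"], df["quarter"] = dt.hour, dt.weekday, dt.quarter
df["weekofyear"] = dt.isocalendar().week.astype(int)  # replaces the deprecated weekofyear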
Q: Pandas .min() not getting lowest value per week I have a dateframe with, for every hour for each day, the amount of gas and electricity used: elec gas day_of_week DuringBusinessHours ts 2022-04-30 01:00:00+02:00 3.6000000834465027 0.0 5 False 2022-04-30 02:00:00+02:00 3.6000000834465027 0.0 5 False 2022-04-30 03:00:00+02:00 3.6000000834465027 0.0 5 False 2022-04-30 04:00:00+02:00 3.6000000834465027 0.0 5 False 2022-04-30 05:00:00+02:00 4.0000001192092896 0.0 5 False 2022-04-30 06:00:00+02:00 3.6000000834465027 0.0 5 False 2022-04-30 07:00:00+02:00 3.200000047683716 0.0 5 False 2022-04-30 08:00:00+02:00 3.200000047683716 0.0 5 False 2022-04-30 09:00:00+02:00 3.6000000834465027 0.0 5 False 2022-04-30 10:00:00+02:00 3.200000047683716 0.3000000119209289 5 False 2022-04-30 11:00:00+02:00 3.200000047683716 0.0 5 False 2022-04-30 12:00:00+02:00 3.6000000834465027 0.0 5 False 2022-04-30 13:00:00+02:00 3.200000047683716 0.0 5 False 2022-04-30 14:00:00+02:00 3.200000047683716 0.0 5 False 2022-04-30 15:00:00+02:00 3.200000047683716 0.0 5 False 2022-04-30 16:00:00+02:00 3.6000000834465027 0.0 5 False 2022-04-30 17:00:00+02:00 3.200000047683716 0.0 5 False 2022-04-30 18:00:00+02:00 3.6000000834465027 0.0 5 False 2022-04-30 19:00:00+02:00 3.6000000834465027 0.0 5 False 2022-04-30 20:00:00+02:00 3.200000077486038 0.0 5 False 2022-04-30 21:00:00+02:00 3.200000047683716 0.0 5 False 2022-04-30 22:00:00+02:00 3.200000047683716 0.0 5 False 2022-04-30 23:00:00+02:00 3.6000000834465027 0.0 5 False 2022-05-01 00:00:00+02:00 3.199999988079071 0.0 6 False 2022-05-01 01:00:00+02:00 3.200000047683716 0.0 6 False 2022-05-01 02:00:00+02:00 3.200000047683716 0.0 6 False 2022-05-01 03:00:00+02:00 3.6000000834465027 0.0 6 False 2022-05-01 04:00:00+02:00 3.6000000834465027 1.2000000476837158 6 False 2022-05-01 05:00:00+02:00 3.6000000834465027 0.4000000059604645 6 False 2022-05-01 06:00:00+02:00 3.6000000834465027 0.6000000238418579 6 False For each week, I would like to get the lowest electricity value and display it in a new dataframe with the corresponding hour and day. But so far for each method I have used, it does not come up with the right minimum value. For example: lowestUsage = BusinessUsageDf.groupby([pd.Grouper(level='ts', freq='W-SAT')])['elec'].min() lowestUsage.head(5) Gives: ts 2022-04-23 00:00:00+02:00 3.200000047683716 2022-04-30 00:00:00+02:00 10.00000023841858 2022-05-07 00:00:00+02:00 10.400000095367432 2022-05-14 00:00:00+02:00 10.00000011920929 2022-05-21 00:00:00+02:00 10.00000023841858 Freq: W-SAT, Name: elektra, dtype: object But the lowest value of the week between 04-30 and 05-07 is not 10.400.. because according to the data, this should be 3.100.. I also tried: lowestUsageWeekDf = BusinessUsageDf.resample("W").min() But this is neither giving the correct minimum value. What is going on in here? A: Try: lowestUsage = BusinessUsageDf.groupby(pd.Grouper(key='ts', freq='W-SAT'))['elec'].min() lowestUsage.head(5)
Pandas .min() not getting lowest value per week
I have a dateframe with, for every hour for each day, the amount of gas and electricity used: elec gas day_of_week DuringBusinessHours ts 2022-04-30 01:00:00+02:00 3.6000000834465027 0.0 5 False 2022-04-30 02:00:00+02:00 3.6000000834465027 0.0 5 False 2022-04-30 03:00:00+02:00 3.6000000834465027 0.0 5 False 2022-04-30 04:00:00+02:00 3.6000000834465027 0.0 5 False 2022-04-30 05:00:00+02:00 4.0000001192092896 0.0 5 False 2022-04-30 06:00:00+02:00 3.6000000834465027 0.0 5 False 2022-04-30 07:00:00+02:00 3.200000047683716 0.0 5 False 2022-04-30 08:00:00+02:00 3.200000047683716 0.0 5 False 2022-04-30 09:00:00+02:00 3.6000000834465027 0.0 5 False 2022-04-30 10:00:00+02:00 3.200000047683716 0.3000000119209289 5 False 2022-04-30 11:00:00+02:00 3.200000047683716 0.0 5 False 2022-04-30 12:00:00+02:00 3.6000000834465027 0.0 5 False 2022-04-30 13:00:00+02:00 3.200000047683716 0.0 5 False 2022-04-30 14:00:00+02:00 3.200000047683716 0.0 5 False 2022-04-30 15:00:00+02:00 3.200000047683716 0.0 5 False 2022-04-30 16:00:00+02:00 3.6000000834465027 0.0 5 False 2022-04-30 17:00:00+02:00 3.200000047683716 0.0 5 False 2022-04-30 18:00:00+02:00 3.6000000834465027 0.0 5 False 2022-04-30 19:00:00+02:00 3.6000000834465027 0.0 5 False 2022-04-30 20:00:00+02:00 3.200000077486038 0.0 5 False 2022-04-30 21:00:00+02:00 3.200000047683716 0.0 5 False 2022-04-30 22:00:00+02:00 3.200000047683716 0.0 5 False 2022-04-30 23:00:00+02:00 3.6000000834465027 0.0 5 False 2022-05-01 00:00:00+02:00 3.199999988079071 0.0 6 False 2022-05-01 01:00:00+02:00 3.200000047683716 0.0 6 False 2022-05-01 02:00:00+02:00 3.200000047683716 0.0 6 False 2022-05-01 03:00:00+02:00 3.6000000834465027 0.0 6 False 2022-05-01 04:00:00+02:00 3.6000000834465027 1.2000000476837158 6 False 2022-05-01 05:00:00+02:00 3.6000000834465027 0.4000000059604645 6 False 2022-05-01 06:00:00+02:00 3.6000000834465027 0.6000000238418579 6 False For each week, I would like to get the lowest electricity value and display it in a new dataframe with the corresponding hour and day. But so far for each method I have used, it does not come up with the right minimum value. For example: lowestUsage = BusinessUsageDf.groupby([pd.Grouper(level='ts', freq='W-SAT')])['elec'].min() lowestUsage.head(5) Gives: ts 2022-04-23 00:00:00+02:00 3.200000047683716 2022-04-30 00:00:00+02:00 10.00000023841858 2022-05-07 00:00:00+02:00 10.400000095367432 2022-05-14 00:00:00+02:00 10.00000011920929 2022-05-21 00:00:00+02:00 10.00000023841858 Freq: W-SAT, Name: elektra, dtype: object But the lowest value of the week between 04-30 and 05-07 is not 10.400.. because according to the data, this should be 3.100.. I also tried: lowestUsageWeekDf = BusinessUsageDf.resample("W").min() But this is neither giving the correct minimum value. What is going on in here?
[ "Try:\nlowestUsage = BusinessUsageDf.groupby(pd.Grouper(key='ts', freq='W-SAT'))['elec'].min()\nlowestUsage.head(5)\n\n" ]
[ 0 ]
[]
[]
[ "datetime", "min", "pandas", "python" ]
stackoverflow_0074487811_datetime_min_pandas_python.txt
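One detail worth checking before blaming the Grouper: the question's own output ends with dtype: object, which suggests the elec column holds strings, and .min() on strings is lexicographic ("10.0..." sorts before "3.2..."), which matches the wrong minima shown. A hedged sketch, keeping the question's level='ts' (appropriate since ts is the index) and reusing the question's BusinessUsageDf:

import pandas as pd

# cast the object column to numbers first, otherwise min() compares strings
BusinessUsageDf["elec"] = pd.to_numeric(BusinessUsageDf["elec"])

weekly = BusinessUsageDf.groupby(pd.Grouper(level="ts", freq="W-SAT"))["elec"]
lowestUsage = weekly.min()

# to also recover the hour/day of each weekly minimum, look up the index of the minimum
lowestRows = BusinessUsageDf.loc[weekly.idxmin()]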
Q: How to group in a for loop by a second condition Unfortunately I'm having trouble creating a correct display of the values. I have such a DF: Group Match Team A 1 A1 A 1 A2 A 2 A3 A 2 A4 I have this code: for group in set(world_cup['Group']): print('___Starting group {}:___'.format(group)) for home, away in combinations(world_cup.query('Group == "{}"'.format(group)).index, 2): #conditions.... I added a third loop so that instead of selecting a group it selects a game. However, I do not get the expected result, because it shows all the matches in one group first and then the same matches in other groups. I would like to get something like this: ___Starting group A:__ A1-A2 A3-A4 Now I get something like this: ___Starting group A:__ A1-A2 A1-A3 A1-A4 A2-A3 A2-A4 A3-A4 Thanks everyone for the help.
How to group in a for loop by a second condition
unfortunately I'm having trouble creating a correct display of mianowiecie values. I have such a DF: Group Match Team A 1 A1 A 1 A2 A 2 A3 A 2 A4 I have this code: for group in set(world_cup['Group']): print('___Starting group {}:___'.format(group)) for home, away in combinations(world_cup.query('Group == "{}"'.format(group)).index, 2): #conditions.... I added a third loop so that instead of selecting group it selects game. However, I do not get the expected result because it shows all the matches in one group first and then the same matches in other groups. I would like to get something like this: ___Starting group A:__ A1-A2 A3-A4 Now I get something like this ___Starting group A:__ A1-A2 A1-A3 A1-A4 A2-A3 A2-A4 A3-A4 Thank everyone for help.
[ "I hope I've understood you correctly. You can .groupby() and then .agg the values:\nout = df.groupby([\"Group\", \"Match\"]).agg(\"-\".join)\nprint(out)\n\nPrints:\n Team\nGroup Match \nA 1 A1-A2\n 2 A3-A4\n\n\nout = df.groupby([\"Group\", \"Match\"]).agg(\"-\".join)\n\ntmp = {}\nfor idx, row in out.iterrows():\n tmp.setdefault(idx[0], []).append(row[\"Team\"])\n\nfor k, v in tmp.items():\n print(\"___Starting group {}:___\".format(k))\n for vv in v:\n print(vv)\n\nPrints:\n___Starting group A:___\nA1-A2\nA3-A4\n\n" ]
[ 0 ]
[]
[]
[ "for_loop", "group", "python", "python_3.x" ]
stackoverflow_0074488178_for_loop_group_python_python_3.x.txt
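A compact runnable variant of the accepted approach, iterating the aggregated Series by its Group index level instead of building a temporary dict:

import pandas as pd

df = pd.DataFrame({"Group": ["A", "A", "A", "A"],
                   "Match": [1, 1, 2, 2],
                   "Team": ["A1", "A2", "A3", "A4"]})

pairs = df.groupby(["Group", "Match"])["Team"].agg("-".join)

for group, matches in pairs.groupby(level="Group"):
    print("___Starting group {}:___".format(group))
    for match in matches:
        print(match)
# ___Starting group A:___
# A1-A2
# A3-A4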
Q: Check if a number is prime using Python I want to create a procedure that shows whether a given number is prime. What I have tried so far: def premier(a): isPrimary=False for i in range(2,a//2): if(a%i==0): isPrimary=True break if(isPrimary==True): print(a,'est un nbre premier') else: print(a,'non premier') c = int(input("Donner un nbre")) premier(c) Test that failed: Donner un nbre8 8 est un nbre premier which is not prime A: I think you're talking about prime numbers, right? So, numbers only divisible by themselves and 1. In that case you could use: def is_prime(n): for i in range(2,n): if (n % i) == 0: return False return True
Check if a number is prime using Python
I want to create a procedure show if a given number prime what i have tried so far : def premier(a): isPrimary=False for i in range(2,a//2): if(a%i==0): isPrimary=True break if(isPrimary==True): print(a,'est un nbre premier') else: print(a,'non premier') c = int(input("Donner un nbre")) premier(c) test failed : Donner un nbre8 8 est un nbre premier which is not prime
[ "i think you're talking about prime numbers right ? - so numbers only divisable by themselfs and 1. In that case you could use:\ndef is_prime(n):\n for i in range(2,n):\n if (n % i) == 0:\n return False\n return True\n\n" ]
[ 0 ]
[ "Vous vous trompez d'état. Ce sera a%i != 0 s'il n'est pas égal à zéro alors premier si zéro alors pas premier. Ou vous pouvez définir isPrimary= False dans la condition que vous avez donnée. J'espère que cela fonctionnera.\nIn english:\nYou make a mistake in in condition. It will be a%i != 0 if it not equal zero then prime if zero then not prime. Or you can set isPrimary= False inside the condition you gave. I hope it will work.\n def premier(a):\n isPrimary=False\n for i in range(2,a//2):\n if(a%i!=0):\n isPrimary=True\n break\n if(isPrimary==True):\n print(a,'est un nbre premier')\n else:\n print(a,'non premier')\n c = int(input(\"Donner un nbre\"))\n premier(c)\n\nOr,\ndef premier(a):\n isPrimary=False\n for i in range(2,a//2):\n if(a%i==0):\n isPrimary=False\n break\n if(isPrimary==True):\n print(a,'est un nbre premier')\n else:\n print(a,'non premier')\n c = int(input(\"Donner un nbre\"))\n premier(c)\n\n" ]
[ -1 ]
[ "algorithm", "procedure", "python" ]
stackoverflow_0074487534_algorithm_procedure_python.txt
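The answer's loop is correct for n >= 2, but it wrongly reports 0 and 1 as prime and does roughly n trial divisions; a small sketch that fixes both:

def is_prime(n):
    if n < 2:            # 0, 1 and negatives are not prime
        return False
    i = 2
    while i * i <= n:    # trial-divide only up to sqrt(n)
        if n % i == 0:
            return False
        i += 1
    return True

print(is_prime(8))   # False
print(is_prime(13))  # True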
Q: Python coding error: name 'self' is not defined after I make a def I used self but it shows NameError: name 'self' is not defined #implementation class KMeans: def __init__(self, n_cluster=8, max_iter=300): self.n_cluster = n_cluster self.max_iter = max_iter # Randomly select centroid start points, uniformly distributed across the domain of the dataset min_, max_ = np.min(X_train, axis=0), np.max(X_train, axis=0) self.centroids = [uniform(min_, max_) for _ in range(self.n_clusters)] but it shows NameError Traceback (most recent call last) Input In [50], in <cell line: 9>() 7 # Randomly select centroid start points, uniformly distributed across the domain of the dataset 8 min_, max_ = np.min(X_train, axis=0), np.max(X_train, axis=0) ----> 9 self.centroids = [uniform(min_, max_) for _ in range(self.n_clusters)] NameError: name 'self' is not defined A: You should learn more about OOP in Python (here for example) self is a reference to the current instance of the class, so it can only be used inside an instance method. You are trying to use the reference to an object without the object itself. You should define your function as a method of your class and then initialize an instance. After that you will be able to access its methods. UPD: an example of such a method: from random import uniform import numpy as np class KMeans: def __init__(self, n_cluster=8, max_iter=300): self.n_cluster = n_cluster self.max_iter = max_iter def get_centroids(self, x_train): # Randomly select centroid start points, uniformly distributed across the domain of the dataset min_, max_ = np.min(x_train, axis=0), np.max(x_train, axis=0) self.centroids = [uniform(min_, max_) for _ in range(self.n_cluster)] return self.centroids some_object = KMeans() some_object.get_centroids([1, 2, 3]) print(some_object.centroids) Are you trying to do something like this?
Python coding error: name 'self' is not defined after I make a def
i did imported self but it show NameError: name 'self' is not defined #implementation class KMeans: def __init__(self, n_cluster=8, max_iter=300): self.n_cluster = n_cluster self.max_iter = max_iter # Randomly select centroid start points, uniformly distributed across the domain of the dataset min_, max_ = np.min(X_train, axis=0), np.max(X_train, axis=0) self.centroids = [uniform(min_, max_) for _ in range(self.n_clusters)] but show NameError Traceback (most recent call last) Input In [50], in <cell line: 9>() 7 # Randomly select centroid start points, uniformly distributed across the domain of the dataset 8 min_, max_ = np.min(X_train, axis=0), np.max(X_train, axis=0) ----> 9 self.centroids = [uniform(min_, max_) for _ in range(self.n_clusters)] NameError: name 'self' is not defined
[ "You should learn more about OOP in Python (here for example)\nself is a reference to the current instance of the class. So it can be used only inside of instance method.\nYou are trying to reach reference of an object without object itself.\nYou should define your function as a method of your class and then initialize some instance. After that you will be able to access its methods.\nUPD some example of method:\n\nfrom random import uniform\n\nimport numpy as np\n\n\nclass KMeans:\n def __init__(self, n_cluster=8, max_iter=300):\n self.n_cluster = n_cluster\n self.max_iter = max_iter\n\n def get_centroids(self, x_train):\n # Randomly select centroid start points, uniformly distributed across the domain of the dataset\n min_, max_ = np.min(x_train, axis=0), np.max(x_train, axis=0)\n self.centroids = [uniform(min_, max_) for _ in range(self.n_cluster)]\n return self.centroids\n\nsome_object = KMeans()\nsome_object.get_centroids([1, 2, 3])\nprint(some_object.centroids)\n\nAre you trying to do something like this?\n" ]
[ 2 ]
[]
[]
[ "nameerror", "python" ]
stackoverflow_0074488272_nameerror_python.txt
Q: SNS mocking with moto is not working correctly In my unit test: def test_my_function_that_publishes_to_sns(): conn = boto3.client("sns", region_name="us-east-1") mock_topic = conn.create_topic(Name="mock-topic") topic_arn = mock_topic.get("TopicArn") os.environ["SNS_TOPIC"] = topic_arn # call my_function my_module.my_method() The the function being tested # inside my_module, my_function... sns_client.publish( TopicArn=os.environ["SNS_TOPIC"], Message="my message", ) I get the error: botocore.errorfactory.NotFoundException: An error occurred (NotFound) when calling the Publish operation: Endpoint with arn arn:aws:sns:us-east-1:123456789012:mock-topic not found Doesn't make sense, that's the topic moto is suppose to have created and mocked. Why is it saying it doesn't exist? If I call conn.publish(TopicArn=topic_arn, Message="sdfsdsdf") inside of the unit test itself it seems to mock it, but it doesn't mock it for my_module.my_method() which the unit test executes. Maybe it's destroying the mocked topic too soon? EDIT I tried this every which way and I get the exact same error: # Using context manager def test_my_function_that_publishes_to_sns(): with mock_sns(): conn = boto3.client("sns", region_name="us-east-1") mock_topic = conn.create_topic(Name="mocktopic") topic_arn = mock_topic.get("TopicArn") os.environ["SNS_TOPIC"] = topic_arn # call my_function my_module.my_method() # Using decorator @mock_sns def test_my_function_that_publishes_to_sns(): conn = boto3.client("sns", region_name="us-east-1") mock_topic = conn.create_topic(Name="mocktopic") topic_arn = mock_topic.get("TopicArn") os.environ["SNS_TOPIC"] = topic_arn # call my_function my_module.my_method() # Using decorator and context manager @mock_sns def test_my_function_that_publishes_to_sns(): with mock_sns(): conn = boto3.client("sns", region_name="us-east-1") mock_topic = conn.create_topic(Name="mocktopic") topic_arn = mock_topic.get("TopicArn") os.environ["SNS_TOPIC"] = topic_arn # call my_function my_module.my_method() Opened GitHub issue as well: https://github.com/spulec/moto/issues/3027 A: issue was my_module.my_method() wasn't setting a region just doing client = boto3.client("sns") It could not find it because it was defaulting to a diff region than us-east-1 which was hard coded into the unit test A: maybe it will help you keep all modules in a single class and put a decorator @mock_sns on the class too for mocking the sns, also put decorator @mock_sns on the function where you are initializing you connection to sns. Example: @mock_sns class TestSnsMock(unittest.TestCase): @classmethod @mock_sns def setUpClass(cls): cls.conn = boto3.client("sns", region_name="us-east-1") cls.conn.create_topic(Name="some-topic") cls.response = cls.conn.list_topics() cls.topic_arn = cls.response["Topics"][0]["TopicArn"] def test_publish_sns(self): message = "here is same message" self.sns_client.publish(TopicArn=self.topic_arn, Message=message) if __name__ == "__main__": unittest.main() A: Sample code below. I hope it helps somebody. The suggested fix about setting the Region was not my issue. If you are still stuck, this video is great. Approach: Create a mocked Boto3 Resource ( not a Boto3 Client ). Set mock SNS Topic ARN in this new resource. Overwrite the SNS Topic ARN environment var for the test. Get a Boto3 Client that calls Publish to the mocked SNS Topic ARN. 
I hit the below error because I set the Topic ARN to mock_topic and not arn:aws:sns:eu-west-1:123456789012:mock_topic: botocore.errorfactory.NotFoundException: An error occurred (NotFound) when calling the Publish operation: Endpoint does not exist """ import main import boto3 import pytest import botocore from moto import mock_sns # http://docs.getmoto.org/en/latest/docs/getting_started.html ##################################################################### # test_main.py ##################################################################### @pytest.fixture() def mock_message(): return { "foo": "1st wonderful message.", "bar": "2nd wonderful message.", "baz": "3rd wonderful message.", } @pytest.fixture() def mock_sns_client(): return sns_publish.get_sns_client() def test_get_mocked_sns_client(mock_sns_client): assert isinstance(mock_sns_client, botocore.client.BaseClient) mock_topic_name = "mock_topic" @mock_sns def test_mock_send_sns(mock_message, monkeypatch, mock_sns_client): """ 1. Create a mocked Boto3 Resource ( not a Boto3 Client ). 2. Set mock SNS Topic ARN in this new resource. 3. Overwrite the SNS Topic ARN environment var for the test. """ sns_resource = boto3.resource( "sns", region_name=os.environ.get("AWS_REGION") ) topic = sns_resource.create_topic( Name=mock_topic_name ) assert mock_topic_name in topic.arn monkeypatch.setenv('SNS_TOPIC_ARN', topic.arn) assert os.environ.get("SNS_TOPIC_ARN") == topic.arn response = sns_publish.send_sns(mock_sns_client, mock_message) assert isinstance(response, dict) message_id = response.get("MessageId", None) assert isinstance(message_id, str) ##################################################################### # main.py # split the get Client and Publish for simpler testing ##################################################################### import boto3 import json import botocore import os from conf.base_logger import logger # split the get Client and Publish for simpler testing def get_sns_client(): return boto3.client("sns", region_name=os.environ.get("AWS_REGION")) def send_sns(sns_client, message: dict) -> dict: if not isinstance(message, dict): logger.info("message to send Slack is not in expected format") return None if not isinstance(sns_client, botocore.client.BaseClient): logger.info("something wrong with the SNS client") return None return sns_client.publish( TargetArn=os.environ.get("SNS_TOPIC_ARN"), Message=json.dumps({'default': json.dumps(message, indent=4, sort_keys=True)}), Subject='Foo\'s stats', MessageStructure='json' )
SNS mocking with moto is not working correctly
In my unit test: def test_my_function_that_publishes_to_sns(): conn = boto3.client("sns", region_name="us-east-1") mock_topic = conn.create_topic(Name="mock-topic") topic_arn = mock_topic.get("TopicArn") os.environ["SNS_TOPIC"] = topic_arn # call my_function my_module.my_method() The the function being tested # inside my_module, my_function... sns_client.publish( TopicArn=os.environ["SNS_TOPIC"], Message="my message", ) I get the error: botocore.errorfactory.NotFoundException: An error occurred (NotFound) when calling the Publish operation: Endpoint with arn arn:aws:sns:us-east-1:123456789012:mock-topic not found Doesn't make sense, that's the topic moto is suppose to have created and mocked. Why is it saying it doesn't exist? If I call conn.publish(TopicArn=topic_arn, Message="sdfsdsdf") inside of the unit test itself it seems to mock it, but it doesn't mock it for my_module.my_method() which the unit test executes. Maybe it's destroying the mocked topic too soon? EDIT I tried this every which way and I get the exact same error: # Using context manager def test_my_function_that_publishes_to_sns(): with mock_sns(): conn = boto3.client("sns", region_name="us-east-1") mock_topic = conn.create_topic(Name="mocktopic") topic_arn = mock_topic.get("TopicArn") os.environ["SNS_TOPIC"] = topic_arn # call my_function my_module.my_method() # Using decorator @mock_sns def test_my_function_that_publishes_to_sns(): conn = boto3.client("sns", region_name="us-east-1") mock_topic = conn.create_topic(Name="mocktopic") topic_arn = mock_topic.get("TopicArn") os.environ["SNS_TOPIC"] = topic_arn # call my_function my_module.my_method() # Using decorator and context manager @mock_sns def test_my_function_that_publishes_to_sns(): with mock_sns(): conn = boto3.client("sns", region_name="us-east-1") mock_topic = conn.create_topic(Name="mocktopic") topic_arn = mock_topic.get("TopicArn") os.environ["SNS_TOPIC"] = topic_arn # call my_function my_module.my_method() Opened GitHub issue as well: https://github.com/spulec/moto/issues/3027
[ "issue was my_module.my_method() wasn't setting a region just doing client = boto3.client(\"sns\")\nIt could not find it because it was defaulting to a diff region than us-east-1 which was hard coded into the unit test\n", "maybe it will help you \nkeep all modules in a single class and put a decorator @mock_sns on the class too for mocking the sns, also put decorator @mock_sns on the function where you are initializing you connection to sns.\nExample:\n@mock_sns\nclass TestSnsMock(unittest.TestCase):\n\n @classmethod\n @mock_sns\n def setUpClass(cls):\n cls.conn = boto3.client(\"sns\", region_name=\"us-east-1\")\n cls.conn.create_topic(Name=\"some-topic\")\n cls.response = cls.conn.list_topics()\n cls.topic_arn = cls.response[\"Topics\"][0][\"TopicArn\"]\n\n def test_publish_sns(self):\n message = \"here is same message\"\n self.sns_client.publish(TopicArn=self.topic_arn, Message=message)\n\n\nif __name__ == \"__main__\":\n unittest.main()\n\n", "Sample code below. I hope it helps somebody. The suggested fix about setting the Region was not my issue. If you are still stuck, this video is great.\nApproach:\n\nCreate a mocked Boto3 Resource ( not a Boto3 Client ).\nSet mock SNS Topic ARN in this new resource.\nOverwrite the SNS Topic ARN environment var for the test.\nGet a Boto3 Client that calls Publish to the mocked SNS Topic ARN.\n\nI hit the below error because I set the Topic ARN to mock_topic and not arn:aws:sns:eu-west-1:123456789012:mock_topic:\n\nbotocore.errorfactory.NotFoundException: An error occurred (NotFound) when calling the Publish operation: Endpoint does not exist\n\"\"\"\n\nimport main\nimport boto3\nimport pytest\nimport botocore\nfrom moto import mock_sns\n\n# http://docs.getmoto.org/en/latest/docs/getting_started.html\n#####################################################################\n# test_main.py\n#####################################################################\n\n@pytest.fixture()\ndef mock_message():\n return {\n \"foo\": \"1st wonderful message.\",\n \"bar\": \"2nd wonderful message.\",\n \"baz\": \"3rd wonderful message.\",\n }\n\n\n@pytest.fixture()\ndef mock_sns_client():\n return sns_publish.get_sns_client()\n\n\ndef test_get_mocked_sns_client(mock_sns_client):\n assert isinstance(mock_sns_client, botocore.client.BaseClient)\n\n\nmock_topic_name = \"mock_topic\"\n\n\n@mock_sns\ndef test_mock_send_sns(mock_message, monkeypatch, mock_sns_client):\n \"\"\"\n 1. Create a mocked Boto3 Resource ( not a Boto3 Client ).\n 2. Set mock SNS Topic ARN in this new resource.\n 3. 
Overwrite the SNS Topic ARN environment var for the test.\n \"\"\"\n sns_resource = boto3.resource(\n \"sns\",\n region_name=os.environ.get(\"AWS_REGION\")\n )\n\n topic = sns_resource.create_topic(\n Name=mock_topic_name\n )\n assert mock_topic_name in topic.arn\n monkeypatch.setenv('SNS_TOPIC_ARN', topic.arn)\n assert os.environ.get(\"SNS_TOPIC_ARN\") == topic.arn\n\n response = sns_publish.send_sns(mock_sns_client, mock_message)\n\n assert isinstance(response, dict)\n message_id = response.get(\"MessageId\", None)\n assert isinstance(message_id, str)\n\n\n#####################################################################\n# main.py\n# split the get Client and Publish for simpler testing\n#####################################################################\n\nimport boto3\nimport json\nimport botocore\nimport os\nfrom conf.base_logger import logger\n\n\n# split the get Client and Publish for simpler testing\ndef get_sns_client():\n return boto3.client(\"sns\", region_name=os.environ.get(\"AWS_REGION\"))\n\n\ndef send_sns(sns_client, message: dict) -> dict:\n if not isinstance(message, dict):\n logger.info(\"message to send Slack is not in expected format\")\n return None\n if not isinstance(sns_client, botocore.client.BaseClient):\n logger.info(\"something wrong with the SNS client\")\n return None\n return sns_client.publish(\n TargetArn=os.environ.get(\"SNS_TOPIC_ARN\"),\n Message=json.dumps({'default': json.dumps(message, indent=4, sort_keys=True)}),\n Subject='Foo\\'s stats',\n MessageStructure='json'\n )\n\n" ]
[ 3, 3, 0 ]
[]
[]
[ "amazon_web_services", "boto", "moto", "python", "unit_testing" ]
stackoverflow_0062015260_amazon_web_services_boto_moto_python_unit_testing.txt
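Since the self-answer traces the failure to a region mismatch between the test and my_module, one way to rule that class of bug out is to have both sides read the region from the same environment variable (the variable name here is illustrative):

import os
import boto3

# inside my_module: pin the region explicitly instead of relying on the default chain,
# so the code under test and the moto-backed test always agree
sns_client = boto3.client("sns", region_name=os.environ.get("AWS_REGION", "us-east-1"))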
Q: Sort label of a pie chart in a specific way I want to make a pie chart plot to display a survey result. I'll try to keep it very simple. On a question of my survey, there were several possible string responses, like this example: Question: "do you practice magic" Possible responses: "yes", "not sure", "not intentionally", "no" Then I make a pie chart with the proportion of responses of each type. But on the pie chart, the responses are sorted in a way that I don't want: What I want, in clockwise direction, from 0° (0° is at the 'right' of the circle, if I'm right): 'no', 'not sure', 'not intentionally', 'yes'. For me, it requires a specific sorting of the index that I can't figure out, because it's not alphabetical, not ascending, nothing, just "my" way. For information, the dataframe: q1 : do you pratice magic ? no 6 not intentionally 2 not sure 1 yes 3 dtype: int64 And if I list the index: In : df2.index Out: Index(['no', 'not intentionally', 'not sure', 'yes'], dtype='object', name='q1 : do you pratice magic ?') And here is my code for the plot: fig = plt.figure(figsize=(8, 6),dpi=60) fig.suptitle(('Some title'),fontsize=20, fontweight='bold', **csfont) ############################# ax = fig.add_subplot() colors = iter(cm.rainbow(np.linspace(0, 4))) label_font=18 startang=0 ax.pie(df2, labels=df2.index, autopct='%1.0f%%', wedgeprops=dict(width=0.4, edgecolor="w",linewidth= 1, linestyle= '-', antialiased= True), colors=colors, startangle=startang, textprops={'fontsize': label_font, **csfont}, pctdistance=0.8, labeldistance=1.15, ) ax.axis('equal') ################## Any idea? A: Oh dear, I just found it! Just create a list with your specific order: reorderindex = ['no', 'not sure', 'not intentionally', 'yes'] # the old one to compare: ['no', 'not intentionally', 'not sure', 'yes'] and reindex with it: df2 = df2.reindex(reorderindex) Yolo!
Sort label of a pie chart in a specific way
I want to make a pie chart plot to display a survey result. I'll try to keep it very simple. On a question of my survey, there were several response type string, like this example : Question : "do you practice magic" Possibles responses : "yes", "not sure", "not intentionnally", "no" Then I make a pie chart with the proportion of response in each type. But on the pie chart, the response are sorted in a way that I don't want to : What I want, in clockwise direction, from 0Β° (0Β° degree is at the 'right' of the circle if I'm right) : 'no', 'not sure', 'not intentionally', 'yes'. For me, it require a specific sorting of index that I can't figure, because it's not alphabetical, not ascending, nothing, just "my" way. For information the dataframe : q1 : do you pratice magic ? no 6 not intentionally 2 not sure 1 yes 3 dtype: int64 And if I list the index : In : df2.index Out: Index(['no', 'not intentionally', 'not sure', 'yes'], dtype='object', name='q1 : do you pratice magic ?') And here is my code for the plot : fig = plt.figure(figsize=(8, 6),dpi=60) fig.suptitle(('Some title'),fontsize=20, fontweight='bold', **csfont) ############################# ax = fig.add_subplot() colors = iter(cm.rainbow(np.linspace(0, 4))) label_font=18 startang=0 ax.pie(df2, labels=df2.index, autopct='%1.0f%%', wedgeprops=dict(width=0.4, edgecolor="w",linewidth= 1, linestyle= '-', antialiased= True), colors=colors, startangle=startang, textprops={'fontsize': label_font, **csfont}, pctdistance=0.8, labeldistance=1.15, ) ax.axis('equal') ################## Any idea ?
[ "Oh dear, I just find it !\njust create a list of your specific order :\nreorderindex = ['no', 'not sure', 'not intentionally', 'yes'.]\n# the old one tocompare : ['no', 'not intentionally', 'not sure', 'yes']\n\nand reindex with it :\ndf2 = df2.reindex(reorderindex)\n\nYolo !\n" ]
[ 0 ]
[]
[]
[ "dataframe", "indexing", "pie_chart", "python", "survey" ]
stackoverflow_0074487974_dataframe_indexing_pie_chart_python_survey.txt
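A self-contained check of the self-answer, using the counts from the question (the pie wedges are drawn in the Series index order, starting at startangle):

import pandas as pd

df2 = pd.Series([6, 2, 1, 3],
                index=["no", "not intentionally", "not sure", "yes"],
                name="q1 : do you pratice magic ?")

reorderindex = ["no", "not sure", "not intentionally", "yes"]
df2 = df2.reindex(reorderindex)  # wedges now follow this order from startangle

For a strictly clockwise layout, ax.pie also accepts counterclock=False, since matplotlib draws wedges counterclockwise by default.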
Q: How can the code being allowed to continue from one question to another question even though the input answer had wrong for 3 attempts? import time import random #declare variables and constant guessingelement = ["Hydrogen", "Magnesium", "Cobalt", "Mercury", "Aluminium", "Uranium", "Antimony"] nicephrases = ["Nice job", "Marvellous", "Wonderful", "Bingo", "Dynamite"] guess = "" guess_count = 0 guess_limit = 3 out_of_guesses = False guess_no = 0 score = 0 #set the maximum number of questions for looping and random pick an element from the list before deleting it for i in range(7): randomelement = random.choice(guessingelement) guessingelement.remove(randomelement) time.sleep(2) #tips of the element if randomelement == "Hydrogen" and not out_of_guesses: print("Tip 1: It is the most flammable of all the known substances.") print("Tip 2: It reacts with oxides and chlorides of many metals, like copper, lead, mercury, to produce free metals.") print("Tip 3: It reacts with oxygen to form water.") #test the number of tries so that it doesn't exceed 3 times if answer is wrong while guess != randomelement and not(out_of_guesses): if guess_count < guess_limit: guess = input("Enter guess: ") guess_count += 1 else: out_of_guesses = True #add score, praise when answer is correct and encourage when answer is wrong for 3 times if out_of_guesses: print("Out of Guesses, NICE EFFORT!") else: print(random.choice(nicephrases), ", YOU GET IT!") score = score + 1 #tips of the element if randomelement == "Magnesium" and not out_of_guesses: print("Tip 1: It has the atomic number of 12.") print("Tip 2: It's oxide can be extracted into free metal through electrolysis.") print("Tip 3: It is a type of metal.") Same as first questions' code.. and so on.... In the progress of changing:` #tips of the element if randomelement == "Hydrogen": print("Tip 1: It is the most flammable of all the known substances.") print("Tip 2: It reacts with oxides and chlorides of many metals, like copper, lead, mercury, to produce free metals.") print("Tip 3: It reacts with oxygen to form water.") #test the number of tries so that it doesn't exceed 3 times if answer is wrong while guess != randomelement: if guess_count < guess_limit: guess = input("Enter guess: ") guess_count += 1 else: print(random.choice(wronganswers)) #add score, praise when answer is correct and encourage when answer is wrong for 3 times else: print(random.choice(nicephrases), ", YOU GET IT!") score = score + 1 However, after 3 times of attempts, it keeps printing the elements from the wronganswers list non-stop, and can't proceed to the next question. The output I expected is that it will show an element from the list when the input answer is wrong and proceed to the next question. A: I hope this helps. I wrote this code for myself in a new way. I have used recursion to keep the guess happening and simple used a while loop that will break when max attempts go beyond 3. import random elements = ["hydrogen", "magnesium", "cobalt", "mercury", "aluminium", "uranium", "antimony"] nice_phrases = ["Nice job", "Marvellous", "Wonderful", "Bingo", "Dynamite"] # I went ahead and created a dictionary of lists for storing the Hints hints = { 'hydrogen': [ "Tip 1: It is the most flammable of all the known substances.", "Tip 2: It reacts with oxides and chlorides of many metals, " "like copper, lead, mercury, to produce free metals.", "Tip 3: It reacts with oxygen to form water." 
], 'magnesium': [ "Tip 1: It has the atomic number of 12.", "Tip 2: Its oxide can be extracted into free metal through electrolysis.", "Tip 3: It is a type of metal." ], } score = 0 def guess_again(): global score random_element = random.choice(elements) max_attempts = 3 # this will remove the element from occurring again for x in elements: if x == random_element: elements.remove(x) while max_attempts > 0: user_guess = input("Take a Guess").lower() if user_guess == random_element: print(f"{random.choice(nice_phrases)}, you got it!") score += 1 # If the answer is right calling the function again will continue the game guess_again() else: max_attempts -= 1 print("That was a wrong guess. Here is a Hint") if random_element in hints: print(hints[random_element]) else: print("Sorry no hints available at the moment") if max_attempts == 0: print("Sorry you're out of guesses") print(f"{random_element} was the element") guess_again() If the answer is right it will select the next element and the game continues until all the elements in the list are finished. I have coded it to give 3 attempts for each element. If you want the maximum 3 attempts for the entire duration of the game just declare max_attempts outside the function and give it global scope like done for score
How can the code be allowed to continue from one question to the next even though the input answer was wrong for 3 attempts?
import time import random #declare variables and constant guessingelement = ["Hydrogen", "Magnesium", "Cobalt", "Mercury", "Aluminium", "Uranium", "Antimony"] nicephrases = ["Nice job", "Marvellous", "Wonderful", "Bingo", "Dynamite"] guess = "" guess_count = 0 guess_limit = 3 out_of_guesses = False guess_no = 0 score = 0 #set the maximum number of questions for looping and random pick an element from the list before deleting it for i in range(7): randomelement = random.choice(guessingelement) guessingelement.remove(randomelement) time.sleep(2) #tips of the element if randomelement == "Hydrogen" and not out_of_guesses: print("Tip 1: It is the most flammable of all the known substances.") print("Tip 2: It reacts with oxides and chlorides of many metals, like copper, lead, mercury, to produce free metals.") print("Tip 3: It reacts with oxygen to form water.") #test the number of tries so that it doesn't exceed 3 times if answer is wrong while guess != randomelement and not(out_of_guesses): if guess_count < guess_limit: guess = input("Enter guess: ") guess_count += 1 else: out_of_guesses = True #add score, praise when answer is correct and encourage when answer is wrong for 3 times if out_of_guesses: print("Out of Guesses, NICE EFFORT!") else: print(random.choice(nicephrases), ", YOU GET IT!") score = score + 1 #tips of the element if randomelement == "Magnesium" and not out_of_guesses: print("Tip 1: It has the atomic number of 12.") print("Tip 2: It's oxide can be extracted into free metal through electrolysis.") print("Tip 3: It is a type of metal.") Same as first questions' code.. and so on.... In the progress of changing:` #tips of the element if randomelement == "Hydrogen": print("Tip 1: It is the most flammable of all the known substances.") print("Tip 2: It reacts with oxides and chlorides of many metals, like copper, lead, mercury, to produce free metals.") print("Tip 3: It reacts with oxygen to form water.") #test the number of tries so that it doesn't exceed 3 times if answer is wrong while guess != randomelement: if guess_count < guess_limit: guess = input("Enter guess: ") guess_count += 1 else: print(random.choice(wronganswers)) #add score, praise when answer is correct and encourage when answer is wrong for 3 times else: print(random.choice(nicephrases), ", YOU GET IT!") score = score + 1 However, after 3 times of attempts, it keeps printing the elements from the wronganswers list non-stop, and can't proceed to the next question. The output I expected is that it will show an element from the list when the input answer is wrong and proceed to the next question.
[ "I hope this helps. I wrote this code for myself in a new way. I have used recursion to keep the guess happening and simple used a while loop that will break when max attempts go beyond 3.\nimport random\n\nelements = [\"hydrogen\", \"magnesium\", \"cobalt\", \"mercury\", \"aluminium\", \"uranium\", \"antimony\"]\nnice_phrases = [\"Nice job\", \"Marvellous\", \"Wonderful\", \"Bingo\", \"Dynamite\"]\n\n# I went ahead and created a dictionary of lists for storing the Hints\nhints = {\n 'hydrogen':\n [\n \"Tip 1: It is the most flammable of all the known substances.\",\n \"Tip 2: It reacts with oxides and chlorides of many metals, \"\n \"like copper, lead, mercury, to produce free metals.\",\n \"Tip 3: It reacts with oxygen to form water.\"\n ],\n 'magnesium':\n [\n \"Tip 1: It has the atomic number of 12.\",\n \"Tip 2: It's oxide can be extracted into free metal through electrolysis.\",\n \"Tip 3: It is a type of metal.\"\n ],\n}\n\nscore = 0\n\n\ndef guess_again():\n global score\n random_element = random.choice(elements)\n max_attempts = 3\n\n # this will remove the element from occurring again\n for x in elements:\n if x == random_element:\n elements.remove(x)\n\n while max_attempts > 0:\n user_guess = input(\"Take a Guess\").lower()\n\n if user_guess == random_element:\n print(f\"{random.choice(nice_phrases)}, you got it!\")\n score += 1\n # If the answer is right calling the function again will continue the game\n guess_again()\n else:\n max_attempts -= 1\n print(\"That was a wrong guess. Here is a Hint\")\n if random_element in hints:\n print(hints[random_element])\n else:\n print(\"Sorry no hints available at the moment\")\n if max_attempts == 0:\n print(\"Sorry your out of guesses\")\n print(f\"{random.choice} was the element\")\n\n\nguess_again()\n\nIf the Answer is right it will select the next element and the game continues until all the elements in the list are finished. I have coded it to give 3 attempts for each elements. If you want the maximum 3 attempts for the entire duration of the game just declare max_attempts outside the function and give it global scope like done for score\n" ]
[ 0 ]
[]
[]
[ "python", "while_loop" ]
stackoverflow_0074487482_python_while_loop.txt
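One caveat on the answer's design: the recursive guess_again() eventually calls random.choice on an empty list and raises IndexError once every element has been used. An iterative sketch that reuses the answer's elements, nice_phrases and hints names and stops cleanly:

import random

score = 0
while elements:  # stop once every element has been asked
    random_element = random.choice(elements)
    elements.remove(random_element)
    for attempt in range(3):
        if input("Take a Guess: ").lower() == random_element:
            print(f"{random.choice(nice_phrases)}, you got it!")
            score += 1
            break
        print("That was a wrong guess. Here is a hint:")
        print(hints.get(random_element, ["Sorry, no hints available at the moment"]))
    else:  # runs only if the for loop never hit break
        print(f"Sorry, you're out of guesses. {random_element} was the element.")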
Q: How can I visualize a dataset containing 5 independent and 1 dependent variable? I have a DataFrame containing 576 rows with these variables, where the first 5 are the independent variables and the last is the dependent one. The line below shows the range of the variables. I am looking for an effective way to visualize the dataset. I want to show the optimum fitness value for the 5 variables. The range of the variables: α=[0.4, 0.7] c1=[1, 2] c2=[1, 2] ww=[0.6, 1.25] p_rate=[0.05, 0.3] fitness =[1000,10000] I would be grateful if anyone could help me with this visualization. Thank you. I tried this method, but the result is very hard to understand: https://github.com/ostwalprasad/PythonMultiDimensionalPlots/blob/master/src/6D.py A: I can recommend parallel coordinates; plotly has a good implementation here: https://plotly.com/python/parallel-coordinates-plot/
How can I visualize a dataset containing 5 independent and 1 dependent variable?
I have a DataFrame containing 576 rows with these variables, where the first 5 is the independent variable and the last is the dependent one. The line below shows the range of the variables. I am looking for an effective way to visualize the dataset. I want to show the optimum fitness value for the 5 variables. The range of the variables: Ξ±=[0.4, 0.7] c1=[1, 2] c2=[1, 2] ww=[0.6, 1.25] p_rate=[0.05, 0.3] fitness =[1000,10000] I would be grateful if anyone could help me with this visualization. Thank you I tried this method but the result is very hard to understand. https://github.com/ostwalprasad/PythonMultiDimensionalPlots/blob/master/src/6D.py
[ "I can recommend parallel coordinates, plotly has a good implementation here.\nhttps://plotly.com/python/parallel-coordinates-plot/\n\n" ]
[ 0 ]
[]
[]
[ "python", "visualization" ]
stackoverflow_0074487582_python_visualization.txt
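A minimal sketch of the suggested plot with plotly express, assuming the DataFrame df uses ASCII column names matching the question's variables (alpha standing in for α):

import plotly.express as px

fig = px.parallel_coordinates(
    df,
    dimensions=["alpha", "c1", "c2", "ww", "p_rate", "fitness"],
    color="fitness",  # shade each line by the dependent variable
)
fig.show()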
Q: How can I break out of telegram bot loop application.run_polling()? def bot_start(): application = ApplicationBuilder().token("api_key").build() async def stop(update, context): await context.bot.send_message(chat_id=update.message.chat_id, text='Terminating Bot...') await application.stop() await Updater.shutdown(application.bot) await application.shutdown() async def error(update, context): err = f"Update: {update}\nError: {context.error}" logging.error(err, exc_info=context.error) await context.bot.send_message(chat_id=user_id, text=err) application.add_handler(CommandHandler('stop', stop)) application.add_error_handler(error) application.run_polling() I tried everything I could to stop it and I couldn't, as it's not letting other lines of code that come after calling bot_start() run. It basically never reaches them. Please help X_X
How can I break out of telegram bot loop application.run_polling()?
def bot_start(): application = ApplicationBuilder().token("api_key").build() async def stop(update, context): await context.bot.send_message(chat_id=update.message.chat_id, text='Terminating Bot...') await application.stop() await Updater.shutdown(application.bot) await application.shutdown() async def error(update, context): err = f"Update: {update}\nError: {context.error}" logging.error(err, exc_info=context.error) await context.bot.send_message(chat_id=user_id, text=err) application.add_handler(CommandHandler('stop', stop)) application.add_error_handler(error) application.run_polling() I tried everything I could to stop it and I couldn't, as it's not letting other lines of code that come after calling bot_start() run. It basically never reaches them. Please help X_X
[ "Application.run_polling is a convenience methods that starts everything and keeps the bot running until you signal the process to shut down. It's mainly intended to be used if the Application is the only long-running thing in your python process. If you want to run other things alongside your bot, you can instead manually call the methods listed in the docs of run_polling. You may also want to have a look at this example, where this is showcased for a setup for a custom webhook server is used instead of PTBs built-in one.\n\nDisclaimer: I'm currently the maintainer of python-telegram-bot.\n" ]
[ 1 ]
[]
[]
[ "python", "python_telegram_bot", "telegram" ]
stackoverflow_0074484933_python_python_telegram_bot_telegram.txt
Q: Why am I getting this matplotlib error for plotting a categorical variable? I feel stupid but I cannot seem to fix this error or find any solution online. Why do I keep getting the following error no matter how I try to plot it using matplotlib? For instance even the following code gives me the same error - names = list(fig1['day']) values = list(fig1['count']) fig, axs = plt.subplots(figsize=(25, 10)) axs.bar(names, values, color='plum') matplotlib.category: Using categorical units to plot a list of strings that are all parsable as floats or dates. If these strings should be plotted as numbers, cast to the appropriate data type before plotting. UPDATE: I found the solution - https://discourse.matplotlib.org/t/why-am-i-getting-this-matplotlib-error-for-plotting-a-categorical-variable/21758/2 A: Actually the right solution is this, because it is more generic and applies not only to calendar objects: fig, ax = plt.subplots() names = ['Monday', 'Tuesday', 'Wednesday'] # you may try with: # names = ['1111', '2222', '3333'] # This variable is a range (numerical values) and we pass it for number of ticks on X axis: rln = range(len(names)) values = [230, 112, 12] ax.bar(rln, values) # Finally, we are setting ticks labels on X axis: plt.xticks(rln, names) plt.show()
Why am I getting this matplotlib error for plotting a categorical variable?
I feel stupid but I cannot seem to fix this error or find any solution online. Why do I keep getting the following error no matter how I try to plot it using matplotlib? For instance even the following code gives me the same error - names = list(fig1['day']) values = list(fig1['count']) fig, axs = plt.subplots(figsize=(25, 10)) axs.bar(names, values, color='plum') matplotlib.category: Using categorical units to plot a list of strings that are all parsable as floats or dates. If these strings should be plotted as numbers, cast to the appropriate data type before plotting. UPDATE: I found the solution - https://discourse.matplotlib.org/t/why-am-i-getting-this-matplotlib-error-for-plotting-a-categorical-variable/21758/2
[ "Actually the right solution is this, because is more generic, applies not only to calendar objects:\nfig, ax = plt.subplots()\n\nnames = ['Monday', 'Tuesday', 'Wednesday']\n# you may try with: \n# names = ['1111', '2222', '3333']\n\n# This variable is a range (numerical values) and we pass it for number of ticks on X axis:\nrln = range(len(names))\n\nvalues = [230, 112, 12]\nax.bar(rln, values)\n\n# Finally, we are setting ticks labels on X axis:\nplt.xticks(rln, names)\nplt.show()\n\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0065526338_matplotlib_python.txt
Q: How to index specific elements of a linear model summary output - pandas I have the following linear model output. I want to index a particular value, printing only the R-squared value (0.028) but am not sure how to do this. Would be so grateful for a helping hand! resultmodeldistancevariation2sleepsummary OLS Regression Results Dep. Variable: distance R-squared: 0.028 Model: OLS Adj. R-squared: 0.016 Method: Least Squares F-statistic: 2.338 Date: Fri, 18 Nov 2022 Prob (F-statistic): 0.00773 Time: 10:06:14 Log-Likelihood: -1274.1 No. Observations: 907 AIC: 2572. Df Residuals: 895 BIC: 2630. Df Model: 11 Covariance Type: nonrobust I would be so grateful for a helping hand! A: I have solved the issue by adding the following code: newerresults = resultmodeldistancevariation2sleepsummary.tables[0] newerdata = pd.DataFrame(newerresults) print(newerdata.iloc[0,3]) Convert to table - then to dataframe - then index :)
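If the fitted results object itself is still in scope (assumed here to be called results, i.e. resultmodeldistancevariation2sleepsummary = results.summary()), the fit statistics are also exposed directly as attributes, which avoids parsing the summary table at all:

# statsmodels RegressionResults expose the fit statistics as attributes
print(results.rsquared)      # 0.028
print(results.rsquared_adj)  # 0.016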
How to index specific elements of a linear model summary output - pandas
I have the following linear model output. I want to index a particular value, printing only the R-squared value (0.028) but am not sure how to do this. Would be so grateful for a helping hand! resultmodeldistancevariation2sleepsummary OLS Regression Results Dep. Variable: distance R-squared: 0.028 Model: OLS Adj. R-squared: 0.016 Method: Least Squares F-statistic: 2.338 Date: Fri, 18 Nov 2022 Prob (F-statistic): 0.00773 Time: 10:06:14 Log-Likelihood: -1274.1 No. Observations: 907 AIC: 2572. Df Residuals: 895 BIC: 2630. Df Model: 11 Covariance Type: nonrobust I would be so grateful for a helping hand!
[ "I have solved the issue by adding the following code:\nnewerresults = resultmodeldistancevariation2sleepsummary.tables[0]\nnewerdata = pd.DataFrame(newerresults)\nprint(newerdata.iloc[0,3])\n\nConvert to table - then to dataframe - then index\n:)\n" ]
[ 0 ]
[]
[]
[ "jupyter_notebook", "pandas", "python", "regression", "statistics" ]
stackoverflow_0074488168_jupyter_notebook_pandas_python_regression_statistics.txt
Q: How to change Azure App Service Python version from 3.9.7 to 3.9.12? I am trying to deploy an application on Azure App Service. I have created a deployment with Python 3.9.7, but my app needs Python 3.9.12. How do I upgrade Python's version? In Azure App Service > Configuration > General Settings > Python minor version, there is no 3.9.12 available. So I have to upgrade it by SSH. But, I don't understand Linux. Can anyone tell me a command to upgrade my Python version to 3.9.12 using bash? A: Created the Python Web App from the Azure Portal with version 3.9, and it is shown as 3.9.7 in the SSH Console. In the Configuration, it showed me only these versions in the minor and major dropdown lists: Using the Azure CLI cmdlet az webapp config set ..., I have set the Python version from 3.9.7 to 3.9.12, but it is not reflected specifically as 3.9.x in the SSH Console. The current Python version of the web app can be shown with the az webapp config show cmdlet: az webapp config set --resource-group HariTestRG --name krishpywebapp101 --linux-fx-version "PYTHON|3.9.12" Result: az webapp config show --resource-group HariTestRG --name krishpywebapp101 --query linuxFxVersion A: Using the portal/blessed images (available images) you cannot upgrade your web app runtime to a particular Python minor version. Currently available minor Python versions that can be deployed from the portal are 3.7.12, 3.8.12, 3.9.7 and 3.10.4. If you want to run a particular runtime minor version (Python 3.9.12) in your web app, it is always recommended to use a custom Docker image; that way you can run the unsupported versions as well, as mentioned in this documentation. Here is the reference documentation to create a custom Docker container on Azure App Service, which has a sample Dockerfile as well. Refer to this Docker Hub registry for Python images and select a particular image based on your requirement. Note: I have looked at the official Docker Hub registry for Python as well; unfortunately we do not have any image that supports v3.9.12. We have images for v3.9.15.
How to change Azure App Service Python version from 3.9.7 to 3.9.12?
I am trying to deploy an application on Azure App Service. I have created a deployment with Python 3.9.7, but my app needs Python 3.9.12. How do I upgrade python's version? In Azure App Service > Configuration > General Settings > Python minor version, there is no 3.9.12 available. So I have to upgrade it by SSH. But, I don't understand Linux. Can anyone tell me a command to upgrade my python version to 3.9.12 using bash?
[ "\nCreated the Python Web App from the Azure Portal with the version 3.9 and shown as 3.9.7 in the SSH Console.\nIn the Configuration, it shown me only these versions in minor and major dropdown lists:\n\nUsing this Azure CLI cmdlet az webapp config set ..., I have set the Python version from 3.9.7 to 3.9.12 but not reflected specifically like 3.9.X in the SSH Console. It is showing the current python version of that web app with the az webapp config show cmdlet:\naz webapp config set --resource-group HariTestRG --name krishpywebapp101 --linux-fx-version \"PYTHON|3.9.12\"\n\n\n\nResult:\naz webapp config show --resource-group HariTestRG --name krishpywebapp101 --query linuxFxVersion\n\n\n", "Using the portal/blessed images(available images) you cannot upgrade your webapp runtime to particular python minor version.\nCurrently available minor python versions that can be deployed from portal are 3.7.12,3.8.12,3.9.7,3.10.4.\nIf you want to run a particular runtime minor version (python 3.9.12) in your web app it is always recommended to use custom docker image using that you can run the unsupported versions as well as mentioned in this documentation.\n\nHere is the reference documentation to create custom docker container on Azure app service which has sample Dockerfile as well.\nRefer to this Docker Hub registry for python images and select a particular image based on your requirement.\nNote: I have looked at the official docker hub registry for python as well unfortunately we do not have any image that support v3.9.12 we have images for V3.9.15.\n" ]
[ 1, 0 ]
[]
[]
[ "azure", "azure_web_app_service", "bash", "linux", "python" ]
stackoverflow_0074478132_azure_azure_web_app_service_bash_linux_python.txt
Q: Python SDK Azure Computer Vision: 'bytes' object has no attribute 'read' I am currently developing a simple demo of how to capture some text on an object such as a license plate, bus number, etc. using a combination of Azure Custom Vision and Azure OCR. I have an issue when sending an image to Azure OCR, like below: 'bytes' object has no attribute 'read' This happens simply by capturing a frame from the camera and sending it to Azure OCR Read using the Python SDK. Has anyone had a similar issue? How do I fix it, and what is the best way to send a frame to Azure OCR Read? Below is a snippet from my code (let's say the frame is already cropped from the Custom Vision boundaries process): highest_prob = predictions[0] image_text = detect_text(frame, highest_prob) to function: def detect_text(image, highest_prob): # Convert image to byte string img_str = cv2.imencode(".jpg", image)[1].tostring() #Call API with image and raw response (allows you to get the operation location) recognize_printed_results = computervision_client.read_in_stream(img_str, raw=True) Capturing camera using: cap = cv2.VideoCapture(0) while True: ret, frame = cap.read() if not ret: break A: Computer vision libraries may not be accessible from the root environment; we need to make the libraries accessible inside the virtual environment. With respect to cv2, upgrading the computer vision package version solves the issue, since the read operation needs an upgrade in the form of computer vision library support. pip install --upgrade azure-cognitiveservices-vision-computervision Use ComputerVisionClient.analyze_image() to read a list of images, not as an individual entity.
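The traceback also points at a second plausible cause, beyond the posted answer: read_in_stream expects a file-like object with a .read() method, while cv2.imencode returns raw bytes. A sketch of that fix, keeping the names from the question (this is an assumption, not the accepted answer):

import io
import cv2

def detect_text(image, highest_prob):
    # imencode returns (success_flag, numpy buffer); tobytes() gives raw JPEG bytes
    ok, encoded = cv2.imencode(".jpg", image)
    stream = io.BytesIO(encoded.tobytes())  # wrap the bytes so .read() exists
    return computervision_client.read_in_stream(stream, raw=True)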
Python SDK Azure Computer Vision: 'bytes' object has no attribute 'read'
I am currently developing a simple demo of how to capture some text on an object such as a license plate, bus number, etc. using a combination of Azure Custom Vision and Azure OCR. I have an issue when sending an image to Azure OCR, like below: 'bytes' object has no attribute 'read' This happens simply by capturing a frame from the camera and sending it to Azure OCR Read using the Python SDK. Has anyone had a similar issue? How do I fix it, and what is the best way to send a frame to Azure OCR Read? Below is a snippet from my code (let's say the frame is already cropped from the Custom Vision boundaries process): highest_prob = predictions[0] image_text = detect_text(frame, highest_prob) to function: def detect_text(image, highest_prob): # Convert image to byte string img_str = cv2.imencode(".jpg", image)[1].tostring() #Call API with image and raw response (allows you to get the operation location) recognize_printed_results = computervision_client.read_in_stream(img_str, raw=True) Capturing camera using: cap = cv2.VideoCapture(0) while True: ret, frame = cap.read() if not ret: break
[ "Computer vision libraries cannot be accessible from the root environment and we need to get the libraries to access inside the virtual environment. With respect to CV2, upgrade the version of computer vision which solves the issue. The read operation needs to have some upgrade in the form of computer vision library support.\npip install --upgrade azure-cognitiveservices-vision-computervision\n\nuse ComputerVisionClient.analyze_image() to read the list of images. Not as an individual entity.\n" ]
[ 0 ]
[]
[]
[ "azure", "azure_cognitive_services", "camera", "python", "real_time" ]
stackoverflow_0074474621_azure_azure_cognitive_services_camera_python_real_time.txt
Q: BadRequestKeyError(key) werkzeug.exceptions. 400 The browser (or proxy) sent a request that this server could not understand. KeyError: 'name' I've a problem with sending from a html form to a python flask application. First the html code: <form id="signup-form" class="bg-white rounded" autocomplete="no" id="signup-form" action="/signup" method="post"> <h2 class="mt-0 mb-0 text-center">Sign Up For</h2> <h2 class="mb-4 mt-0 text-center">Free Trial</h2> <input type="text" name="name" class="form-control mb-2" id="fieldForName" placeholder="Name" autocomplete="new-name" required> <input type="email" name="email" class="form-control mb-2" id="fieldForEmail" placeholder="Email" autocomplete="new-email" required> <input type="password" name="password" class="form-control mb-2" id="fieldForPassword" placeholder="Password" autocomplete="new-pass" required> <div id="output" class="pb-2 row align-items-center justify-content-center text-danger font-weight-bold">{{ message }}</div> <button type="submit" class="btn btn-primary btn-lg btn-block" autocomplete="new-submit"> <div id="register-text">Register</div> {% include 'partials/spinner.html' %} </button> </form> Second the python code: @user_api.route("/signup", methods=["POST"]) def signup(): print("RequestForm: " + str(request.form)) name = str(request.form["name"]) mail = str(request.form["email"]) password = str(request.form["password"]) json_request = jsonify( name=name, email=mail, password=password ) action.signup(json_request) return redirect("/login_page", code=302) Third the error message: RequestForm: ImmutableMultiDict([]) 127.0.0.1 - - [17/Nov/2022 14:37:46] "POST /signup HTTP/1.1" 500 - Traceback (most recent call last): File "/Users/robertgroll/VSCRepos/landingpage/env/lib/python3.11/site-packages/flask/app.py", line 2548, in __call__ return self.wsgi_app(environ, start_response) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/robertgroll/VSCRepos/landingpage/env/lib/python3.11/site-packages/flask/app.py", line 2528, in wsgi_app response = self.handle_exception(e) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/robertgroll/VSCRepos/landingpage/env/lib/python3.11/site-packages/flask/app.py", line 2525, in wsgi_app response = self.full_dispatch_request() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/robertgroll/VSCRepos/landingpage/env/lib/python3.11/site-packages/flask/app.py", line 1822, in full_dispatch_request rv = self.handle_user_exception(e) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/robertgroll/VSCRepos/landingpage/env/lib/python3.11/site-packages/flask/app.py", line 1820, in full_dispatch_request rv = self.dispatch_request() ^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/robertgroll/VSCRepos/landingpage/env/lib/python3.11/site-packages/flask/app.py", line 1796, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/robertgroll/VSCRepos/landingpage/app/Service/service_calls/call_user_service.py", line 24, in signup name = str(request.form["name"]) ^^^^^^^^^^^^^^^^^^^^^ File "/Users/robertgroll/VSCRepos/landingpage/env/lib/python3.11/site-packages/werkzeug/datastructures.py", line 375, in __getitem__ raise exceptions.BadRequestKeyError(key) werkzeug.exceptions.BadRequestKeyError: 400 Bad Request: The browser (or proxy) sent a request that this server could not understand. KeyError: 'name' Can you please help me by giving me at least a hint. Thank you! 
I've tried to rename name of the input fields and I've done some reorganizing of the html. Now I want to get the submit values inside the dictionary to have the right data in the post html call. A: !!!!Solved!!!! The overgiven post message is a json-string. So I have handle it like this. def signup(self, request): # Get data from AJAX request data = request.get_json(force=True) email = data['email'] password = data['password'] name = data['name'] message, status_code = db_access.create_user(email=email, password=password, name=name) return message, status_code Thank you again.
BadRequestKeyError(key) werkzeug.exceptions. 400 The browser (or proxy) sent a request that this server could not understand. KeyError: 'name'
I've a problem with sending from a html form to a python flask application. First the html code: <form id="signup-form" class="bg-white rounded" autocomplete="no" id="signup-form" action="/signup" method="post"> <h2 class="mt-0 mb-0 text-center">Sign Up For</h2> <h2 class="mb-4 mt-0 text-center">Free Trial</h2> <input type="text" name="name" class="form-control mb-2" id="fieldForName" placeholder="Name" autocomplete="new-name" required> <input type="email" name="email" class="form-control mb-2" id="fieldForEmail" placeholder="Email" autocomplete="new-email" required> <input type="password" name="password" class="form-control mb-2" id="fieldForPassword" placeholder="Password" autocomplete="new-pass" required> <div id="output" class="pb-2 row align-items-center justify-content-center text-danger font-weight-bold">{{ message }}</div> <button type="submit" class="btn btn-primary btn-lg btn-block" autocomplete="new-submit"> <div id="register-text">Register</div> {% include 'partials/spinner.html' %} </button> </form> Second the python code: @user_api.route("/signup", methods=["POST"]) def signup(): print("RequestForm: " + str(request.form)) name = str(request.form["name"]) mail = str(request.form["email"]) password = str(request.form["password"]) json_request = jsonify( name=name, email=mail, password=password ) action.signup(json_request) return redirect("/login_page", code=302) Third the error message: RequestForm: ImmutableMultiDict([]) 127.0.0.1 - - [17/Nov/2022 14:37:46] "POST /signup HTTP/1.1" 500 - Traceback (most recent call last): File "/Users/robertgroll/VSCRepos/landingpage/env/lib/python3.11/site-packages/flask/app.py", line 2548, in __call__ return self.wsgi_app(environ, start_response) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/robertgroll/VSCRepos/landingpage/env/lib/python3.11/site-packages/flask/app.py", line 2528, in wsgi_app response = self.handle_exception(e) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/robertgroll/VSCRepos/landingpage/env/lib/python3.11/site-packages/flask/app.py", line 2525, in wsgi_app response = self.full_dispatch_request() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/robertgroll/VSCRepos/landingpage/env/lib/python3.11/site-packages/flask/app.py", line 1822, in full_dispatch_request rv = self.handle_user_exception(e) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/robertgroll/VSCRepos/landingpage/env/lib/python3.11/site-packages/flask/app.py", line 1820, in full_dispatch_request rv = self.dispatch_request() ^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/robertgroll/VSCRepos/landingpage/env/lib/python3.11/site-packages/flask/app.py", line 1796, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/robertgroll/VSCRepos/landingpage/app/Service/service_calls/call_user_service.py", line 24, in signup name = str(request.form["name"]) ^^^^^^^^^^^^^^^^^^^^^ File "/Users/robertgroll/VSCRepos/landingpage/env/lib/python3.11/site-packages/werkzeug/datastructures.py", line 375, in __getitem__ raise exceptions.BadRequestKeyError(key) werkzeug.exceptions.BadRequestKeyError: 400 Bad Request: The browser (or proxy) sent a request that this server could not understand. KeyError: 'name' Can you please help me by giving me at least a hint. Thank you! I've tried to rename name of the input fields and I've done some reorganizing of the html. Now I want to get the submit values inside the dictionary to have the right data in the post html call.
[ "!!!!Solved!!!!\nThe overgiven post message is a json-string. So I have handle it like this.\ndef signup(self, request):\n # Get data from AJAX request\n data = request.get_json(force=True)\n \n email = data['email']\n password = data['password']\n name = data['name']\n\n message, status_code = db_access.create_user(email=email,\n password=password,\n name=name)\n\n return message, status_code\n\nThank you again.\n" ]
[ 0 ]
[]
[]
[ "flask", "python" ]
stackoverflow_0074476933_flask_python.txt
Q: Python Apache Beam error "InvalidSchema: No connection adapters were found for" when request api url with spaces Following example from Apache Beam Pipeline to read from REST API runs locally but not on Dataflow pipeline requests data from api with response = requests.get(url, auth=HTTPDigestAuth(self.USER, self.PASSWORD), headers=headers) where url string url = "https://host:port/car('power%203')/speed" Pipeline fails with error, notice extra \ around 'power%203: InvalidSchema: No connection adapters were found for '(("https://host:post/car(\'power%203\')/speed",),)' [while running 'fetch API data'] Idea is to develop and test pipelines locally and then run production on gcp dataflow. Request works outside pipeline, but fails inside Python Apache Beam pipeline. Pipeline executed on DirectRunner from WSL2 Ubuntu conda pyhton 3.9 environment or cloud jupyter hub still returns same error. Please find full pipeline example below: import logging import apache_beam as beam from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions import requests import json from requests.auth import HTTPDigestAuth class get_api_data(beam.DoFn): def __init__(self, url): self.url = url, self.USER = 'user' self.PASSWORD = 'password' def process(self, buffer=[]): logging.info(self.url) headers = { 'Prefer': f'data.maxpagesize=2000', } response = requests.get(self.url, auth=HTTPDigestAuth(self.USER, self.PASSWORD), headers=headers) buffer = response.json()['value'] return buffer class Split(beam.DoFn): def process(self, element): try: etag = element['etag'] car_id = element['carID'] power = element['power'] speed = element['speed'] except ValueError as e: logging.error(e) return [{ 'etag': str(etag), 'car_id': str(car_id), 'power': int(power), 'speed': float(speed), }] def run(argv=None): url = "https://host:port/car('power%203')/speed" p1 = beam.Pipeline(options=pipeline_options) ingest_data = ( p1 | 'Start Pipeline' >> beam.Create([None]) | 'fetch API data' >> beam.ParDo(get_api_data(url)) | 'split records' >> beam.ParDo(Split()) | 'write to text' >> beam.io.WriteToText("./test_v2.csv") ) result = p1.run() if __name__ == '__main__': logging.getLogger().setLevel(logging.INFO) run() It got me really confused and I would be grateful if someone could share any suggestions or comments on why url string got distorted. A: Remove the comma next to url in get_api_data class - it should fix the problem class get_api_data(beam.DoFn): def __init__(self, url): self.url = url self.USER = 'user' self.PASSWORD = 'password'
Python Apache Beam error "InvalidSchema: No connection adapters were found for" when request api url with spaces
Following example from Apache Beam Pipeline to read from REST API runs locally but not on Dataflow pipeline requests data from api with response = requests.get(url, auth=HTTPDigestAuth(self.USER, self.PASSWORD), headers=headers) where url string url = "https://host:port/car('power%203')/speed" Pipeline fails with error, notice extra \ around 'power%203: InvalidSchema: No connection adapters were found for '(("https://host:post/car(\'power%203\')/speed",),)' [while running 'fetch API data'] Idea is to develop and test pipelines locally and then run production on gcp dataflow. Request works outside pipeline, but fails inside Python Apache Beam pipeline. Pipeline executed on DirectRunner from WSL2 Ubuntu conda pyhton 3.9 environment or cloud jupyter hub still returns same error. Please find full pipeline example below: import logging import apache_beam as beam from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions import requests import json from requests.auth import HTTPDigestAuth class get_api_data(beam.DoFn): def __init__(self, url): self.url = url, self.USER = 'user' self.PASSWORD = 'password' def process(self, buffer=[]): logging.info(self.url) headers = { 'Prefer': f'data.maxpagesize=2000', } response = requests.get(self.url, auth=HTTPDigestAuth(self.USER, self.PASSWORD), headers=headers) buffer = response.json()['value'] return buffer class Split(beam.DoFn): def process(self, element): try: etag = element['etag'] car_id = element['carID'] power = element['power'] speed = element['speed'] except ValueError as e: logging.error(e) return [{ 'etag': str(etag), 'car_id': str(car_id), 'power': int(power), 'speed': float(speed), }] def run(argv=None): url = "https://host:port/car('power%203')/speed" p1 = beam.Pipeline(options=pipeline_options) ingest_data = ( p1 | 'Start Pipeline' >> beam.Create([None]) | 'fetch API data' >> beam.ParDo(get_api_data(url)) | 'split records' >> beam.ParDo(Split()) | 'write to text' >> beam.io.WriteToText("./test_v2.csv") ) result = p1.run() if __name__ == '__main__': logging.getLogger().setLevel(logging.INFO) run() It got me really confused and I would be grateful if someone could share any suggestions or comments on why url string got distorted.
[ "Remove the comma next to url in get_api_data class - it should fix the problem\nclass get_api_data(beam.DoFn):\n def __init__(self, url):\n self.url = url\n self.USER = 'user' \n self.PASSWORD = 'password'\n\n" ]
[ 1 ]
[]
[]
[ "apache_beam", "google_cloud_dataflow", "python", "python_requests" ]
stackoverflow_0074487643_apache_beam_google_cloud_dataflow_python_python_requests.txt
Q: how can I fix TypeError: can't set attributes of built-in/extension type 'cimpl.Consumer' example.py def simple(): msg = consumer.poll(timeout=int(timeout)) if msg is None: break if msg.error(): if (msg.error().code() == KafkaError.UNKNOWN_TOPIC_OR_PART): response_code = 409 self.logger.debug("Error reading message : {}".format(msg.error())) break when I mock consumer.poll, it shows the error: TypeError: can't set attributes of built-in/extension type 'cimpl.Consumer' @mock.patch('confluent_kafka.Consumer.poll') def test_simple(mock_poll): mock_poll.return_value A: As the error message says, you can't patch the C extension class. As a remedy, you can derive the class like this. (It shows the new-style syntax for a fixture; using an annotation is deprecated.) from confluent_kafka import Consumer as _Consumer class Consumer(_Consumer): pass def get_cls_full_name(cls): return cls.__module__ + '.' + cls.__name__ def test_consumer(mocker): mock_poll = mocker.patch(get_cls_full_name(Consumer) + '.poll') ...
how can I fix TypeError: can't set attributes of built-in/extension type 'cimpl.Consumer'
example.py def simple(): msg = consumer.poll(timeout=int(timeout)) if msg is None: break if msg.error(): if (msg.error().code() == KafkaError.UNKNOWN_TOPIC_OR_PART): response_code = 409 self.logger.debug("Error reading message : {}".format(msg.error())) break when I mock consumer.poll, it shows the error: TypeError: can't set attributes of built-in/extension type 'cimpl.Consumer' @mock.patch('confluent_kafka.Consumer.poll') def test_simple(mock_poll): mock_poll.return_value
[ "As the error message says, you can't patch the C extension class. As a remedy, you can derive the class like this.(It shows the new style syntax for a fixture. Using an annotation is deprecated.)\nimport confluent_kafka import Consumer as _Consumer\n\nclass Consumer(_Consumer): pass\n\ndef get_cls_full_name(cls):\n return cls.__module__ + '.' + cls.__name__\n\ndef test_consumer(mocker):\n mock_poll = mocker.patch(get_cls_full_name(Consumer) + '.poll')\n ...\n\n" ]
[ 0 ]
[]
[]
[ "apache_kafka", "confluent_kafka_python", "kafka_consumer_api", "pytest_mock", "python" ]
stackoverflow_0073290315_apache_kafka_confluent_kafka_python_kafka_consumer_api_pytest_mock_python.txt
Q: find all occurrences between 2 values in non default pattern I am stumbling into an issue with a regex search in python So I have: testVariable = re.findall(r'functest(.*?)1', 'functest exampleOne [2] functest exampleTwo [1] functest exampleOne throw [2] functest exampleThree [1]') Current Output is: [' exampleOne [2] functest exampleTwo [', ' exampleOne throw [2] functest exampleThree ['] But what I want is to find all occurences between β€˜functest’ & 1' <or 2, or 3 based on need> so output should be like: ['exampleTwo [, exampleThree ['] this because both above are between functest & 1 as I need. Anyone have any idea? A: If there can not be any digits in between matching the first occurrence of 1 or 3: \bfunctest\b\s*(\D*)[13]\b The pattern matches: \bfunctest\b\s* Match the word functest followed by optional whitespace chars (\D*) Capture Optional non digits in group 1 [13] Match either 1 or 3 \b A word boundary See a regex demo. Or you can exclude matching the square brackets before matching a digit using a negated character class: \bfunctest\b\s*([^][]*\[)[13]] See another regex demo. Example import re pattern = r"\bfunctest\b\s*([^][]*\[)239]" s = "functest exampleOne [2] functest exampleTwo [239] functest exampleOne throw [2] functest exampleThree [1] functest exampleFour [2] functest exampleFive [239]" print(re.findall(pattern, s)) Output ['exampleTwo [', 'exampleFive ['] A: Found a way by using the following. It still includes functest, but at least does the job testVariable = re.findall(r'functest(?:(?!functest).)*?239', 'functest exampleOne [2] functest exampleTwo [239] functest exampleOne throw [2] functest exampleThree [1] functest exampleFour [2] functest exampleFive [239]') Output: ['functest exampleTwo [239', 'functest exampleFive [239']
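If the functest prefix in the self-answer's output is unwanted, wrapping the lazy part in a capturing group makes findall return only that part; a small refinement of the same tempered-dot pattern:

import re

s = ('functest exampleOne [2] functest exampleTwo [239] '
     'functest exampleOne throw [2] functest exampleThree [1] '
     'functest exampleFour [2] functest exampleFive [239]')

# findall returns the capture group, so "functest" itself is dropped
print(re.findall(r'functest\s*((?:(?!functest).)*?\[)239]', s))
# ['exampleTwo [', 'exampleFive [']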
find all occurrences between 2 values in non default pattern
I am stumbling into an issue with a regex search in python So I have: testVariable = re.findall(r'functest(.*?)1', 'functest exampleOne [2] functest exampleTwo [1] functest exampleOne throw [2] functest exampleThree [1]') Current Output is: [' exampleOne [2] functest exampleTwo [', ' exampleOne throw [2] functest exampleThree ['] But what I want is to find all occurences between β€˜functest’ & 1' <or 2, or 3 based on need> so output should be like: ['exampleTwo [, exampleThree ['] this because both above are between functest & 1 as I need. Anyone have any idea?
[ "If there can not be any digits in between matching the first occurrence of 1 or 3:\n\\bfunctest\\b\\s*(\\D*)[13]\\b\n\nThe pattern matches:\n\n\\bfunctest\\b\\s* Match the word functest followed by optional whitespace chars\n(\\D*) Capture Optional non digits in group 1\n[13] Match either 1 or 3\n\\b A word boundary\n\nSee a regex demo.\nOr you can exclude matching the square brackets before matching a digit using a negated character class:\n\\bfunctest\\b\\s*([^][]*\\[)[13]]\n\nSee another regex demo.\nExample\nimport re\n\npattern = r\"\\bfunctest\\b\\s*([^][]*\\[)239]\"\n\ns = \"functest exampleOne [2] functest exampleTwo [239] functest exampleOne throw [2] functest exampleThree [1] functest exampleFour [2] functest exampleFive [239]\"\n\nprint(re.findall(pattern, s))\n\nOutput\n['exampleTwo [', 'exampleFive [']\n\n", "Found a way by using the following. It still includes functest, but at least does the job\ntestVariable = re.findall(r'functest(?:(?!functest).)*?239', 'functest exampleOne [2] functest exampleTwo [239] functest exampleOne throw [2] functest exampleThree [1] functest exampleFour [2] functest exampleFive [239]')\nOutput:\n['functest exampleTwo [239', 'functest exampleFive [239']\n" ]
[ 1, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0074488046_python_regex.txt
Q: How to iterate over multiple list and calculate every alternate values in it? I have a list that holds values for multiple persons, and I want to sum every alternate value in each inner list of a list of lists. How can I achieve it? The list looks like below. for j in persons: person_list.append(persons[int(j)][0]+persons[int(j)][2]) person_list.append(persons[int(j)][1]+persons[int(j)][3]) print("person list", person_list) Above is the code which I tried, but it's not working. person [[222, 1, 255, 54], [105, 1, 135, 48], [397, 310, 521, 594]] I have to calculate 222+255 and 1+54, and similarly for the other lists. A: You can do something like this: >>> x = [[222, 1, 255, 54], [105, 1, 135, 48], [397, 310, 521, 594]] >>> [[sum(i[::2]), sum(i[1::2])] for i in x] [[477, 55], [240, 49], [918, 904]] This will go over and sum the even-indexed values together and the same with the odd-indexed values.
How to iterate over multiple list and calculate every alternate values in it?
I have a list that holds values for multiple persons, and I want to sum every alternate value in each inner list of a list of lists. How can I achieve it? The list looks like below. for j in persons: person_list.append(persons[int(j)][0]+persons[int(j)][2]) person_list.append(persons[int(j)][1]+persons[int(j)][3]) print("person list", person_list) Above is the code which I tried, but it's not working. person [[222, 1, 255, 54], [105, 1, 135, 48], [397, 310, 521, 594]] I have to calculate 222+255 and 1+54, and similarly for the other lists.
[ "You can do something like this:\n>>> x = [[222, 1, 255, 54], [105, 1, 135, 48], [397, 310, 521, 594]]\n>>> [[sum(i[::2]), sum(i[1::2])] for i in x]\n[[477, 55], [240, 49], [918, 904]]\n\nThis will go over and sum the even-indexed values together and the same with the odd-indexed values.\n" ]
[ 1 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074488593_list_python.txt
Q: Access google storage client using dictionary I have a service account in a form of dictionary. Below is the service account service_account = { "type": "service_account", "project_id": "project_id", "private_key_id": "private_key_id", "private_key": "PRIVATE KEY", "client_email": "email", "client_id": "111111", "auth_uri": "https://auth.com", "token_uri": "https://token.com", "auth_provider_x509_cert_url": "https://certs.com", "client_x509_cert_url": "https://www.cert.com" } The above details are simulated. I want to access the google storage using the above dictionary but not using ".json" file. Below is the code that I am trying from google.cloud import storage storage_client = storage.Client.from_service_account_json(service_account) bucket = storage_client.bucket(bucket_name) blob = bucket.blob(blob_name) file_data = json.loads(blob.download_as_string()) Getting the below error storage_client = storage.Client.from_service_account_json(service_account) File "/usr/local/lib/python3.9/site-packages/google/cloud/client.py", line 106, in from_service_account_json with io.open(json_credentials_path, "r", encoding="utf-8") as json_fi: TypeError: expected str, bytes or os.PathLike object, not dict A: It looks like this can be done with the from_service_account_info() method instead of from_service_account_json.
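A short sketch of the from_service_account_info() route, keeping the names from the question (bucket_name and blob_name are assumed to be defined elsewhere):

import json
from google.cloud import storage

# from_service_account_info takes the credentials dict directly,
# so no .json file on disk is needed
storage_client = storage.Client.from_service_account_info(service_account)
bucket = storage_client.bucket(bucket_name)
blob = bucket.blob(blob_name)
file_data = json.loads(blob.download_as_string())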
Access google storage client using dictionary
I have a service account in a form of dictionary. Below is the service account service_account = { "type": "service_account", "project_id": "project_id", "private_key_id": "private_key_id", "private_key": "PRIVATE KEY", "client_email": "email", "client_id": "111111", "auth_uri": "https://auth.com", "token_uri": "https://token.com", "auth_provider_x509_cert_url": "https://certs.com", "client_x509_cert_url": "https://www.cert.com" } The above details are simulated. I want to access the google storage using the above dictionary but not using ".json" file. Below is the code that I am trying from google.cloud import storage storage_client = storage.Client.from_service_account_json(service_account) bucket = storage_client.bucket(bucket_name) blob = bucket.blob(blob_name) file_data = json.loads(blob.download_as_string()) Getting the below error storage_client = storage.Client.from_service_account_json(service_account) File "/usr/local/lib/python3.9/site-packages/google/cloud/client.py", line 106, in from_service_account_json with io.open(json_credentials_path, "r", encoding="utf-8") as json_fi: TypeError: expected str, bytes or os.PathLike object, not dict
[ "It looks like this can be done with the\nfrom_service_account_info() method instead of from_service_account_json.\n" ]
[ 0 ]
[]
[]
[ "google_cloud_platform", "google_cloud_storage", "python" ]
stackoverflow_0074488268_google_cloud_platform_google_cloud_storage_python.txt
Q: 4D Matrix operation in Python - conversion from MATLAB I'm trying to translate this MATLAB code to Python: MATLAB: V_c = delta* max(V_L, repmat(V_A_c,[N_p 1]) - NM ) where these are 4D arrays: V_c is the continuation value in different states, (should have shape 81, 75, 15, 31) V_L is the initial value, (has shape 81, 75, 15, 31) V_A_c is the value of adjustment under optimal choice (has shape 1, 75, 15, 31) NM is a number (NM = ΞΊ*np.exp(P0)) N_p is the length of the grid This is my attempted python code PYTHON: V_c = delta * np.amax(V_L,(np.tile(V_A_c,(([N_p,1]))))-NM) I get errors, that the lengths of the arrays are different, and that only integer scalar arrays can be converted to a scalar index. In Python, my V_L and V_A_c have the same values and shapes as MATLAB gives, but I still can't compute V_c. Any suggestions? A: The error you mention comes from the fact that you want to element-wise compare two tensors (matrices): use np.maximum. Considering the tile operation is correct and N_p is 81 in your example: V_c = delta * np.maximum(V_L, np.tile(V_A_c,(N_p,1,1,1)) - NM)
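Since V_A_c already has shape (1, 75, 15, 31), NumPy broadcasting makes the tile/repmat step optional; a sketch under the shapes stated in the question:

import numpy as np

# (1, 75, 15, 31) broadcasts against (81, 75, 15, 31) automatically,
# so this is equivalent to the tile-based version of the answer
V_c = delta * np.maximum(V_L, V_A_c - NM)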
4D Matrix operation in Python - conversion from MATLAB
I'm trying to translate this MATLAB code to Python: MATLAB: V_c = delta* max(V_L, repmat(V_A_c,[N_p 1]) - NM ) where these are 4D arrays: V_c is the continuation value in different states, (should have shape 81, 75, 15, 31) V_L is the initial value, (has shape 81, 75, 15, 31) V_A_c is the value of adjustment under optimal choice (has shape 1, 75, 15, 31) NM is a number (NM = ΞΊ*np.exp(P0)) N_p is the length of the grid This is my attempted python code PYTHON: V_c = delta * np.amax(V_L,(np.tile(V_A_c,(([N_p,1]))))-NM) I get errors, that the lengths of the arrays are different, and that only integer scalar arrays can be converted to a scalar index. In Python, my V_L and V_A_c have the same values and shapes as MATLAB gives, but I still can't compute V_c. Any suggestions?
[ "The error you mention, comes from the fact that you want to element-wise compare two tensor (matrix). use np.maximum.\nconsidering the tile operation is correct and N_p is 85 in you example:\nV_c = delta * np.maximum(V_L, np.tile(V_A_c,(N_p,1,1,1)) - NM ) \n\n" ]
[ 1 ]
[]
[]
[ "dynamic_programming", "matlab", "numpy_ndarray", "python" ]
stackoverflow_0074488411_dynamic_programming_matlab_numpy_ndarray_python.txt
Q: Selenium IDE: Export to Python when using brackets '(' & ')' in xpaths I have XPath statements in Selenium IDE that don't seem possible to export to Python due to the usage of brackets - any ideas for what to do to get around this to complete the export to Python? It's statements like: (//td[@role='presentation'])[3] The XPath statements seem sound from a syntax perspective since they work both when I play the script in Selenium and when I search for the element in the browser with the path. When I removed all XPath statements with brackets, the export to Python worked. I can't use the element IDs because these are dynamically defined and change every other execution. Thanks in advance! A: (//td[@role='presentation'])[3] expression can be enclosed with ", as following: "(//td[@role='presentation'])[3]"
Selenium IDE: Export to Python when using brackets '(' & ')' in xpaths
I have XPath statements in Selenium IDE that don't seem possible to export to Python due to the usage of brackets - any ideas for what to do to get around this to complete the export to Python? It's statements like: (//td[@role='presentation'])[3] The XPath statements seem sound from a syntax perspective since they work both when I play the script in Selenium and when I search for the element in the browser with the path. When I removed all XPath statements with brackets, the export to Python worked. I can't use the element IDs because these are dynamically defined and change every other execution. Thanks in advance!
[ "(//td[@role='presentation'])[3] expression can be enclosed with \", as following:\n\"(//td[@role='presentation'])[3]\"\n\n" ]
[ 0 ]
[]
[]
[ "python", "selenium_ide", "xpath" ]
stackoverflow_0074488631_python_selenium_ide_xpath.txt
Q: python selenium- how to get only some of the information inside a HTML element? The website bellow will she the scores of a all the soccer matches and this one is an example, im trying to get the teams that have played and the scores. photo this is the code for the one above: code I tried getting the whole and it worked, the only thing i can't figure out is how to get the score and teams out of it. pages url: https://www.fotmob.com/?date=20221118&q= A: First Identify the parent anchor tag and then iterate the parent element to find the specific child element. Scores are not available for all the matches since some of them have not started yet. Use try..except block in that case. driver.get('https://www.fotmob.com/?date=20221118&q=') elements=WebDriverWait(driver,10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "div.show-webkit-scroll a[class*='MatchWrapper']"))) for element in elements: print(element.find_element(By.CSS_SELECTOR, "span[class*='TeamName']:nth-of-type(1)").text) try: element.find_element(By.CSS_SELECTOR, "span[class$='-score']") print(element.find_element(By.CSS_SELECTOR, "span[class$='-score']").text) except: print("No score record found") print(element.find_element(By.CSS_SELECTOR, "span[class*='TeamName']:nth-of-type(2)").text) print("===============================") You need to import below libraries. from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By Output: Cameroon 0 - 0 Panama =============================== Belgium No score record found Egypt =============================== Bahrain No score record found Serbia =============================== Lommel No score record found Genk U23 =============================== Virton No score record found SL 16 FC =============================== Rozova Dolina No score record found Ludogorets Razgrad =============================== Nanjing City FC 0 - 5 Shanghai Port =============================== Suzhou Dongwu 2 - 2 Wuhan Yangtze River =============================== Dalian Professional FC 0 - 1 Zhejiang Professional =============================== Slovacko No score record found Mlada Boleslav =============================== Karvina No score record found Slavia Prague =============================== Skive No score record found Esbjerg fB =============================== Portsmouth No score record found Derby =============================== Latvia U21 No score record found Estonia U21 =============================== Spain U21 No score record found Japan U21 =============================== Portugal U21 No score record found Czech Republic U21 =============================== SC Weiche Flensburg No score record found Hamburger SV II =============================== FCA Walldorf No score record found Bahlinger SC =============================== Kickers Offenbach No score record found Eintracht Trier =============================== TSG Balingen No score record found TuS RW Koblenz =============================== Borussia MΓΆnchengladbach II No score record found Schalke 04 II =============================== Oberhausen No score record found FC Bocholt =============================== Kalamata No score record found Panachaiki =============================== Aizawl FC 0 - 1 Gokulam FC =============================== Neroca FC No score record found Sudeva FC =============================== East Bengal FC No score record found Odisha FC =============================== Persik No score record found Persita 
=============================== Persib Bandung No score record found Bhayangkara FC =============================== Fajr Sepasi 0 - 0 Shahrdari Hamedan =============================== Mes Shahr Babak 0 - 0 Khalij Fars Mahshahr =============================== Saipa 0 - 0 Arman Gohar =============================== Shahrdari Astara 0 - 0 Esteghlal Molasani =============================== Shams Azar 0 - 0 Chooka =============================== Van Pars Naqsh Jahan No score record found Omid Vahdat Khorasan =============================== Esteghlal Kh No score record found Chadormalu Ardakan SC =============================== Pars Jonoubi No score record found Darya Caspian Babol =============================== FC Eindhoven No score record found Telstar =============================== Helmond Sport No score record found De Graafschap =============================== Ballymena United No score record found Linfield =============================== Larne No score record found Dungannon Swifts =============================== Lechia Gdansk No score record found Gornik Zabrze =============================== B-SAD No score record found Boavista =============================== Arouca No score record found Feirense =============================== Albirex Niigata FC No score record found Balestier Khalsa FC =============================== Granada No score record found Albacete =============================== Xamax No score record found Wil =============================== Bellinzona No score record found Thun =============================== Yeni Malatyaspor No score record found Pendikspor =============================== Zorya No score record found FC Olexandriya =============================== Connah's Quay Nomads No score record found Caernarfon =============================== Aberystwyth No score record found Pontypridd United ===============================
python selenium- how to get only some of the information inside a HTML element?
The website below will show the scores of all the soccer matches, and this one is an example; I'm trying to get the teams that have played and the scores. photo This is the code for the one above: code I tried getting the whole element and it worked; the only thing I can't figure out is how to get the score and teams out of it. The page's URL: https://www.fotmob.com/?date=20221118&q=
[ "First Identify the parent anchor tag and then iterate the parent element to find the specific child element.\nScores are not available for all the matches since some of them have not started yet. Use try..except block in that case.\ndriver.get('https://www.fotmob.com/?date=20221118&q=')\nelements=WebDriverWait(driver,10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, \"div.show-webkit-scroll a[class*='MatchWrapper']\")))\nfor element in elements:\n print(element.find_element(By.CSS_SELECTOR, \"span[class*='TeamName']:nth-of-type(1)\").text)\n try:\n element.find_element(By.CSS_SELECTOR, \"span[class$='-score']\")\n print(element.find_element(By.CSS_SELECTOR, \"span[class$='-score']\").text)\n except:\n print(\"No score record found\")\n print(element.find_element(By.CSS_SELECTOR, \"span[class*='TeamName']:nth-of-type(2)\").text)\n print(\"===============================\")\n\nYou need to import below libraries.\nfrom selenium.webdriver.support import expected_conditions as EC\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\n\nOutput:\nCameroon\n0 - 0\nPanama\n===============================\nBelgium\nNo score record found\nEgypt\n===============================\nBahrain\nNo score record found\nSerbia\n===============================\nLommel\nNo score record found\nGenk U23\n===============================\nVirton\nNo score record found\nSL 16 FC\n===============================\nRozova Dolina\nNo score record found\nLudogorets Razgrad\n===============================\nNanjing City FC\n0 - 5\nShanghai Port\n===============================\nSuzhou Dongwu\n2 - 2\nWuhan Yangtze River\n===============================\nDalian Professional FC\n0 - 1\nZhejiang Professional\n===============================\nSlovacko\nNo score record found\nMlada Boleslav\n===============================\nKarvina\nNo score record found\nSlavia Prague\n===============================\nSkive\nNo score record found\nEsbjerg fB\n===============================\nPortsmouth\nNo score record found\nDerby\n===============================\nLatvia U21\nNo score record found\nEstonia U21\n===============================\nSpain U21\nNo score record found\nJapan U21\n===============================\nPortugal U21\nNo score record found\nCzech Republic U21\n===============================\nSC Weiche Flensburg\nNo score record found\nHamburger SV II\n===============================\nFCA Walldorf\nNo score record found\nBahlinger SC\n===============================\nKickers Offenbach\nNo score record found\nEintracht Trier\n===============================\nTSG Balingen\nNo score record found\nTuS RW Koblenz\n===============================\nBorussia MΓΆnchengladbach II\nNo score record found\nSchalke 04 II\n===============================\nOberhausen\nNo score record found\nFC Bocholt\n===============================\nKalamata\nNo score record found\nPanachaiki\n===============================\nAizawl FC\n0 - 1\nGokulam FC\n===============================\nNeroca FC\nNo score record found\nSudeva FC\n===============================\nEast Bengal FC\nNo score record found\nOdisha FC\n===============================\nPersik\nNo score record found\nPersita\n===============================\nPersib Bandung\nNo score record found\nBhayangkara FC\n===============================\nFajr Sepasi\n0 - 0\nShahrdari Hamedan\n===============================\nMes Shahr Babak\n0 - 0\nKhalij Fars Mahshahr\n===============================\nSaipa\n0 - 0\nArman 
Gohar\n===============================\nShahrdari Astara\n0 - 0\nEsteghlal Molasani\n===============================\nShams Azar\n0 - 0\nChooka\n===============================\nVan Pars Naqsh Jahan\nNo score record found\nOmid Vahdat Khorasan\n===============================\nEsteghlal Kh\nNo score record found\nChadormalu Ardakan SC\n===============================\nPars Jonoubi\nNo score record found\nDarya Caspian Babol\n===============================\nFC Eindhoven\nNo score record found\nTelstar\n===============================\nHelmond Sport\nNo score record found\nDe Graafschap\n===============================\nBallymena United\nNo score record found\nLinfield\n===============================\nLarne\nNo score record found\nDungannon Swifts\n===============================\nLechia Gdansk\nNo score record found\nGornik Zabrze\n===============================\nB-SAD\nNo score record found\nBoavista\n===============================\nArouca\nNo score record found\nFeirense\n===============================\nAlbirex Niigata FC\nNo score record found\nBalestier Khalsa FC\n===============================\nGranada\nNo score record found\nAlbacete\n===============================\nXamax\nNo score record found\nWil\n===============================\nBellinzona\nNo score record found\nThun\n===============================\nYeni Malatyaspor\nNo score record found\nPendikspor\n===============================\nZorya\nNo score record found\nFC Olexandriya\n===============================\nConnah's Quay Nomads\nNo score record found\nCaernarfon\n===============================\nAberystwyth\nNo score record found\nPontypridd United\n===============================\n\n" ]
[ 0 ]
[]
[]
[ "css_selectors", "python", "selenium", "webdriver", "webdriverwait" ]
stackoverflow_0074488282_css_selectors_python_selenium_webdriver_webdriverwait.txt
Q: How do I get the number `(0 + infj)` in python? I'm using Python 3.9.15 and trying to get the number (0 + infj), i.e. the imaginary part is infinite and the real part is zero. However, I tried several alternatives but all of them gave (nan + infj) instead of (0 + infj). >>> float('inf') * 1j (nan+infj) >>> float('inf') * 1j + 0 (nan+infj) >>> import numpy as np >>> np.inf * 1j (nan+infj) >>> np.inf * 1j + 0 (nan+infj) How do I get the number (0 + infj)? A: You can use built-in complex function: complex(0,float('inf'))
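The nan in the attempts above comes from full complex multiplication: (inf+0j) * (0+1j) has real part inf*0 - 0*1, and inf*0 is nan. Constructing the number with complex() sidesteps the multiplication entirely:

z = complex(0, float('inf'))
print(z)               # infj
print(z.real, z.imag)  # 0.0 inf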
How do I get the number `(0 + infj)` in python?
I'm using Python 3.9.15 and trying to get the number (0 + infj), i.e. the imaginary part is infinite and the real part is zero. However, I tried several alternatives but all of them gave (nan + infj) instead of (0 + infj). >>> float('inf') * 1j (nan+infj) >>> float('inf') * 1j + 0 (nan+infj) >>> import numpy as np >>> np.inf * 1j (nan+infj) >>> np.inf * 1j + 0 (nan+infj) How do I get the number (0 + infj)?
[ "You can use built-in complex function:\ncomplex(0,float('inf'))\n\n" ]
[ 2 ]
[]
[]
[ "complex_numbers", "python", "python_3.x" ]
stackoverflow_0074488694_complex_numbers_python_python_3.x.txt
Q: How do you make a 2d array from a text file in python and traverse it to get an average of the floats? I have to get input from a text file containing a sector and all of its sales; it has to be stored in a 2D array, and I have to be able to write an average function for the data. I tried it in Java, but I want to know how to do it in Python. A: I suggest using the Pandas library in Python. It has several handy functions, for example creating a DataFrame (a 2D array which allows all sorts of manipulations and calculations). You can install it using PIP in your CMD with the following command: python3 -m pip install pandas Your code should look something like this: import pandas as pd # Which pandas function you use depends on the file type you're working with df = pd.read_excel("sector_data.xlsx") # If there's a dividends column in your data, the following would calculate the mean. dividend_mean = df["Dividends"].mean() I suggest looking at some info about Pandas. It's a strong library. https://pandas.pydata.org/docs/user_guide/10min.html
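For a plain text file specifically, pandas.read_csv would be the matching reader; a standard-library sketch is also possible, assuming a hypothetical comma-separated layout of one sector name followed by its sales per line (sales.txt is a made-up filename):

rows = []
with open("sales.txt") as f:
    for line in f:
        parts = line.strip().split(",")
        # first field is the sector name, the rest are sales figures
        rows.append([parts[0]] + [float(x) for x in parts[1:]])

for sector, *sales in rows:
    print(sector, sum(sales) / len(sales))  # average sales per sector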
How do you make a 2d array from a text file in python and traverse it to get an average of the floats?
I have to get input from a text file containing a sector and all of its sales; it has to be stored in a 2D array, and I have to be able to write an average function for the data. I tried it in Java, but I want to know how to do it in Python.
[ "I suggest using the Pandas library in Python. It has several handy functions for example creating a DataFrame (a 2D array which allows all sorts of manipulations and calculations).\nYou can install it using PIP in your CMD with the following command:\npython3 -m pip install pandas\n\nYour code should look something like this:\nimport pandas as pd\n\n# Which pandas function you use depends on the file type you're working with\ndf = pd.read_excel(\"sector_data.xlsx\")\n\n# If there's a dividends column in your data, the following would calculate the mean.\ndividend_mean = df[\"Dividends\"].mean()\n\nI suggest looking at some info about Pandas. It's a strong library.\nhttps://pandas.pydata.org/docs/user_guide/10min.html\n" ]
[ 0 ]
[]
[]
[ "2d", "arrays", "python" ]
stackoverflow_0074488400_2d_arrays_python.txt
Q: overwriting the column rows basis the condition Existing Dataframe : Id condition1 condition2 score A attempt pass 0 A attempt fail 0 B attempt pass 0 B attempt level_1 0 B attempt fail 0 C attempt fail 0 D attempt fail 0 Expected Dataframe : Id condition1 condition2 score A attempt pass 1 A attempt fail 1 B attempt pass 1 B attempt level_1 1 B attempt fail 1 C attempt fail 0 D attempt fail 0 I am looking to tag the score in every row of a unique Id as 1 if, in any row of that Id, the condition below is satisfied: condition1 == 'attempt' & condition2 == 'pass'. A: You can try: m1 = df['condition1'].eq('attempt') m2 = df['condition2'].eq('pass') | df['condition2'].eq('level_1') df['score'] = (m1 & m2) df['score'] = df.groupby('Id')['score'].transform(lambda x: x.any().astype(int)) Id condition1 condition2 score 0 A attempt pass 1 1 A attempt fail 1 2 B attempt pass 1 3 B attempt level_1 1 4 B attempt fail 1 5 C attempt fail 0 6 D attempt fail 0
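If only the attempt/pass pair from the question matters (i.e. without the level_1 check the answer adds), the same idea compresses to one grouped transform; a sketch assuming df is the frame shown above:

mask = df['condition1'].eq('attempt') & df['condition2'].eq('pass')
# any() per Id: every row of an Id gets 1 if that Id has at least one matching row
df['score'] = mask.groupby(df['Id']).transform('any').astype(int)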
Overwriting the column rows based on the condition
Existing Dataframe: Id condition1 condition2 score A attempt pass 0 A attempt fail 0 B attempt pass 0 B attempt level_1 0 B attempt fail 0 C attempt fail 0 D attempt fail 0 Expected Dataframe: Id condition1 condition2 score A attempt pass 1 A attempt fail 1 B attempt pass 1 B attempt level_1 1 B attempt fail 1 C attempt fail 0 D attempt fail 0 I am looking to tag the score in every row of a unique Id as 1 if any row of that Id satisfies the condition: condition1 == 'attempt' & condition2 == 'pass'.
[ "You can try:\nm1 = df['condition1'].eq('attempt')\nm2 = df['condition2'].eq('pass') | df['condition2'].eq('level_1')\n\ndf['score'] = (m1 & m2)\ndf['score'] = df.groupby('Id')['score'].transform(lambda x: x.any().astype(int))\n\n Id condition1 condition2 score\n0 A attempt pass 1\n1 A attempt fail 1\n2 B attempt pass 1\n3 B attempt level_1 1\n4 B attempt fail 1\n5 C attempt fail 0\n6 D attempt fail 0\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074488713_dataframe_pandas_python.txt
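The transform(lambda ...) step can be replaced by a built-in groupby reduction, which avoids the Python-level callable. A sketch of the same idea (using only the 'pass' condition the question states):

import pandas as pd

m = df["condition1"].eq("attempt") & df["condition2"].eq("pass")
df["score"] = m.groupby(df["Id"]).transform("any").astype(int)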
Q: Copy a non-empty folder in python How do I copy a non-empty folder in Python? I want something like the Unix command "cp -r". For example, with sourceFolder = /tmp/Folder1/file1.txt and destinationFolder = /tmp/Folder2/, after copying the destination should look like "/tmp/Folder2/Folder1/file1.txt". I tried to copy with shutil.copytree('/tmp/Folder1/file.txt', '/tmp/Folder2/')
Copy a non-empty folder in python
How do I copy a non-empty folder in Python? I want something like the Unix command "cp -r". For example, with sourceFolder = /tmp/Folder1/file1.txt and destinationFolder = /tmp/Folder2/, after copying the destination should look like "/tmp/Folder2/Folder1/file1.txt". I tried to copy with shutil.copytree('/tmp/Folder1/file.txt', '/tmp/Folder2/')
[]
[]
[ "import os\n# Copy Created Feature folder and file into \n\n\ncoping = os.system('cp -rf /tmp/'+folder1+' /tmp/'+folder2+'/')\n\n" ]
[ -2 ]
[ "list", "python", "shutil" ]
stackoverflow_0074488293_list_python_shutil.txt
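The question's shutil.copytree call fails because it is given a file, not a directory. To mirror cp -r, pass the source directory and name the destination after it — a sketch, assuming Python 3.8+ for dirs_exist_ok:

import os
import shutil

src = "/tmp/Folder1"
dst = os.path.join("/tmp/Folder2", os.path.basename(src))  # /tmp/Folder2/Folder1
shutil.copytree(src, dst, dirs_exist_ok=True)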
Q: blosc.MAX_BUFFERSIZE error while trying to guess if a dask dataframe is empty I want to perform a test on the emptiness of a dask dataframe. So I have this dask dataframe ddf, a local ray cluster, and dask configured to use ray as backend. I've seen here that there is no empty property and that I have to perform the following code len(ddf.index) == 0 This results in ValueError: bytesobj cannot be larger than 2147483631 bytes, triggered by the following code (located in blosc.toplevel) def _check_input_length(input_name, input_len): if input_len > blosc.MAX_BUFFERSIZE: raise ValueError("%s cannot be larger than %d bytes" % (input_name, blosc.MAX_BUFFERSIZE)) I have tried to get just one element out of the index, which will obviously answer the question that it's not empty, but this causes the same error to be triggered. a = ddf.index.tail(1) b = ddf.index.head(1) Why I am having this error? How I could achieve my initial goal? A: The issue was that the number of partitions on the dask dataframe was 1. I've used ddf.repartition(npartitions=32) to solve my issue. partition_size="100MB" is the recommended way to go.
blosc.MAX_BUFFERSIZE error while trying to guess if a dask dataframe is empty
I want to perform a test on the emptiness of a dask dataframe. So I have this dask dataframe ddf, a local ray cluster, and dask configured to use ray as backend. I've seen here that there is no empty property and that I have to perform the following code len(ddf.index) == 0 This results in ValueError: bytesobj cannot be larger than 2147483631 bytes, triggered by the following code (located in blosc.toplevel) def _check_input_length(input_name, input_len): if input_len > blosc.MAX_BUFFERSIZE: raise ValueError("%s cannot be larger than %d bytes" % (input_name, blosc.MAX_BUFFERSIZE)) I have tried to get just one element out of the index, which will obviously answer the question that it's not empty, but this causes the same error to be triggered. a = ddf.index.tail(1) b = ddf.index.head(1) Why I am having this error? How I could achieve my initial goal?
[ "The issue was that the number of partitions on the dask dataframe was 1.\nI've used ddf.repartition(npartitions=32) to solve my issue.\npartition_size=\"100MB\" is the recommended way to go.\n" ]
[ 0 ]
[]
[]
[ "blosc", "dask", "pandas", "python", "ray" ]
stackoverflow_0074319762_blosc_dask_pandas_python_ray.txt
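A minimal sketch of the fix the answer describes — repartitioning so no single partition exceeds blosc's ~2 GiB buffer limit before the length check runs:

ddf = ddf.repartition(partition_size="100MB")
print(len(ddf.index) == 0)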
Q: How to change ttk button background and foreground when hover it in tkinter I'm trying to change ttk.tkinter button background to black and foreground colour to white when mouse is hover it. Have tried highlightbackground and activebackground but doesn't yield the result I'm looking for. import tkinter as tk import tkinter.ttk as ttk root = tk.Tk() style = ttk.Style(root) #style.theme_use("clam") style.configure('TButton', foreground="black", highlightthickness=5, highlightbackground='#3E4149', highlightforeground="white", activebackground="black") btr = ttk.Button(root, text="TEST BUTTON") btr.pack() root.mainloop() A: ttk Button appearances are driven by themes (3D/Color-alt/classic/default, Color-clam). Not setting/others leaves buttons flat/grey and settings don't change things. To make a ttk TButton change colors can be achieved using map. 3D appearance requires borderwidth.Only Classic forms an outer ring using highlight. Similar answer see: Python: Changing ttk button color depending on current color? import tkinter as tk import tkinter.ttk as ttk root = tk.Tk() style = ttk.Style() style.theme_use("classic") style.map("C.TButton", foreground=[('!active', 'black'),('pressed', 'red'), ('active', 'white')], background=[ ('!active','grey75'),('pressed', 'green'), ('active', 'black')] ) btr = ttk.Button(root, text="TEST BUTTON", style="C.TButton") btr.grid(column=0,row=0,sticky='nsew'); root.mainloop() A: Try using the map function with your style, as described here: https://docs.python.org/3/library/tkinter.ttk.html import tkinter as tk import tkinter.ttk as ttk root = tk.Tk() style = ttk.Style(root) #style.theme_use("clam") style.map("C.TButton", foreground=[('pressed', 'red'), ('active', 'blue')], background=[('pressed', '!disabled', 'black'), ('active', 'white')] ) btr = ttk.Button(root, text="TEST BUTTON", style="C.TButton") btr.pack() root.mainloop() Register the style map with the button. I hope this helps. A: you have to try this I was having this problem before I learned this Code import tkinter from tkinter import ttk from tkinter import * import tkinter.ttk f=Tk() style = ttk.Style() style.configure("BW.TLabel", foreground="blue", background="red") l1 = ttk.Label(f,text="Test", style="BW.TLabel") l2 = ttk.Label(f,text="Test", style="BW.TLabel") l1.pack() l2.pack() f.mainloop( you must see this documentation in python.org website [[it will learn you a lot of things like that I wrote1]1
How to change a ttk button's background and foreground when hovering over it in tkinter
I'm trying to change a ttk.tkinter button's background to black and its foreground colour to white when the mouse hovers over it. I have tried highlightbackground and activebackground, but neither yields the result I'm looking for. import tkinter as tk import tkinter.ttk as ttk root = tk.Tk() style = ttk.Style(root) #style.theme_use("clam") style.configure('TButton', foreground="black", highlightthickness=5, highlightbackground='#3E4149', highlightforeground="white", activebackground="black") btr = ttk.Button(root, text="TEST BUTTON") btr.pack() root.mainloop()
[ "ttk Button appearances are driven by themes (3D/Color-alt/classic/default, Color-clam). Not setting/others leaves buttons flat/grey and settings don't change things.\nTo make a ttk TButton change colors can be achieved using map. 3D appearance requires borderwidth.Only Classic forms an outer ring using highlight.\nSimilar answer see: Python: Changing ttk button color depending on current color?\nimport tkinter as tk\nimport tkinter.ttk as ttk\nroot = tk.Tk()\nstyle = ttk.Style()\nstyle.theme_use(\"classic\")\n\nstyle.map(\"C.TButton\",\n foreground=[('!active', 'black'),('pressed', 'red'), ('active', 'white')],\n background=[ ('!active','grey75'),('pressed', 'green'), ('active', 'black')]\n )\nbtr = ttk.Button(root, text=\"TEST BUTTON\", style=\"C.TButton\")\nbtr.grid(column=0,row=0,sticky='nsew');\nroot.mainloop()\n\n", "Try using the map function with your style, as described here:\nhttps://docs.python.org/3/library/tkinter.ttk.html\nimport tkinter as tk\nimport tkinter.ttk as ttk\n\n\nroot = tk.Tk()\n\nstyle = ttk.Style(root)\n#style.theme_use(\"clam\")\n\n\nstyle.map(\"C.TButton\",\n foreground=[('pressed', 'red'), ('active', 'blue')],\n background=[('pressed', '!disabled', 'black'), ('active', 'white')]\n )\n\nbtr = ttk.Button(root, text=\"TEST BUTTON\", style=\"C.TButton\")\nbtr.pack()\n\nroot.mainloop()\n\nRegister the style map with the button.\nI hope this helps.\n", "you have to try this I was having this problem before I learned this Code\nimport tkinter \nfrom tkinter import ttk\nfrom tkinter import *\nimport tkinter.ttk\n\n\nf=Tk()\n\nstyle = ttk.Style()\nstyle.configure(\"BW.TLabel\", foreground=\"blue\", \nbackground=\"red\")\n\nl1 = ttk.Label(f,text=\"Test\", style=\"BW.TLabel\")\nl2 = ttk.Label(f,text=\"Test\", style=\"BW.TLabel\")\nl1.pack()\nl2.pack()\nf.mainloop(\n\nyou must see this documentation in python.org website\n[[it will learn you a lot of things like that I wrote1]1\n" ]
[ 8, 2, 0 ]
[]
[]
[ "python", "tkinter", "ttk" ]
stackoverflow_0057186536_python_tkinter_ttk.txt
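An alternative to state maps is to rebind the style's colors on explicit <Enter>/<Leave> events. A sketch, assuming the clam theme (which honors background settings):

import tkinter as tk
import tkinter.ttk as ttk

root = tk.Tk()
style = ttk.Style(root)
style.theme_use("clam")
style.configure("Hover.TButton", background="grey75", foreground="black")

btn = ttk.Button(root, text="TEST BUTTON", style="Hover.TButton")
btn.pack()

# Swap the style's colors whenever the pointer enters or leaves the button
btn.bind("<Enter>", lambda e: style.configure("Hover.TButton", background="black", foreground="white"))
btn.bind("<Leave>", lambda e: style.configure("Hover.TButton", background="grey75", foreground="black"))
root.mainloop()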
Q: Python - Tell if there is a non consecutive date in pandas dataframe I have a pandas data frame with dates. I need to know if every other date pair is consecutive. 2 1988-01-01 3 2015-01-31 4 2015-02-01 5 2015-05-31 6 2015-06-01 7 2021-11-16 11 2021-11-17 12 2022-10-05 8 2022-10-06 9 2022-10-12 10 2022-10-13 # How to build this example dataframe df=pd.DataFrame({'date':pd.to_datetime(['1988-01-01','2015-01-31','2015-02-01', '2015-05-31','2015-06-01', '2021-11-16', '2021-11-17', '2022-10-05', '2022-10-06', '2022-10-12', '2022-10-13'])}) Each pair should be consecutive. I have tried different sorting but everything I see relates to the entire series being consecutive. I need to compare each pair of dates after the first date. cb_gap = cb_sorted.sort_values('dates').groupby('dates').diff() > pd.to_timedelta('1 day') What I need to see is this... 2 1988-01-01 <- Ignore the start date 3 2015-01-31 <- these dates have no gap 4 2015-02-01 5 2015-05-31 <- these dates have no gap 6 2015-06-01 7 2021-11-16 <- these have a gap!!!! 11 2021-11-18 12 2022-10-05 <- these have no gap 8 2022-10-06 9 2022-10-12 A: here is one way to do it btw, what is your expected output? the answer get you the difference b/w the consecutive dates skipping the first row and populate diff column # make date into datetime df['date'] = pd.to_datetime(df['date']) # create two intermediate DF skipping the first and taking alternate values # and concat them along x-axis df2=pd.concat([df.iloc[1:].iloc[::2].reset_index()[['id','date']], df.iloc[2:].iloc[::2].reset_index()[['id','date']] ],axis=1 ) # take the difference of second date from the first one df2['diff']=df2.iloc[:,3]-df2.iloc[:,1] df2 id date id date diff 0 3 2015-01-31 4 2015-02-01 1 days 1 5 2015-05-31 6 2015-06-01 1 days 2 7 2021-11-16 11 2021-11-17 1 days 3 12 2022-10-05 8 2022-10-06 1 days 4 9 2022-10-12 10 2022-10-13 1 days A: One way is to use shift and compute differences. pd.DataFrame({'date':df.date,'diff':df.date.shift(-1)-df.date})[1::2] returns date diff 1 2015-01-31 1 days 3 2015-05-31 1 days 5 2021-11-16 1 days 7 2022-10-05 1 days 9 2022-10-12 1 days It is also faster Method Timeit Naveed's 4.23 ms This one 0.93 ms
Python - Tell if there is a non-consecutive date in a pandas dataframe
I have a pandas data frame with dates. I need to know if every other date pair is consecutive. 2 1988-01-01 3 2015-01-31 4 2015-02-01 5 2015-05-31 6 2015-06-01 7 2021-11-16 11 2021-11-17 12 2022-10-05 8 2022-10-06 9 2022-10-12 10 2022-10-13 # How to build this example dataframe df=pd.DataFrame({'date':pd.to_datetime(['1988-01-01','2015-01-31','2015-02-01', '2015-05-31','2015-06-01', '2021-11-16', '2021-11-17', '2022-10-05', '2022-10-06', '2022-10-12', '2022-10-13'])}) Each pair should be consecutive. I have tried different sorting but everything I see relates to the entire series being consecutive. I need to compare each pair of dates after the first date. cb_gap = cb_sorted.sort_values('dates').groupby('dates').diff() > pd.to_timedelta('1 day') What I need to see is this... 2 1988-01-01 <- Ignore the start date 3 2015-01-31 <- these dates have no gap 4 2015-02-01 5 2015-05-31 <- these dates have no gap 6 2015-06-01 7 2021-11-16 <- these have a gap!!!! 11 2021-11-18 12 2022-10-05 <- these have no gap 8 2022-10-06 9 2022-10-12
[ "here is one way to do it\nbtw, what is your expected output? the answer get you the difference b/w the consecutive dates skipping the first row and populate diff column\n# make date into datetime\ndf['date'] = pd.to_datetime(df['date'])\n\n# create two intermediate DF skipping the first and taking alternate values\n# and concat them along x-axis\ndf2=pd.concat([df.iloc[1:].iloc[::2].reset_index()[['id','date']],\n df.iloc[2:].iloc[::2].reset_index()[['id','date']]\n ],axis=1 )\n\n# take the difference of second date from the first one\ndf2['diff']=df2.iloc[:,3]-df2.iloc[:,1]\ndf2\n\n\n id date id date diff\n0 3 2015-01-31 4 2015-02-01 1 days\n1 5 2015-05-31 6 2015-06-01 1 days\n2 7 2021-11-16 11 2021-11-17 1 days\n3 12 2022-10-05 8 2022-10-06 1 days\n4 9 2022-10-12 10 2022-10-13 1 days\n\n", "One way is to use shift and compute differences.\npd.DataFrame({'date':df.date,'diff':df.date.shift(-1)-df.date})[1::2]\n\nreturns\n date diff\n1 2015-01-31 1 days\n3 2015-05-31 1 days\n5 2021-11-16 1 days\n7 2022-10-05 1 days\n9 2022-10-12 1 days\n\nIt is also faster\n\n\n\n\nMethod\nTimeit\n\n\n\n\nNaveed's\n4.23 ms\n\n\nThis one\n0.93 ms\n\n\n\n" ]
[ 1, 0 ]
[]
[]
[ "dataframe", "date", "pandas", "python" ]
stackoverflow_0074483104_dataframe_date_pandas_python.txt
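The shift-based answer can be turned into the boolean test the question asks for — a sketch that flags any pair whose dates are not exactly one day apart:

import pandas as pd

diffs = df["date"].shift(-1) - df["date"]
has_gap = diffs[1::2] != pd.Timedelta(days=1)
print(has_gap.any())   # True if any pair is non-consecutive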
Q: Solving system of nonlinear complex equations in Python I'm trying to solve a problem with 8 unknowns and 8 complex equations. I've tried to use fsolve but I get the error message: error: Result from function call is not a proper array of floats. From what I've now read fsolve doesn't support complex equations and hence my questions, how would I solve systems of complex non-linear equations in Python? PS: I've seen the suggestion to split my problem up into imaginary and real part and use fsolve on those separately but that is too cumbersome. This is the relevant snippet of my code: A=1 def equations(p): B,C,D,F,G,H,I,J = p return ( A+B-C-D, 1j*k0*(A-B) -1j*k1*(C-D), B*exp(1j*k1*a1) + D*exp(-1j*k1*a1) - F*exp(1j*k0*a1) - G*exp(-1j*k0*a1), 1j*k1* ( C*exp(1j*k1*a1) - D*exp(-1j*k1*a1) ) - 1j*k0*( F*exp(1j*k0*a1) - G*exp(-1j*k0*a1) ), F*exp(1j*k0*a1L) + G*exp(-1j*k0*a1L) - H*exp(-k2*a1L) - I*exp(k2*a1L), 1j*k0*( F*exp(1j*k0*a1L) - G*exp(-1j*k0*a1L) )- k2*( -H*exp(-k2*a1L) + I*exp(k2*a1L) ), H*exp(-k2*a12L) + I*exp(k2*a12L) - J*exp(1j*k0*a12L), k2*( -H*exp(-k2*a12L) + I*exp(k2*a12L) ) - 1j*k0*J*exp(1j*k0*a12L) ) B, C, D, F, G, H, I, J = fsolve(equations, (-1,-1,-1,-1,-1,-1,-1,-1)) A: Algorithm in which the code below was written algorithm for Newton’s Method for Systems # Author : Carlos Eduardo da Silva Lima # Theme : Newton’s Method for Systems (real or complex) # Language: Python # IDE : Google Colab # Data : 18/11/2022 ###################################################################### # This Part contains the imports of packages needed for this project # ###################################################################### import numpy as np import matplotlib.pyplot as plt from numpy.linalg import inv, norm, multi_dot from scipy.optimize import fsolve ###################### # Enter problem data # ###################### x0 = np.array([1.0+0.0j, 1.0+0.0j, 1.0+0.0j]) # Initial guess for the possible root of the set of nonlinear equations entered in F(x) TOL = 1e-49 # Stipulated minimum tolerance N = 10 # Number of required maximum iterations ############################ # Newton-Raphson algorithm # ############################ def newtonRapshonSistem(F,J,x0,TOL,N): x = x0 # First kick k = 1 while(k<=N): # Ccalculation of the product between the inverse of the Jacobian matrix (J(x)^(-1)) and the vector F(x) (T a transpose) y = -((inv(J(x)))@(F(x).T)) x += (y.T) # absolute value norm erro_abs = np.linalg.norm(np.abs(y)) if erro_abs<TOL: break k += 1 # Exit # print(f"Number of iterations: {k}") print(f"Absolute error: {erro_abs}\n") print("\nSoluΓ§Γ£o\n") for l in range(0,np.size(x),1): print(f"x[{l}] = {x[l]:.4}\n") # print(f'x[{l}] = {np.real(x[l]):.4} + {np.imag(x[l]):.4}j') return x ################# # Function F(x) # ################# def F(x): # definition of variables (Arrays) x1,x2,x3 = x # definition of the set of nonlinear equations f1 = x1+x2-10000 f2 = x1*np.exp(-1j*x3*5) + x2*np.exp(1j*x3*5) - 12000 f3 = x1*np.exp(-1j*x3*10) + x2*np.exp(1j*x3*10) - 8000 return np.array([f1, f2, f3], dtype=np.complex128) ############ # Jacobian # ############ def J(x): # definition of variables (Arrays) x1,x2,x3 = x # Jacobean matrix elements df1_dx1 = 1 df1_dx2 = 1 df1_dx3 = 0 df2_dx1 = np.exp(-1j*x3*5) df2_dx2 = np.exp(1j*x3*5) df2_dx3 = x1*(-1j*5)*np.exp(-1j*x3*5)+x2*(1j*5)*np.exp(1j*x3*5) df3_dx1 = np.exp(-1j*x3*10) df3_dx2 = np.exp(1j*x3*10) df3_dx3 = x1*(-1j*10)*np.exp(-1j*x3*10) + x2*(1j*10)*np.exp(1j*x3*10) matriz_jacobiana = np.array([ [df1_dx1, df1_dx2, df1_dx3], [df2_dx1, df2_dx2, 
df2_dx3], [df3_dx1, df3_dx2, df3_dx3]], dtype=np.complex128) return matriz_jacobiana # Calculate the roots s = newtonRapshonSistem(F,J,x0,TOL,N) # Application of the result obtained in x in F. F(s) Finally! If you don't agree, or if you find any errors, please let me know. Above all, I hope this helps you and the community. Until next time :-).
Solving system of nonlinear complex equations in Python
I'm trying to solve a problem with 8 unknowns and 8 complex equations. I've tried to use fsolve but I get the error message: error: Result from function call is not a proper array of floats. From what I've now read fsolve doesn't support complex equations and hence my questions, how would I solve systems of complex non-linear equations in Python? PS: I've seen the suggestion to split my problem up into imaginary and real part and use fsolve on those separately but that is too cumbersome. This is the relevant snippet of my code: A=1 def equations(p): B,C,D,F,G,H,I,J = p return ( A+B-C-D, 1j*k0*(A-B) -1j*k1*(C-D), B*exp(1j*k1*a1) + D*exp(-1j*k1*a1) - F*exp(1j*k0*a1) - G*exp(-1j*k0*a1), 1j*k1* ( C*exp(1j*k1*a1) - D*exp(-1j*k1*a1) ) - 1j*k0*( F*exp(1j*k0*a1) - G*exp(-1j*k0*a1) ), F*exp(1j*k0*a1L) + G*exp(-1j*k0*a1L) - H*exp(-k2*a1L) - I*exp(k2*a1L), 1j*k0*( F*exp(1j*k0*a1L) - G*exp(-1j*k0*a1L) )- k2*( -H*exp(-k2*a1L) + I*exp(k2*a1L) ), H*exp(-k2*a12L) + I*exp(k2*a12L) - J*exp(1j*k0*a12L), k2*( -H*exp(-k2*a12L) + I*exp(k2*a12L) ) - 1j*k0*J*exp(1j*k0*a12L) ) B, C, D, F, G, H, I, J = fsolve(equations, (-1,-1,-1,-1,-1,-1,-1,-1))
[ "Algorithm in which the code below was written\nalgorithm for Newton’s Method for Systems\n\n# Author : Carlos Eduardo da Silva Lima\n# Theme : Newton’s Method for Systems (real or complex)\n# Language: Python\n# IDE : Google Colab\n# Data : 18/11/2022\n\n######################################################################\n# This Part contains the imports of packages needed for this project #\n######################################################################\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom numpy.linalg import inv, norm, multi_dot\nfrom scipy.optimize import fsolve\n\n######################\n# Enter problem data #\n######################\nx0 = np.array([1.0+0.0j, 1.0+0.0j, 1.0+0.0j]) # Initial guess for the possible root of the set of nonlinear equations entered in F(x)\nTOL = 1e-49 # Stipulated minimum tolerance\nN = 10 # Number of required maximum iterations\n\n############################\n# Newton-Raphson algorithm #\n############################\ndef newtonRapshonSistem(F,J,x0,TOL,N):\n x = x0 # First kick\n k = 1\n while(k<=N):\n\n # Ccalculation of the product between the inverse of the Jacobian matrix (J(x)^(-1)) and the vector F(x) (T a transpose)\n y = -((inv(J(x)))@(F(x).T))\n x += (y.T)\n\n # absolute value norm\n erro_abs = np.linalg.norm(np.abs(y))\n if erro_abs<TOL:\n break\n k += 1\n\n # Exit #\n print(f\"Number of iterations: {k}\")\n print(f\"Absolute error: {erro_abs}\\n\")\n print(\"\\nSoluΓ§Γ£o\\n\")\n for l in range(0,np.size(x),1):\n print(f\"x[{l}] = {x[l]:.4}\\n\")\n # print(f'x[{l}] = {np.real(x[l]):.4} + {np.imag(x[l]):.4}j')\n return x\n \n\n#################\n# Function F(x) #\n#################\ndef F(x):\n # definition of variables (Arrays)\n x1,x2,x3 = x\n\n # definition of the set of nonlinear equations\n f1 = x1+x2-10000\n f2 = x1*np.exp(-1j*x3*5) + x2*np.exp(1j*x3*5) - 12000\n f3 = x1*np.exp(-1j*x3*10) + x2*np.exp(1j*x3*10) - 8000\n\n return np.array([f1, f2, f3], dtype=np.complex128)\n\n############\n# Jacobian #\n############\ndef J(x):\n # definition of variables (Arrays)\n x1,x2,x3 = x\n\n # Jacobean matrix elements\n df1_dx1 = 1\n df1_dx2 = 1\n df1_dx3 = 0\n\n df2_dx1 = np.exp(-1j*x3*5)\n df2_dx2 = np.exp(1j*x3*5)\n df2_dx3 = x1*(-1j*5)*np.exp(-1j*x3*5)+x2*(1j*5)*np.exp(1j*x3*5)\n\n df3_dx1 = np.exp(-1j*x3*10)\n df3_dx2 = np.exp(1j*x3*10)\n df3_dx3 = x1*(-1j*10)*np.exp(-1j*x3*10) + x2*(1j*10)*np.exp(1j*x3*10)\n\n matriz_jacobiana = np.array([\n [df1_dx1, df1_dx2, df1_dx3], \n [df2_dx1, df2_dx2, df2_dx3], \n [df3_dx1, df3_dx2, df3_dx3]], dtype=np.complex128)\n\n return matriz_jacobiana\n\n# Calculate the roots\ns = newtonRapshonSistem(F,J,x0,TOL,N)\n\n# Application of the result obtained in x in F.\nF(s)\n\nFinally! If you don't agree, or if you find any errors, please let me know. In the most I hope to help you and the community the community. Up until :-).\n" ]
[ 0 ]
[]
[]
[ "complex_numbers", "equation_solving", "nonlinear_functions", "python" ]
stackoverflow_0058302415_complex_numbers_equation_solving_nonlinear_functions_python.txt
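The split-into-real-and-imaginary workaround the question dismisses can be automated once, so fsolve still does all the work. A sketch of a generic wrapper (the helper name is made up for illustration):

import numpy as np
from scipy.optimize import fsolve

def complex_fsolve(func, z0):
    """Wrap a complex-valued system so fsolve only ever sees real numbers."""
    n = len(z0)
    def wrapped(v):
        z = v[:n] + 1j * v[n:]
        f = np.asarray(func(z), dtype=complex)
        return np.concatenate([f.real, f.imag])
    v0 = np.concatenate([np.real(z0), np.imag(z0)])
    v = fsolve(wrapped, v0)
    return v[:n] + 1j * v[n:]

# e.g. roots = complex_fsolve(equations, -np.ones(8, dtype=complex))
# (equations must accept and return array-likes)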
Q: How to use Flask API to post two variables via Postman and run a function using them in that call? I have the following function : ` def file(DOCname,TABLEid): directory = DOCname parent_dir = "E:\\Tables\\Documents\\"+TABLEid path = os.path.join(parent_dir, directory) try: os.makedirs(path, exist_ok = True) print("Directory '%s' created successfully" % directory) except OSError as error: print("Directory '%s' can not be created" % directory) ` Now I want to use a Flask API and call this function to run with two variables that I will provide via Postman, DOCname and TABLEid, but I'm not sure how to run this at the same time I make an API call? I tried to run the file under a POST request, but nothing seems to happen. A: It can be done in the following way. The data from Postman should be sent through a form. from flask import Flask from flask import request app = Flask(__name__) @app.route("/",methods=["POST"]) def file(): dic_data = request.form DOCname= dic_data["DOCname"] TABLEid = dic_data["TABLEid"] directory = DOCname parent_dir = "E:\\Tables\\Documents\\"+TABLEid path = os.path.join(parent_dir, directory) try: os.makedirs(path, exist_ok = True) print("Directory '%s' created successfully" % directory) except OSError as error: print("Directory '%s' can not be created" % directory) A: First you'd need to define your API call, for instance whether you'd be using JSON. This example assumes JSON messaging, so that the endpoint will accept a POST request and a JSON message of: { "docname": "mydoc", "tableid": "tid0001" } If you look at the Flask quickstart guide: You can simply make a basic route which takes a POST request like so from flask import Flask, request, jsonify app = Flask(__name__) @app.route("/", methods=['POST']) def endpoint1(): my_args = request.get_json() try: success, message = file(my_args["docname"],my_args["tableid"]) except KeyError: success = False message = "Invalid arguments" return jsonify({"success": success, "msg": message}) def file(DOCname,TABLEid): directory = DOCname parent_dir = "E:\\Tables\\Documents\\"+TABLEid path = os.path.join(parent_dir, directory) try: os.makedirs(path, exist_ok = True) return (True, "Directory '%s' created successfully" % directory) except OSError as error: return (False, "Directory '%s' can not be created" % directory) and run with python -m flask --app myscriptname run While you can test with Postman, you should be able to also demo this with a basic curl command like: curl -X POST -H "Content-Type: application/json" -d '{\ "docname": "mydoc",\ "tableid": "tid001"\ }' http://localhost:5000/
How to use Flask API to post two variables via Postman and run a function using them in that call?
I have the following function : ` def file(DOCname,TABLEid): directory = DOCname parent_dir = "E:\\Tables\\Documents\\"+TABLEid path = os.path.join(parent_dir, directory) try: os.makedirs(path, exist_ok = True) print("Directory '%s' created successfully" % directory) except OSError as error: print("Directory '%s' can not be created" % directory) ` Now I want to use a Flask API and call this function to run with two variables that I will provide via Postman, DOCname and TABLEid, but I'm not sure how to run this at the same time I make an API call ? I tried to run the file under a post request but nothing seems to happen.
[ "It can be done in following way. The data from the Postman should be send\nthrough the forms .\nfrom flask import Flask\nfrom flask import request\napp = Flask(__name__)\n\n@app.route(\"/\",methods=[\"POST\"])\ndef file():\n dic_data = request.form \n DOCname= dic_data[\"DOCname\"]\n TABLEid = dic_data[\"TABLEid\"]\n\n directory = DOCname\n parent_dir = \"E:\\\\Tables\\\\Documents\\\\\"+TABLEid\n path = os.path.join(parent_dir, directory)\n\n try:\n os.makedirs(path, exist_ok = True)\n print(\"Directory '%s' created successfully\" % directory)\n except OSError as error:\n print(\"Directory '%s' can not be created\" % directory)\n\n\n", "First you'd need to define your API call. For instance whether you'd be using json. This example assumes json messaging so that the endpoint will accept a POST request and json message of:\n{ \n \"docname\": \"mydoc\",\n \"tableid\": \"tid0001\" \n}\n\nIf you look at the Flask quickstart guide:\nYou can simply make a basic route which takes a POST request like so\nfrom flask import Flask, request, jsonify\n\napp = Flask(__name__)\n\n@app.route(\"/\", methods['POST'])\ndef endpoint1():\n my_args = request.get_json()\n try:\n success, message = file(my_args[\"docname\"],my_args[\"tableid\"])\n except KeyError:\n success = False\n message = \"Invalid arguments\"\n return jsonify({\"success\": success, \"msg\": message})\n\n\ndef file(DOCname,TABLEid):\n directory = DOCname\n parent_dir = \"E:\\\\Tables\\\\Documents\\\\\"+TABLEid\n path = os.path.join(parent_dir, directory)\n\n try:\n os.makedirs(path, exist_ok = True)\n return (True, \"Directory '%s' created successfully\" % directory)\n\n except OSError as error:\n return (False, \"Directory '%s' can not be created\" % directory)\n\nand run with python -m flask --app myscriptname run\nWhile you can test with postman, you should be able to also demo this with a basic curl command like:\ncurl -X POST -H \"Content-Type: application/json\" -d '{\\\n \"docname\": \"mydoc\",\\\n \"tableid\": \"tid001\"\\\n}' http://localhost:5000/\n\n" ]
[ 0, 0 ]
[]
[]
[ "api", "flask", "python" ]
stackoverflow_0074488581_api_flask_python.txt
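A quick way to exercise either answer without Postman is the requests library; note that the first answer's view should also return something (a Flask view that returns None raises an error). A form-encoded sketch matching the request.form handler:

import requests

r = requests.post("http://localhost:5000/",
                  data={"DOCname": "mydoc", "TABLEid": "tid001"})
print(r.status_code, r.text)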
Q: .xlsx and xls(Latest Versions) to pdf using python With the help of this .doc to pdf using python Link I am trying for excel (.xlsx and xls formats) Following is modified Code for Excel: import os from win32com import client folder = "C:\\Oprance\\Excel\\XlsxWriter-0.5.1" file_type = 'xlsx' out_folder = folder + "\\PDF_excel" os.chdir(folder) if not os.path.exists(out_folder): print 'Creating output folder...' os.makedirs(out_folder) print out_folder, 'created.' else: print out_folder, 'already exists.\n' for files in os.listdir("."): if files.endswith(".xlsx"): print files print '\n\n' word = client.DispatchEx("Excel.Application") for files in os.listdir("."): if files.endswith(".xlsx") or files.endswith('xls'): out_name = files.replace(file_type, r"pdf") in_file = os.path.abspath(folder + "\\" + files) out_file = os.path.abspath(out_folder + "\\" + out_name) doc = word.Workbooks.Open(in_file) print 'Exporting', out_file doc.SaveAs(out_file, FileFormat=56) doc.Close() It is showing following error : >>> execfile('excel_to_pdf.py') Creating output folder... C:\Excel\XlsxWriter-0.5.1\PDF_excel created. apms_trial.xlsx ~$apms_trial.xlsx Exporting C:\Excel\XlsxWriter-0.5.1\PDF_excel\apms_trial.pdf Traceback (most recent call last): File "<stdin>", line 1, in <module> File "excel_to_pdf.py", line 30, in <module> doc = word.Workbooks.Open(in_file) File "<COMObject <unknown>>", line 8, in Open pywintypes.com_error: (-2147352567, 'Exception occurred.', (0, u'Microsoft Excel ', u"Excel cannot open the file '~$apms_trial.xlsx' because the file format or f ile extension is not valid. Verify that the file has not been corrupted and that the file extension matches the format of the file.", u'xlmain11.chm', 0, -21468 27284), None) >>> There is problem in doc.SaveAs(out_file, FileFormat=56) What should be FileFormat file format? Please Help A: Link of xlsxwriter : https://xlsxwriter.readthedocs.org/en/latest/contents.html With the help of this you can generate excel file with .xlsx and .xls for example excel file generated name is trial.xls Now if you want to generate pdf of that excel file then do the following : from win32com import client xlApp = client.Dispatch("Excel.Application") books = xlApp.Workbooks.Open('C:\\excel\\trial.xls') ws = books.Worksheets[0] ws.Visible = 1 ws.ExportAsFixedFormat(0, 'C:\\excel\\trial.pdf') A: I got the same thing and the same error... ANSWER: 57.... see below... from win32com import client import win32api def exceltopdf(doc): excel = client.DispatchEx("Excel.Application") excel.Visible = 0 wb = excel.Workbooks.Open(doc) ws = wb.Worksheets[1] try: wb.SaveAs('c:\\targetfolder\\result.pdf', FileFormat=57) except Exception, e: print "Failed to convert" print str(e) finally: wb.Close() excel.Quit() ... as an alternative to the fragile ExportAsFixedFormat... A: The GroupDocs.Conversion Cloud SDK for Python is another option to convert Excel to PDF. It is paid API. However, it provides 150 free monthly API calls. P.S: I'm a developer evangelist at GroupDocs. # Import module import groupdocs_conversion_cloud from shutil import copyfile # Get your client_id and client_key at https://dashboard.groupdocs.cloud (free registration is required). 
client_id = "xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx" client_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx" # Create instance of the API convert_api = groupdocs_conversion_cloud.ConvertApi.from_keys(client_id, client_key) try: #Convert PDF to PNG # Prepare request request = groupdocs_conversion_cloud.ConvertDocumentDirectRequest("pdf", "C:/Temp/Book1.xlsx") # Convert result = convert_api.convert_document_direct(request) copyfile(result, 'C:/Temp/Book1_output.pdf') print("Result {}".format(result)) except groupdocs_conversion_cloud.ApiException as e: print("Exception when calling get_supported_conversion_types: {0}".format(e.message)) A: You can print an excel sheet to pdf on linux using python. Do need to run openoffice as a headless server and use unoconv, takes a bit of configuring but is doable You run OO as a (service) daemon and use it for the conversions for xls, xlsx and doc, docx. http://dag.wiee.rs/home-made/unoconv/ A: Another solution for Is to start gotenberg docker container locally https://github.com/gotenberg/gotenberg And pass (any supported by libreoffice) file from python wia HTTP to the container and get result as pdf LIBREOFFICE_URL = 'http://localhost:3000/forms/libreoffice/convert' LIBREOFFICE_LANDSCAPE_URL = 'http://localhost:3000/forms/libreoffice/convert?landscape=1' def _retry_gotenberg(url, io_bytes, post_file_name='index.html'): response = None for _ in range(5): response = requests.post(url, files={post_file_name: io_bytes}) if response.status_code == 200: break logging.info('Will sleep and retry: %s %s', response.status_code, response.content) sleep(3) if not response or response.status_code != 200: raise RuntimeRrror(f'Bad response from doc-to-pdf: {response.status_code} {response.content}') return response def process_libreoffice(io_bytes, ext: str): if ext in ('.doc', '.docx'): url = LIBREOFFICE_URL else: url = LIBREOFFICE_LANDSCAPE_URL response = self._retry_gotenberg(url, io_bytes, post_file_name=f'file.{ext}') return response.content
.xlsx and xls(Latest Versions) to pdf using python
With the help of this .doc to pdf using python Link I am trying for excel (.xlsx and xls formats) Following is modified Code for Excel: import os from win32com import client folder = "C:\\Oprance\\Excel\\XlsxWriter-0.5.1" file_type = 'xlsx' out_folder = folder + "\\PDF_excel" os.chdir(folder) if not os.path.exists(out_folder): print 'Creating output folder...' os.makedirs(out_folder) print out_folder, 'created.' else: print out_folder, 'already exists.\n' for files in os.listdir("."): if files.endswith(".xlsx"): print files print '\n\n' word = client.DispatchEx("Excel.Application") for files in os.listdir("."): if files.endswith(".xlsx") or files.endswith('xls'): out_name = files.replace(file_type, r"pdf") in_file = os.path.abspath(folder + "\\" + files) out_file = os.path.abspath(out_folder + "\\" + out_name) doc = word.Workbooks.Open(in_file) print 'Exporting', out_file doc.SaveAs(out_file, FileFormat=56) doc.Close() It is showing following error : >>> execfile('excel_to_pdf.py') Creating output folder... C:\Excel\XlsxWriter-0.5.1\PDF_excel created. apms_trial.xlsx ~$apms_trial.xlsx Exporting C:\Excel\XlsxWriter-0.5.1\PDF_excel\apms_trial.pdf Traceback (most recent call last): File "<stdin>", line 1, in <module> File "excel_to_pdf.py", line 30, in <module> doc = word.Workbooks.Open(in_file) File "<COMObject <unknown>>", line 8, in Open pywintypes.com_error: (-2147352567, 'Exception occurred.', (0, u'Microsoft Excel ', u"Excel cannot open the file '~$apms_trial.xlsx' because the file format or f ile extension is not valid. Verify that the file has not been corrupted and that the file extension matches the format of the file.", u'xlmain11.chm', 0, -21468 27284), None) >>> There is problem in doc.SaveAs(out_file, FileFormat=56) What should be FileFormat file format? Please Help
[ "Link of xlsxwriter :\nhttps://xlsxwriter.readthedocs.org/en/latest/contents.html\nWith the help of this you can generate excel file with .xlsx and .xls\nfor example excel file generated name is trial.xls\nNow if you want to generate pdf of that excel file then do the following :\nfrom win32com import client\nxlApp = client.Dispatch(\"Excel.Application\")\nbooks = xlApp.Workbooks.Open('C:\\\\excel\\\\trial.xls')\nws = books.Worksheets[0]\nws.Visible = 1\nws.ExportAsFixedFormat(0, 'C:\\\\excel\\\\trial.pdf')\n\n", "I got the same thing and the same error... ANSWER: 57.... see below...\nfrom win32com import client\nimport win32api\n\ndef exceltopdf(doc):\n excel = client.DispatchEx(\"Excel.Application\")\n excel.Visible = 0\n\n wb = excel.Workbooks.Open(doc)\n ws = wb.Worksheets[1]\n\n try:\n wb.SaveAs('c:\\\\targetfolder\\\\result.pdf', FileFormat=57)\n except Exception, e:\n print \"Failed to convert\"\n print str(e)\n finally:\n wb.Close()\n excel.Quit()\n\n... as an alternative to the fragile ExportAsFixedFormat...\n", "The GroupDocs.Conversion Cloud SDK for Python is another option to convert Excel to PDF. It is paid API. However, it provides 150 free monthly API calls.\nP.S: I'm a developer evangelist at GroupDocs.\n# Import module\nimport groupdocs_conversion_cloud\nfrom shutil import copyfile\n\n# Get your client_id and client_key at https://dashboard.groupdocs.cloud (free registration is required).\nclient_id = \"xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx\"\nclient_key = \"xxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"\n\n# Create instance of the API\nconvert_api = groupdocs_conversion_cloud.ConvertApi.from_keys(client_id, client_key)\n\ntry:\n\n #Convert PDF to PNG\n # Prepare request\n request = groupdocs_conversion_cloud.ConvertDocumentDirectRequest(\"pdf\", \"C:/Temp/Book1.xlsx\")\n\n # Convert\n result = convert_api.convert_document_direct(request) \n copyfile(result, 'C:/Temp/Book1_output.pdf')\n print(\"Result {}\".format(result))\n \nexcept groupdocs_conversion_cloud.ApiException as e:\n print(\"Exception when calling get_supported_conversion_types: {0}\".format(e.message))\n\n\n", "You can print an excel sheet to pdf on linux using python.\nDo need to run openoffice as a headless server and use unoconv, takes a bit of configuring but is doable\nYou run OO as a (service) daemon and use it for the conversions for xls, xlsx and doc, docx.\nhttp://dag.wiee.rs/home-made/unoconv/\n", "Another solution for\nIs to start gotenberg docker container locally\nhttps://github.com/gotenberg/gotenberg\nAnd pass (any supported by libreoffice) file from python wia HTTP to the container and get result as pdf\nLIBREOFFICE_URL = 'http://localhost:3000/forms/libreoffice/convert'\nLIBREOFFICE_LANDSCAPE_URL = 'http://localhost:3000/forms/libreoffice/convert?landscape=1'\n\n\ndef _retry_gotenberg(url, io_bytes, post_file_name='index.html'):\n response = None\n for _ in range(5):\n response = requests.post(url, files={post_file_name: io_bytes})\n if response.status_code == 200:\n break\n logging.info('Will sleep and retry: %s %s', response.status_code, response.content)\n sleep(3)\n if not response or response.status_code != 200:\n raise RuntimeRrror(f'Bad response from doc-to-pdf: {response.status_code} {response.content}')\n return response\n\ndef process_libreoffice(io_bytes, ext: str):\n if ext in ('.doc', '.docx'):\n url = LIBREOFFICE_URL\n else:\n url = LIBREOFFICE_LANDSCAPE_URL\n response = self._retry_gotenberg(url, io_bytes, post_file_name=f'file.{ext}')\n return response.content\n\n" ]
[ 23, 5, 1, 0, 0 ]
[]
[]
[ "excel", "excel_2010", "pdf", "python", "win32com" ]
stackoverflow_0020854840_excel_excel_2010_pdf_python_win32com.txt
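On any platform with LibreOffice installed, the conversion can also be scripted without COM at all — a sketch shelling out to soffice in headless mode (paths are the question's; soffice must be on PATH):

import subprocess

subprocess.run([
    "soffice", "--headless", "--convert-to", "pdf",
    "--outdir", "C:\\excel", "C:\\excel\\trial.xls",
], check=True)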
Q: How to read a env variable added in script execution from GitHub workflow Execute a script (tmp.py) with workflow that has below line: os.environ["VERSION"] = "Version 1.1.1.2.2.3" print(os.system('env')) #prints all env included above one Now I need this var in workflow: - name: Run script run: python3 tmp.py - name: print env var if: always() run: | echo ${{ env.VERSION }} #Blank, expected the value "Version 1.1.1.2.2.3" It prints blank. Later I have gone through the GitHub docs and found that this syntax {{ env.APP_VERSION }} can be used only if env itself added in workflow. So how can I use this var os.environ["VERSION"] value in workflow ? Document link: https://docs.github.com/en/actions/learn-github-actions/environment-variables I have not tried Job.<job_id>.env as it seems related to JOB env. A: To save an environment variable in a step (to use it in another one), you would generally use the following syntax: echo "foo=bar" >> $GITHUB_ENV In a Python script, that would be done similarly. I'm personally using the following syntax which directly write the variable to the env file: import os env_file = os.getenv('GITHUB_ENV') bar='bar' with open(env_file, "a") as myfile: myfile.write(f"foo={bar}") The variable could then be accessed in a following step by using ${{ env.foo }}. Note that this solution works for outputs by using GITHUB_OUTPUT instead of GITHUB_ENV.
How to read a env variable added in script execution from GitHub workflow
Execute a script (tmp.py) with workflow that has below line: os.environ["VERSION"] = "Version 1.1.1.2.2.3" print(os.system('env')) #prints all env included above one Now I need this var in workflow: - name: Run script run: python3 tmp.py - name: print env var if: always() run: | echo ${{ env.VERSION }} #Blank, expected the value "Version 1.1.1.2.2.3" It prints blank. Later I have gone through the GitHub docs and found that this syntax {{ env.APP_VERSION }} can be used only if env itself added in workflow. So how can I use this var os.environ["VERSION"] value in workflow ? Document link: https://docs.github.com/en/actions/learn-github-actions/environment-variables I have not tried Job.<job_id>.env as it seems related to JOB env.
[ "To save an environment variable in a step (to use it in another one), you would generally use the following syntax:\necho \"foo=bar\" >> $GITHUB_ENV\n\nIn a Python script, that would be done similarly. I'm personally using the following syntax which directly write the variable to the env file:\nimport os\n\nenv_file = os.getenv('GITHUB_ENV')\n\nbar='bar'\n\nwith open(env_file, \"a\") as myfile:\n myfile.write(f\"foo={bar}\")\n\nThe variable could then be accessed in a following step by using ${{ env.foo }}.\nNote that this solution works for outputs by using GITHUB_OUTPUT instead of GITHUB_ENV.\n" ]
[ 0 ]
[]
[]
[ "github", "github_actions", "python" ]
stackoverflow_0074486495_github_github_actions_python.txt
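Putting the answer together for the question's exact variable — a sketch of tmp.py that makes VERSION visible to later workflow steps as ${{ env.VERSION }}:

import os

version = "Version 1.1.1.2.2.3"
env_file = os.getenv("GITHUB_ENV")  # file path injected by the runner
with open(env_file, "a") as fh:
    fh.write(f"VERSION={version}\n")  # trailing newline keeps the file well-formed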
Q: Extract a JSON string from a Text File using pyspark I have a text file with 4 fields, and the 3rd field is a JSON string that I want to extract into a separate column in a dataframe. pk,line,json,date DBG,CDL,{"line":"CDL","stn":"DBG","latitude":"12.298915","longitude":"143.846263","isInterchange":true,"isIncidentStn":false,"stnKpis":[{"code":"PCD_PCT","value":0.1,"valueCreatedTs":1667361600000,"confidence":"50.0",}]},20221102 Expected output format in dataframe: I tried the command below, but it didn't produce the expected output df=spark.read.csv("/content/sample_data/file.txt",header=True,inferSchema=True,quote='"',escape='"') spark version:2.4 python version:3.6 A: You can read the csv file using pyspark into a dataframe. df = spark.read.csv("/tmp/resources/zipcodes.csv") Then json_string = json.loads(df.iloc["json"]) A: Data df =spark.createDataFrame([('DBG','CDL',{"line":"CDL","stn":"DBG","latitude":"12.298915","longitude":"143.846263","isInterchange":'true',"isIncidentStn":'false',"stnKpis":[{"code":"PCD_PCT","value":0.1,"valueCreatedTs":1667361600000,"confidence":"50.0",}]},'20221102')], ('pk','line','json','date')) ##Every df has an underlying rdd, select column into df and send it to rdd rdd=df.select(col("json").alias("jsoncol")).rdd.map(lambda x: x.jsoncol) #Read the rdd, inferring the schema spark.read.json(rdd).show()
Extract a JSON string from a Text File using pyspark
I have a TEXT file with 4 fields and 3rd field is JSON string which I want to extract and create a separate column in dataframe. pk,line,json,date DBG,CDL,{"line":"CDL","stn":"DBG","latitude":"12.298915","longitude":"143.846263","isInterchange":true,"isIncidentStn":false,"stnKpis":[{"code":"PCD_PCT","value":0.1,"valueCreatedTs":1667361600000,"confidence":"50.0",}]},20221102 Expected output format in dataframe: I tried below command , but it didn't produce expected output df=spark.read.csv("/content/sample_data/file.txt",header=True,inferSchema=True,quote='"',escape='"') spark version:2.4 python version:3.6
[ "You can read the csv file using pyspark into a dataframe.\ndf = spark.read.csv(\"/tmp/resources/zipcodes.csv\")\nThen\njson_string = json.loads(df.iloc[\"json\"])\n", "Data\ndf =spark.createDataFrame([('DBG','CDL',{\"line\":\"CDL\",\"stn\":\"DBG\",\"latitude\":\"12.298915\",\"longitude\":\"143.846263\",\"isInterchange\":'true',\"isIncidentStn\":'false',\"stnKpis\":[{\"code\":\"PCD_PCT\",\"value\":0.1,\"valueCreatedTs\":1667361600000,\"confidence\":\"50.0\",}]},'20221102')],\n ('pk','line','json','date'))\n\n##Every df has an underlying rdd, select column into df and send it to rdd\nrdd=df.select(col(\"json\").alias(\"jsoncol\")).rdd.map(lambda x: x.jsoncol)\n\n#Read rdd\nschema spark.read.json(rdd).show()\n\n" ]
[ 0, 0 ]
[]
[]
[ "apache_spark", "json", "pyspark", "python" ]
stackoverflow_0074488160_apache_spark_json_pyspark_python.txt
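Once the file is read, the JSON column can be parsed into typed columns with from_json. A sketch with a deliberately partial schema (only a few of the fields; extend as needed):

from pyspark.sql import functions as F
from pyspark.sql import types as T

schema = T.StructType([
    T.StructField("line", T.StringType()),
    T.StructField("stn", T.StringType()),
    T.StructField("latitude", T.StringType()),
    T.StructField("longitude", T.StringType()),
])

parsed = df.withColumn("parsed", F.from_json("json", schema))
parsed.select("pk", "date", "parsed.*").show()

Note that the sample row's trailing comma inside stnKpis is not strictly valid JSON, so from_json may yield null until the source data is cleaned up.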
Q: path in FileField django I prepared my model to create a PDF filled by all the fields it includes, and I am trying to link the generated file to the pdf = models.FileField(). Although the path to the file seems to be OK, I can't reach the file through the view. models.py: class Lesson(models.Model): # fields # ... pdf = models.FileField(upload_to = 'pdfs/', default = None, blank = True) def render_template(self, request): #some magic BASE_DIR = str(Path(__file__).resolve().parent.parent) file_name = 'myfile' os.system(f"pdflatex -halt-on-error --output-directory={BASE_DIR}/media/pdfs {BASE_DIR}/media/tex/rendered/{file_name}") os.system(f"rm {BASE_DIR}/media/tex/rendered/{file_name}.tex") os.system(f"rm {BASE_DIR}/media/pdfs/{file_name}.aux") os.system(f"rm {BASE_DIR}/media/pdfs/{file_name}.log") return f"{BASE_DIR}/media/pdfs/{file_name}.pdf" views.py: def create_lesson(request): lesson = Lesson() lesson.pdf = lesson.render_template(request) lesson.save() message = { 'tag' : 'success', 'body' : "Successfully added new lesson!" } return JsonResponse({'message' : message}) But when putting <a href="{{ lesson.pdf }}">CLICK TO VIEW FILE</a> the link directs to: http://localhost:8000/docs/lessons/media/pdfs/myfile.pdf What path should be set on the pdf field to direct to media/pdf/myfile.pdf in models.py?
path in FileField django
I prepared my model to create a PDF filled by all the fields it includes, and I am trying to link the generated file to the pdf = models.FileField(). Although the path to the file seems to be OK, I can't reach the file through the view. models.py: class Lesson(models.Model): # fields # ... pdf = models.FileField(upload_to = 'pdfs/', default = None, blank = True) def render_template(self, request): #some magic BASE_DIR = str(Path(__file__).resolve().parent.parent) file_name = 'myfile' os.system(f"pdflatex -halt-on-error --output-directory={BASE_DIR}/media/pdfs {BASE_DIR}/media/tex/rendered/{file_name}") os.system(f"rm {BASE_DIR}/media/tex/rendered/{file_name}.tex") os.system(f"rm {BASE_DIR}/media/pdfs/{file_name}.aux") os.system(f"rm {BASE_DIR}/media/pdfs/{file_name}.log") return f"{BASE_DIR}/media/pdfs/{file_name}.pdf" views.py: def create_lesson(request): lesson = Lesson() lesson.pdf = lesson.render_template(request) lesson.save() message = { 'tag' : 'success', 'body' : "Successfully added new lesson!" } return JsonResponse({'message' : message}) But when putting <a href="{{ lesson.pdf }}">CLICK TO VIEW FILE</a> the link directs to: http://localhost:8000/docs/lessons/media/pdfs/myfile.pdf What path should be set on the pdf field to direct to media/pdf/myfile.pdf in models.py?
[ "Just use it like this:\n<a href=\"/media/{{ lesson.pdf }}\">CLICK TO VIEW FILE</a>\n\nAfter using the above code, you will be redirected to this path media/pdf/myfile.pdf\n" ]
[ 3 ]
[]
[]
[ "django", "filefield", "python" ]
stackoverflow_0074488939_django_filefield_python.txt
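The more conventional fix is to store a path relative to MEDIA_ROOT in the FileField and let Django build the link — a sketch, assuming MEDIA_URL = '/media/' and MEDIA_ROOT point at the media directory (and that media files are served in development):

# models.py: end of render_template — return a path relative to MEDIA_ROOT
return f"pdfs/{file_name}.pdf"

Then the template can use <a href="{{ lesson.pdf.url }}">CLICK TO VIEW FILE</a>, which resolves to /media/pdfs/myfile.pdf.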
Q: Using Regex to combine two lines I would like to use regex to combine two lines. If the first line has only one word and is followed by one \n , then combine it with next line. The first line sometimes may have a word and a comma , or a word with hyphen - My text looks like this: import re text = ''' Critical Accounting Policies and Estimates Review, Approval or Ratification of Transactions with Related Persons Audit-Related Fees are fees for assurance and related services by the principal accountant that are traditionally performed by the principal accountant and which are reasonably related to the performance of the audit or review of the registrant s financial statements and fees attributed to the audit of Guskin Gold Corporation, our wholly owned subsidiary. Effective risk oversight is an important priority of the Board of Directors. Because risks are considered in virtually every business decision, the Board of Directors discusses risk throughout the year generally or in connection with specific proposed actions. The Board of Directors approach to risk oversight includes understanding the critical risks in the Company s business and strategy, evaluating the Company s risk management processes, allocating responsibilities for risk oversight among the full Board of Directors, and fostering an appropriate culture of integrity and compliance with legal responsibilities. Corporate Governance The Company promotes accountability for adherence to honest and ethical conduct; endeavors to provide full, fair, accurate, timely and understandable disclosure in reports and documents that the Company files with the SEC and in other public communications made by the Company; and strives to be compliant with applicable governmental laws, rules and regulations. The Company has not formally adopted a written code of business conduct and ethics that governs the Company s employees, officers and Directors as the Company is not required to do so. ''' combine = re.sub(r'((?=[A-Za-z,-])\n(?=[a-zA-Z]))', ' ', text) print(combine) I tried to use following code to combine them, but it didn't work. combine = re.sub(r'((?=[A-Za-z,-])\n(?=[a-zA-Z]))', ' ', text) I hope it looks like this finally: text = ''' Critical Accounting Policies and Estimates Review, Approval or Ratification of Transactions with Related Persons Audit-Related Fees are fees for assurance and related services by the principal accountant that are traditionally performed by the principal accountant and which are reasonably related to the performance of the audit or review of the registrant s financial statements and fees attributed to the audit of Guskin Gold Corporation, our wholly owned subsidiary. Effective risk oversight is an important priority of the Board of Directors. Because risks are considered in virtually every business decision, the Board of Directors discusses risk throughout the year generally or in connection with specific proposed actions. The Board of Directors approach to risk oversight includes understanding the critical risks in the Company s business and strategy, evaluating the Company s risk management processes, allocating responsibilities for risk oversight among the full Board of Directors, and fostering an appropriate culture of integrity and compliance with legal responsibilities. 
Corporate Governance The Company promotes accountability for adherence to honest and ethical conduct; endeavors to provide full, fair, accurate, timely and understandable disclosure in reports and documents that the Company files with the SEC and in other public communications made by the Company; and strives to be compliant with applicable governmental laws, rules and regulations. The Company has not formally adopted a written code of business conduct and ethics that governs the Company s employees, officers and Directors as the Company is not required to do so. ''' How could I write the code to combine them? Thanks! A: Thanks for Wiktor's comment! The code should be combine = re.sub(r'((?<=[A-Za-z,-])\n(?=[a-zA-Z]))', ' ', text)
Using Regex to combine two lines
I would like to use regex to combine two lines. If the first line has only one word and is followed by one \n , then combine it with next line. The first line sometimes may have a word and a comma , or a word with hyphen - My text looks like this: import re text = ''' Critical Accounting Policies and Estimates Review, Approval or Ratification of Transactions with Related Persons Audit-Related Fees are fees for assurance and related services by the principal accountant that are traditionally performed by the principal accountant and which are reasonably related to the performance of the audit or review of the registrant s financial statements and fees attributed to the audit of Guskin Gold Corporation, our wholly owned subsidiary. Effective risk oversight is an important priority of the Board of Directors. Because risks are considered in virtually every business decision, the Board of Directors discusses risk throughout the year generally or in connection with specific proposed actions. The Board of Directors approach to risk oversight includes understanding the critical risks in the Company s business and strategy, evaluating the Company s risk management processes, allocating responsibilities for risk oversight among the full Board of Directors, and fostering an appropriate culture of integrity and compliance with legal responsibilities. Corporate Governance The Company promotes accountability for adherence to honest and ethical conduct; endeavors to provide full, fair, accurate, timely and understandable disclosure in reports and documents that the Company files with the SEC and in other public communications made by the Company; and strives to be compliant with applicable governmental laws, rules and regulations. The Company has not formally adopted a written code of business conduct and ethics that governs the Company s employees, officers and Directors as the Company is not required to do so. ''' combine = re.sub(r'((?=[A-Za-z,-])\n(?=[a-zA-Z]))', ' ', text) print(combine) I tried to use following code to combine them, but it didn't work. combine = re.sub(r'((?=[A-Za-z,-])\n(?=[a-zA-Z]))', ' ', text) I hope it looks like this finally: text = ''' Critical Accounting Policies and Estimates Review, Approval or Ratification of Transactions with Related Persons Audit-Related Fees are fees for assurance and related services by the principal accountant that are traditionally performed by the principal accountant and which are reasonably related to the performance of the audit or review of the registrant s financial statements and fees attributed to the audit of Guskin Gold Corporation, our wholly owned subsidiary. Effective risk oversight is an important priority of the Board of Directors. Because risks are considered in virtually every business decision, the Board of Directors discusses risk throughout the year generally or in connection with specific proposed actions. The Board of Directors approach to risk oversight includes understanding the critical risks in the Company s business and strategy, evaluating the Company s risk management processes, allocating responsibilities for risk oversight among the full Board of Directors, and fostering an appropriate culture of integrity and compliance with legal responsibilities. 
Corporate Governance The Company promotes accountability for adherence to honest and ethical conduct; endeavors to provide full, fair, accurate, timely and understandable disclosure in reports and documents that the Company files with the SEC and in other public communications made by the Company; and strives to be compliant with applicable governmental laws, rules and regulations. The Company has not formally adopted a written code of business conduct and ethics that governs the Company s employees, officers and Directors as the Company is not required to do so. ''' How could I write the code to combine them? Thanks!
[ "Thanks for Wiktor's comment! The code should be\ncombine = re.sub(r'((?<=[A-Za-z,-])\\n(?=[a-zA-Z]))', ' ', text) \n\n" ]
[ 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0074488689_python_regex.txt
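The accepted pattern works on the sample because paragraph lines end with a period while heading lines end with a letter, comma, or hyphen — exactly the characters the lookbehind matches. A self-contained check:

import re

text = "Corporate Governance\nThe Company promotes accountability.\n"
combined = re.sub(r'(?<=[A-Za-z,-])\n(?=[a-zA-Z])', ' ', text)
print(combined)  # Corporate Governance The Company promotes accountability.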
Q: How to slice output of a neuronal network I constructed a generator CNN which has the output (1, 3328, 1), but I would need (1, 3326, 1) so just 2 neurons/outputs less. I don't think that I can achieve it by changing parameter of the existing net. But I thought, it would be great just to cut out the last 2 neurons of the last layer. But does someone know how to "slice" a layer in a NN? Model: "functional_9" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_7 (InputLayer) [(None, 500)] 0 _________________________________________________________________ dense_6 (Dense) (None, 26624) 13338624 _________________________________________________________________ leaky_re_lu_18 (LeakyReLU) (None, 26624) 0 _________________________________________________________________ reshape_12 (Reshape) (None, 832, 1, 32) 0 _________________________________________________________________ conv2d_transpose_12 (Conv2DT (None, 1664, 1, 16) 4624 _________________________________________________________________ batch_normalization_12 (Batc (None, 1664, 1, 16) 64 _________________________________________________________________ leaky_re_lu_19 (LeakyReLU) (None, 1664, 1, 16) 0 _________________________________________________________________ conv2d_transpose_13 (Conv2DT (None, 3328, 1, 8) 1160 _________________________________________________________________ batch_normalization_13 (Batc (None, 3328, 1, 8) 32 _________________________________________________________________ leaky_re_lu_20 (LeakyReLU) (None, 3328, 1, 8) 0 _________________________________________________________________ reshape_13 (Reshape) (None, 3328, 8) 0 _________________________________________________________________ conv1d_6 (Conv1D) (None, 3328, 1) 25 _________________________________________________________________ activation_4 (Activation) (None, 3328, 1) 0 ================================================================= Total params: 13,344,529 Trainable params: 13,344,481 Non-trainable params: 48 _________________________________________________________________ Out[40]: (1, 3328, 1) A: Do this model = tf.keras.models.Model(model.input , model.layers[-1].output[:,:-2,:]) Simply do this model.layers[-1].output[:,:-2,:] #This will simply return [None, 3326, None]
How to slice output of a neuronal network
I constructed a generator CNN which has the output (1, 3328, 1), but I would need (1, 3326, 1) so just 2 neurons/outputs less. I don't think that I can achieve it by changing parameter of the existing net. But I thought, it would be great just to cut out the last 2 neurons of the last layer. But does someone know how to "slice" a layer in a NN? Model: "functional_9" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_7 (InputLayer) [(None, 500)] 0 _________________________________________________________________ dense_6 (Dense) (None, 26624) 13338624 _________________________________________________________________ leaky_re_lu_18 (LeakyReLU) (None, 26624) 0 _________________________________________________________________ reshape_12 (Reshape) (None, 832, 1, 32) 0 _________________________________________________________________ conv2d_transpose_12 (Conv2DT (None, 1664, 1, 16) 4624 _________________________________________________________________ batch_normalization_12 (Batc (None, 1664, 1, 16) 64 _________________________________________________________________ leaky_re_lu_19 (LeakyReLU) (None, 1664, 1, 16) 0 _________________________________________________________________ conv2d_transpose_13 (Conv2DT (None, 3328, 1, 8) 1160 _________________________________________________________________ batch_normalization_13 (Batc (None, 3328, 1, 8) 32 _________________________________________________________________ leaky_re_lu_20 (LeakyReLU) (None, 3328, 1, 8) 0 _________________________________________________________________ reshape_13 (Reshape) (None, 3328, 8) 0 _________________________________________________________________ conv1d_6 (Conv1D) (None, 3328, 1) 25 _________________________________________________________________ activation_4 (Activation) (None, 3328, 1) 0 ================================================================= Total params: 13,344,529 Trainable params: 13,344,481 Non-trainable params: 48 _________________________________________________________________ Out[40]: (1, 3328, 1)
[ "Do this\nmodel = tf.keras.models.Model(model.input , model.layers[-1].output[:,:-2,:])\n\nSimply do this\nmodel.layers[-1].output[:,:-2,:]\n\n#This will simply return \n[None, 3326, None] \n\n\n" ]
[ 1 ]
[]
[]
[ "deep_learning", "keras", "neural_network", "python", "python_3.x" ]
stackoverflow_0074488736_deep_learning_keras_neural_network_python_python_3.x.txt
Q: to print name with a pattern in python I ask the user to enter it's name and I print the pattern eg: W WO WOR WORL WORLD s=input("Enter your name") l=s.split() i=len(l) for m in range(0,i): for s in range(0,m): print(s) print() I have written this program where am I wrong please help. A beginner here A: Others have given you code that does what you want it to do; I'll try to explain why your code doesn't do what you think it would do. #s=input("Enter your name") # Let's pretend that the given word from the user was 'WORLD' as in your example. s = "WORLD" l=s.split() The above line s.split() uses the default-behaviour of the built-in str.split() method. Which does the following if we look at the help-file: split(self, /, sep=None, maxsplit=-1) Return a list of the words in the string, using sep as the delimiter string. sep The delimiter according which to split the string. None (the default value) means split according to any whitespace, and discard empty strings from the result. That means that it will try to split your given string on each whitespace-character inside of it and return a list containing the results. "WORLD".split() would therefore return: ['WORLD'] i=len(l) This returns 1, because the result of s.split(). Now let's break down what happens inside of the for-loop. # This is essentially: for m in range(0, 1) which will only loop once, because range is non-inclusive for m in range(0,i): # This is range-command will not execute, because the first value of m will be 0 # Because range is non-inclusive, running range(0, 0) will not return a value. # That means that nothing inside of the for-loop will execute. for s in range(0,m): print(s) print() All of this results in only the print() statement inside of the first for-loop being executed, and it will only be executed once because of how the range-function works with the values it has been given. A: Don't complicate the code unnecessarily. A string you can think of as a list of characters on which to iterate, without resorting to splitting. If you use Python's List Slicing, you can point to the positions of the characters you are interested in printing. Your code becomes: name = input("Enter your name: ") for i in range(len(name)): print(name[:i+1]) A: We can do this without using 2 loops. s = input("Enter your name") for i in range(len(s)+1): print(s[:i]) #Output: W WO WOR WORL WORLD
to print name with a pattern in python
I ask the user to enter their name and I print the pattern, e.g.: W WO WOR WORL WORLD s=input("Enter your name") l=s.split() i=len(l) for m in range(0,i): for s in range(0,m): print(s) print() I have written this program; where am I wrong? Please help, I'm a beginner.
[ "Others have given you code that does what you want it to do; I'll try to explain why your code doesn't do what you think it would do.\n#s=input(\"Enter your name\")\n# Let's pretend that the given word from the user was 'WORLD' as in your example.\ns = \"WORLD\"\nl=s.split()\n\nThe above line s.split() uses the default-behaviour of the built-in str.split() method. Which does the following if we look at the help-file:\nsplit(self, /, sep=None, maxsplit=-1)\n Return a list of the words in the string, using sep as the delimiter string.\n\n sep\n The delimiter according which to split the string.\n None (the default value) means split according to any whitespace,\n and discard empty strings from the result.\n\nThat means that it will try to split your given string on each whitespace-character inside of it and return a list containing the results. \"WORLD\".split() would therefore return: ['WORLD']\ni=len(l)\n\nThis returns 1, because the result of s.split().\nNow let's break down what happens inside of the for-loop.\n# This is essentially: for m in range(0, 1) which will only loop once, because range is non-inclusive\nfor m in range(0,i): \n # This is range-command will not execute, because the first value of m will be 0\n # Because range is non-inclusive, running range(0, 0) will not return a value.\n # That means that nothing inside of the for-loop will execute.\n for s in range(0,m):\n print(s)\n print()\n\nAll of this results in only the print() statement inside of the first for-loop being executed, and it will only be executed once because of how the range-function works with the values it has been given.\n", "Don't complicate the code unnecessarily.\nA string you can think of as a list of characters on which to iterate, without resorting to splitting.\nIf you use Python's List Slicing, you can point to the positions of the characters you are interested in printing.\nYour code becomes:\nname = input(\"Enter your name: \")\nfor i in range(len(name)):\n print(name[:i+1])\n\n", "We can do this without using 2 loops.\ns = input(\"Enter your name\")\n\nfor i in range(len(s)+1):\n print(s[:i])\n\n#Output:\nW\nWO\nWOR\nWORL\nWORLD\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "debugging", "design_patterns", "python" ]
stackoverflow_0074488803_debugging_design_patterns_python.txt
Q: How do I plot this piecewise function into Python with matplotlib? This is the function I need to plot: This is my code: pi = np.pi sin = np.sin e = np.e x1 = np.linspace(-10*pi, -pi) y1 = (4*pi*(e**0.1*x1)) * sin(2*pi*x1) plt.plot(x1, y1) x2 = np.linspace(-pi, -pi/2) y2 = 0 plt.plot(x2, y2) x3 = np.linspace(-pi/2, pi/2) y3 = 4/pi * x3**2 - pi plt.plot(x3, y3) x4 = np.linspace(pi/2, pi) y4 = 0 plt.plot(x4, y4) plt.show() But every time I try to run it gives me a ValueError: ValueError: x and y must have same first dimension, but have shapes (50,) and (1,) I have tried using np.piecewise but haven't gotten anywhere. A: To define a piecewise function, I usually use a chained sequence of numpy.where. First, the domain for the independent variable, then the conditions and the analytical expression, with a difference for the last where, as explained in the docs. NB: are you sure that the circular frequency of the sines is 2Ο€? when I see a domain expressed in multiples of Ο€ I think immediately to frequencies expressed as integer numbers or simple fractions… from numpy import exp,linspace, pi, sin, where from matplotlib.pyplot import grid, plot, show x = linspace(-10*pi, +10*pi, 4001) y = where(x < -pi, 4*pi*exp(+x/10)*sin(1*x), where(x <-pi/2, 0, where(x <+pi/2, 4*x*x/pi-pi, where(x < +pi, 0, 4*pi*exp(-x/10)*sin(1*x))))) plot(x, y) ; grid() ; show() PS In a comment Davide_sd correctly remarks that the technique I have shown is OK only if the piecewise function is continuous. If there are discontinuities across sub-domains, you can always use numpy.where but you should assign np.nan values to the y array in the points of discontinuity, so that Matplotlib knows that she has to break the line across the NaN. β€” EDIT β€” I've changed the circular frequency of the sines because I cannot make sense of the OP specification. A: x2 is an array, and y2 is a number, matplotlib expects both to be arrays, so you should switch y2 and y4 definition to be y2 = np.zeros_like(x2), and y4 = np.zeros_like(x4). A: As x and y need to be the same first dimension, you might want to define y2 and y4 to be a function of x, so that an array of the same dimension is produced as a result to plot. #... y2 = x2*0 plt.plot(x2, y2) #... y4 = x4*0 plt.plot(x4, y4) Alternatively, you could have y2 and y4 defined as an array of zeros of the same size of x2 and x4 respectively. y2 = np.zeros(x2.shape) y4 = np.zeros(x4.shape)
How do I plot this piecewise function into Python with matplotlib?
This is the function I need to plot: This is my code: pi = np.pi sin = np.sin e = np.e x1 = np.linspace(-10*pi, -pi) y1 = (4*pi*(e**0.1*x1)) * sin(2*pi*x1) plt.plot(x1, y1) x2 = np.linspace(-pi, -pi/2) y2 = 0 plt.plot(x2, y2) x3 = np.linspace(-pi/2, pi/2) y3 = 4/pi * x3**2 - pi plt.plot(x3, y3) x4 = np.linspace(pi/2, pi) y4 = 0 plt.plot(x4, y4) plt.show() But every time I try to run it gives me a ValueError: ValueError: x and y must have same first dimension, but have shapes (50,) and (1,) I have tried using np.piecewise but haven't gotten anywhere.
[ "\nTo define a piecewise function, I usually use a chained sequence of numpy.where.\nFirst, the domain for the independent variable, then the conditions and the analytical expression, with a difference for the last where, as explained in the docs.\nNB: are you sure that the circular frequency of the sines is 2Ο€? when I see a domain expressed in multiples of Ο€ I think immediately to frequencies expressed as integer numbers or simple fractions…\nfrom numpy import exp,linspace, pi, sin, where\nfrom matplotlib.pyplot import grid, plot, show\n\nx = linspace(-10*pi, +10*pi, 4001)\ny = where(x < -pi, 4*pi*exp(+x/10)*sin(1*x),\n where(x <-pi/2, 0,\n where(x <+pi/2, 4*x*x/pi-pi,\n where(x < +pi, 0,\n 4*pi*exp(-x/10)*sin(1*x)))))\n \nplot(x, y) ; grid() ; show()\n\n\nPS\nIn a comment Davide_sd correctly remarks that the technique I have shown is OK only if the piecewise function is continuous.\nIf there are discontinuities across sub-domains, you can always use numpy.where but you should assign np.nan values to the y array in the points of discontinuity, so that Matplotlib knows that she has to break the line across the NaN.\n\nβ€” EDIT β€” I've changed the circular frequency of the sines because I cannot make sense of the OP specification.\n", "x2 is an array, and y2 is a number, matplotlib expects both to be arrays, so you should switch y2 and y4 definition to be y2 = np.zeros_like(x2), and y4 = np.zeros_like(x4).\n", "As x and y need to be the same first dimension, you might want to define y2 and y4 to be a function of x, so that an array of the same dimension is produced as a result to plot.\n#...\ny2 = x2*0\nplt.plot(x2, y2)\n\n#...\n\ny4 = x4*0\nplt.plot(x4, y4)\n\n\nAlternatively, you could have y2 and y4 defined as an array of zeros of the same size of x2 and x4 respectively.\ny2 = np.zeros(x2.shape)\ny4 = np.zeros(x4.shape)\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "matplotlib", "numpy", "piecewise", "python", "valueerror" ]
stackoverflow_0074488361_matplotlib_numpy_piecewise_python_valueerror.txt
Q: Why does gunicorn need to restart so often in my gcloud appengine app? I am using Flask to run an application. The application will be deployed on gcloud appengine. Currently, when I run it on my local dev machine, there is no issue. But when I run it on gcloud appengine, it appears that the gunicorn thread is being restarted quite often. 2022-11-13 08:54:13 default[20221113t165059] Other load 2022-11-13 08:54:13 default[20221113t165059] post.get_by_pageid 2022-11-13 08:54:13 default[20221113t165059] Returning posts 0 to 4 2022-11-13 08:54:15 default[20221113t165059] "GET /view/tree/61e2b6585fc8f37d73f59218? HTTP/1.1" 201 2022-11-13 08:54:15 default[20221113t165059] [2022-11-13 08:54:15 +0000] [9] [INFO] Starting gunicorn 20.1.0 2022-11-13 08:54:15 default[20221113t165059] [2022-11-13 08:54:15 +0000] [9] [INFO] Listening at: http://0.0.0.0:8081 (9) 2022-11-13 08:54:15 default[20221113t165059] [2022-11-13 08:54:15 +0000] [9] [INFO] Using worker: gthread 2022-11-13 08:54:15 default[20221113t165059] [2022-11-13 08:54:15 +0000] [19] [INFO] Booting worker with pid: 19 2022-11-13 08:54:16 default[20221113t165059] [2022-11-13 08:54:16 +0000] [22] [INFO] Booting worker with pid: 22 2022-11-13 08:54:16 default[20221113t165059] I1113 08:54:16.841506 22 __init__.py:52] Initializing Cloud Debugger Python agent version: 3.1 2022-11-13 08:54:16 default[20221113t165059] I1113 08:54:16.841692 19 __init__.py:52] Initializing Cloud Debugger Python agent version: 3.1 2022-11-13 08:54:17 default[20221113t165059] I1113 08:54:17.034071 24 gcp_hub_client.py:377] Debuggee registered successfully, ID: gcp:431135224927:32ede2785bfa47c7, agent ID: 636e418c-0000-2d3e-8038-089e08203644, canary mode: CANARY_MODE_ALWAYS_ENABLED 2022-11-13 08:54:17 default[20221113t165059] I1113 08:54:17.034071 23 gcp_hub_client.py:377] Debuggee registered successfully, ID: gcp:431135224927:32ede2785bfa47c7, agent ID: 636d1d19-0000-23d2-bc11-089e082c7780, canary mode: CANARY_MODE_ALWAYS_ENABLED 2022-11-13 08:54:17 default[20221113t165059] secureCheckLoggedIn 2022-11-13 08:54:17 default[20221113t165059] SCLI Not Logged In 2022-11-13 08:54:17 default[20221113t165059] view tree 2022-11-13 08:54:17 default[20221113t165059] page.get 2022-11-13 08:54:19 default[20221113t165059] "GET /robots.txt HTTP/1.1" 404 2022-11-13 08:54:19 default[20221113t165059] page.get_user_pages 2022-11-13 08:54:19 default[20221113t165059] page.get_latest 2022-11-13 08:54:20 default[20221113t165059] [2022-11-13 08:54:20 +0000] [11] [INFO] Starting gunicorn 20.1.0 2022-11-13 08:54:20 default[20221113t165059] [2022-11-13 08:54:20 +0000] [11] [INFO] Listening at: http://0.0.0.0:8081 (11) 2022-11-13 08:54:20 default[20221113t165059] [2022-11-13 08:54:20 +0000] [11] [INFO] Using worker: gthread 2022-11-13 08:54:20 default[20221113t165059] [2022-11-13 08:54:20 +0000] [19] [INFO] Booting worker with pid: 19 2022-11-13 08:54:20 default[20221113t165059] [2022-11-13 08:54:20 +0000] [22] [INFO] Booting worker with pid: 22 2022-11-13 08:54:21 default[20221113t165059] I1113 08:54:21.078205 22 __init__.py:52] Initializing Cloud Debugger Python agent version: 3.1 2022-11-13 08:54:21 default[20221113t165059] I1113 08:54:21.078326 19 __init__.py:52] Initializing Cloud Debugger Python agent version: 3.1 2022-11-13 08:54:21 default[20221113t165059] "GET /load?c=0 HTTP/1.1" 200 2022-11-13 08:54:21 default[20221113t165059] secureCheckLoggedIn 2022-11-13 08:54:21 default[20221113t165059] SCLI Not Logged In Gunicorn is restarted twice in the space of 2 seconds. 
Every time the thread restarts, it invalidates the previous stored session variables. How do I fix this please? Just in case, here is my app.yaml runtime: python38 env_variables: PASSWORD: "XXXXXXXXX" SENDGRID_API_KEY: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" app_engine_apis: true handlers: - url: /static static_dir: static - url: /.* script: auto P.S. secureCheckLoggedIn, Page.... are all my debug printing. A: Many apologies. I found that the reason why my session (run off MongoDB) was so unstable. The reason is because for the secretKey = os.urandom(21) # your own secret key So every time gunicorn reinitialized itself (I don't know the reason why though), all the code infront of that which required to have the same secret_key was thrashed on gcloud. No such problems for my local dev machine though. I changed secretKey to a fixed string and that solved it on gcloud.
Why does gunicorn need to restart so often in my gcloud appengine app?
I am using Flask to run an application. The application will be deployed on gcloud appengine. Currently, when I run it on my local dev machine, there is no issue. But when I run it on gcloud appengine, it appears that the gunicorn thread is being restarted quite often. 2022-11-13 08:54:13 default[20221113t165059] Other load 2022-11-13 08:54:13 default[20221113t165059] post.get_by_pageid 2022-11-13 08:54:13 default[20221113t165059] Returning posts 0 to 4 2022-11-13 08:54:15 default[20221113t165059] "GET /view/tree/61e2b6585fc8f37d73f59218? HTTP/1.1" 201 2022-11-13 08:54:15 default[20221113t165059] [2022-11-13 08:54:15 +0000] [9] [INFO] Starting gunicorn 20.1.0 2022-11-13 08:54:15 default[20221113t165059] [2022-11-13 08:54:15 +0000] [9] [INFO] Listening at: http://0.0.0.0:8081 (9) 2022-11-13 08:54:15 default[20221113t165059] [2022-11-13 08:54:15 +0000] [9] [INFO] Using worker: gthread 2022-11-13 08:54:15 default[20221113t165059] [2022-11-13 08:54:15 +0000] [19] [INFO] Booting worker with pid: 19 2022-11-13 08:54:16 default[20221113t165059] [2022-11-13 08:54:16 +0000] [22] [INFO] Booting worker with pid: 22 2022-11-13 08:54:16 default[20221113t165059] I1113 08:54:16.841506 22 __init__.py:52] Initializing Cloud Debugger Python agent version: 3.1 2022-11-13 08:54:16 default[20221113t165059] I1113 08:54:16.841692 19 __init__.py:52] Initializing Cloud Debugger Python agent version: 3.1 2022-11-13 08:54:17 default[20221113t165059] I1113 08:54:17.034071 24 gcp_hub_client.py:377] Debuggee registered successfully, ID: gcp:431135224927:32ede2785bfa47c7, agent ID: 636e418c-0000-2d3e-8038-089e08203644, canary mode: CANARY_MODE_ALWAYS_ENABLED 2022-11-13 08:54:17 default[20221113t165059] I1113 08:54:17.034071 23 gcp_hub_client.py:377] Debuggee registered successfully, ID: gcp:431135224927:32ede2785bfa47c7, agent ID: 636d1d19-0000-23d2-bc11-089e082c7780, canary mode: CANARY_MODE_ALWAYS_ENABLED 2022-11-13 08:54:17 default[20221113t165059] secureCheckLoggedIn 2022-11-13 08:54:17 default[20221113t165059] SCLI Not Logged In 2022-11-13 08:54:17 default[20221113t165059] view tree 2022-11-13 08:54:17 default[20221113t165059] page.get 2022-11-13 08:54:19 default[20221113t165059] "GET /robots.txt HTTP/1.1" 404 2022-11-13 08:54:19 default[20221113t165059] page.get_user_pages 2022-11-13 08:54:19 default[20221113t165059] page.get_latest 2022-11-13 08:54:20 default[20221113t165059] [2022-11-13 08:54:20 +0000] [11] [INFO] Starting gunicorn 20.1.0 2022-11-13 08:54:20 default[20221113t165059] [2022-11-13 08:54:20 +0000] [11] [INFO] Listening at: http://0.0.0.0:8081 (11) 2022-11-13 08:54:20 default[20221113t165059] [2022-11-13 08:54:20 +0000] [11] [INFO] Using worker: gthread 2022-11-13 08:54:20 default[20221113t165059] [2022-11-13 08:54:20 +0000] [19] [INFO] Booting worker with pid: 19 2022-11-13 08:54:20 default[20221113t165059] [2022-11-13 08:54:20 +0000] [22] [INFO] Booting worker with pid: 22 2022-11-13 08:54:21 default[20221113t165059] I1113 08:54:21.078205 22 __init__.py:52] Initializing Cloud Debugger Python agent version: 3.1 2022-11-13 08:54:21 default[20221113t165059] I1113 08:54:21.078326 19 __init__.py:52] Initializing Cloud Debugger Python agent version: 3.1 2022-11-13 08:54:21 default[20221113t165059] "GET /load?c=0 HTTP/1.1" 200 2022-11-13 08:54:21 default[20221113t165059] secureCheckLoggedIn 2022-11-13 08:54:21 default[20221113t165059] SCLI Not Logged In Gunicorn is restarted twice in the space of 2 seconds. Every time the thread restarts, it invalidates the previous stored session variables. 
How do I fix this please? Just in case, here is my app.yaml runtime: python38 env_variables: PASSWORD: "XXXXXXXXX" SENDGRID_API_KEY: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" app_engine_apis: true handlers: - url: /static static_dir: static - url: /.* script: auto P.S. secureCheckLoggedIn, Page.... are all my debug printing.
[ "Many apologies. I found that the reason why my session (run off MongoDB) was so unstable. The reason is because for the\nsecretKey = os.urandom(21) # your own secret key\n\nSo every time gunicorn reinitialized itself (I don't know the reason why though), all the code infront of that which required to have the same secret_key was thrashed on gcloud. No such problems for my local dev machine though. I changed secretKey to a fixed string and that solved it on gcloud.\n" ]
[ 0 ]
[]
[]
[ "flask", "gcloud", "google_app_engine", "gunicorn", "python" ]
stackoverflow_0074420230_flask_gcloud_google_app_engine_gunicorn_python.txt
Q: How to unnest JSON with levels upon levels I'm trying to unnest a json file. The JSON has multiple lists of dictionaries inside a list of dictionaries. I'm trying to flatten everything in it and turn it into a dataframe. it looks something like this: { "Result": [ { "OptionalColumns": { "optionalColumnName": "Joe Blogs" }, "fieldOne": "some string", "fieldtwo": "some more string", "fieldthree": "even more string", "secondList": [ { "secondListFieldOne": "value", "secondListFieldTwo": 0, "secondListFieldThree": true }, { "secondListFieldOne": "value", "secondListFieldTwo": 0, "secondListFieldThree": true } ], "anotherField": "string value", "thirdList": [ { "thirdListFieldOne": "string", "thirdListFieldTwo": "string" } ], "someNumberValue": 1 }, { "OptionalColumns": { "optionalColumnName": "Joe Blogs" }, "fieldOne": "some string", "fieldtwo": "some more string", "fieldthree": "even more string", "secondList": [ { "secondListFieldOne": "value", "secondListFieldTwo": 0, "secondListFieldThree": true }, { "secondListFieldOne": "value", "secondListFieldTwo": 0, "secondListFieldThree": true } ], "anotherField": "string value", "thirdList": [ { "thirdListFieldOne": "string", "thirdListFieldTwo": "string" } ], "someNumberValue": 1 } ], "Message": null, "Errors": [] } I'm using the below method to unnest. I know this works with one nest, or even two, however I can't for the life of me get it to work. HELP. with open('data/my_file.json','r') as f: json_data = json.loads(f.read()) df_unnested_list = pd.json_normalize(json_data, 'Result') When trying to unnest a list of dicts: pd.json_normalize(data, "field", ["fieldTwo", "nestFieldOne"]) A: It would be how you posted it before with another dict. Quite simple really. pd.json_normalize(data, β€œthirdList”, [β€œOptionalColumns”, β€œoptionalColumnName”],”fieldOne”, β€œfieldTwo”, β€œfieldThree”, [β€œsecondList”, β€œsecondListDictOneFieldOne”,”secondListDictTwoFieldOne”], β€œanotherField”, β€œsomeNumberValue”)
How to unnest JSON with levels upon levels
I'm trying to unnest a json file. The JSON has multiple lists of dictionaries inside a list of dictionaries. I'm trying to flatten everything in it and turn it into a dataframe. it looks something like this: { "Result": [ { "OptionalColumns": { "optionalColumnName": "Joe Blogs" }, "fieldOne": "some string", "fieldtwo": "some more string", "fieldthree": "even more string", "secondList": [ { "secondListFieldOne": "value", "secondListFieldTwo": 0, "secondListFieldThree": true }, { "secondListFieldOne": "value", "secondListFieldTwo": 0, "secondListFieldThree": true } ], "anotherField": "string value", "thirdList": [ { "thirdListFieldOne": "string", "thirdListFieldTwo": "string" } ], "someNumberValue": 1 }, { "OptionalColumns": { "optionalColumnName": "Joe Blogs" }, "fieldOne": "some string", "fieldtwo": "some more string", "fieldthree": "even more string", "secondList": [ { "secondListFieldOne": "value", "secondListFieldTwo": 0, "secondListFieldThree": true }, { "secondListFieldOne": "value", "secondListFieldTwo": 0, "secondListFieldThree": true } ], "anotherField": "string value", "thirdList": [ { "thirdListFieldOne": "string", "thirdListFieldTwo": "string" } ], "someNumberValue": 1 } ], "Message": null, "Errors": [] } I'm using the below method to unnest. I know this works with one nest, or even two, however I can't for the life of me get it to work. HELP. with open('data/my_file.json','r') as f: json_data = json.loads(f.read()) df_unnested_list = pd.json_normalize(json_data, 'Result') When trying to unnest a list of dicts: pd.json_normalize(data, "field", ["fieldTwo", "nestFieldOne"])
[ "It would be how you posted it before with another dict. Quite simple really.\npd.json_normalize(data, β€œthirdList”, [β€œOptionalColumns”, β€œoptionalColumnName”],”fieldOne”, β€œfieldTwo”, β€œfieldThree”, [β€œsecondList”, β€œsecondListDictOneFieldOne”,”secondListDictTwoFieldOne”], β€œanotherField”, β€œsomeNumberValue”)\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "json", "nested_lists", "python", "unnest" ]
stackoverflow_0074488995_arrays_json_nested_lists_python_unnest.txt
Q: How can I hide "" (NaN) values with st.dataframe() or st.table() in Streamlit? When I display a Pandas DataFrame in Streamlit, using st.dataframe() or st.table(), NaN values show up as the text <NA>. I would like to hide them. Code: # table.py import pandas as pd import streamlit as st df = pd.read_csv("nlp_metrics_v2.csv", header=0) st.dataframe(df) # nlp_metrics_v2.csv Model,NLP Model,NLP Prime,YOLO-NLP Average Rouge 1,,,   F1 Score,0.5,0.7,0.3   Precision,0.5,0.2,0.5   Recall,0.7,0.32,0.32 Average Rouge 2,,,   F1 Score,0.4,0.3,0.5   Precision,0.7,0.46,0.33   Recall,0.6,0.7,0.5 Average Rouge L,,,   F1 Score,0.8,0.45,0.5   Precision,0.7,0.5,0.25   Recall,0.1,0.8,0.25 # Command line streamlit run table.py Original Result: Desired Result: Hide the cells that contain <NA>, without hiding those rows since they give context about other rows. Any approach that lets me keep the values right-aligned with fixed precision (e.g., 2 decimal places) would be fine. (Ideally I'd like to do this without converting the values in those columns into strings, but that's not a hard requirement.) I'm aware I'm not using DataFrames the way they were intended, but they seem to be the only mechanism I have for displaying tables in Streamlit. A: I tried using pandas.io.formats.style.Styler.highlight_null to set "opacity: 0" or "visibility: hidden", but Streamlit seemed to ignore these CSS properties. I found this solution by playing around with WebStorm and PyCharm: # table.py import pandas as pd import streamlit as st df = pd.read_csv("nlp_metrics_v2.csv", header=0) df = df.style.highlight_null(props="color: transparent;") # hide NaNs st.dataframe(df) A: To hide NaN values in a dataframe you want to display with st.dataframe(df), just convert the corresponding entries to a string and replace "nan" with the space character " ". Here is the code to replace all NaN values in a column: df['column'] = df['column'].astype(str).replace("nan", " ")
How can I hide "" (NaN) values with st.dataframe() or st.table() in Streamlit?
When I display a Pandas DataFrame in Streamlit, using st.dataframe() or st.table(), NaN values show up as the text <NA>. I would like to hide them. Code: # table.py import pandas as pd import streamlit as st df = pd.read_csv("nlp_metrics_v2.csv", header=0) st.dataframe(df) # nlp_metrics_v2.csv Model,NLP Model,NLP Prime,YOLO-NLP Average Rouge 1,,,   F1 Score,0.5,0.7,0.3   Precision,0.5,0.2,0.5   Recall,0.7,0.32,0.32 Average Rouge 2,,,   F1 Score,0.4,0.3,0.5   Precision,0.7,0.46,0.33   Recall,0.6,0.7,0.5 Average Rouge L,,,   F1 Score,0.8,0.45,0.5   Precision,0.7,0.5,0.25   Recall,0.1,0.8,0.25 # Command line streamlit run table.py Original Result: Desired Result: Hide the cells that contain <NA>, without hiding those rows since they give context about other rows. Any approach that lets me keep the values right-aligned with fixed precision (e.g., 2 decimal places) would be fine. (Ideally I'd like to do this without converting the values in those columns into strings, but that's not a hard requirement.) I'm aware I'm not using DataFrames the way they were intended, but they seem to be the only mechanism I have for displaying tables in Streamlit.
[ "I tried using pandas.io.formats.style.Styler.highlight_null to set \"opacity: 0\" or \"visibility: hidden\", but Streamlit seemed to ignore these CSS properties.\nI found this solution by playing around with WebStorm and PyCharm:\n# table.py\nimport pandas as pd\nimport streamlit as st\n\ndf = pd.read_csv(\"nlp_metrics_v2.csv\", header=0)\ndf = df.style.highlight_null(props=\"color: transparent;\") # hide NaNs\nst.dataframe(df)\n\n", "To hide NaN values in a dataframe you want to display with st.dataframe(df), just convert the corresponding entries to a string and replace \"nan\" with the space character \" \".\nHere is the code to replace all NaN values in a column:\ndf['column'] = df['column'].astype(str).replace(\"nan\", \" \")\n" ]
[ 1, 1 ]
[]
[]
[ "dataframe", "nan", "pandas", "python", "streamlit" ]
stackoverflow_0073339413_dataframe_nan_pandas_python_streamlit.txt
Q: Implement HTTP methods in different APIView class in django I have an API with 2 routes some_resource/ and some_resource/<id> and I would like to implement the normal CRUD actions (list, retrieve, create, update, delete). However, I don't want to use ViewSet because I want to have 1 class for each view. Thus I need to set up the route manually for clarity. : class SomeResourceRetrieveView(APIView): def get(self, request, pk, *args, **kwargs): ... class SomeResourceListView(APIView): def get(self, request, *args, **kwargs): ... class SomeResourceCreateView(APIView): def post(self, request, *args, **kwargs): ... So in urls.py it looks like this url_patterns = [ path("some_resource/", InvitationTeamAccessListAPI.as_view(), name="some-resource-list"), path("some_resource/", InvitationTeamAccessCreateAPI.as_view(), name="some-resource-create"), path("some_resource/<int:pk>", InvitationTeamAccessRetrieveAPI.as_view(), name="some-resource-retrieve"), ] However when I use POST on some_resource/, I get a 405. I think django stops at the first matched route and doesn't find an implementation for post. Is there a way to plug all my views to the same pattern but keep them as separate classes? A: You can make use of ViewSets give this a try: from rest_framework import viewsets from rest_framework.response import Response class InvitationTeamAccessViewSet(viewsets.ViewSet): """ Example empty viewset demonstrating the standard actions that will be handled by a router class. If you're using format suffixes, make sure to also include the `format=None` keyword argument for each action. """ def list(self, request): # this is your get() method queryset = InvitationTeamAccess.objects.all() # you should change your serializer name accordingly serializer = InvitationTeamAccessSerializer(queryset, many=True) return Response(serializer.data) def create(self, request): # this is your post method # you have to write you post logic here pass def retrieve(self, request, pk=None): # this is your get() details method queryset = InvitationTeamAccess.get(pk=pk) # you should change your serializer name accordingly serializer = InvitationTeamAccessSerializer(queryset) return Response(serializer.data) def update(self, request, pk=None): # write your post() method here pass def partial_update(self, request, pk=None): pass def destroy(self, request, pk=None): pass and in urls.py from myapp.views import InvitationTeamAccess from rest_framework.routers import DefaultRouter router = DefaultRouter() router.register(r'some_resource/', InvitationTeamAccess, basename='InvitationTeamAccess') urlpatterns = router.urls then with single url path you can perform multiple actions GET: some_resource/ will show the list of InvitationTeamAccess's GET: some_resource/1/ will show the detail of a single InvitationTeamAccess POST: some_resource/ will create new InvitationTeamAccess also read about ModelVieset
Implement HTTP methods in different APIView class in django
I have an API with 2 routes some_resource/ and some_resource/<id> and I would like to implement the normal CRUD actions (list, retrieve, create, update, delete). However, I don't want to use ViewSet because I want to have 1 class for each view. Thus I need to set up the route manually for clarity. : class SomeResourceRetrieveView(APIView): def get(self, request, pk, *args, **kwargs): ... class SomeResourceListView(APIView): def get(self, request, *args, **kwargs): ... class SomeResourceCreateView(APIView): def post(self, request, *args, **kwargs): ... So in urls.py it looks like this url_patterns = [ path("some_resource/", InvitationTeamAccessListAPI.as_view(), name="some-resource-list"), path("some_resource/", InvitationTeamAccessCreateAPI.as_view(), name="some-resource-create"), path("some_resource/<int:pk>", InvitationTeamAccessRetrieveAPI.as_view(), name="some-resource-retrieve"), ] However when I use POST on some_resource/, I get a 405. I think django stops at the first matched route and doesn't find an implementation for post. Is there a way to plug all my views to the same pattern but keep them as separate classes?
[ "You can make use of ViewSets\ngive this a try:\nfrom rest_framework import viewsets\nfrom rest_framework.response import Response\n\nclass InvitationTeamAccessViewSet(viewsets.ViewSet):\n \"\"\"\n Example empty viewset demonstrating the standard\n actions that will be handled by a router class.\n\n If you're using format suffixes, make sure to also include\n the `format=None` keyword argument for each action.\n \"\"\"\n\n def list(self, request):\n # this is your get() method\n queryset = InvitationTeamAccess.objects.all()\n # you should change your serializer name accordingly \n serializer = InvitationTeamAccessSerializer(queryset, many=True)\n return Response(serializer.data)\n\n\n def create(self, request):\n # this is your post method\n # you have to write you post logic here\n pass\n\n def retrieve(self, request, pk=None):\n # this is your get() details method\n queryset = InvitationTeamAccess.get(pk=pk)\n # you should change your serializer name accordingly \n serializer = InvitationTeamAccessSerializer(queryset)\n return Response(serializer.data) \n\n def update(self, request, pk=None):\n # write your post() method here\n pass\n\n def partial_update(self, request, pk=None):\n pass\n\n def destroy(self, request, pk=None):\n pass\n\nand in urls.py\nfrom myapp.views import InvitationTeamAccess\nfrom rest_framework.routers import DefaultRouter\n\nrouter = DefaultRouter()\nrouter.register(r'some_resource/', InvitationTeamAccess, basename='InvitationTeamAccess')\nurlpatterns = router.urls\n\nthen with single url path you can perform multiple actions\n\nGET: some_resource/ will show the list of InvitationTeamAccess's\nGET: some_resource/1/ will show the detail of a single InvitationTeamAccess\nPOST: some_resource/ will create new InvitationTeamAccess\n\nalso read about ModelVieset\n" ]
[ 0 ]
[]
[]
[ "django", "django_rest_framework", "python" ]
stackoverflow_0074487483_django_django_rest_framework_python.txt
Q: add shell script to lambda function on EC2 I have a lambda function that boots a machine on EC2 triggered by a file uploaded on S3 bucket. I would like to run a shell command that is in that machine after the boot, but I failed to do so. Any thoughts of what I can do? import boto3 region = 'us-east-1' instances = ['i-079e6065f959e151a'] def lambda_handler(event, context): ec2 = boto3.client('ec2', region_name=region) ec2.start_instances(InstanceIds=instances) A: On the Amazon EC2 instance, store your script in: /var/lib/cloud/scripts/per-boot/ Any scripts in that directory will be automatically run each time that the instance boots (or 'Starts'). When the instance has finished its work, it should perform a shutdown with: sudo shutdown now -h This will return the instance to a Stopped state. See: Auto-Stop EC2 instances when they finish a task - DEV Community
add shell script to lambda function on EC2
I have a lambda function that boots a machine on EC2 triggered by a file uploaded on S3 bucket. I would like to run a shell command that is in that machine after the boot, but I failed to do so. Any thoughts of what I can do? import boto3 region = 'us-east-1' instances = ['i-079e6065f959e151a'] def lambda_handler(event, context): ec2 = boto3.client('ec2', region_name=region) ec2.start_instances(InstanceIds=instances)
[ "On the Amazon EC2 instance, store your script in:\n/var/lib/cloud/scripts/per-boot/\n\nAny scripts in that directory will be automatically run each time that the instance boots (or 'Starts').\nWhen the instance has finished its work, it should perform a shutdown with:\nsudo shutdown now -h\n\nThis will return the instance to a Stopped state.\nSee: Auto-Stop EC2 instances when they finish a task - DEV Community\n" ]
[ 0 ]
[]
[]
[ "amazon_ec2", "amazon_web_services", "aws_lambda", "python", "shell" ]
stackoverflow_0074488971_amazon_ec2_amazon_web_services_aws_lambda_python_shell.txt
Q: Why won't my grouped box plot work in Python? I have a data set (my_data) that looks something like this: Gender Time Money Score Female 23 14 26.74 Male 12 98 56.76 Male 11 34 53.98 Female 18 58 25.98 etc. I want to make a grouped box plot of gender against score, so that there will be two plots in the same graph. My code so far: Males = [my_data.loc[my_data['Gender']=='Male', 'Score']] Females = [my_data.loc[my_data['Gender']=='Female', 'Score']] Score = [Males, Females] fig, ax = plt.subplots() ax.boxplot(Score) plt.show() However, this runs an error message: ValueError: X must have 2 or fewer dimensions I tried converting Males and Females to an array, thinking maybe Python wasn't liking it as a list by doing: Males = np.array([my_data.loc[my_data['Gender']=='Male', 'Score']]) Females = np.array([my_data.loc[my_data['Gender']=='Female', 'Score']]) But that still didn't work. Also Python does say it takes lists as values for boxplots so I shouldn't need to do that anyway. I also tried a different way of making a boxplot like this: fig = plt.figure(figsize =(10, 7)) ax = fig.add_axes([Males, Females]) bp = ax.boxplot(Score) plt.show() And it gave me this error code: TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' <Figure size 720x504 with 0 Axes> What's going on? A: When you extracted your source data, you put them unnecessary in square brackets. Generate them instead as: Males = my_data.loc[my_data['Gender']=='Male', 'Score'] Females = my_data.loc[my_data['Gender']=='Female', 'Score'] Then, to generate your box plot, you can run e.g.: fig, ax = plt.subplots(1, 1) ax.boxplot([Males, Females]) ax.set_xticklabels(['Males', 'Females']) plt.show()
Why won't my grouped box plot work in Python?
I have a data set (my_data) that looks something like this: Gender Time Money Score Female 23 14 26.74 Male 12 98 56.76 Male 11 34 53.98 Female 18 58 25.98 etc. I want to make a grouped box plot of gender against score, so that there will be two plots in the same graph. My code so far: Males = [my_data.loc[my_data['Gender']=='Male', 'Score']] Females = [my_data.loc[my_data['Gender']=='Female', 'Score']] Score = [Males, Females] fig, ax = plt.subplots() ax.boxplot(Score) plt.show() However, this runs an error message: ValueError: X must have 2 or fewer dimensions I tried converting Males and Females to an array, thinking maybe Python wasn't liking it as a list by doing: Males = np.array([my_data.loc[my_data['Gender']=='Male', 'Score']]) Females = np.array([my_data.loc[my_data['Gender']=='Female', 'Score']]) But that still didn't work. Also Python does say it takes lists as values for boxplots so I shouldn't need to do that anyway. I also tried a different way of making a boxplot like this: fig = plt.figure(figsize =(10, 7)) ax = fig.add_axes([Males, Females]) bp = ax.boxplot(Score) plt.show() And it gave me this error code: TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' <Figure size 720x504 with 0 Axes> What's going on?
[ "When you extracted your source data, you put them unnecessary in square brackets.\nGenerate them instead as:\nMales = my_data.loc[my_data['Gender']=='Male', 'Score']\nFemales = my_data.loc[my_data['Gender']=='Female', 'Score']\n\nThen, to generate your box plot, you can run e.g.:\nfig, ax = plt.subplots(1, 1)\nax.boxplot([Males, Females])\nax.set_xticklabels(['Males', 'Females'])\nplt.show()\n\n" ]
[ 0 ]
[]
[]
[ "boxplot", "graph", "matplotlib", "numpy", "python" ]
stackoverflow_0074488153_boxplot_graph_matplotlib_numpy_python.txt
Q: run an external script and print the output in real-time in a text widget I want to run an external script (demo_print.py) and print the output in real-time in a text widget. I got error: What's my mistake and how to reach my goal ? You can suggest more simple solution if you have. Exception in thread Thread-1: Traceback (most recent call last): File "/usr/bin/python3/3.7.4/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/bin/python3/3.7.4/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "example_gui.py", line 37, in test textbox.insert(tk.END, msg + "\n") File "example_gui.py", line 20, in write self.widget.insert('end', textbox) File "/usr/bin/python3/3.7.4/lib/python3.7/tkinter/__init__.py", line 3272, in insert self.tk.call((self._w, 'insert', index, chars) + args) _tkinter.TclError: out of stack space (infinite loop?) I want to run an external script (demo_print.py) and print the output in real-time in a text widget. example_gui.py import tkinter as tk import subprocess import threading import sys from functools import partial # ### classes #### class Redirect: def __init__(self, widget, autoscroll=True): self.widget = widget self.autoscroll = autoscroll def write(self, textbox): self.widget.insert('end', textbox) if self.autoscroll: self.widget.see('end') # autoscroll def flush(self): pass def run(textbox=None): threading.Thread(target=test, args=[textbox]).start() def test(textbox=None): p = subprocess.Popen("demo_print.py", stdout=subprocess.PIPE, bufsize=1, text=True) while p.poll() is None: msg = p.stdout.readline().strip() # read a line from the process output if msg: textbox.insert(tk.END, msg + "\n") if __name__ == "__main__": fenster = tk.Tk() fenster.title("My Program") textbox = tk.Text(fenster) textbox.grid() scrollbar = tk.Scrollbar(fenster, orient=tk.VERTICAL) scrollbar.grid() textbox.config(yscrollcommand=scrollbar.set) scrollbar.config(command=textbox.yview) start_button = tk.Button(fenster, text="Start", command=partial(run, textbox)) start_button.grid() old_stdout = sys.stdout sys.stdout = Redirect(textbox) fenster.mainloop() sys.stdout = old_stdout demo_print.py import time for i in range(10): print(f"print {i}") time.sleep(1) A: Okay so first of all make sure demo_print.py is in the same space as your main.py not in a folder or anything then you can just do this: from demo_print import * print(whatever u named your output variable in demo_print) From the looks of it you know how to do the rest. A: Since you execute "demo_print.py" directly, so it must be executable and has correct shebang (for example, #!/usr/bin/python -u) in the file. However, I would suggest to execute the file using Python executable instead. Also the Redirect class is not necessary for your case. 
Below is the modified code: import tkinter as tk import subprocess import threading import sys from functools import partial def run(textbox=None): threading.Thread(target=test, args=[textbox]).start() def test(textbox=None): # using the Python executable to run demo_print.py p = subprocess.Popen([sys.executable, "-u", "demo_print.py"], stdout=subprocess.PIPE, bufsize=1, text=True) while p.poll() is None: msg = p.stdout.readline().strip() # read a line from the process output if msg: textbox.insert(tk.END, msg + "\n") textbox.see(tk.END) if __name__ == "__main__": fenster = tk.Tk() fenster.title("My Program") textbox = tk.Text(fenster) textbox.grid(row=0, column=0) scrollbar = tk.Scrollbar(fenster, orient=tk.VERTICAL) scrollbar.grid(row=0, column=1, sticky="ns") textbox.config(yscrollcommand=scrollbar.set) scrollbar.config(command=textbox.yview) start_button = tk.Button(fenster, text="Start", command=partial(run, textbox)) start_button.grid() fenster.mainloop()
run an external script and print the output in real-time in a text widget
I want to run an external script (demo_print.py) and print the output in real-time in a text widget. I got error: What's my mistake and how to reach my goal ? You can suggest more simple solution if you have. Exception in thread Thread-1: Traceback (most recent call last): File "/usr/bin/python3/3.7.4/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/bin/python3/3.7.4/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "example_gui.py", line 37, in test textbox.insert(tk.END, msg + "\n") File "example_gui.py", line 20, in write self.widget.insert('end', textbox) File "/usr/bin/python3/3.7.4/lib/python3.7/tkinter/__init__.py", line 3272, in insert self.tk.call((self._w, 'insert', index, chars) + args) _tkinter.TclError: out of stack space (infinite loop?) I want to run an external script (demo_print.py) and print the output in real-time in a text widget. example_gui.py import tkinter as tk import subprocess import threading import sys from functools import partial # ### classes #### class Redirect: def __init__(self, widget, autoscroll=True): self.widget = widget self.autoscroll = autoscroll def write(self, textbox): self.widget.insert('end', textbox) if self.autoscroll: self.widget.see('end') # autoscroll def flush(self): pass def run(textbox=None): threading.Thread(target=test, args=[textbox]).start() def test(textbox=None): p = subprocess.Popen("demo_print.py", stdout=subprocess.PIPE, bufsize=1, text=True) while p.poll() is None: msg = p.stdout.readline().strip() # read a line from the process output if msg: textbox.insert(tk.END, msg + "\n") if __name__ == "__main__": fenster = tk.Tk() fenster.title("My Program") textbox = tk.Text(fenster) textbox.grid() scrollbar = tk.Scrollbar(fenster, orient=tk.VERTICAL) scrollbar.grid() textbox.config(yscrollcommand=scrollbar.set) scrollbar.config(command=textbox.yview) start_button = tk.Button(fenster, text="Start", command=partial(run, textbox)) start_button.grid() old_stdout = sys.stdout sys.stdout = Redirect(textbox) fenster.mainloop() sys.stdout = old_stdout demo_print.py import time for i in range(10): print(f"print {i}") time.sleep(1)
[ "Okay so first of all make sure demo_print.py is in the same space as your main.py not in a folder or anything then you can just do this:\nfrom demo_print import *\nprint(whatever u named your output variable in demo_print)\n\nFrom the looks of it you know how to do the rest.\n", "Since you execute \"demo_print.py\" directly, so it must be executable and has correct shebang (for example, #!/usr/bin/python -u) in the file.\nHowever, I would suggest to execute the file using Python executable instead. Also the Redirect class is not necessary for your case.\nBelow is the modified code:\nimport tkinter as tk\nimport subprocess\nimport threading\nimport sys\nfrom functools import partial\n\n\ndef run(textbox=None):\n threading.Thread(target=test, args=[textbox]).start()\n\n\ndef test(textbox=None):\n # using the Python executable to run demo_print.py\n p = subprocess.Popen([sys.executable, \"-u\", \"demo_print.py\"], stdout=subprocess.PIPE, bufsize=1, text=True)\n while p.poll() is None:\n msg = p.stdout.readline().strip() # read a line from the process output\n if msg:\n textbox.insert(tk.END, msg + \"\\n\")\n textbox.see(tk.END)\n\n\nif __name__ == \"__main__\":\n fenster = tk.Tk()\n fenster.title(\"My Program\")\n textbox = tk.Text(fenster)\n textbox.grid(row=0, column=0)\n scrollbar = tk.Scrollbar(fenster, orient=tk.VERTICAL)\n scrollbar.grid(row=0, column=1, sticky=\"ns\")\n\n textbox.config(yscrollcommand=scrollbar.set)\n scrollbar.config(command=textbox.yview)\n\n start_button = tk.Button(fenster, text=\"Start\", command=partial(run, textbox))\n start_button.grid()\n\n fenster.mainloop()\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "python_3.x", "tkinter" ]
stackoverflow_0074487766_python_python_3.x_tkinter.txt
Q: I am getting a type error when I am trying to replace I am trying to do a conditional replacing of values in one column(age_cat) by values in another column(stillbirth) but it's giving me a type error TypeError: '<' not supported between instances of 'str' and 'float' basically, I need it to say age_cat is "SB" if report_stillbirth is Yes 'report_stillbirth' 'age_cat' No 1 Yes 0 No 2 No 4 report_stillbirth is a string age_cat is an integer df.loc[df['report_stillbirth'] == "Yes", 'age_cat'] = "SB" I have tried to change the type of the age_cat to string with: df['age_cat'] = df['age_cat'].astype(str) A: Try this: import numpy as np import pandas as pd df['age_cat'] = np.where(df['report_stillbirth'] == 'Yes', 'SB', df['age_cat']) Example: import numpy as np import pandas as pd choices = ['Yes', 'No'] df = pd.DataFrame( { 'report_stillbirth': np.random.choice(choices, 10), 'age_cat': np.random.randint(1, 15, 10) } ) print(df) # prints: # # report_stillbirth age_cat # 0 No 6 # 1 No 1 # 2 Yes 4 # 3 Yes 12 # 4 Yes 13 # 5 No 2 # 6 No 7 # 7 Yes 1 # 8 Yes 7 # 9 Yes 10 Now, if we apply numpy.where: df['age_cat'] = np.where(df['report_stillbirth'] == 'Yes', 'SB', df['age_cat']) print(df) # # Prints: # # report_stillbirth age_cat # 0 No 6 # 1 No 1 # 2 Yes SB # 3 Yes SB # 4 Yes SB # 5 No 2 # 6 No 7 # 7 Yes SB # 8 Yes SB # 9 Yes SB Screenshot of the executed cell:
I am getting a type error when I am trying to replace
I am trying to do a conditional replacing of values in one column(age_cat) by values in another column(stillbirth) but it's giving me a type error TypeError: '<' not supported between instances of 'str' and 'float' basically, I need it to say age_cat is "SB" if report_stillbirth is Yes 'report_stillbirth' 'age_cat' No 1 Yes 0 No 2 No 4 report_stillbirth is a string age_cat is an integer df.loc[df['report_stillbirth'] == "Yes", 'age_cat'] = "SB" I have tried to change the type of the age_cat to string with: df['age_cat'] = df['age_cat'].astype(str)
[ "Try this:\n\nimport numpy as np\nimport pandas as pd\n\ndf['age_cat'] = np.where(df['report_stillbirth'] == 'Yes', 'SB', df['age_cat'])\n\nExample:\n\nimport numpy as np\nimport pandas as pd\n\n\nchoices = ['Yes', 'No']\n\ndf = pd.DataFrame(\n {\n 'report_stillbirth': np.random.choice(choices, 10),\n 'age_cat': np.random.randint(1, 15, 10)\n }\n)\n\nprint(df)\n# prints:\n#\n# report_stillbirth age_cat\n# 0 No 6\n# 1 No 1\n# 2 Yes 4\n# 3 Yes 12\n# 4 Yes 13\n# 5 No 2\n# 6 No 7\n# 7 Yes 1\n# 8 Yes 7\n# 9 Yes 10\n\n\nNow, if we apply numpy.where:\n\ndf['age_cat'] = np.where(df['report_stillbirth'] == 'Yes', 'SB', df['age_cat'])\n\nprint(df)\n#\n# Prints:\n#\n# report_stillbirth age_cat\n# 0 No 6\n# 1 No 1\n# 2 Yes SB\n# 3 Yes SB\n# 4 Yes SB\n# 5 No 2\n# 6 No 7\n# 7 Yes SB\n# 8 Yes SB\n# 9 Yes SB\n\nScreenshot of the executed cell:\n\n" ]
[ 1 ]
[]
[]
[ "numpy", "pandas", "python" ]
stackoverflow_0074489050_numpy_pandas_python.txt
Q: What regex can I use to "clean" a sentence from its first characters like 1), or #1, or I'm trying in Python to "clean up" a string and remove some characters that were added like : "1. bla bla" => i want "bla bla" "#. bla bla" => same "3) bla bla" => same "I. bla bla" => same I tried to use (\W)(\w.*) but doesn't work. Thanks ! A: You can try: ^.[.)]\s+(.*) Regex demo. import re text = """\ 1. bla bla #. bla bla 3) bla bla I. bla bla""" pat = re.compile(r"^.[.)]\s+(.*)", flags=re.M) for cleaned in pat.findall(text): print(cleaned) Prints: bla bla bla bla bla bla bla bla A: You can try this demo (\")[^ ]* ([^\"]*\")
What regex can I use to "clean" a sentence from its first characters like 1), or #1, or
I'm trying in Python to "clean up" a string and remove some characters that were added like : "1. bla bla" => i want "bla bla" "#. bla bla" => same "3) bla bla" => same "I. bla bla" => same I tried to use (\W)(\w.*) but doesn't work. Thanks !
[ "You can try:\n^.[.)]\\s+(.*)\n\nRegex demo.\n\nimport re\n\ntext = \"\"\"\\\n1. bla bla\n#. bla bla\n3) bla bla\nI. bla bla\"\"\"\n\npat = re.compile(r\"^.[.)]\\s+(.*)\", flags=re.M)\n\nfor cleaned in pat.findall(text):\n print(cleaned)\n\nPrints:\nbla bla\nbla bla\nbla bla\nbla bla\n\n", "You can try this demo\n(\\\")[^ ]* ([^\\\"]*\\\")\n\n" ]
[ 1, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0074489160_python_regex.txt
Q: File - read and write? Im new to this so i don't quite understand question fully. It says: It is necessary to load output.txt file in program. Structure of file is that every line is new expression in format: 10-1 6-3. So format is number-operator-number It is necessary to write a program that reads that file, line by line, applies the given operation and writes the result together with the associated expression into the file output.txt. Layout of the output.txt file: 10-1=9 6-3=3 So, were i supposed just to make a new txt document, save it, or write in it 10-1\n6-3 , or within the program make new file 'file'.txt all i get is what i did input. how to get required output (one at the bottom)? Thank you for help. A: so i maganed to do it, just to post it so if some1 else might need. Cheers. f = open("your file location /input.txt") content = f.read() splitRows = content.split('\n') result = "" for x in splitRows: a, b = x.split('-') c = int(a) - int(b) c = str(c) res = (a + "-" + b + "=" + c + "\n") result += res output_file = open("your output file location/ output.txt", "w") output_file.write(result)
File - read and write?
Im new to this so i don't quite understand question fully. It says: It is necessary to load output.txt file in program. Structure of file is that every line is new expression in format: 10-1 6-3. So format is number-operator-number It is necessary to write a program that reads that file, line by line, applies the given operation and writes the result together with the associated expression into the file output.txt. Layout of the output.txt file: 10-1=9 6-3=3 So, were i supposed just to make a new txt document, save it, or write in it 10-1\n6-3 , or within the program make new file 'file'.txt all i get is what i did input. how to get required output (one at the bottom)? Thank you for help.
[ "so i maganed to do it, just to post it so if some1 else might need. Cheers.\nf = open(\"your file location /input.txt\")\ncontent = f.read()\nsplitRows = content.split('\\n')\nresult = \"\"\nfor x in splitRows:\na, b = x.split('-') \n\nc = int(a) - int(b) \n\nc = str(c) \n\nres = (a + \"-\" + b + \"=\" + c + \"\\n\") \n\nresult += res \n\noutput_file = open(\"your output file location/ output.txt\", \"w\")\noutput_file.write(result)\n" ]
[ 0 ]
[]
[]
[ "file", "file_read", "input", "python", "read_write" ]
stackoverflow_0074466037_file_file_read_input_python_read_write.txt
Q: How to fetch data analyzed in python to node.js and pass it to angular? I am new to angular and i want to display JSON data from python to angular with the help of node.js and I used child process to connect python and node.js but I dont know how to pass it to angular service node.js file const express = require('express') const { spawn } = require('child_process') const app = express() const port = 8000 app.get('/', (req, res) => { let dataToSend let largeDataSet = [] // spawn new child process to call the python script const python = spawn('python', ['test.py']) // collect data from script python.stdout.on('data', function (data) { console.log('Pipe data from python script ...') //dataToSend = data; largeDataSet.push(data) }) // in close event we are sure that stream is from child process is closed python.on('close', (code) => { console.log(`child process close all stdio with code ${code}`) // send data to browser res.send(largeDataSet.join('')) }) }) app.listen(port, () => { console.log(`App listening on port ${port}!`) }) A: Technically you just have to send a Http GET request from your service. I suggest that you should read and follow this offical http client guide to set it up correctly. Here is a simple service snippet. This should be enough. @Injectable({ providedIn: 'root', }) export class MyService { constructor(private http: HttpClient) {} getData(): Observable<any> { const url = ''; return this.http.get(url); } }
How to fetch data analyzed in python to node.js and pass it to angular?
I am new to Angular and I want to display JSON data from Python in Angular with the help of Node.js. I used a child process to connect Python and Node.js, but I don't know how to pass it to an Angular service. node.js file const express = require('express') const { spawn } = require('child_process') const app = express() const port = 8000 app.get('/', (req, res) => { let dataToSend let largeDataSet = [] // spawn new child process to call the python script const python = spawn('python', ['test.py']) // collect data from script python.stdout.on('data', function (data) { console.log('Pipe data from python script ...') //dataToSend = data; largeDataSet.push(data) }) // in close event we are sure that the stream from the child process is closed python.on('close', (code) => { console.log(`child process close all stdio with code ${code}`) // send data to browser res.send(largeDataSet.join('')) }) }) app.listen(port, () => { console.log(`App listening on port ${port}!`) })
[ "Technically you just have to send a Http GET request from your service.\nI suggest that you should read and follow this offical http client guide to set it up correctly.\nHere is a simple service snippet. This should be enough.\n @Injectable({\n providedIn: 'root',\n })\n export class MyService {\n constructor(private http: HttpClient) {}\n \n getData(): Observable<any> {\n const url = '';\n return this.http.get(url);\n }\n }\n\n" ]
[ 0 ]
[]
[]
[ "angular", "node.js", "python" ]
stackoverflow_0074488744_angular_node.js_python.txt
Q: how to pass python script variables to a csh script? I have a python script which asks the user to select one of many options, and I would like to use the selected variable in a csh script to proceed further. I am getting an undefined variable error when I am trying to use the python variable from the shell script. Here is some reference: There is a python script choices.py whose output is something like this: Select a Choice: 1) choice1 2) choice2 3) choice3 Choice: 1 Selected Choice: choice1 Then in my csh script I am invoking the python script and trying to use the choice variable for subsequent operations. Something like this: python choices.py echo "$choice" Getting the error: choice: Undefined variable. Is there a way to make the python variable global so that it can be used in other places? Expected: choice1 Getting the error: choice: Undefined variable. A: Csh has no way to know what variables existed in a Python process which has now ceased to exist, just like you have no way to know what C variables exist internally in the C compiler. A common arrangement is to have your script output a value on standard output, and have the shell capture that: set choice=`python choices.py` echo "$choice" Of course, if your script already needs standard output for other things (like communicating with the user!) you need a slightly different approach. Perhaps it can write the value to a file, and have the caller read that file? More tangentially, the C shell has basically been abandoned by all its sane users some 30+ years ago in favor of Bourne-compatible shells. You generally do not want to write scripts using csh syntax unless you are writing them strictly for your own private use and you have a very odd masochistic fetish. The standard reference for persuading people to switch is Csh considered harmful though some of the arguments in that are slightly dubious; the conclusion is still forcefully argued with sane arguments even after you ignore some of the spurious ones. Perhaps see also https://www.grymoire.com/Unix/CshTop10.txt Here is the same construct in standard POSIX sh syntax: #!/bin/sh choice=$(python choices.py) echo "$choice"
how to pass python script variables to a csh script?
I have a python script which asks the user to select one of many options, and I would like to use the selected variable in a csh script to proceed further. I am getting an undefined variable error when I am trying to use the python variable from the shell script. Here is some reference: There is a python script choices.py whose output is something like this: Select a Choice: 1) choice1 2) choice2 3) choice3 Choice: 1 Selected Choice: choice1 Then in my csh script I am invoking the python script and trying to use the choice variable for subsequent operations. Something like this: python choices.py echo "$choice" Getting the error: choice: Undefined variable. Is there a way to make the python variable global so that it can be used in other places? Expected: choice1 Getting the error: choice: Undefined variable.
[ "Csh has no way to know what variables existed in a Python process which has now ceased to exist, just like you have no way to know what C variables exist internally in the C compiler.\nA common arrangement is to have your script output a value on standard output, and have the shell capture that:\nset choice=`python choices.py`\necho \"$choice\"\n\nOf course, if your script already needs standard output for other things (like communicating with the user!) you need a slightly different approach. Perhaps it can write the value to a file, and have the caller read that file?\nMore tangentially, the C shell has basically been abandoned by all its sane users some 30+ years ago in favor of Bourne-compatible shells. You generally do not want to write scripts using csh syntax unless you are writing them strictly for your own private use and you have a very odd masochistic fetish. The standard reference for persuading people to switch is Csh considered harmful though some of the arguments in that are slightly dubious; the conclusion is still forcefully argued with sane arguments even after you ignore some of the spurious ones. Perhaps see also https://www.grymoire.com/Unix/CshTop10.txt\nHere is the same construct in standard POSIX sh syntax:\n#!/bin/sh\nchoice=$(python choices.py)\necho \"$choice\"\n\n" ]
[ 0 ]
[]
[]
[ "csh", "python", "shell" ]
stackoverflow_0074488906_csh_python_shell.txt
Q: Why threading doesn't work in my Python script? I am trying to launch this code on my computer and threading doesn't work: import threading def infinite_loop(): while 1 == 1: pass def myname(): print("chralabya") t1 = threading.Thread(target=infinite_loop()) t2 = threading.Thread(target=myname()) t1.start() t2.start() When I execute this program, myname() is never executed. Can someone explain to me why threading doesn't work? A: target=infinite_loop() calls your function (note the ()) and assigns the result (which never comes) to the target parameter. That's not what you want! Instead, you want to pass the function itself to the Thread constructor: t1 = threading.Thread(target=infinite_loop) t2 = threading.Thread(target=myname)
Why threading doesn't work in my Python script?
I am trying to launch this code on my computer and threading doesn't work: import threading def infinite_loop(): while 1 == 1: pass def myname(): print("chralabya") t1 = threading.Thread(target=infinite_loop()) t2 = threading.Thread(target=myname()) t1.start() t2.start() When I execute this program, myname() is never executed. Can someone explain to me why threading doesn't work?
[ "target=inifinite_loop() calls your function (note the ()) and assigns the result (which never comes) to the target parameter. That's not what you want!\nInstead, you want to pass the function itself to the Thread constructor:\nt1 = threading.Thread(target=infinite_loop)\nt2 = threading.Thread(target=myname)\n\n" ]
[ 1 ]
[]
[]
[ "multithreading", "python", "python_3.x", "python_multithreading" ]
stackoverflow_0074489241_multithreading_python_python_3.x_python_multithreading.txt
Q: Want to display only specific values on the graph's x-axis, but it's showing repeated values from the csv file's columns I need to display only unique values on the x-axis, but it is showing all the values of a specific column of the csv file. Any suggestions on how to fix this? df=pd.read_csv('//media//HOTEL MANAGEMENT.csv') df.plot('Room_Type','Charges',color='g') plt.show()
Want to display only specific values on the graph's x-axis, but it's showing repeated values from the csv file's columns
I need to display only unique values on the x-axis, but it is showing all the values of a specific column of the csv file. Any suggestions on how to fix this? df=pd.read_csv('//media//HOTEL MANAGEMENT.csv') df.plot('Room_Type','Charges',color='g') plt.show()
[]
[]
[ "My assumption is that you are looking to plot the result of some aggregated data. e.g. Either:\n\nThe total charges per room type, or\nThe average charge per room type, or\nThe minimum/maximum charge per room type.\n\nIf so, you could so like:\ndf=pd.read_csv('//media//HOTEL MANAGEMENT.csv')\n\n# And use any of the following: \ndf.groupby('Room_Type')['Charges'].sum().plot(color='g')\ndf.groupby('Room_Type')['Charges'].mean().plot(color='g')\ndf.groupby('Room_Type')['Charges'].min().plot(color='g')\ndf.groupby('Room_Type')['Charges'].max().plot(color='g')\n\nSeeing that the x-axis may not necesarily be sequential, a comparative bar graph could be another way to plot.\ndf.groupby('Room_Type')['Charges'].mean().plot.bar(color=['r','g'])\n\n" ]
[ -1 ]
[ "csv", "matplotlib", "python" ]
stackoverflow_0074489133_csv_matplotlib_python.txt
Q: Passing variable to href django template I have a problem, and maybe I can give an example with the two views below of what I want to achieve. class SomeViewOne(TemplateView): model = None template_name = 'app/template1.html' def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) # The downloads view contains a list of countries eg France, Poland, Germany # This returns to context and lists these countries in template1 class ItemDetail(TemplateView): model = None template_name = 'app/template2.html' def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) countries_name = kwargs.get('str') The view should get the passed "x" with the name of the country, as I described below. Then on the page I have a list of these countries. After clicking on the selected country, a new tab should open and show a list of cities in the selected country. So I am using a loop in template1.html as below {% for x in list_countries %} <li> <a href="{% url 'some-name-url' '{{x}}' %}" class="target='_blank'">{{ x }}</a><br> </li> {% endfor %} I can't pass "x" this way. Why? The url for the next view looks like this path('some/countries/<str:x>/',views.ItemDetail.as_view(), name='some-name-url'), And I can't get that 'x' given in the template in the href A: If Manoj's solution doesn't work, try removing the single quotes AND {{ }}. In my program, my integer doesn't need to be wrapped with {{ }}, so maybe neither does your string. I have this in my code: {% for item in items %} <div class="item-title"> {{ item }}<br> </div> <a href="{% url 'core:edit_item' item.id %}">Edit {{ item }}</a> {% endfor %} It works just fine. Try: <a href="{% url 'some-name-url' x %}" class="target='_blank'">{{ x }}</a> A: There are several mistakes, such as: It should be only x in the url tag, neither {{x}} nor '{{x}}'. You have passed the value as x in the url params (some/countries/<str:x>/) but are accessing it using kwargs.get('str'), which is not correct; it should be kwargs.get('x'). Also, you are not including the variable countries_name in the context and not even returning the context. Note: Assuming that you are already getting some countries in the template1.html template, that's why you are running the loop. Try the below code: views.py class ItemDetail(TemplateView): template_name = 'template2.html' def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) context['countries_name'] = self.kwargs.get('x') return context Template1.html file {% for x in list_countries %} <li> <a onclick="window.open('{% url 'some-name-url' x %}', '_blank')" style='cursor:pointer;'>{{ x }}</a><br> </li> {% endfor %} Then you can use this countries_name value, passed from template1.html, in template2.html. template2.html <p>The clicked country is {{countries_name}}</p> A: You don't need to pass that variable with single quotes. <a href="{% url 'some-name-url' x %}" #Just removed the single quotes (and braces) around the variable x. And see if it shows on the template.
Passing variable to href django template
I have a problem, and maybe I can give an example with the two views below of what I want to achieve. class SomeViewOne(TemplateView): model = None template_name = 'app/template1.html' def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) # The downloads view contains a list of countries eg France, Poland, Germany # This returns to context and lists these countries in template1 class ItemDetail(TemplateView): model = None template_name = 'app/template2.html' def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) countries_name = kwargs.get('str') The view should get the passed "x" with the name of the country, as I described below. Then on the page I have a list of these countries. After clicking on the selected country, a new tab should open and show a list of cities in the selected country. So I am using a loop in template1.html as below {% for x in list_countries %} <li> <a href="{% url 'some-name-url' '{{x}}' %}" class="target='_blank'">{{ x }}</a><br> </li> {% endfor %} I can't pass "x" this way. Why? The url for the next view looks like this path('some/countries/<str:x>/',views.ItemDetail.as_view(), name='some-name-url'), And I can't get that 'x' given in the template in the href
[ "If Manoj's solution doesn't work, try removing the single quotes AND {{ }}. In my program, my integer doesnt need to be wrapped with {{ }}, so maybe neither does your string.\nI have this in my code:\n{% for item in items %}\n\n <div class=\"item-title\">\n {{ item }}<br>\n </div>\n <a href=\"{% url 'core:edit_item' item.id %}\">Edit {{ item }}</a>\n{% endfor %}\n\nIt works just fine. Try:\n<a href=\"{% url 'some-name-url' x %}\" class=\"target='_blank'\">{{ x }}</a>\n\n", "There are several mistakes such as:\n\nIt should be only x in url tag neither {{x}} nor '{{x}}'\n\nyou have passed the value as x in url params (some/countries/<str:x>/) and accessing it using kwargs.get('str') which is not correct it should be kwargs.get('x').\n\nAlso you are not including variable countries_name in context and not even returning context.\n\n\n\nNote: Assuming that you are already getting some companies in template1.html template that's why you are running loop.\n\nTry below code:\nviews.py\nclass ItemDetail(TemplateView):\n template_name = 'template2.html'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['countries_name'] = self.kwargs.get('x')\n return context\n\nTemplate1.html file\n{% for x in list_countries %}\n <li>\n <a onclick=\"window.open('{% url 'some-name-url' x %}', '_blank')\" style='cursor:pointer;'>{{ x }}</a><br>\n </li>\n{% endfor %}\n\nThen you can this countries_name value passed from template1.html in template2.html.\ntemplate2.html\n<p>The clicked country is {{countries_name}}</p>\n\n", "You don't need pass that variable with single quotes.\n<a href=\"{% url 'some-name-url' {{ x }} %}\" #Just removed single quotes from variable x.\n\nAnd see if it shows on template\n" ]
[ 1, 1, 0 ]
[]
[]
[ "django", "django_templates", "django_urls", "django_views", "python" ]
stackoverflow_0074488338_django_django_templates_django_urls_django_views_python.txt
Q: PyCharm warns for unresolved reference builtin datetime module I just installed the latest version of PyCharm (4.5). Now I am experiencing unresolved reference errors. At the top of my code I have: from datetime import datetime OS is Ubuntu 15.04. I already did Invalidate Caches/Restart several times. No difference. The project interpreter of my project is set to Python 2.7.6. I already reloaded the Interpreter Paths. The code works fine; it's just the IDE that produces an annoying error and no autocomplete anymore. A: As mentioned here, try to delete the content of the skeleton folder. It resides inside the settings folder (~/.PyCharmxxxx.xx/system/python_stubs) Removing/adding the python environment was not necessary for me. Simply restart PyCharm after removing the content (or the whole python_stubs folder) This does the trick for me, and now it works like a (py)charm again. A: That's a known issue, already reported at https://youtrack.jetbrains.com/issue/PY-15460 A: Instead of import datetime try this: from datetime import datetime Works for me.
PyCharm warns for unresolved reference builtin datetime module
I just installed the latest version of PyCharm (4.5). Now I am experiencing unresolved reference errors. At the top of my code I have: from datetime import datetime OS is Ubuntu 15.04. I already did Invalidate Caches/Restart several times. No difference. The project interpreter of my project is set to Python 2.7.6. I already reloaded the Interpreter Paths. The code works fine; it's just the IDE that produces an annoying error and no autocomplete anymore.
[ "As mentioned here try to delete the content of the skeleton folder. It reside inside of the settingsfolder (~/.PyCharmxxxx.xx/system/python_stubs)\nRemoving/adding the python environment was not necessary for me. Simply restart PyCharm after removing the content (or the whole python_stubs folder)\nThis does the trick for me and now it works like a (py)charm again.\n", "That's a known, already reported at https://youtrack.jetbrains.com/issue/PY-15460\n", "Instead of\nimport datetime\n\nTry this\nfrom datetime import datetime\n\nWorks for me.\n" ]
[ 9, 4, 0 ]
[]
[]
[ "pycharm", "python" ]
stackoverflow_0030311954_pycharm_python.txt
Q: Reverse certain elements in a 2d array to produce a matrix in the specified format, Python 3 I have the following code for a list of lists with the intention of creating a matrix of numbers: grid=[[1,2,3,4,5,6,7],[8,9,10,11,12],[13,14,15,16,17],[18,19,20,21,22]] On using the following code which i figured out would reverse the list, it produces a matrix ... for i in reversed(grid): print(i) The output is: [18, 19, 20, 21, 22] [13, 14, 15, 16, 17] [8, 9, 10, 11, 12] [1, 2, 3, 4, 5, 6, 7] I want however, the output to be as below, so that the numbers "connect" as they go up: [22,21,20,19,18] [13,14,15,16,17] [12,11,10,9,8] [1,2,3,4,5,6,7] Also, for an upvote, I'd be interested in more efficient ways of generating the matrix in the first place. For instance, to generate a 7x7 array - can it be done using a variable, for instance 7, or 49. Or for a 10x10 matrix, 10, or 100? UPDATE: Yes, sorry - the sublists should all be of the same size. Typo above UPDATE BASED ON ANSWER BELOW These two lines: >>> grid=[[1,2,3,4,5,6,7],[8,9,10,11,12],[13,14,15,16,17],[18,18,20,21,22]] >>> [lst[::-1] for lst in grid[::-1]] produce the following output: [[22, 21, 20, 18, 18], [17, 16, 15, 14, 13], [12, 11, 10, 9, 8], [7, 6, 5, 4, 3, 2, 1]] but I want them to print one line after the other, like a matrix ....also, so I can check the output is as I specified. That's all I need essentially, for the answer to be the answer! A: You need to reverse the list and also the sub-lists: [lst[::-1] for lst in grid[::-1]] Note that lst[::-1] reverses the list via list slicing, see here. You can visualize the resulting nested lists across multiples lines with pprint: >>> from pprint import pprint >>> pprint([lst[::-1] for lst in grid[::-1]]) [[22, 21, 20, 19, 18], [17, 16, 15, 14, 13], [12, 11, 10, 9, 8], [7, 6, 5, 4, 3, 2, 1]] A: usually 2D matrices are created, manipulated with numpy then index slicing can reorder rows, columns import numpy as np def SnakeMatrx(n): Sq, Sq.shape = np.arange(n * n), (n, n) # Sq matrix filled with a range Sq[1::2,:] = Sq[1::2,::-1] # reverse odd row's columns return Sq[::-1,:] + 1 # reverse order of rows, add 1 to every entry SnakeMatrx(5) Out[33]: array([[21, 22, 23, 24, 25], [20, 19, 18, 17, 16], [11, 12, 13, 14, 15], [10, 9, 8, 7, 6], [ 1, 2, 3, 4, 5]]) SnakeMatrx(4) Out[34]: array([[16, 15, 14, 13], [ 9, 10, 11, 12], [ 8, 7, 6, 5], [ 1, 2, 3, 4]]) if you really want a list of lists: SnakeMatrx(4).tolist() Out[39]: [[16, 15, 14, 13], [9, 10, 11, 12], [8, 7, 6, 5], [1, 2, 3, 4]] numpy is popular but not a official Standard Library in Python distributions of course it can be done with list manipulation def SnakeLoL(n): Sq = [[1 + i + n * j for i in range(n)] for j in range(n)] # Sq LoL filled with a range for row in Sq[1::2]: row.reverse() # reverse odd row's columns return Sq[::-1][:] # reverse order of rows # or maybe more Pythonic for return Sq[::-1][:] # Sq.reverse() # reverse order of rows # return Sq SnakeLoL(4) Out[91]: [[16, 15, 14, 13], [9, 10, 11, 12], [8, 7, 6, 5], [1, 2, 3, 4]] SnakeLoL(5) Out[92]: [[21, 22, 23, 24, 25], [20, 19, 18, 17, 16], [11, 12, 13, 14, 15], [10, 9, 8, 7, 6], [1, 2, 3, 4, 5]] print(*SnakeLoL(4), sep='\n') [16, 15, 14, 13] [9, 10, 11, 12] [8, 7, 6, 5] [1, 2, 3, 4] A: Simple way of python: list(map(lambda i: print(i), [lst[::-1] for lst in grid[::-1]]))
Reverse certain elements in a 2d array to produce a matrix in the specified format, Python 3
I have the following code for a list of lists with the intention of creating a matrix of numbers: grid=[[1,2,3,4,5,6,7],[8,9,10,11,12],[13,14,15,16,17],[18,19,20,21,22]] On using the following code, which I figured out would reverse the list, it produces a matrix ... for i in reversed(grid): print(i) The output is: [18, 19, 20, 21, 22] [13, 14, 15, 16, 17] [8, 9, 10, 11, 12] [1, 2, 3, 4, 5, 6, 7] I want, however, the output to be as below, so that the numbers "connect" as they go up: [22,21,20,19,18] [13,14,15,16,17] [12,11,10,9,8] [1,2,3,4,5,6,7] Also, for an upvote, I'd be interested in more efficient ways of generating the matrix in the first place. For instance, to generate a 7x7 array - can it be done using a variable, for instance 7, or 49? Or for a 10x10 matrix, 10, or 100? UPDATE: Yes, sorry - the sublists should all be of the same size. Typo above UPDATE BASED ON ANSWER BELOW These two lines: >>> grid=[[1,2,3,4,5,6,7],[8,9,10,11,12],[13,14,15,16,17],[18,18,20,21,22]] >>> [lst[::-1] for lst in grid[::-1]] produce the following output: [[22, 21, 20, 18, 18], [17, 16, 15, 14, 13], [12, 11, 10, 9, 8], [7, 6, 5, 4, 3, 2, 1]] but I want them to print one line after the other, like a matrix ... also, so I can check the output is as I specified. That's all I need essentially, for the answer to be the answer!
[ "You need to reverse the list and also the sub-lists:\n[lst[::-1] for lst in grid[::-1]]\n\nNote that lst[::-1] reverses the list via list slicing, see here. \nYou can visualize the resulting nested lists across multiples lines with pprint:\n>>> from pprint import pprint\n>>> pprint([lst[::-1] for lst in grid[::-1]])\n[[22, 21, 20, 19, 18],\n [17, 16, 15, 14, 13],\n [12, 11, 10, 9, 8],\n [7, 6, 5, 4, 3, 2, 1]]\n\n", "usually 2D matrices are created, manipulated with numpy\nthen index slicing can reorder rows, columns\nimport numpy as np\n\ndef SnakeMatrx(n):\n Sq, Sq.shape = np.arange(n * n), (n, n) # Sq matrix filled with a range\n Sq[1::2,:] = Sq[1::2,::-1] # reverse odd row's columns\n return Sq[::-1,:] + 1 # reverse order of rows, add 1 to every entry\n\nSnakeMatrx(5)\nOut[33]: \narray([[21, 22, 23, 24, 25],\n [20, 19, 18, 17, 16],\n [11, 12, 13, 14, 15],\n [10, 9, 8, 7, 6],\n [ 1, 2, 3, 4, 5]])\n\nSnakeMatrx(4)\nOut[34]: \narray([[16, 15, 14, 13],\n [ 9, 10, 11, 12],\n [ 8, 7, 6, 5],\n [ 1, 2, 3, 4]])\n\nif you really want a list of lists:\nSnakeMatrx(4).tolist()\nOut[39]: [[16, 15, 14, 13], [9, 10, 11, 12], [8, 7, 6, 5], [1, 2, 3, 4]]\n\nnumpy is popular but not a official Standard Library in Python distributions\nof course it can be done with list manipulation\ndef SnakeLoL(n):\n Sq = [[1 + i + n * j for i in range(n)] for j in range(n)] # Sq LoL filled with a range\n for row in Sq[1::2]:\n row.reverse() # reverse odd row's columns\n return Sq[::-1][:] # reverse order of rows\n# or maybe more Pythonic for return Sq[::-1][:]\n# Sq.reverse() # reverse order of rows \n# return Sq \n\nSnakeLoL(4)\nOut[91]: [[16, 15, 14, 13], [9, 10, 11, 12], [8, 7, 6, 5], [1, 2, 3, 4]]\n\nSnakeLoL(5)\nOut[92]: \n[[21, 22, 23, 24, 25],\n [20, 19, 18, 17, 16],\n [11, 12, 13, 14, 15],\n [10, 9, 8, 7, 6],\n [1, 2, 3, 4, 5]]\n\nprint(*SnakeLoL(4), sep='\\n')\n[16, 15, 14, 13]\n[9, 10, 11, 12]\n[8, 7, 6, 5]\n[1, 2, 3, 4]\n\n", "Simple way of python:\nlist(map(lambda i: print(i), [lst[::-1] for lst in grid[::-1]]))\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "list", "matrix", "python", "reverse" ]
stackoverflow_0041983087_list_matrix_python_reverse.txt
Q: VSCode not recognizing python import and functions Can someone let me know what the squiggly lines represent in the image? The actual error that shows up when I hover my mouse over the squiggly line is: Import "pyspark.sql.functions" could not be resolved (Pylance) I'm not sure what that means, but I'm getting the error for almost all functions in VSCode. Can someone let me know how to resolve it? A: I had the same error as you. VSCode usually has a "recommended" interpreter, but sometimes it won't help you out with what you need. So, I changed the interpreter (Ctrl + Shift + P in VSCode). Look for "Python: Select Interpreter". Choose the one that contains the name "Conda". And that's how the magic happens. A: Check also the workspaces. If you have two or more workspaces open (multiple codebase folders), it will be hard for the editor to associate the interpreter/linter with the code.
VSCode not recognizing python import and functions
Can someone let me know what the squiggly lines represent in the image? The actual error that shows up when I hover my mouse over the squiggly line is: Import "pyspark.sql.functions" could not be resolved (Pylance) I'm not sure what that means, but I'm getting the error for almost all functions in VSCode. Can someone let me know how to resolve it?
[ "I was with the same error as yours. VSCode usually has a \"recommended\" interpreter, but sometimes it won't help you out with what you need. So,\n\nI changed the Interpeter (ctrl + shift + p in VSCODE).\nLook for \"Python: Select Interpreter.\nChoose the one who contains the name \"Conda\"\n\nAnd that's how the magic happens.\n", "Check also the Workspaces. If you have two or many workspaces opened (many codebase folders ), the editor will be hard to associate the interpreter-linter with the code.\n" ]
[ 2, 0 ]
[]
[]
[ "python", "visual_studio_code" ]
stackoverflow_0070362595_python_visual_studio_code.txt
Q: _tkinter.TclError: can't delete Tcl command - customtkinter - custom prompt What do I need I am trying to implement a custom Yes / No prompt box with the help of tkinter. However, I don't want to use the default messagebox, because I require the following two functionalities: a default value a countdown after which the widget destroys itself and takes the default value as answer What are the unpredictable errors I've managed to implement these requirements with the code below, however I get some really unpredictable behaviour when using the widgets, in the following sense: Sometimes everything works as expected. When I press the buttons, the correct answer is stored, or if I let the countdown time out, the default answer is stored, or if I click the close-window button it correctly applies the default value as answer. But then, at times when I click the buttons, I get some weird errors _tkinter.TclError: invalid command name ".!ctkframe2.!ctkcanvas" (see execution log below for the whole stacktrace) I suspect it has something to do with the timer, since the errors do not always appear when the buttons are pressed. It is really driving me crazy... example code # util_gui_classes.py # -*- coding: utf-8 -*- """ Classes which serve for gui applications. """ from typing import Any import tkinter import tkinter.messagebox import customtkinter # ____________________________________________________________________________________________ customtkinter.set_appearance_mode('System') # Modes: 'System' (standard), 'Dark', 'Light' customtkinter.set_default_color_theme('blue') # Themes: 'blue' (standard), 'green', 'dark-blue' # ____________________________________________________________________________________________ class GuiPromptYesNo(customtkinter.CTk): """ Creates a yes / no gui based prompt with default value and countdown functionality. 
The user input will be stored in: > instance.answer """ WIDTH = 500 HEIGHT = 200 def __init__(self, question: str, default_value: str = 'no', countdown_seconds: int = 0): super().__init__() self.title('input required') self.geometry(f'{self.__class__.WIDTH}x{self.__class__.HEIGHT}') self.protocol('WM_DELETE_WINDOW', self.on_closing) # call .on_closing() when app gets closed self.resizable(False, False) self.question = question self.answer = None self.default_value = default_value self.countdown_seconds = countdown_seconds self.remaining_seconds = countdown_seconds # ============ create top-level-frames ============ # configure grid layout (4x1) self.equal_weighted_grid(self, 4, 1) self.grid_rowconfigure(0, minsize=10) self.grid_rowconfigure(3, minsize=10) self.frame_label = customtkinter.CTkFrame(master=self, corner_radius=10) self.frame_label.grid(row=1, column=0) self.frame_buttons = customtkinter.CTkFrame(master=self, corner_radius=0, fg_color=None) self.frame_buttons.grid(row=2, column=0) # ============ design frame_label ============ # configure grid layout (5x4) self.equal_weighted_grid(self.frame_label, 5, 4) self.frame_label.grid_rowconfigure(0, minsize=10) self.frame_label.grid_rowconfigure(2, minsize=10) self.frame_label.grid_rowconfigure(5, minsize=10) self.label_question = customtkinter.CTkLabel( master=self.frame_label, text=self.question, text_font=('Consolas',), ) self.label_question.grid(row=1, column=0, columnspan=4, pady=5, padx=10) self.label_default_value = customtkinter.CTkLabel( master=self.frame_label, text='default value: ', text_font=('Consolas',), ) self.label_default_value.grid(row=3, column=0, pady=5, padx=10) self.entry_default_value = customtkinter.CTkEntry( master=self.frame_label, width=40, justify='center', placeholder_text=self.default_value, state='disabled', textvariable=tkinter.StringVar(value=self.default_value), text_font=('Consolas',), ) self.entry_default_value.grid(row=3, column=1, pady=5, padx=10) if countdown_seconds > 0: self.label_timer = customtkinter.CTkLabel( master=self.frame_label, text='timer [s]: ', text_font=('Consolas',), ) self.label_timer.grid(row=3, column=2, pady=5, padx=10) self.entry_timer = customtkinter.CTkEntry( master=self.frame_label, width=40, justify='center', state='disabled', textvariable=tkinter.StringVar(value=str(self.remaining_seconds)), placeholder_text=str(self.remaining_seconds), text_font=('Consolas',), ) self.entry_timer.grid(row=3, column=3, pady=5, padx=10) # ============ design frame_buttons ============ # configure grid layout (3x2) self.equal_weighted_grid(self.frame_buttons, 3, 2) self.frame_buttons.grid_rowconfigure(0, minsize=10) self.frame_buttons.grid_rowconfigure(2, minsize=10) self.button_yes = customtkinter.CTkButton( master=self.frame_buttons, text='yes', text_font=('Consolas',), command=lambda: self.button_event('yes'), ) self.button_yes.grid(row=1, column=0, pady=5, padx=20) self.button_no = customtkinter.CTkButton( master=self.frame_buttons, text='no', text_font=('Consolas',), command=lambda: self.button_event('no'), ) self.button_no.grid(row=1, column=1, pady=5, padx=20) if self.countdown_seconds > 0: self.countdown() self.attributes('-topmost', True) self.mainloop() # __________________________________________________________ # methods @staticmethod def equal_weighted_grid(obj: Any, rows: int, cols: int): """configures the grid to be of equal cell sizes for rows and columns.""" for row in range(rows): obj.grid_rowconfigure(row, weight=1) for col in range(cols): obj.grid_columnconfigure(col, 
weight=1) def button_event(self, answer): """Stores the user input as instance attribute `answer`.""" self.answer = answer self.terminate() def countdown(self): """Sets the timer for the question.""" if self.answer is not None: self.terminate() elif self.remaining_seconds < 0: self.answer = self.default_value self.terminate() else: self.entry_timer.configure(textvariable=tkinter.StringVar(value=str(self.remaining_seconds))) self.remaining_seconds -= 1 self.after(1000, self.countdown) def stop_after_callbacks(self): """Stops all after callbacks on the root.""" for after_id in self.tk.eval('after info').split(): self.after_cancel(after_id) def on_closing(self, event=0): """If the user presses the window x button without providing input""" if self.answer is None and self.default_value is not None: self.answer = self.default_value self.terminate() def terminate(self): """Properly terminates the gui.""" # stop all .after callbacks to avoid error message "Invalid command ..." after destruction self.stop_after_callbacks() self.destroy() # ____________________________________________________________________________________________ if __name__ == '__main__': print('\n', 'do some python stuff before', '\n', sep='') q1 = GuiPromptYesNo(question='1. do you want to proceed?', countdown_seconds=5) print(f'>>>{q1.answer=}') print('\n', 'do some python stuff in between', '\n', sep='') q2 = GuiPromptYesNo(question='2. do you want to proceed?', countdown_seconds=5) print(f'>>>{q2.answer=}') print('\n', 'do some python stuff at the end', '\n', sep='') # ____________________________________________________________________________________________ execution log with errors The first three tests where successful (clicking buttons included), after that the errors appeared. 
(py311) C:\Users\user\PycharmProjects\Sandbox\gui_tools>python util_guis.py do some python stuff before q1.answer='yes' do some python stuff in between q2.answer='no' do some python stuff at the end (py311) C:\Users\user\PycharmProjects\Sandbox\gui_tools>python util_guis.py do some python stuff before q1.answer='yes' do some python stuff in between q2.answer='yes' do some python stuff at the end (py311) C:\Users\user\PycharmProjects\Sandbox\gui_tools>python util_guis.py do some python stuff before q1.answer='no' do some python stuff in between q2.answer='no' do some python stuff at the end (py311) C:\Users\user\PycharmProjects\Sandbox\gui_tools>python util_guis.py do some python stuff before >>>q1.answer='yes' do some python stuff in between Exception in Tkinter callback Traceback (most recent call last): File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 1948, in __call__ return self.func(*args) ^^^^^^^^^^^^^^^^ File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 861, in callit func(*args) File "C:\Users\user\PycharmProjects\Sandbox\gui_tools\util_guis.py", line 197, in countdown self.terminate() File "C:\Users\user\PycharmProjects\Sandbox\gui_tools\util_guis.py", line 224, in terminate child.destroy() File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\widget_base_class.py", line 85, in destroy super().destroy() File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 2635, in destroy for c in list(self.children.values()): c.destroy() ^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\widget_base_class.py", line 85, in destroy super().destroy() File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 2639, in destroy Misc.destroy(self) File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 687, in destroy self.tk.deletecommand(name) _tkinter.TclError: can't delete Tcl command >>>q2.answer='no' do some python stuff at the end (py311) C:\Users\user\PycharmProjects\Sandbox\gui_tools>python util_guis.py do some python stuff before Exception in Tkinter callback Traceback (most recent call last): File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 1948, in __call__ return self.func(*args) ^^^^^^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\ctk_button.py", line 377, in clicked self.command() File "C:\Users\user\PycharmProjects\Sandbox\gui_tools\util_guis.py", line 124, in <lambda> command=lambda: self.button_event('yes'), ^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\PycharmProjects\Sandbox\gui_tools\util_guis.py", line 156, in button_event self.terminate() File "C:\Users\user\PycharmProjects\Sandbox\gui_tools\util_guis.py", line 186, in terminate self.destroy() File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\windows\ctk_tk.py", line 108, in destroy super().destroy() File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 2367, in destroy for c in list(self.children.values()): c.destroy() ^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\widget_base_class.py", line 85, in destroy super().destroy() File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 2635, in destroy for c in list(self.children.values()): c.destroy() ^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\widget_base_class.py", line 85, in destroy super().destroy() File "C:\Program 
Files\Python311\Lib\tkinter\__init__.py", line 2639, in destroy Misc.destroy(self) File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 687, in destroy self.tk.deletecommand(name) _tkinter.TclError: can't delete Tcl command Exception in Tkinter callback Traceback (most recent call last): File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 1948, in __call__ return self.func(*args) ^^^^^^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\widget_base_class.py", line 142, in update_dimensions_event self.draw(no_color_updates=True) # faster drawing without color changes ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\ctk_frame.py", line 80, in draw requires_recoloring = self.draw_engine.draw_rounded_rect_with_border(self.apply_widget_scaling(self._current_width), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\draw_engine.py", line 88, in draw_rounded_rect_with_border return self.__draw_rounded_rect_with_border_font_shapes(width, height, corner_radius, border_width, inner_corner_radius, ()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\draw_engine.py", line 207, in __draw_rounded_rect_with_border_font_shapes self._canvas.delete("border_parts") File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 2879, in delete self.tk.call((self._w, 'delete') + args) _tkinter.TclError: invalid command name ".!ctkframe2.!ctkcanvas" >>>q1.answer='yes' do some python stuff in between Exception in Tkinter callback Traceback (most recent call last): File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 1948, in __call__ return self.func(*args) ^^^^^^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\ctk_button.py", line 377, in clicked self.command() super().destroy() File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 2639, in destroy Misc.destroy(self) File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 687, in destroy self.tk.deletecommand(name) _tkinter.TclError: can't delete Tcl command >>>q2.answer='no' do some python stuff at the end (py311) C:\Users\user\PycharmProjects\Sandbox\gui_tools> requirements I am using Windows 11 as os and have a virtual python 3.11 environment with customtkinter installed. EDIT: With the help of @Thingamabobs answer I managed to achive the expected behaviour without getting the errors. Here is the final code: # util_gui_classes.py # -*- coding: utf-8 -*- """ Classes which serve for gui applications. """ from typing import Any import tkinter import tkinter.messagebox import customtkinter from _tkinter import TclError # _______________________________________________________________________ customtkinter.set_appearance_mode('System') # Modes: 'System' (standard), 'Dark', 'Light' customtkinter.set_default_color_theme('blue') # Themes: 'blue' (standard), 'green', 'dark-blue' # _______________________________________________________________________ class GuiPromptYesNo(customtkinter.CTk): """ Creates a yes / no gui based prompt with default value and countdown functionality. 
The user input will be stored in: >>> instance.answer """ WIDTH = 500 HEIGHT = 200 def __init__(self, question: str, default_value: str = 'no', countdown_seconds: int = 0): super().__init__() self.terminated = False self.title('input required') self.geometry(f'{self.__class__.WIDTH}x{self.__class__.HEIGHT}') self.protocol('WM_DELETE_WINDOW', self.on_closing) # call .on_closing() when app gets closed self.resizable(False, False) self.question = question self.answer = None self.default_value = default_value self.countdown_seconds = countdown_seconds self.remaining_seconds = countdown_seconds # ============ create top-level-frames ============ # configure grid layout (4x1) self.equal_weighted_grid(self, 4, 1) self.grid_rowconfigure(0, minsize=10) self.grid_rowconfigure(3, minsize=10) self.frame_label = customtkinter.CTkFrame(master=self, corner_radius=10) self.frame_label.grid(row=1, column=0) self.frame_buttons = customtkinter.CTkFrame(master=self, corner_radius=0, fg_color=None) self.frame_buttons.grid(row=2, column=0) # ============ design frame_label ============ # configure grid layout (5x4) self.equal_weighted_grid(self.frame_label, 5, 4) self.frame_label.grid_rowconfigure(0, minsize=10) self.frame_label.grid_rowconfigure(2, minsize=10) self.frame_label.grid_rowconfigure(5, minsize=10) self.label_question = customtkinter.CTkLabel( master=self.frame_label, text=self.question, text_font=('Consolas',), ) self.label_question.grid(row=1, column=0, columnspan=4, pady=5, padx=10) self.label_default_value = customtkinter.CTkLabel( master=self.frame_label, text='default value: ', text_font=('Consolas',), ) self.label_default_value.grid(row=3, column=0, pady=5, padx=10) self.entry_default_value = customtkinter.CTkEntry( master=self.frame_label, width=40, justify='center', placeholder_text=self.default_value, state='disabled', textvariable=tkinter.StringVar(value=self.default_value), text_font=('Consolas',), ) self.entry_default_value.grid(row=3, column=1, pady=5, padx=10) if countdown_seconds > 0: self.label_timer = customtkinter.CTkLabel( master=self.frame_label, text='timer [s]: ', text_font=('Consolas',), ) self.label_timer.grid(row=3, column=2, pady=5, padx=10) self.entry_timer = customtkinter.CTkEntry( master=self.frame_label, width=40, justify='center', state='disabled', textvariable=tkinter.StringVar(value=str(self.remaining_seconds)), placeholder_text=str(self.remaining_seconds), text_font=('Consolas',), ) self.entry_timer.grid(row=3, column=3, pady=5, padx=10) # ============ design frame_buttons ============ # configure grid layout (3x2) self.equal_weighted_grid(self.frame_buttons, 3, 2) self.frame_buttons.grid_rowconfigure(0, minsize=10) self.frame_buttons.grid_rowconfigure(2, minsize=10) self.button_yes = customtkinter.CTkButton( master=self.frame_buttons, text='yes', text_font=('Consolas',), command=lambda: self.button_event('yes'), ) self.button_yes.grid(row=1, column=0, pady=5, padx=20) self.button_no = customtkinter.CTkButton( master=self.frame_buttons, text='no', text_font=('Consolas',), command=lambda: self.button_event('no'), ) self.button_no.grid(row=1, column=1, pady=5, padx=20) if self.countdown_seconds > 0: self.countdown() self.attributes('-topmost', True) self.mainloop() # __________________________________________________________ # methods @staticmethod def equal_weighted_grid(obj: Any, rows: int, cols: int): """configures the grid to be of equal cell sizes for rows and columns.""" for row in range(rows): obj.grid_rowconfigure(row, weight=1) for col in range(cols): 
obj.grid_columnconfigure(col, weight=1) def button_event(self, answer): """Stores the user input as instance attribute `answer`.""" self.answer = answer self.terminate() def countdown(self): """Sets the timer for the question.""" if self.answer is not None: self.terminate() elif self.remaining_seconds < 0: self.answer = self.default_value self.terminate() else: self.entry_timer.configure(textvariable=tkinter.StringVar(value=str(self.remaining_seconds))) self.remaining_seconds -= 1 self.after(1000, self.countdown) def stop_after_callbacks(self): """Stops all after callbacks on the root.""" for after_id in self.tk.eval('after info').split(): self.after_cancel(after_id) def on_closing(self, event=0): """If the user presses the window x button without providing input""" if self.answer is None and self.default_value is not None: self.answer = self.default_value self.terminate() def terminate(self): """Properly terminates the gui.""" # stop all .after callbacks to avoid error message "Invalid command ..." after destruction self.stop_after_callbacks() if not self.terminated: self.terminated = True try: self.destroy() except TclError: self.destroy() # _______________________________________________________________________ if __name__ == '__main__': print('before') q1 = GuiPromptYesNo(question='1. do you want to proceed?', countdown_seconds=5) print(f'>>>{q1.answer=}') print('between') q2 = GuiPromptYesNo(question='2. do you want to proceed?', countdown_seconds=5) print(f'>>>{q2.answer=}') print('after') # _______________________________________________________________________ BTW: the class can also be found in my package utils_nm inside the module util_gui_classes. A: While I don't have CTk available to give you the exact code, I can tell you exactly what is wrong and how you need to solve it. You have a self-repeating function via after here: def countdown(self): """Sets the timer for the question.""" if self.answer is not None: self.terminate() elif self.remaining_seconds < 0: self.answer = self.default_value self.terminate() else: self.entry_timer.configure(textvariable=tkinter.StringVar(value=str(self.remaining_seconds))) self.remaining_seconds -= 1 self.after(1000, self.countdown) The problem you are facing is that you try to delete a window that has already been destroyed in an after call. So depending on which event is quicker, you run into an error or not. Why does this happen? Whenever you give an instruction, regardless of what it is (with a few exceptions, e.g. update), it is placed in an event queue and scheduled in some sort of FIFO (first in, first out). So the oldest event gets processed. That means you can try to cancel an alarm but end up running the alarm before you actually cancel it. How to solve it? There are different approaches available. The easiest and cleanest solution, in my opinion, is to set a flag and avoid trying to destroy an already destroyed window. Something like: def __init__(self, question: str, default_value: str = 'no', countdown_seconds: int = 0): super().__init__() self.terminated = False and set it like: def terminate(self): """Properly terminates the gui.""" # stop all .after callbacks to avoid error message "Invalid command ..." after destruction #self.stop_after_callbacks() shouldn't be needed if not self.terminated: self.terminated = True self.destroy() In addition to the above proposal, I suggest you set up a protocol handler for WM_DELETE_WINDOW and set the flag there too, since the error probably also occurs when destroying the window via the window manager. 
A different approach is a try/except block in terminate where you catch _tkinter.TclError. But note the underscore: the module is not intended to be used directly, it can change in the future, and that might break your app again, even if this seems unlikely.
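To illustrate the answer's flag idea outside of customtkinter, a minimal plain-tkinter sketch; the terminated name is taken from the answer, the rest is an illustrative reduction, not the original class:

import tkinter as tk

class Prompt(tk.Tk):
    def __init__(self):
        super().__init__()
        self.terminated = False  # guard flag from the answer
        self.protocol('WM_DELETE_WINDOW', self.terminate)
        tk.Button(self, text='yes', command=self.terminate).pack()
        self.after(1000, self.countdown)

    def countdown(self):
        if self.terminated:
            return  # a stale alarm fired after destruction was requested
        self.after(1000, self.countdown)

    def terminate(self):
        if not self.terminated:  # destroy at most once
            self.terminated = True
            self.destroy()

Prompt().mainloop()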
_tkinter.TclError: can't delete Tcl command - customtkinter - custom prompt
What do I need I am trying to implement a custom Yes / No prompt box with help of tkinter. However I don't want to use the default messagebox, because I require the following two functionalites: a default value a countdown after which the widget destroys itself and takes the default value as answer What are the unpredictable errors I've managed to implement these requirements with the code below, however I get some really unpredictable behaviour when using the widgets in the following sense: Sometimes everything works as expected. When I press the buttons, the correct answer is stored, or if I let the countdown time out, the default answer is stored, or if I click the close-window it correctly applies the default value as answer. But then, at times when I click the buttons, I get some wierd errors _tkinter.TclError: invalid command name ".!ctkframe2.!ctkcanvas" (see execution log below for whole stacktrace) I suspect it has something to do with the timer, since the errors do not always apper when the buttons are pressed. It is really driving me crazy... example code # util_gui_classes.py # -*- coding: utf-8 -*- """ Classes which serve for gui applications. """ from typing import Any import tkinter import tkinter.messagebox import customtkinter # ____________________________________________________________________________________________ customtkinter.set_appearance_mode('System') # Modes: 'System' (standard), 'Dark', 'Light' customtkinter.set_default_color_theme('blue') # Themes: 'blue' (standard), 'green', 'dark-blue' # ____________________________________________________________________________________________ class GuiPromptYesNo(customtkinter.CTk): """ Creates a yes / no gui based prompt with default value and countdown functionality. The user input will be stored in: > instance.answer """ WIDTH = 500 HEIGHT = 200 def __init__(self, question: str, default_value: str = 'no', countdown_seconds: int = 0): super().__init__() self.title('input required') self.geometry(f'{self.__class__.WIDTH}x{self.__class__.HEIGHT}') self.protocol('WM_DELETE_WINDOW', self.on_closing) # call .on_closing() when app gets closed self.resizable(False, False) self.question = question self.answer = None self.default_value = default_value self.countdown_seconds = countdown_seconds self.remaining_seconds = countdown_seconds # ============ create top-level-frames ============ # configure grid layout (4x1) self.equal_weighted_grid(self, 4, 1) self.grid_rowconfigure(0, minsize=10) self.grid_rowconfigure(3, minsize=10) self.frame_label = customtkinter.CTkFrame(master=self, corner_radius=10) self.frame_label.grid(row=1, column=0) self.frame_buttons = customtkinter.CTkFrame(master=self, corner_radius=0, fg_color=None) self.frame_buttons.grid(row=2, column=0) # ============ design frame_label ============ # configure grid layout (5x4) self.equal_weighted_grid(self.frame_label, 5, 4) self.frame_label.grid_rowconfigure(0, minsize=10) self.frame_label.grid_rowconfigure(2, minsize=10) self.frame_label.grid_rowconfigure(5, minsize=10) self.label_question = customtkinter.CTkLabel( master=self.frame_label, text=self.question, text_font=('Consolas',), ) self.label_question.grid(row=1, column=0, columnspan=4, pady=5, padx=10) self.label_default_value = customtkinter.CTkLabel( master=self.frame_label, text='default value: ', text_font=('Consolas',), ) self.label_default_value.grid(row=3, column=0, pady=5, padx=10) self.entry_default_value = customtkinter.CTkEntry( master=self.frame_label, width=40, justify='center', 
placeholder_text=self.default_value, state='disabled', textvariable=tkinter.StringVar(value=self.default_value), text_font=('Consolas',), ) self.entry_default_value.grid(row=3, column=1, pady=5, padx=10) if countdown_seconds > 0: self.label_timer = customtkinter.CTkLabel( master=self.frame_label, text='timer [s]: ', text_font=('Consolas',), ) self.label_timer.grid(row=3, column=2, pady=5, padx=10) self.entry_timer = customtkinter.CTkEntry( master=self.frame_label, width=40, justify='center', state='disabled', textvariable=tkinter.StringVar(value=str(self.remaining_seconds)), placeholder_text=str(self.remaining_seconds), text_font=('Consolas',), ) self.entry_timer.grid(row=3, column=3, pady=5, padx=10) # ============ design frame_buttons ============ # configure grid layout (3x2) self.equal_weighted_grid(self.frame_buttons, 3, 2) self.frame_buttons.grid_rowconfigure(0, minsize=10) self.frame_buttons.grid_rowconfigure(2, minsize=10) self.button_yes = customtkinter.CTkButton( master=self.frame_buttons, text='yes', text_font=('Consolas',), command=lambda: self.button_event('yes'), ) self.button_yes.grid(row=1, column=0, pady=5, padx=20) self.button_no = customtkinter.CTkButton( master=self.frame_buttons, text='no', text_font=('Consolas',), command=lambda: self.button_event('no'), ) self.button_no.grid(row=1, column=1, pady=5, padx=20) if self.countdown_seconds > 0: self.countdown() self.attributes('-topmost', True) self.mainloop() # __________________________________________________________ # methods @staticmethod def equal_weighted_grid(obj: Any, rows: int, cols: int): """configures the grid to be of equal cell sizes for rows and columns.""" for row in range(rows): obj.grid_rowconfigure(row, weight=1) for col in range(cols): obj.grid_columnconfigure(col, weight=1) def button_event(self, answer): """Stores the user input as instance attribute `answer`.""" self.answer = answer self.terminate() def countdown(self): """Sets the timer for the question.""" if self.answer is not None: self.terminate() elif self.remaining_seconds < 0: self.answer = self.default_value self.terminate() else: self.entry_timer.configure(textvariable=tkinter.StringVar(value=str(self.remaining_seconds))) self.remaining_seconds -= 1 self.after(1000, self.countdown) def stop_after_callbacks(self): """Stops all after callbacks on the root.""" for after_id in self.tk.eval('after info').split(): self.after_cancel(after_id) def on_closing(self, event=0): """If the user presses the window x button without providing input""" if self.answer is None and self.default_value is not None: self.answer = self.default_value self.terminate() def terminate(self): """Properly terminates the gui.""" # stop all .after callbacks to avoid error message "Invalid command ..." after destruction self.stop_after_callbacks() self.destroy() # ____________________________________________________________________________________________ if __name__ == '__main__': print('\n', 'do some python stuff before', '\n', sep='') q1 = GuiPromptYesNo(question='1. do you want to proceed?', countdown_seconds=5) print(f'>>>{q1.answer=}') print('\n', 'do some python stuff in between', '\n', sep='') q2 = GuiPromptYesNo(question='2. 
do you want to proceed?', countdown_seconds=5) print(f'>>>{q2.answer=}') print('\n', 'do some python stuff at the end', '\n', sep='') # ____________________________________________________________________________________________ execution log with errors The first three tests where successful (clicking buttons included), after that the errors appeared. (py311) C:\Users\user\PycharmProjects\Sandbox\gui_tools>python util_guis.py do some python stuff before q1.answer='yes' do some python stuff in between q2.answer='no' do some python stuff at the end (py311) C:\Users\user\PycharmProjects\Sandbox\gui_tools>python util_guis.py do some python stuff before q1.answer='yes' do some python stuff in between q2.answer='yes' do some python stuff at the end (py311) C:\Users\user\PycharmProjects\Sandbox\gui_tools>python util_guis.py do some python stuff before q1.answer='no' do some python stuff in between q2.answer='no' do some python stuff at the end (py311) C:\Users\user\PycharmProjects\Sandbox\gui_tools>python util_guis.py do some python stuff before >>>q1.answer='yes' do some python stuff in between Exception in Tkinter callback Traceback (most recent call last): File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 1948, in __call__ return self.func(*args) ^^^^^^^^^^^^^^^^ File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 861, in callit func(*args) File "C:\Users\user\PycharmProjects\Sandbox\gui_tools\util_guis.py", line 197, in countdown self.terminate() File "C:\Users\user\PycharmProjects\Sandbox\gui_tools\util_guis.py", line 224, in terminate child.destroy() File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\widget_base_class.py", line 85, in destroy super().destroy() File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 2635, in destroy for c in list(self.children.values()): c.destroy() ^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\widget_base_class.py", line 85, in destroy super().destroy() File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 2639, in destroy Misc.destroy(self) File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 687, in destroy self.tk.deletecommand(name) _tkinter.TclError: can't delete Tcl command >>>q2.answer='no' do some python stuff at the end (py311) C:\Users\user\PycharmProjects\Sandbox\gui_tools>python util_guis.py do some python stuff before Exception in Tkinter callback Traceback (most recent call last): File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 1948, in __call__ return self.func(*args) ^^^^^^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\ctk_button.py", line 377, in clicked self.command() File "C:\Users\user\PycharmProjects\Sandbox\gui_tools\util_guis.py", line 124, in <lambda> command=lambda: self.button_event('yes'), ^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\PycharmProjects\Sandbox\gui_tools\util_guis.py", line 156, in button_event self.terminate() File "C:\Users\user\PycharmProjects\Sandbox\gui_tools\util_guis.py", line 186, in terminate self.destroy() File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\windows\ctk_tk.py", line 108, in destroy super().destroy() File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 2367, in destroy for c in list(self.children.values()): c.destroy() ^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\widget_base_class.py", line 85, in 
destroy super().destroy() File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 2635, in destroy for c in list(self.children.values()): c.destroy() ^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\widget_base_class.py", line 85, in destroy super().destroy() File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 2639, in destroy Misc.destroy(self) File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 687, in destroy self.tk.deletecommand(name) _tkinter.TclError: can't delete Tcl command Exception in Tkinter callback Traceback (most recent call last): File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 1948, in __call__ return self.func(*args) ^^^^^^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\widget_base_class.py", line 142, in update_dimensions_event self.draw(no_color_updates=True) # faster drawing without color changes ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\ctk_frame.py", line 80, in draw requires_recoloring = self.draw_engine.draw_rounded_rect_with_border(self.apply_widget_scaling(self._current_width), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\draw_engine.py", line 88, in draw_rounded_rect_with_border return self.__draw_rounded_rect_with_border_font_shapes(width, height, corner_radius, border_width, inner_corner_radius, ()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\draw_engine.py", line 207, in __draw_rounded_rect_with_border_font_shapes self._canvas.delete("border_parts") File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 2879, in delete self.tk.call((self._w, 'delete') + args) _tkinter.TclError: invalid command name ".!ctkframe2.!ctkcanvas" >>>q1.answer='yes' do some python stuff in between Exception in Tkinter callback Traceback (most recent call last): File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 1948, in __call__ return self.func(*args) ^^^^^^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\ctk_button.py", line 377, in clicked self.command() super().destroy() File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 2639, in destroy Misc.destroy(self) File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 687, in destroy self.tk.deletecommand(name) _tkinter.TclError: can't delete Tcl command >>>q2.answer='no' do some python stuff at the end (py311) C:\Users\user\PycharmProjects\Sandbox\gui_tools> requirements I am using Windows 11 as os and have a virtual python 3.11 environment with customtkinter installed. EDIT: With the help of @Thingamabobs answer I managed to achive the expected behaviour without getting the errors. Here is the final code: # util_gui_classes.py # -*- coding: utf-8 -*- """ Classes which serve for gui applications. 
""" from typing import Any import tkinter import tkinter.messagebox import customtkinter from _tkinter import TclError # _______________________________________________________________________ customtkinter.set_appearance_mode('System') # Modes: 'System' (standard), 'Dark', 'Light' customtkinter.set_default_color_theme('blue') # Themes: 'blue' (standard), 'green', 'dark-blue' # _______________________________________________________________________ class GuiPromptYesNo(customtkinter.CTk): """ Creates a yes / no gui based prompt with default value and countdown functionality. The user input will be stored in: >>> instance.answer """ WIDTH = 500 HEIGHT = 200 def __init__(self, question: str, default_value: str = 'no', countdown_seconds: int = 0): super().__init__() self.terminated = False self.title('input required') self.geometry(f'{self.__class__.WIDTH}x{self.__class__.HEIGHT}') self.protocol('WM_DELETE_WINDOW', self.on_closing) # call .on_closing() when app gets closed self.resizable(False, False) self.question = question self.answer = None self.default_value = default_value self.countdown_seconds = countdown_seconds self.remaining_seconds = countdown_seconds # ============ create top-level-frames ============ # configure grid layout (4x1) self.equal_weighted_grid(self, 4, 1) self.grid_rowconfigure(0, minsize=10) self.grid_rowconfigure(3, minsize=10) self.frame_label = customtkinter.CTkFrame(master=self, corner_radius=10) self.frame_label.grid(row=1, column=0) self.frame_buttons = customtkinter.CTkFrame(master=self, corner_radius=0, fg_color=None) self.frame_buttons.grid(row=2, column=0) # ============ design frame_label ============ # configure grid layout (5x4) self.equal_weighted_grid(self.frame_label, 5, 4) self.frame_label.grid_rowconfigure(0, minsize=10) self.frame_label.grid_rowconfigure(2, minsize=10) self.frame_label.grid_rowconfigure(5, minsize=10) self.label_question = customtkinter.CTkLabel( master=self.frame_label, text=self.question, text_font=('Consolas',), ) self.label_question.grid(row=1, column=0, columnspan=4, pady=5, padx=10) self.label_default_value = customtkinter.CTkLabel( master=self.frame_label, text='default value: ', text_font=('Consolas',), ) self.label_default_value.grid(row=3, column=0, pady=5, padx=10) self.entry_default_value = customtkinter.CTkEntry( master=self.frame_label, width=40, justify='center', placeholder_text=self.default_value, state='disabled', textvariable=tkinter.StringVar(value=self.default_value), text_font=('Consolas',), ) self.entry_default_value.grid(row=3, column=1, pady=5, padx=10) if countdown_seconds > 0: self.label_timer = customtkinter.CTkLabel( master=self.frame_label, text='timer [s]: ', text_font=('Consolas',), ) self.label_timer.grid(row=3, column=2, pady=5, padx=10) self.entry_timer = customtkinter.CTkEntry( master=self.frame_label, width=40, justify='center', state='disabled', textvariable=tkinter.StringVar(value=str(self.remaining_seconds)), placeholder_text=str(self.remaining_seconds), text_font=('Consolas',), ) self.entry_timer.grid(row=3, column=3, pady=5, padx=10) # ============ design frame_buttons ============ # configure grid layout (3x2) self.equal_weighted_grid(self.frame_buttons, 3, 2) self.frame_buttons.grid_rowconfigure(0, minsize=10) self.frame_buttons.grid_rowconfigure(2, minsize=10) self.button_yes = customtkinter.CTkButton( master=self.frame_buttons, text='yes', text_font=('Consolas',), command=lambda: self.button_event('yes'), ) self.button_yes.grid(row=1, column=0, pady=5, padx=20) self.button_no = 
customtkinter.CTkButton( master=self.frame_buttons, text='no', text_font=('Consolas',), command=lambda: self.button_event('no'), ) self.button_no.grid(row=1, column=1, pady=5, padx=20) if self.countdown_seconds > 0: self.countdown() self.attributes('-topmost', True) self.mainloop() # __________________________________________________________ # methods @staticmethod def equal_weighted_grid(obj: Any, rows: int, cols: int): """configures the grid to be of equal cell sizes for rows and columns.""" for row in range(rows): obj.grid_rowconfigure(row, weight=1) for col in range(cols): obj.grid_columnconfigure(col, weight=1) def button_event(self, answer): """Stores the user input as instance attribute `answer`.""" self.answer = answer self.terminate() def countdown(self): """Sets the timer for the question.""" if self.answer is not None: self.terminate() elif self.remaining_seconds < 0: self.answer = self.default_value self.terminate() else: self.entry_timer.configure(textvariable=tkinter.StringVar(value=str(self.remaining_seconds))) self.remaining_seconds -= 1 self.after(1000, self.countdown) def stop_after_callbacks(self): """Stops all after callbacks on the root.""" for after_id in self.tk.eval('after info').split(): self.after_cancel(after_id) def on_closing(self, event=0): """If the user presses the window x button without providing input""" if self.answer is None and self.default_value is not None: self.answer = self.default_value self.terminate() def terminate(self): """Properly terminates the gui.""" # stop all .after callbacks to avoid error message "Invalid command ..." after destruction self.stop_after_callbacks() if not self.terminated: self.terminated = True try: self.destroy() except TclError: self.destroy() # _______________________________________________________________________ if __name__ == '__main__': print('before') q1 = GuiPromptYesNo(question='1. do you want to proceed?', countdown_seconds=5) print(f'>>>{q1.answer=}') print('between') q2 = GuiPromptYesNo(question='2. do you want to proceed?', countdown_seconds=5) print(f'>>>{q2.answer=}') print('after') # _______________________________________________________________________ BTW: the class can also be found in my package utils_nm inside the module util_gui_classes.
[ "While I don't have Ctk to give you the exact code. I can tell you exactly what is wrong and how you need to solve it.\nYou have self repeating function via after here:\ndef countdown(self):\n \"\"\"Sets the timer for the question.\"\"\"\n if self.answer is not None:\n self.terminate()\n elif self.remaining_seconds < 0:\n self.answer = self.default_value\n self.terminate()\n else:\n self.entry_timer.configure(textvariable=tkinter.StringVar(value=str(self.remaining_seconds)))\n self.remaining_seconds -= 1\n self.after(1000, self.countdown)\n\nThe problem that you are facing is that you try to delete the window that has already been destroyed in an after call. So depending on which event is quicker you are running into an error or not.\nWhy does this happen?\nWhen ever you give an instruction regardless of what it is (with a few exceptions e.g update), it is placed in a event queue and scheduled in some sort of FIFO (first in, first out). So the oldest event gets processed. That means you can try to cancel an alarm but running the alarm before you actual cancel it.\nHow to solve?\nThere are different approaches available. The easiest and cleanest solution, in my opinion, is to set a flag and avoid trying to destroy an already destroyed Window.\nSomething like:\n def __init__(self, question: str, default_value: str = 'no', countdown_seconds: int = 0):\n super().__init__()\n self.terminated = False\n\nand set it like:\ndef terminate(self):\n \"\"\"Properly terminates the gui.\"\"\"\n # stop all .after callbacks to avoid error message \"Invalid command ...\" after destruction\n #self.stop_after_callbacks() shouldn't be needed\n if not self.terminated:\n self.terminated = True\n self.destroy()\n\nIn addition to the above proposal I suggest you to set up a protocol handler for WM_DELETE_WINDOW and set the flag, since it probably also occur by destroying the window via the window manager.\n\nA different approach is a try and except block in terminate where you catch except _tkinter.TclError:. But note the underscore, the module is not intended to be used and it can change in future and might break your app again, even if this seems unlikely.\n" ]
[ 2 ]
[]
[]
[ "customtkinter", "python", "tkinter" ]
stackoverflow_0074488759_customtkinter_python_tkinter.txt
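Note: as a minimal, runnable sketch of the terminate-guard pattern described in the answer above, the snippet below uses plain tkinter instead of customtkinter so it has no extra dependencies; the class name Prompt and the 3-second timer are illustrative, not from the original code.
import tkinter

class Prompt(tkinter.Tk):
    def __init__(self):
        super().__init__()
        self.terminated = False  # guard flag: destroy() may only run once
        # route the window-manager close button through the same guard
        self.protocol('WM_DELETE_WINDOW', self.terminate)
        # a pending after-callback that can race with a manual close
        self.after(3000, self.terminate)

    def terminate(self):
        if self.terminated:  # a second destroy attempt is silently ignored
            return
        self.terminated = True
        self.destroy()

if __name__ == '__main__':
    Prompt().mainloop()

Whichever event fires first (the timer or the close button) wins; the loser returns early instead of destroying an already-destroyed window.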
Q: Faster numpy array indexing when using condition (numpy.where)? I have a huge numpy array with shape (50000000, 3) and I'm using: x = array[np.where((array[:,0] == value) | (array[:,1] == value))] to get the part of the array that I want. But this way seems to be quite slow. Is there a more efficient way of performing the same task with numpy? A: np.where is highly optimized and I doubt someone can write a faster code than the one implemented in the last Numpy version (disclaimer: I was one who optimized it). That being said, the main issue here is not much np.where but the conditional which create a temporary boolean array. This is unfortunately the way to do that in Numpy and there is not much to do as long as you use only Numpy with the same input layout. One reason explaining why it is not very efficient is that the input data layout is inefficient. Indeed, assuming array is contiguously stored in memory using the default row major ordering, array[:,0] == value will read 1 item every 3 item of the array in memory. Due to the way CPU cache works (ie. cache lines, prefetching, etc.), 2/3 of the memory bandwidth is wasted. In fact, the output boolean array also need to be written and filling a newly-created array is a bit slow due to page faults. Note that array[:,1] == value will certainly reload data from RAM due to the size of the input (that cannot fit in most CPU caches). The RAM is slow and it is getter slower compared to the computational speed of the CPU and caches. This problem, called "memory wall", has been observed few decades ago and it is not expected to be fixed any time soon. Also note that the logical-or will also create a new array read/written from/to RAM. A better data layout is a (3, 50000000) transposed array contiguous in memory (note that np.transpose does not produce a contiguous array). Another reason explaining the performance issue is that Numpy tends not to be optimized to operate on very small axis. One main solution is to create the input in a transposed way if possible. Another solution is to write a Numba or Cython code. Here is an implementation of the non transposed input: # Compilation for the most frequent types. # Please pick the right ones so to speed up the compilation time. @nb.njit(['(uint8[:,::1],uint8)', '(int32[:,::1],int32)', '(int64[:,::1],int64)', '(float64[:,::1],float64)'], parallel=True) def select(array, value): n = array.shape[0] mask = np.empty(n, dtype=np.bool_) for i in nb.prange(n): mask[i] = array[i, 0] == value or array[i, 1] == value return mask x = array[select(array, value)] Note that I used a parallel implementation since the or operator is sub-optimal with Numba (the only solution seems to use a native code or Cython) and also because the RAM cannot be fully saturated with one thread on some platforms like computing servers. Also note that it can be faster to use array[np.where(select(array, value))[0]] regarding the result of select. Indeed, if the result is random or very small, then np.where can be faster since it has special optimizations for theses cases that a boolean indexing does not perform. Note that np.where is not particularly optimized in the context of a Numba function since Numba use its own implementation of Numpy functions and they are sometimes not as much optimized for large arrays. 
A faster implementation consists of creating x in parallel, but this is not trivial to do with Numba since the number of output items is not known ahead of time and threads must know where to write data, not to mention that Numpy is already fairly fast at doing this sequentially as long as the output size is predictable.
Faster numpy array indexing when using condition (numpy.where)?
I have a huge numpy array with shape (50000000, 3) and I'm using:
x = array[np.where((array[:,0] == value) | (array[:,1] == value))]

to get the part of the array that I want. But this way seems to be quite slow. Is there a more efficient way of performing the same task with numpy?
[ "np.where is highly optimized and I doubt someone can write a faster code than the one implemented in the last Numpy version (disclaimer: I was one who optimized it). That being said, the main issue here is not much np.where but the conditional which create a temporary boolean array. This is unfortunately the way to do that in Numpy and there is not much to do as long as you use only Numpy with the same input layout.\nOne reason explaining why it is not very efficient is that the input data layout is inefficient. Indeed, assuming array is contiguously stored in memory using the default row major ordering, array[:,0] == value will read 1 item every 3 item of the array in memory. Due to the way CPU cache works (ie. cache lines, prefetching, etc.), 2/3 of the memory bandwidth is wasted. In fact, the output boolean array also need to be written and filling a newly-created array is a bit slow due to page faults. Note that array[:,1] == value will certainly reload data from RAM due to the size of the input (that cannot fit in most CPU caches). The RAM is slow and it is getter slower compared to the computational speed of the CPU and caches. This problem, called \"memory wall\", has been observed few decades ago and it is not expected to be fixed any time soon. Also note that the logical-or will also create a new array read/written from/to RAM. A better data layout is a (3, 50000000) transposed array contiguous in memory (note that np.transpose does not produce a contiguous array).\nAnother reason explaining the performance issue is that Numpy tends not to be optimized to operate on very small axis.\nOne main solution is to create the input in a transposed way if possible. Another solution is to write a Numba or Cython code. Here is an implementation of the non transposed input:\n# Compilation for the most frequent types. \n# Please pick the right ones so to speed up the compilation time. \n@nb.njit(['(uint8[:,::1],uint8)', '(int32[:,::1],int32)', '(int64[:,::1],int64)', '(float64[:,::1],float64)'], parallel=True)\ndef select(array, value):\n n = array.shape[0]\n mask = np.empty(n, dtype=np.bool_)\n for i in nb.prange(n):\n mask[i] = array[i, 0] == value or array[i, 1] == value\n return mask\n\nx = array[select(array, value)]\n\nNote that I used a parallel implementation since the or operator is sub-optimal with Numba (the only solution seems to use a native code or Cython) and also because the RAM cannot be fully saturated with one thread on some platforms like computing servers. Also note that it can be faster to use array[np.where(select(array, value))[0]] regarding the result of select. Indeed, if the result is random or very small, then np.where can be faster since it has special optimizations for theses cases that a boolean indexing does not perform. Note that np.where is not particularly optimized in the context of a Numba function since Numba use its own implementation of Numpy functions and they are sometimes not as much optimized for large arrays. A faster implementation consists in creating x in parallel but this is not trivial to do with Numba since the number of output item is not known ahead of time and that threads must know where to write data, not to mention Numpy is already fairly fast to do that in sequential as long as the output is predictable.\n" ]
[ 1 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074482961_numpy_python.txt
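Note: a small timing sketch of the two pure-NumPy variants discussed above; the array is shrunk from the question's 50 million rows to 1 million so it runs quickly, and the absolute numbers will of course vary by machine.
import numpy as np
from timeit import timeit

array = np.random.randint(0, 100, size=(1_000_000, 3))
value = 42

# plain boolean indexing, no np.where
mask = lambda: array[(array[:, 0] == value) | (array[:, 1] == value)]
# the question's original form, going through np.where
where = lambda: array[np.where((array[:, 0] == value) | (array[:, 1] == value))]

print('boolean mask:', timeit(mask, number=10))
print('np.where    :', timeit(where, number=10))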
Q: How to replicate SHAP Summary plot Instead of the traditional approach using train/test split or cross-validation, I've done 100 repeats of cross-validation, taken SHAP values on each repeat, then averaged them out. I'd now like to plot these averages in the same way that a summary_plot would look. Is this possible at all, either by hacking the summary_plot code or with some other library? A: I almost got there with the following code: ## Obtain range of each var to rank variables by importance ranges = mean_shap_values.apply(np.ptp, axis=0) .sort_values(ascending=False) ## Re-order df for plotting purposes ordered_mean_shap = mean_shap_values[ranges.index] ## Transpose dataframe to long form values, labels = [],[] for i in range(len(ordered_mean_shap.columns)): for j in range(len(ordered_mean_shap)): values.append(ordered_mean_shap.T[j][i]) labels.append(ordered_mean_shap.columns[i]) long_df = pd.DataFrame([values,labels]).T ## Use seaborn to make stripplot sns.stripplot(data=long_df, x=0, y=1, hue=1, legend=True).set(xlabel='SHAP Value', ylabel='Variable', title='Feature Importance by SHAP Values') plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) # Put legend outside plot for readability plt.grid() I can use jitter to capture some of the density aspect (wider vertical chunks where there are more data points) but the one on on the shap version looks much better. Is there a way to imitate this? I'm also missing the a way to represent feature value. I'm open to suggestions for improvement.
How to replicate SHAP Summary plot
Instead of the traditional approach using train/test split or cross-validation, I've done 100 repeats of cross-validation, taken SHAP values on each repeat, then averaged them out. I'd now like to plot these averages in the same way that a summary_plot would look. Is this possible at all, either by hacking the summary_plot code or with some other library?
[ "I almost got there with the following code:\n## Obtain range of each var to rank variables by importance \nranges = mean_shap_values.apply(np.ptp, axis=0) .sort_values(ascending=False)\n\n## Re-order df for plotting purposes\nordered_mean_shap = mean_shap_values[ranges.index]\n\n## Transpose dataframe to long form\nvalues, labels = [],[]\nfor i in range(len(ordered_mean_shap.columns)):\n for j in range(len(ordered_mean_shap)):\n values.append(ordered_mean_shap.T[j][i])\n labels.append(ordered_mean_shap.columns[i])\nlong_df = pd.DataFrame([values,labels]).T\n\n## Use seaborn to make stripplot \nsns.stripplot(data=long_df, x=0, y=1, hue=1, legend=True).set(xlabel='SHAP Value', ylabel='Variable', title='Feature Importance by SHAP Values')\nplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) # Put legend outside plot for readability\nplt.grid()\n\n\nI can use jitter to capture some of the density aspect (wider vertical chunks where there are more data points) but the one on on the shap version looks much better. Is there a way to imitate this?\nI'm also missing the a way to represent feature value.\nI'm open to suggestions for improvement.\n" ]
[ 1 ]
[]
[]
[ "matplotlib", "plot", "python", "shap", "visualization" ]
stackoverflow_0074488664_matplotlib_plot_python_shap_visualization.txt
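Note: a hedged alternative to the seaborn stripplot above: shap.summary_plot accepts a plain 2-D array of SHAP values plus a matching feature matrix, so averaged values can be fed straight back into the stock plot. Whether this is valid for averaged SHAP values is the poster's call; the random frames below are stand-ins with identical shape and column order.
import numpy as np
import pandas as pd
import shap

# stand-ins for the poster's frames: averaged SHAP values and matching features
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 4)), columns=list('abcd'))
mean_shap_values = pd.DataFrame(rng.normal(size=(200, 4)), columns=list('abcd'))

shap.summary_plot(
    mean_shap_values.to_numpy(),                  # averaged SHAP values, rows = samples
    features=X,                                   # feature values, used to color the dots
    feature_names=list(mean_shap_values.columns),
)

This keeps the density beeswarm layout and the red/blue feature-value coloring that the handwritten stripplot was missing.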
Q: What is the best way to return a boolean when a negative value exists in a list? I have the following function telling us that a series has at least one negative value:
def has_negative(series):
    v=False
    for i in range(len(series)):
        if series[i]<0:
            v=True
            break
    return v

When we use this function on an example we get:
y=[1,2,3,4,5,6,7,8,9]
z=[1,-2,3,4,5,6,7,8,9]
print(has_negative(y))
print(has_negative(z))

Output:
>>> False
>>> True

The function seems to work well, but I want to make it shorter; any suggestion will be appreciated.
A: You can utilise the built-in any function as follows:
def has_negative(lst):
    return any(e < 0 for e in lst)

print(has_negative([1,2,3,4,5,6,7,8,9]))
print(has_negative([1,-2,3,4,5,6,7,8,9]))

Output:
False
True

EDIT: Did some timing tests based around this and other suggested answers. Whilst this is concise and functionally correct, it doesn't perform well. Keep it simple and use @quamrana's first suggestion - it's much faster.
A: You can sort the list and get the first element, then check if it's negative. With this approach you don't have to iterate over the array:
sorted(series)[0] < 0

A: There are several improvements you can make:
def has_negative(series):
    for i in series:
        if i < 0:
            return True
    return False

or it can be contracted into one line like this:
print(bool([i for i in z if i<0]))

A: To add: to keep it clean and short, you could also use a list comprehension within a lambda function as follows:
has_negative = lambda series: True if [series for x in series if x < 0] else False

z = [1,-2,3,4,5,6,7,8,9]

has_negative(z)

Output:
>>> True
What is the best way to return a boolean when a negative value exists in a list?
I have the following function telling us that a series has at least one negative value:
def has_negative(series):
    v=False
    for i in range(len(series)):
        if series[i]<0:
            v=True
            break
    return v

When we use this function on an example we get:
y=[1,2,3,4,5,6,7,8,9]
z=[1,-2,3,4,5,6,7,8,9]
print(has_negative(y))
print(has_negative(z))

Output:
>>> False
>>> True

The function seems to work well, but I want to make it shorter; any suggestion will be appreciated.
[ "You can utilise the built-in any function as follows:\ndef has_negative(lst):\n return any(e < 0 for e in lst)\n\nprint(has_negative([1,2,3,4,5,6,7,8,9]))\nprint(has_negative([1,-2,3,4,5,6,7,8,9]))\n\nOutput:\nFalse\nTrue\n\nEDIT:\nDid some timing tests based around this and other suggested answers. Whilst this is concise and functionally correct, it doesn't perform well. Keep it simple and use @quamrana's first suggestion - it's much faster\n", "You can sort the list and get the first element, check if it's a negative. With this approach you don't have to iterate over the array:\nsorted(series)[0] < 0\n\n", "There are several improvements you can make:\ndef has_negative(series):\n for i in series:\n if i < 0:\n return True\n return False\n\nor it can be contracted into one line like this:\nprint(bool([i for i in z if i<0]))\n\n", "To add:\nTo keep it clean and short, you could also use a list comprehension within a lambda function as follows:\nhas_negative = lambda series: True if [series for x in series if x < 0] else False\n\nz = [1,-2,3,4,5,6,7,8,9]\n\nhas_negative(z)\n\nOutput:\n>>> True\n\n" ]
[ 3, 2, 1, 1 ]
[]
[]
[ "boolean", "function", "list", "python" ]
stackoverflow_0074488705_boolean_function_list_python.txt
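Note: a quick, machine-dependent timing sketch backing up the EDIT in the first answer above; it compares the any() generator with the plain early-return loop on a worst-case list whose only negative value sits at the end.
from timeit import timeit

data = list(range(1, 100_000)) + [-1]   # negative value last: both must scan everything

def loop(series):
    for i in series:
        if i < 0:
            return True
    return False

print('any() :', timeit(lambda: any(e < 0 for e in data), number=100))
print('loop  :', timeit(lambda: loop(data), number=100))

The generator version pays per-element generator overhead, which is why the simple loop tends to win.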
Q: Loop through list of text I'm trying to write a code in Python that iterates through a list of text (events) as following: ln pid description 1 23 failure in node 5 2 23 restart node 5 3 26 check node 5 4 30 fault alarm in node 10 5 23 finish .. .. .. I want the algorithm to check first if the line has the word 'failure' in a node (for example: node 5) of a process id (pid, say 23) in it and then from there it iterates through the next lines to check for another pid=26 that asks for the same node 5 and before the pid 23 'finish' (line 5). Example: ` for line in text: if 'failure' in line: ... (a loop to check the rest of line for another pid that accesses the same node before the current pid 'finish') ` I tried to use enumerate of the text line indices but I didn't figure out how to iterate from the next line after the line that contains 'failure'. I'm thinking of a while loop that start to iterate through the next lines of text until it finds a line of the same pid and it contains the word 'finish'. A: This code with open('events.txt') as f: failed_node = None while True: try: pid, *msg, node = next(f).split() if 'finish' in msg: failed_node = None continue if failed_node is not None: if node == failed_node: print(f"{pid=} references {failed_node=}") if 'failure' in msg: failed_node = node print(f"{failed_node=}") # print(pid, msg, node) except StopIteration: break with an input file of 23 failure in node 5 23 restart node 5 26 check node 5 30 fault alarm in node 10 23 finish 43 failure in node 7 43 restart node 5 46 check node 7 40 fault alarm in node 10 43 finish produces failed_node='5' pid='23' references failed_node='5' pid='26' references failed_node='5' failed_node='7' pid='46' references failed_node='7' Note: I have removed the line numbers form the file, you can always put them back in. I am not sure your descriptions tells the whole story. I have done what you have asked, but you might want something more sophisticated, e.g. to keep looking for references to failed nodes even after the first finish, i.e. to have them mixed. In such case, you need to keep a set of the failed nodes, not just the last one (I would scan the whole file once first, then apply the business logic). At least it gives you a hint on how you can proceed. All the best
Loop through list of text
I'm trying to write a code in Python that iterates through a list of text (events) as follows:
ln pid description
1  23  failure in node 5
2  23  restart node 5
3  26  check node 5
4  30  fault alarm in node 10
5  23  finish
.. ..  ..

I want the algorithm to first check if a line has the word 'failure' in a node (for example: node 5) of a process id (pid, say 23), and then from there iterate through the next lines to check for another pid (26) that asks for the same node 5 before pid 23 'finish' (line 5).
Example:
for line in text:
    if 'failure' in line:
        ... (a loop to check the rest of the lines for another pid that accesses the same node before the current pid 'finish')

I tried to use enumerate on the text line indices, but I didn't figure out how to iterate from the next line after the line that contains 'failure'. I'm thinking of a while loop that starts to iterate through the next lines of text until it finds a line of the same pid that contains the word 'finish'.
[ "This code\nwith open('events.txt') as f:\n failed_node = None\n while True:\n try:\n pid, *msg, node = next(f).split()\n if 'finish' in msg:\n failed_node = None\n continue\n if failed_node is not None:\n if node == failed_node:\n print(f\"{pid=} references {failed_node=}\")\n if 'failure' in msg:\n failed_node = node\n print(f\"{failed_node=}\")\n # print(pid, msg, node)\n except StopIteration:\n break\n\nwith an input file of\n23 failure in node 5\n23 restart node 5\n26 check node 5\n30 fault alarm in node 10\n23 finish\n43 failure in node 7\n43 restart node 5\n46 check node 7\n40 fault alarm in node 10\n43 finish\n\nproduces\nfailed_node='5'\npid='23' references failed_node='5'\npid='26' references failed_node='5'\nfailed_node='7'\npid='46' references failed_node='7'\n\nNote: I have removed the line numbers form the file, you can always put them back in.\nI am not sure your descriptions tells the whole story. I have done what you have asked, but you might want something more sophisticated, e.g. to keep looking for references to failed nodes even after the first finish, i.e. to have them mixed.\nIn such case, you need to keep a set of the failed nodes, not just the last one (I would scan the whole file once first, then apply the business logic).\nAt least it gives you a hint on how you can proceed. All the best\n" ]
[ 0 ]
[]
[]
[ "enumerate", "list", "loops", "python" ]
stackoverflow_0074488644_enumerate_list_loops_python.txt
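Note: the answer hints at a more sophisticated variant that tracks several failed nodes at once; the sketch below is one way to do that, keeping a dict from failed node to reporting pid. The events.txt format is the same as in the answer, and the explicit check on parts[-1] handles the corner case where a "23 finish" line carries no node token.
failed_nodes = {}   # node -> pid that reported the failure
hits = []           # (pid, node) pairs that touched a failed node

with open('events.txt') as f:
    for line in f:
        parts = line.split()
        pid = parts[0]
        if parts[-1] == 'finish':     # e.g. "23 finish": clear this pid's failures
            failed_nodes = {n: p for n, p in failed_nodes.items() if p != pid}
            continue
        node = parts[-1]              # e.g. "23 failure in node 5" -> "5"
        if node in failed_nodes and pid != failed_nodes[node]:
            hits.append((pid, node))
        if 'failure' in parts:
            failed_nodes[node] = pid

print(hits)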
Q: Turtle not moving, screen events Hello! My turtle is not moving and I don't really know why... Can anyone help?
import turtle

chocolate = turtle.Turtle()

def move_forward():
    chocolate.forward(10)

screen = turtle.Screen()
screen.exitonclick()
screen.listen()
screen.onkey(fun=move_forward, key="space")
screen.mainloop()

I expect my turtle to move 10 paces when I press "space".
A: Try this. It's working. Tested here.
import turtle

chocolate = turtle.Turtle()
chocolate.shape("turtle")
chocolate.speed(500)

def move_forward():
    chocolate.forward(1)

screen = turtle.Screen()
screen.onkey(move_forward, "space")
screen.listen()
screen.exitonclick()

Perhaps this can give you more insights. Also, exitonclick() must be at the very end.
A: exitonclick() must be at the end.
import turtle

chocolate = turtle.Turtle()

def move_forward():
    chocolate.forward(10)

screen = turtle.Screen()
screen.listen()
screen.onkey(fun=move_forward, key="space")
screen.exitonclick()
Turtle not moving, screen events
Hello! My turtle is not moving and I don't really know why... Can anyone help?
import turtle

chocolate = turtle.Turtle()

def move_forward():
    chocolate.forward(10)

screen = turtle.Screen()
screen.exitonclick()
screen.listen()
screen.onkey(fun=move_forward, key="space")
screen.mainloop()

I expect my turtle to move 10 paces when I press "space".
[ "Try this. It's working. Tested here.\nimport turtle\n\nchocolate = turtle.Turtle()\nchocolate.shape(\"turtle\")\nchocolate.speed(500)\n\ndef move_forward():\n chocolate.forward(1)\n\nscreen = turtle.Screen()\nscreen.onkey(move_forward, \"space\")\nscreen.listen()\nscreen.exitonclick()\n\nPerhaps this can give you more insights. Also exitonclick() must be at the very end.\n", "exit on click must be at the end.\nimport turtle\n\nchocolate = turtle.Turtle()\n\n\ndef move_forward():\n chocolate.forward(10)\n\n\nscreen = turtle.Screen()\n\n\nscreen.listen()\nscreen.onkey(fun=move_forward, key=\"space\")\n\nscreen.exitonclick()\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "python_turtle", "turtle_graphics" ]
stackoverflow_0074489269_python_python_turtle_turtle_graphics.txt
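Note: both answers fix the call order; as a small variation, turtle also offers onkeypress, which keeps firing while the key is held down, so the turtle moves continuously instead of one step per tap. A sketch with the same structure as the question's code:
import turtle

chocolate = turtle.Turtle()

def move_forward():
    chocolate.forward(10)

screen = turtle.Screen()
screen.listen()                            # the canvas needs focus to see key events
screen.onkeypress(move_forward, "space")   # repeats while the key is held
screen.mainloop()                          # exitonclick() also works, but only as the last call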
Q: Python pandas check if dataframe is not empty I have an if statement where it checks if the data frame is not empty. The way I do it is the following: if dataframe.empty: pass else: #do something But really I need: if dataframe is not empty: #do something My question is - is there a method .not_empty() to achieve this? I also wanted to ask if the second version is better in terms of performance? Otherwise maybe it makes sense for me to leave it as it is i.e. the first version? A: Just do if not dataframe.empty: # insert code here The reason this works is because dataframe.empty returns True if dataframe is empty. To invert this, we can use the negation operator not, which flips True to False and vice-versa. A: .empty returns a boolean value >>> df_empty.empty True So if not empty can be written as if not df.empty: #Your code Check pandas.DataFrame.empty , might help someone. A: You can use the attribute dataframe.empty to check whether it's empty or not: if not dataframe.empty: #do something Or if len(dataframe) != 0: #do something Or if len(dataframe.index) != 0: #do something A: No doubt that the use of empty is the most comprehensive in this case (explicit is better than implicit). However, the most efficient in term of computation time is through the usage of len : if not len(df.index) == 0: # insert code here Source : this answer. A: As already clearly explained by other commentators, you can negate a boolean expression in Python by simply prepending the not operator, hence: if not df.empty: # do something does the trick. I only want to clarify the meaning of "empty" in this context, because it was a bit confusing for me at first. According to the Pandas documentation, the DataFrame.empty method returns True if any of the axes in the DataFrame are of length 0. As a consequence, "empty" doesn't mean zero rows and zero columns, like someone might expect. A dataframe with zero rows (axis 1 is empty) but non-zero columns (axis 2 is not empty) is still considered empty: > df = pd.DataFrame(columns=["A", "B", "C"]) > df.empty True Another interesting point highlighted in the documentation is a DataFrame that only contains NaNs is not considered empty. > df = pd.DataFrame(columns=["A", "B", "C"], index=['a', 'b', 'c']) > df A B C a NaN NaN NaN b NaN NaN NaN c NaN NaN NaN > df.empty False
Python pandas check if dataframe is not empty
I have an if statement where it checks if the data frame is not empty. The way I do it is the following:
if dataframe.empty:
    pass
else:
    #do something

But really I need:
if dataframe is not empty:
    #do something

My question is - is there a method .not_empty() to achieve this? I also wanted to ask if the second version is better in terms of performance? Otherwise maybe it makes sense for me to leave it as it is, i.e. the first version?
[ "Just do\nif not dataframe.empty:\n # insert code here\n\nThe reason this works is because dataframe.empty returns True if dataframe is empty. To invert this, we can use the negation operator not, which flips True to False and vice-versa. \n", ".empty returns a boolean value\n>>> df_empty.empty\nTrue\n\nSo if not empty can be written as\nif not df.empty:\n #Your code\n\nCheck pandas.DataFrame.empty \n, might help someone.\n", "You can use the attribute dataframe.empty to check whether it's empty or not:\nif not dataframe.empty:\n #do something\n\nOr\nif len(dataframe) != 0:\n #do something\n\nOr\nif len(dataframe.index) != 0:\n #do something\n\n", "No doubt that the use of empty is the most comprehensive in this case (explicit is better than implicit).\nHowever, the most efficient in term of computation time is through the usage of len :\nif not len(df.index) == 0:\n # insert code here\n\nSource : this answer.\n", "As already clearly explained by other commentators, you can negate a boolean expression in Python by simply prepending the not operator, hence:\nif not df.empty:\n # do something\n\ndoes the trick.\nI only want to clarify the meaning of \"empty\" in this context, because it was a bit confusing for me at first.\nAccording to the Pandas documentation, the DataFrame.empty method returns True if any of the axes in the DataFrame are of length 0.\nAs a consequence, \"empty\" doesn't mean zero rows and zero columns, like someone might expect. A dataframe with zero rows (axis 1 is empty) but non-zero columns (axis 2 is not empty) is still considered empty:\n> df = pd.DataFrame(columns=[\"A\", \"B\", \"C\"])\n> df.empty\nTrue\n\nAnother interesting point highlighted in the documentation is a DataFrame that only contains NaNs is not considered empty.\n> df = pd.DataFrame(columns=[\"A\", \"B\", \"C\"], index=['a', 'b', 'c'])\n> df\n A B C\na NaN NaN NaN\nb NaN NaN NaN\nc NaN NaN NaN\n> df.empty\nFalse\n\n" ]
[ 149, 17, 15, 0, 0 ]
[ "Another way:\nif dataframe.empty == False:\n #do something`\n\n" ]
[ -3 ]
[ "pandas", "python", "python_3.x" ]
stackoverflow_0036543606_pandas_python_python_3.x.txt
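Note: a tiny benchmark sketch of the performance claim in the answers above; the numbers are machine-dependent, but it shows how to compare df.empty, len(df) and len(df.index) yourself.
import pandas as pd
from timeit import timeit

df = pd.DataFrame({'a': range(1_000_000)})

print('df.empty      :', timeit(lambda: df.empty, number=10_000))
print('len(df)       :', timeit(lambda: len(df) != 0, number=10_000))
print('len(df.index) :', timeit(lambda: len(df.index) != 0, number=10_000))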
Q: Django: How to add created_at with nulls for old records? I'm trying to add a created_at field to a table with millions of records, and I don't want to add a value to the previous records. I have tried the following:
created_at = models.DateTimeField(auto_now_add=True)

OR
created_at = models.DateTimeField(auto_now_add=True, null=True)

Still, it sets the value for all records, not only the new ones. How can I keep the old ones as nulls? My database is postgresql.
A: auto_now_add=True automatically sets the field's value and it is not editable.
You need to remove the auto_now_add and set a default value. The best approach for that is:
from django.utils.timezone import now

created = models.DateTimeField(blank=True, null=True, default=now)
Django: How to add created_at with nulls for old records?
I'm trying to add a created_at field to a table with millions of records, and I don't want to add a value to the previous records. I have tried the following:
created_at = models.DateTimeField(auto_now_add=True)

OR
created_at = models.DateTimeField(auto_now_add=True, null=True)

Still, it sets the value for all records, not only the new ones. How can I keep the old ones as nulls? My database is postgresql.
[ "auto_now_add=True automatically sets the fields value and it is not editable.\nYou need to remove the auto_now_add and set a default value. Best approach for that is:\nfrom django.utils.timezone import now\n\ncreated = models.DateTimeField(blank=True, null=True, default=now)\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_models", "postgresql", "python" ]
stackoverflow_0074488917_django_django_models_postgresql_python.txt
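Note: with default=now, the schema migration will typically stamp existing rows with the default as well, so here is one alternative sketch: add the field as nullable with no default (existing rows then stay NULL after the migration) and stamp only brand-new instances in save(). The model name Record is hypothetical.
from django.db import models
from django.utils import timezone

class Record(models.Model):                 # hypothetical model
    created_at = models.DateTimeField(null=True, blank=True, editable=False)

    def save(self, *args, **kwargs):
        # _state.adding is True only for rows not yet in the database
        if self._state.adding and self.created_at is None:
            self.created_at = timezone.now()
        super().save(*args, **kwargs)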
Q: How to plot complex numbers (Argand Diagram) using matplotlib I'd like to create an Argand Diagram from a set of complex numbers using matplotlib. Are there any pre-built functions to help me do this? Can anyone recommend an approach? Image by LeonardoG, CC-SA-3.0 A: I'm not sure exactly what you're after here...you have a set of complex numbers, and want to map them to the plane by using their real part as the x coordinate and the imaginary part as y? If so you can get the real part of any python imaginary number with number.real and the imaginary part with number.imag. If you're using numpy, it also provides a set of helper functions numpy.real and numpy.imag etc. which work on numpy arrays. So for instance if you had an array of complex numbers stored something like this: In [13]: a = n.arange(5) + 1j*n.arange(6,11) In [14]: a Out[14]: array([ 0. +6.j, 1. +7.j, 2. +8.j, 3. +9.j, 4.+10.j]) ...you can just do In [15]: fig,ax = subplots() In [16]: ax.scatter(a.real,a.imag) This plots dots on an argand diagram for each point. edit: For the plotting part, you must of course have imported matplotlib.pyplot via from matplotlib.pyplot import * or (as I did) use the ipython shell in pylab mode. A: To follow up @inclement's answer; the following function produces an argand plot that is centred around 0,0 and scaled to the maximum absolute value in the set of complex numbers. I used the plot function and specified solid lines from (0,0). These can be removed by replacing ro- with ro. def argand(a): import matplotlib.pyplot as plt import numpy as np for x in range(len(a)): plt.plot([0,a[x].real],[0,a[x].imag],'ro-',label='python') limit=np.max(np.ceil(np.absolute(a))) # set limits for axis plt.xlim((-limit,limit)) plt.ylim((-limit,limit)) plt.ylabel('Imaginary') plt.xlabel('Real') plt.show() For example: >>> a = n.arange(5) + 1j*n.arange(6,11) >>> from argand import argand >>> argand(a) produces: EDIT: I have just realised there is also a polar plot function: for x in a: plt.polar([0,angle(x)],[0,abs(x)],marker='o') A: If you prefer a plot like the one below one type of plot or this one second type of plot you can do this simply by these two lines (as an example for the plots above): z=[20+10j,15,-10-10j,5+15j] # array of complex values complex_plane2(z,1) # function to be called by using a simple jupyter code from here https://github.com/osnove/other/blob/master/complex_plane.py I have written it for my own purposes. Even better it it helps to others. A: To get that: You can use: cmath.polar to convert a complex number to polar rho-theta coordinates. In the code below this function is first vectorized in order to process an array of complex numbers instead of a single number, this is just to prevent the use an explicit loop. A pyplot axis with its projection type set to polar. Plot can be done using pyplot.stem or pyplot.scatter. In order to plot horizontal and vertical lines for Cartesian coordinates there are two possibilities: Add a Cartesian axis and plot Cartesian coordinates. This solution is described in this question. I don't think it's an easy solution as the Cartesian axis won't be centered, nor it will have the correct scaling factor. Use the polar axis, and translate Cartesian coordinates for projections into polar coordinates. This is the solution I used to plot the graph above. To not clutter the graph I've shown only one point with its projected Cartesian coordinates. 
Code used for the plot above: from cmath import pi, e, polar from numpy import linspace, vectorize, sin, cos from numpy.random import rand from matplotlib import pyplot as plt # Arrays of evenly spaced angles, and random lengths angles = linspace(0, 2*pi, 12, endpoint=False) lengths = 3*rand(*angles.shape) # Create an array of complex numbers in Cartesian form z = lengths * e ** (1j*angles) # Convert back to polar form vect_polar = vectorize(polar) rho_theta = vect_polar(z) # Plot numbers on polar projection fig, ax = plt.subplots(subplot_kw={'projection': 'polar'}) ax.stem(rho_theta[1], rho_theta[0]) # Get a number, find projections on axes n = 11 rho, theta = rho_theta[0][n], rho_theta[1][n] a = cos(theta) b = sin(theta) rho_h, theta_h = abs(a)*rho, 0 if a >= 0 else -pi rho_v, theta_v = abs(b)*rho, pi/2 if b >= 0 else -pi/2 # Plot h/v lines on polar projection ax.plot((theta_h, theta), (rho_h, rho), c='r', ls='--') ax.plot((theta, theta_v), (rho, rho_v), c='g', ls='--') A: import matplotlib.pyplot as plt from numpy import * ''' ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~` This draws the axis for argand diagram ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~` ''' r = 1 Y = [r*exp(1j*theta) for theta in linspace(0,2*pi, 200)] Y = array(Y) plt.plot(real(Y), imag(Y), 'r') plt.ylabel('Imaginary') plt.xlabel('Real') plt.axhline(y=0,color='black') plt.axvline(x=0, color='black') def argand(complex_number): ''' This function takes a complex number. ''' y = complex_number x1,y1 = [0,real(y)], [0, imag(y)] x2,y2 = [real(y), real(y)], [0, imag(y)] plt.plot(x1,y1, 'r') # Draw the hypotenuse plt.plot(x2,y2, 'r') # Draw the projection on real-axis plt.plot(real(y), imag(y), 'bo') [argand(r*exp(1j*theta)) for theta in linspace(0,2*pi,100)] plt.show() https://github.com/QuantumNovice/Matplotlib-Argand-Diagram/blob/master/argand.py
How to plot complex numbers (Argand Diagram) using matplotlib
I'd like to create an Argand Diagram from a set of complex numbers using matplotlib. Are there any pre-built functions to help me do this? Can anyone recommend an approach? Image by LeonardoG, CC-SA-3.0
[ "I'm not sure exactly what you're after here...you have a set of complex numbers, and want to map them to the plane by using their real part as the x coordinate and the imaginary part as y? \nIf so you can get the real part of any python imaginary number with number.real and the imaginary part with number.imag. If you're using numpy, it also provides a set of helper functions numpy.real and numpy.imag etc. which work on numpy arrays.\nSo for instance if you had an array of complex numbers stored something like this:\nIn [13]: a = n.arange(5) + 1j*n.arange(6,11)\n\nIn [14]: a\nOut[14]: array([ 0. +6.j, 1. +7.j, 2. +8.j, 3. +9.j, 4.+10.j])\n\n...you can just do\nIn [15]: fig,ax = subplots()\n\nIn [16]: ax.scatter(a.real,a.imag)\n\nThis plots dots on an argand diagram for each point.\nedit: For the plotting part, you must of course have imported matplotlib.pyplot via from matplotlib.pyplot import * or (as I did) use the ipython shell in pylab mode.\n", "To follow up @inclement's answer; the following function produces an argand plot that is centred around 0,0 and scaled to the maximum absolute value in the set of complex numbers.\nI used the plot function and specified solid lines from (0,0). These can be removed by replacing ro- with ro.\ndef argand(a):\n import matplotlib.pyplot as plt\n import numpy as np\n for x in range(len(a)):\n plt.plot([0,a[x].real],[0,a[x].imag],'ro-',label='python')\n limit=np.max(np.ceil(np.absolute(a))) # set limits for axis\n plt.xlim((-limit,limit))\n plt.ylim((-limit,limit))\n plt.ylabel('Imaginary')\n plt.xlabel('Real')\n plt.show()\n\nFor example:\n>>> a = n.arange(5) + 1j*n.arange(6,11)\n>>> from argand import argand\n>>> argand(a)\n\nproduces:\n\nEDIT:\nI have just realised there is also a polar plot function:\nfor x in a:\n plt.polar([0,angle(x)],[0,abs(x)],marker='o')\n\n\n", "If you prefer a plot like the one below\none type of plot\nor this one second type of plot\nyou can do this simply by these two lines (as an example for the plots above):\nz=[20+10j,15,-10-10j,5+15j] # array of complex values\n\ncomplex_plane2(z,1) # function to be called\n\nby using a simple jupyter code from here\nhttps://github.com/osnove/other/blob/master/complex_plane.py\nI have written it for my own purposes. Even better it it helps to others.\n", "To get that:\n\nYou can use:\n\ncmath.polar to convert a complex number to polar rho-theta coordinates. In the code below this function is first vectorized in order to process an array of complex numbers instead of a single number, this is just to prevent the use an explicit loop.\n\nA pyplot axis with its projection type set to polar. Plot can be done using pyplot.stem or pyplot.scatter.\n\n\nIn order to plot horizontal and vertical lines for Cartesian coordinates there are two possibilities:\n\nAdd a Cartesian axis and plot Cartesian coordinates. This solution is described in this question. I don't think it's an easy solution as the Cartesian axis won't be centered, nor it will have the correct scaling factor.\n\nUse the polar axis, and translate Cartesian coordinates for projections into polar coordinates. This is the solution I used to plot the graph above. 
To not clutter the graph I've shown only one point with its projected Cartesian coordinates.\n\n\nCode used for the plot above:\nfrom cmath import pi, e, polar\nfrom numpy import linspace, vectorize, sin, cos\nfrom numpy.random import rand\nfrom matplotlib import pyplot as plt\n\n# Arrays of evenly spaced angles, and random lengths\nangles = linspace(0, 2*pi, 12, endpoint=False)\nlengths = 3*rand(*angles.shape)\n\n# Create an array of complex numbers in Cartesian form\nz = lengths * e ** (1j*angles)\n\n# Convert back to polar form\nvect_polar = vectorize(polar)\nrho_theta = vect_polar(z)\n\n# Plot numbers on polar projection\nfig, ax = plt.subplots(subplot_kw={'projection': 'polar'})\nax.stem(rho_theta[1], rho_theta[0])\n\n# Get a number, find projections on axes\nn = 11\nrho, theta = rho_theta[0][n], rho_theta[1][n]\na = cos(theta)\nb = sin(theta)\nrho_h, theta_h = abs(a)*rho, 0 if a >= 0 else -pi\nrho_v, theta_v = abs(b)*rho, pi/2 if b >= 0 else -pi/2\n\n# Plot h/v lines on polar projection\nax.plot((theta_h, theta), (rho_h, rho), c='r', ls='--')\nax.plot((theta, theta_v), (rho, rho_v), c='g', ls='--')\n\n", "import matplotlib.pyplot as plt\nfrom numpy import *\n\n\n'''\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~`\nThis draws the axis for argand diagram\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~`\n'''\nr = 1\nY = [r*exp(1j*theta) for theta in linspace(0,2*pi, 200)]\nY = array(Y)\nplt.plot(real(Y), imag(Y), 'r')\nplt.ylabel('Imaginary')\nplt.xlabel('Real')\nplt.axhline(y=0,color='black')\nplt.axvline(x=0, color='black')\n\n\ndef argand(complex_number):\n '''\n This function takes a complex number.\n '''\n y = complex_number\n x1,y1 = [0,real(y)], [0, imag(y)]\n x2,y2 = [real(y), real(y)], [0, imag(y)]\n\n\n plt.plot(x1,y1, 'r') # Draw the hypotenuse\n plt.plot(x2,y2, 'r') # Draw the projection on real-axis\n\n plt.plot(real(y), imag(y), 'bo')\n\n[argand(r*exp(1j*theta)) for theta in linspace(0,2*pi,100)]\nplt.show()\n\nhttps://github.com/QuantumNovice/Matplotlib-Argand-Diagram/blob/master/argand.py\n" ]
[ 21, 13, 2, 1, 0 ]
[]
[]
[ "complex_numbers", "matplotlib", "numpy", "plot", "python" ]
stackoverflow_0017445720_complex_numbers_matplotlib_numpy_plot_python.txt
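Note: for completeness, a compact version of the scatter approach from the first answer, with centered reference axes and equal aspect so angles are not distorted; the random data is just a stand-in for any array of complex numbers.
import numpy as np
import matplotlib.pyplot as plt

z = np.random.uniform(-3, 3, 20) + 1j * np.random.uniform(-3, 3, 20)

fig, ax = plt.subplots()
ax.scatter(z.real, z.imag)
ax.axhline(0, color='black', lw=0.5)   # real axis
ax.axvline(0, color='black', lw=0.5)   # imaginary axis
ax.set_xlabel('Re')
ax.set_ylabel('Im')
ax.set_aspect('equal')                 # keep angles undistorted
plt.show()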
Q: Adding data to the dictionary I have a function that loops through a database and writes the result to a dictionary, but it doesn't work: each loop doesn't add new data to the dictionary, it overwrites the previous entries. How can I fix the error in my code?
A fragment of my code:
if x is not None:
    for key, value in qur_list.items():
        with conn.cursor() as cursor:
            sql = f"""
                SELECT DISTINCT {value};
            """
            cursor.execute(sql)
            result = {}
            for fnc in cursor.fetchall():
                if fnc[0] in result.fromkeys([0], [1]):
                    result[fnc[0]].append(fnc[1])
                else:
                    result[fnc[0]] = fnc[1]

A: Just move result = {} outside of the for loop. Also, fnc[0] should be unique; think about how to define a unique key for the dictionary.
Adding data to the dictionary
I have a function that loops through a database and writes the result to a dictionary, but it doesn't work: each loop doesn't add new data to the dictionary, it overwrites the previous entries. How can I fix the error in my code?
A fragment of my code:
if x is not None:
    for key, value in qur_list.items():
        with conn.cursor() as cursor:
            sql = f"""
                SELECT DISTINCT {value};
            """
            cursor.execute(sql)
            result = {}
            for fnc in cursor.fetchall():
                if fnc[0] in result.fromkeys([0], [1]):
                    result[fnc[0]].append(fnc[1])
                else:
                    result[fnc[0]] = fnc[1]
[ "just move result = {} outside of for loop. Also fnc[0] should be unique. Think how to define unique key in dictionary\n" ]
[ 0 ]
[]
[]
[ "function", "loops", "python" ]
stackoverflow_0074489464_function_loops_python.txt
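Note: a self-contained sketch of the fix the answer describes: result is created once before the loop, and dict.setdefault accumulates every value under its key instead of overwriting. The rows list stands in for cursor.fetchall().
rows = [('a', 1), ('a', 2), ('b', 3)]   # stand-in for cursor.fetchall()

result = {}                              # created once, outside the loop
for fnc in rows:
    result.setdefault(fnc[0], []).append(fnc[1])  # every key maps to a list

print(result)                            # {'a': [1, 2], 'b': [3]}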
Q: Kivymd MDLabel padding or margin I hope you are doing great. I would like to keep space between the text in MDLabel and the edges of the screen any help ideas? this is my code for the first page .kv MDScreen: name:"splash" MDFloatLayout: md_bg_color: (255/255, 250/255, 245/255, 1) Image: source:"assets/1.png" size_hint:.50,.50 pos_hint:{"center_x":.5,"center_y":.8} MDLabel: text:"the Vine Reco App" pos_hint:{"center_x":.5,"center_y":.63} halign:"center" theme_text_color:"Custom" text_color: (106/255, 90/255, 200/255, 1) font_size:"28sp" font_name:"Lemonada" MDLabel: text:"Recognizing the Type of the vine based on the image of list leaves" pos_hint:{"center_x":.5,"center_y":.4} halign:"center" theme_text_color:"Custom" text_color: (5/255, 215/255, 80/255, 1) font_size:"22sp" MDRaisedButton: text:"Get Started" font_name: 'Fonts/Poppins-Regular.ttf' font_size:35 markup: True pos_hint:{"center_x":.5,"center_y":.12} md_bg_color:(140/255, 220/255, 150/255, 1) text_color: (22/255, 160/255, 133/255, 1) the image i will attached shows where i need spaces A: I got the answer: MDLabel: text:"Recognizing the Type of the vine based on the image of list leaves" pos_hint:{"center_x":.5,"center_y":.4} halign:"center" theme_text_color:"Custom" text_color: (5/255, 215/255, 80/255, 1) font_size:"22sp" padding:[40,0]
Kivymd MDLabel padding or margin
I hope you are doing great. I would like to keep space between the text in an MDLabel and the edges of the screen. Any help or ideas? This is my code for the first page (.kv):
MDScreen:
    name:"splash"
    MDFloatLayout:
        md_bg_color: (255/255, 250/255, 245/255, 1)
        Image:
            source:"assets/1.png"
            size_hint:.50,.50
            pos_hint:{"center_x":.5,"center_y":.8}
        MDLabel:
            text:"the Vine Reco App"
            pos_hint:{"center_x":.5,"center_y":.63}
            halign:"center"
            theme_text_color:"Custom"
            text_color: (106/255, 90/255, 200/255, 1)
            font_size:"28sp"
            font_name:"Lemonada"
        MDLabel:
            text:"Recognizing the Type of the vine based on the image of list leaves"
            pos_hint:{"center_x":.5,"center_y":.4}
            halign:"center"
            theme_text_color:"Custom"
            text_color: (5/255, 215/255, 80/255, 1)
            font_size:"22sp"
        MDRaisedButton:
            text:"Get Started"
            font_name: 'Fonts/Poppins-Regular.ttf'
            font_size:35
            markup: True
            pos_hint:{"center_x":.5,"center_y":.12}
            md_bg_color:(140/255, 220/255, 150/255, 1)
            text_color: (22/255, 160/255, 133/255, 1)

The image I attached shows where I need spaces.
[ "I got the answer:\nMDLabel:\n text:\"Recognizing the Type of the vine based on the image of list leaves\"\n pos_hint:{\"center_x\":.5,\"center_y\":.4}\n halign:\"center\"\n theme_text_color:\"Custom\"\n text_color: (5/255, 215/255, 80/255, 1)\n font_size:\"22sp\"\n padding:[40,0]\n\n" ]
[ 0 ]
[]
[]
[ "design_patterns", "kivy", "kivy_language", "kivymd", "python" ]
stackoverflow_0074489003_design_patterns_kivy_kivy_language_kivymd_python.txt
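Note: the same padding fix can also be applied when building the widget in Python instead of KV; a minimal sketch, assuming a stock KivyMD install (the label text is taken from the question, the app class name is made up).
from kivymd.app import MDApp
from kivymd.uix.label import MDLabel

class Demo(MDApp):
    def build(self):
        return MDLabel(
            text="Recognizing the Type of the vine based on the image of list leaves",
            halign="center",
            padding=[40, 0],   # 40 px horizontal padding keeps text off the screen edges
        )

Demo().run()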
Q: Is there a way to order Wagtail Blocks in the Admin Panel Currently I have all blocks split out into different groups for the page editors to easily navigate through the different block options. However, from reading the documentation I cannot see any way to specifically order the groups. It would be great to be able to customise this so that I could have the text editor group at the top, the image group/carousel block next to each other etc. A: This isn't currently supported - as of Wagtail 4.1, groups of blocks are always listed alphabetically by group name. Here's where this is implemented. You could probably override this behaviour by subclassing StreamField and defining your own grouped_child_blocks method, but be aware that this isn't an officially documented method and could potentially be changed or dropped without warning in a future Wagtail release.
Is there a way to order Wagtail Blocks in the Admin Panel
Currently I have all blocks split out into different groups for the page editors to easily navigate through the different block options. However, from reading the documentation I cannot see any way to specifically order the groups. It would be great to be able to customise this so that I could have the text editor group at the top, the image group/carousel block next to each other etc.
[ "This isn't currently supported - as of Wagtail 4.1, groups of blocks are always listed alphabetically by group name. Here's where this is implemented.\nYou could probably override this behaviour by subclassing StreamField and defining your own grouped_child_blocks method, but be aware that this isn't an officially documented method and could potentially be changed or dropped without warning in a future Wagtail release.\n" ]
[ 0 ]
[]
[]
[ "admin", "content_management_system", "django", "python", "wagtail" ]
stackoverflow_0074488997_admin_content_management_system_django_python_wagtail.txt
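Note: the answer stops short of code because the Wagtail hook is undocumented and version-dependent; independent of any Wagtail API, the ordering logic itself could look like the sketch below, which sorts (group_name, blocks) pairs by a hand-written priority list. All group names here are hypothetical.
PREFERRED = ["Text", "Images", "Media"]   # desired group order; anything else goes last

def order_groups(grouped_blocks):
    # grouped_blocks: iterable of (group_name, blocks) pairs
    def key(pair):
        name = pair[0]
        return (PREFERRED.index(name) if name in PREFERRED else len(PREFERRED), name)
    return sorted(grouped_blocks, key=key)

print(order_groups([("Media", 1), ("Embeds", 2), ("Text", 3)]))
# [('Text', 3), ('Media', 1), ('Embeds', 2)]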
Q: updating the column basis checking the condition Id condition2 score A pass 0 A fail 0 B pass 0 B level_1 0 B fail 0 C fail 0 D fail 0 Expected Dataframe : Id condition2 score A pass 1 A fail 1 B pass 1 B level_1 1 B fail 1 C fail 0 D fail 0 looking to tag score as 1 for each row of unique Id , if the condition 2 has either pass or level_1 in any of the row. df['score'] = df.groupby('Id')['condition2'].transform(lambda x: x.eq('pass').any().astype(int)) what modifications to be done on above code A: Lets use isin to find the ids which have pass or level_1: m = df['condition2'].isin(['pass', 'level_1']) df['score'] = df['Id'].isin(df.loc[m, 'Id']).astype(int) If you still want to use groupby and transform..here is the fix to your existing approach: m = df['condition2'].isin(['pass', 'level_1']) df['score'] = m.groupby(df['Id']).transform('any').astype(int) Id condition2 score 0 A pass 1 1 A fail 1 2 B pass 1 3 B level_1 1 4 B fail 1 5 C fail 0 6 D fail 0 A: Another possible solution, list Comprehension: df.assign(score=[1*any(df.loc[df.Id == i, 'condition2'].isin(['pass', 'level_1'])) for i in df.Id]) Output: Id condition2 score 0 A pass 1 1 A fail 1 2 B pass 1 3 B level_1 1 4 B fail 1 5 C fail 0 6 D fail 0
Updating the column based on checking the condition
Id condition2 score
A pass 0
A fail 0
B pass 0
B level_1 0
B fail 0
C fail 0
D fail 0

Expected Dataframe:
Id condition2 score
A pass 1
A fail 1
B pass 1
B level_1 1
B fail 1
C fail 0
D fail 0

Looking to tag score as 1 for each row of a unique Id, if condition2 has either pass or level_1 in any of the rows.
df['score'] = df.groupby('Id')['condition2'].transform(lambda x: x.eq('pass').any().astype(int))

What modifications need to be made to the above code?
[ "Lets use isin to find the ids which have pass or level_1:\nm = df['condition2'].isin(['pass', 'level_1'])\ndf['score'] = df['Id'].isin(df.loc[m, 'Id']).astype(int)\n\nIf you still want to use groupby and transform..here is the fix to your existing approach:\nm = df['condition2'].isin(['pass', 'level_1'])\ndf['score'] = m.groupby(df['Id']).transform('any').astype(int)\n\n\n Id condition2 score\n0 A pass 1\n1 A fail 1\n2 B pass 1\n3 B level_1 1\n4 B fail 1\n5 C fail 0\n6 D fail 0\n\n", "Another possible solution, list Comprehension:\ndf.assign(score=[1*any(df.loc[df.Id == i,\n 'condition2'].isin(['pass', 'level_1'])) for i in df.Id])\n\nOutput:\n Id condition2 score\n0 A pass 1\n1 A fail 1\n2 B pass 1\n3 B level_1 1\n4 B fail 1\n5 C fail 0\n6 D fail 0\n\n" ]
[ 1, 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074489447_dataframe_pandas_python.txt
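As a quick sanity check, here is a self-contained variant of the same idea using map instead of transform; the DataFrame simply reproduces the sample data from the question:

```python
import pandas as pd

df = pd.DataFrame({
    "Id": ["A", "A", "B", "B", "B", "C", "D"],
    "condition2": ["pass", "fail", "pass", "level_1", "fail", "fail", "fail"],
})

# Compute one boolean flag per Id, then broadcast it back onto the rows.
flag = df["condition2"].isin(["pass", "level_1"]).groupby(df["Id"]).any()
df["score"] = df["Id"].map(flag).astype(int)
print(df)
```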
Q: comparing lists in python, with a twist
So I have two lists I want to compare, listA and listB. If an item from listA appears in listB, I want to remove it from listB. I can do this with:
listA = ["config", "\n", "config checkpoint"]
listB = ["config exclusive", "config checkpoint test", "config", "config", "config", "\n", "hello"]

listB = [line for line in listB if not any(line in item for item in listA)]

But where things now become more complex is that I have some lines I want to remove only if the list item matches exactly (as it currently does), but also lines that I want to remove if the item from listB contains the item from listA, i.e. a partial match.
I'm not sure whether it can be done succinctly within the same function. I've explored using .startswith, raw strings to add ^ and $ on the end of the complete lines, and importing re.match (I couldn't iterate within the given code).
I think it might just be a beautiful dream, but can anyone think of an elegant way of doing this within the same pass?

A: If listA is a list of regex patterns (as you wrote in comments), you can do:
import re

listA = ["^config$", "^\n$", "^config checkpoint"]
listB = ["config exclusive", "config checkpoint test", "config", "config", "config", "\n", "hello"]

listB = [line for line in listB if not any(re.match(item, line) for item in listA)]
print(listB)

Output
['config exclusive', 'hello']

A: You could try using difflib.get_close_matches, which comes by default with Python.
Example:
import difflib

listA = ["config", "\n", "config checkpoint"]
listB = ["config exclusive", "config checkpoint test", "config", "config", "config", "\n", "hello"]

new_listB = [line for line in listB if len(difflib.get_close_matches(line, listA, n=len(listA), cutoff=0.4)) == 0]
print(new_listB)
# Prints:
#
# ['hello']

Notes
The function difflib.get_close_matches contains a parameter named cutoff. This parameter accepts values between 0 and 1 and is equal to 0.6 by default. The idea here is that the lower you set this cutoff parameter, the less strict the function will be when trying to find elements from listA that match line. Here's an example:
difflib.get_close_matches('John', ['John', 'Joe', 'Jane', 'Janet'], cutoff=0.2, n=100)
# Returns:
#
# ['John', 'Joe', 'Jane', 'Janet']

difflib.get_close_matches('John', ['John', 'Joe', 'Jane', 'Janet'], cutoff=0.6, n=100)
# Returns:
#
# ['John']
comparing lists in python, with a twist
So I have two lists I want to compare, listA and listB. If an item from listA appears in listB, I want to remove it from listB. I can do this with:
listA = ["config", "\n", "config checkpoint"]
listB = ["config exclusive", "config checkpoint test", "config", "config", "config", "\n", "hello"]

listB = [line for line in listB if not any(line in item for item in listA)]

But where things now become more complex is that I have some lines I want to remove only if the list item matches exactly (as it currently does), but also lines that I want to remove if the item from listB contains the item from listA, i.e. a partial match.
I'm not sure whether it can be done succinctly within the same function. I've explored using .startswith, raw strings to add ^ and $ on the end of the complete lines, and importing re.match (I couldn't iterate within the given code).
I think it might just be a beautiful dream, but can anyone think of an elegant way of doing this within the same pass?
[ "If, listA is a list of regex patterns (as you wrote in comments), you can do:\nimport re\n\nlistA = [\"^config$\", \"^\\n$\", \"^config checkpoint\"]\nlistB = [\"config exclusive\", \"config checkpoint test\", \"config\", \"config\", \"config\", \"\\n\", \"hello\"]\n\nlistB = [line for line in listB if not any(re.match(item, line) for item in listA)]\nprint(listB)\n\nOutput\n['config exclusive', 'hello']\n\n", "You could try using difflib.get_close_matches, that comes by default with Python.\nExample:\n\nimport difflib\n\nlistA = [\"config\", \"\\n\", \"config checkpoint\"]\nlistB = [\"config exclusive\", \"config checkpoint test\", \"config\", \"config\", \"config\", \"\\n\", \"hello\"]\n\nnew_listB = [line for line in listB if len(difflib.get_close_matches(line, listA, n=len(listA), cutoff=0.4)) == 0]\nprint(new_listB)\n# Prints:\n#\n# ['hello']\n\nNotes\nThe function difflib.get_close_matches, contains a parameter named cutoff. This parameter accepts values between 0 and 1 and is equal to 0.6 by default. The ideia here is that the lower you set this cutoff parameter, the lesser strict the function will be when trying to find elements from listA that match line. Here's an example:\n\ndifflib.get_close_matches('John', ['John', 'Joe', 'Jane', 'Janet'], cutoff=0.2, n=100)\n# Returns:\n#\n# ['John', 'Joe', 'Jane', 'Janet']\n\ndifflib.get_close_matches('John', ['John', 'Joe', 'Jane', 'Janet'], cutoff=0.6, n=100)\n# Returns:\n#\n# ['John']\n\n\n" ]
[ 1, 0 ]
[]
[]
[ "list", "python", "string" ]
stackoverflow_0074489212_list_python_string.txt
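If listA is plain strings rather than regex patterns, one way to mix exact and partial matching in a single pass is to split the rules into two collections. The split below (which lines need exact vs. prefix treatment) is an assumption for illustration:

```python
exact = {"config", "\n"}            # remove only on a full match
prefixes = ("config checkpoint",)   # remove on a partial (prefix) match

listB = ["config exclusive", "config checkpoint test", "config",
         "config", "config", "\n", "hello"]

# str.startswith accepts a tuple, so all prefix rules are checked at once.
listB = [line for line in listB
         if line not in exact and not line.startswith(prefixes)]
print(listB)  # ['config exclusive', 'hello']
```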
Q: convert tuple to dict and accessing its values
inputTuple = ({'mobile': '91245555555', 'email': 'xyz@gmail.com', 'name': 'xyz', 'app_registration': 1},)

print(type(inputTuple))  # <class 'tuple'>

my_dict = dict(inputTuple)
print(my_dict)  # ValueError: dictionary update sequence element #0 has length 4; 2 is required

mobile = my_dict.get("mobile")
email = my_dict.get("email")
name = my_dict.get("name")
print(mobile)
print(email)
print(name)

How do I get each piece of data from this tuple? First, how do I convert it to a dict? I need to convert it to a dict and get all the key-value pairs, not by using index values.
Thanks for the answers

A: Do you just want
my_dict = inputTuple[0]
data = my_dict['mobile']
print(data)

A: inputTuple = ({'mobile': '91245555555', 'email': 'xyz@gmail.com', 'name': 'xyz', 'app_registration': 1})

In the question the inputTuple value ended with a comma, which makes it a tuple; if we remove that, it will be a dict. It worked in the below way after the change in the question:
mobile = data.get("mobile", None)

Thanks for all
convert tuple to dict and accessing its values
inputTuple = ({'mobile': '91245555555', 'email': 'xyz@gmail.com', 'name': 'xyz', 'app_registration': 1},)

print(type(inputTuple))  # <class 'tuple'>

my_dict = dict(inputTuple)
print(my_dict)  # ValueError: dictionary update sequence element #0 has length 4; 2 is required

mobile = my_dict.get("mobile")
email = my_dict.get("email")
name = my_dict.get("name")
print(mobile)
print(email)
print(name)

How do I get each piece of data from this tuple? First, how do I convert it to a dict? I need to convert it to a dict and get all the key-value pairs, not by using index values.
Thanks for the answers
[ "Do you just want\nmy_dict = inputTuple[0]\ndata = my_dict['mobile']\nprint(data) \n\n", "inputTuple = ({'mobile': '91245555555', 'email': 'xyz@gmail.com', 'name': 'xyz', 'app_registration': 1})\n\nIn the question the inputTuple value ended with 'comma' which will make it as tuple and if we remove that it will be dict. and it worked in below way after change in question.\n mobile = data.get(\"mobile\", None)\n\nThanks for all\n" ]
[ 1, -1 ]
[]
[]
[ "dictionary", "key", "python", "tuples" ]
stackoverflow_0074472214_dictionary_key_python_tuples.txt
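Since each element of the tuple is already a dict, the same pattern generalises to tuples holding several records; the second record below is made up for illustration:

```python
inputTuple = (
    {'mobile': '91245555555', 'email': 'xyz@gmail.com', 'name': 'xyz', 'app_registration': 1},
    {'mobile': '80000000000', 'email': 'abc@gmail.com', 'name': 'abc', 'app_registration': 0},
)

for record in inputTuple:
    # .get returns None (or a supplied default) when a key is missing.
    print(record.get("name"), record.get("mobile"), record.get("email"))
```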
Q: Python Selenium `execute_cdp_cmd` only works at the first run
I am trying to change device geolocation using Selenium Python (with Selenium Wire to catch http requests) by:
from seleniumwire import webdriver

options = webdriver.EdgeOptions()
options.accept_insecure_certs = True
options.add_argument('--disable-blink-features=AutomationControlled')
driver = webdriver.Edge(seleniumwire_options={
    'port': 12345, 'disable_encoding': True}, options=options)

# List of coordinates
lats = [44.55, 43.63, 52.25, 45.48, 47.35]
longs = [-77.33, -80.33, -81.62, -76.81, -84.62]

for i in range(len(lats)):
    coordinates = {
        "latitude": lats[i],
        "longitude": longs[i],
        "accuracy": i + 1
    }

    driver.execute_cdp_cmd("Emulation.setGeolocationOverride", coordinates)
    driver.get(some_website)

    # I can see the coordinates passed to the `execute_cdp_cmd` changed every loop
    print(coordinates)

However, the actual geolocation will always be the first coordinates. The only way is to create a new driver at each loop, but that's really bad performance. What am I doing wrong?

A: Have you tried using Emulation.clearGeolocationOverride prior to calling Emulation.setGeolocationOverride?
from seleniumwire import webdriver

options = webdriver.EdgeOptions()
options.accept_insecure_certs = True
options.add_argument('--disable-blink-features=AutomationControlled')
driver = webdriver.Edge(seleniumwire_options={
    'port': 12345, 'disable_encoding': True}, options=options)

# List of coordinates
lats = []
longs = []

for i in range(len(lats)):
    coordinates = {
        "latitude": lats[i],
        "longitude": longs[i],
        "accuracy": i + 1
    }

    driver.execute_cdp_cmd("Emulation.clearGeolocationOverride")
    driver.execute_cdp_cmd("Emulation.setGeolocationOverride", coordinates)
    driver.get(some_website)

    print(coordinates)
Python Selenium `execute_cdp_cmd` only works at the first run
I am trying to change device geolocation using Selenium Python (with Selenium Wire to catch http requests) by:
from seleniumwire import webdriver

options = webdriver.EdgeOptions()
options.accept_insecure_certs = True
options.add_argument('--disable-blink-features=AutomationControlled')
driver = webdriver.Edge(seleniumwire_options={
    'port': 12345, 'disable_encoding': True}, options=options)

# List of coordinates
lats = [44.55, 43.63, 52.25, 45.48, 47.35]
longs = [-77.33, -80.33, -81.62, -76.81, -84.62]

for i in range(len(lats)):
    coordinates = {
        "latitude": lats[i],
        "longitude": longs[i],
        "accuracy": i + 1
    }

    driver.execute_cdp_cmd("Emulation.setGeolocationOverride", coordinates)
    driver.get(some_website)

    # I can see the coordinates passed to the `execute_cdp_cmd` changed every loop
    print(coordinates)

However, the actual geolocation will always be the first coordinates. The only way is to create a new driver at each loop, but that's really bad performance. What am I doing wrong?
[ "Have you tried using Emulation.clearGeolocationOverride prior to calling Emulation.setGeolocationOverride?\nfrom seleniumwire import webdriver\n\noptions = webdriver.EdgeOptions()\noptions.accept_insecure_certs = True\noptions.add_argument('--disable-blink-features=AutomationControlled')\ndriver = webdriver.Edge(seleniumwire_options={\n 'port': 12345, 'disable_encoding': True}, options=options)\n\n# List of coordinates\nlats = []\nlongs = []\n\nfor i in range(len(lats)):\n coordinates = {\n \"latitude\": lats[i],\n \"longitude\": longs[i],\n \"accuracy\": i + 1\n }\n\n driver.execute_cdp_cmd(\"Emulation.clearGeolocationOverride\")\n driver.execute_cdp_cmd(\"Emulation.setGeolocationOverride\", coordinates)\n driver.get(some_website)\n\n print(coordinates)\n\n" ]
[ 1 ]
[]
[]
[ "python", "selenium", "selenium_webdriver" ]
stackoverflow_0074483259_python_selenium_selenium_webdriver.txt
Q: os.pipe() forcing me to write an input
I have code in Python which modifies stdin and stderr. It works successfully, but at the end of the program it asks for input (for the shell).
python code:
import os
import subprocess

stdin_read, stdin_write = os.pipe()
stderr_read, stderr_write = os.pipe()

os.write(stdin_write, b'\x00\x0a\x00\xff')
os.write(stderr_write, b'\x00\x0a\x02\xff')

subprocess.Popen(["./prog"], stdin=stdin_read, stderr=stderr_read)

code in c (prog.c)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(){
    char buf[4];
    read(0, buf, 4);
    if(!memcmp(buf, "\x00\x0a\x00\xff", 4)){
        printf("success -A \n");
    }
    read(2, buf, 4);
    if(!memcmp(buf, "\x00\x0a\x02\xff", 4)){
        printf("success -B \n");
    }
    return 0;
}

I typed the echo "hello world" command at the prompt that appeared (terminal-prove).
I was expecting the program to end and not let me write input to the shell. Do I need to close the pipe I created? What should I do so that this won't happen to me?

A: Notice the Python program ended before printing success -A and success -B. Apparently ./prog started, then Python ended, then ./prog printed its thing. The pipe still exists until all programs that have access to it close it (or end) so ./prog can still read the data from both pipes.
You typed the echo command into the shell prompt after python ended. It's still the same prompt even though ./prog printed two lines in the middle, between the prompt and your command.
If you want Python to wait for ./prog to finish you can do that:
my_proc = subprocess.Popen(["./prog"],stdin=stdin_read, stderr=stderr_read)
my_proc.wait()
os.pipe() forcing me to write an input
I have code in Python which modifies stdin and stderr. It works successfully, but at the end of the program it asks for input (for the shell).
python code:
import os
import subprocess

stdin_read, stdin_write = os.pipe()
stderr_read, stderr_write = os.pipe()

os.write(stdin_write, b'\x00\x0a\x00\xff')
os.write(stderr_write, b'\x00\x0a\x02\xff')

subprocess.Popen(["./prog"], stdin=stdin_read, stderr=stderr_read)

code in c (prog.c)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(){
    char buf[4];
    read(0, buf, 4);
    if(!memcmp(buf, "\x00\x0a\x00\xff", 4)){
        printf("success -A \n");
    }
    read(2, buf, 4);
    if(!memcmp(buf, "\x00\x0a\x02\xff", 4)){
        printf("success -B \n");
    }
    return 0;
}

I typed the echo "hello world" command at the prompt that appeared (terminal-prove).
I was expecting the program to end and not let me write input to the shell. Do I need to close the pipe I created? What should I do so that this won't happen to me?
[ "Notice the Python program ended before printing success -A and success -B. Apparently ./prog started, then Python ended, then ./prog printed its thing. The pipe still exists until all programs that have access to it close it (or end) so ./prog can still read the data from both pipes.\nYou typed the echo command into the shell prompt after python ended. It's still the same prompt even though ./prog printed two lines in the middle, between the prompt and your command.\nIf you want Python to wait for ./prog to finish you can do that:\nmy_proc = subprocess.Popen([\"./prog\"],stdin=stdin_read, stderr=stderr_read)\nmy_proc.wait()\n\n" ]
[ 1 ]
[]
[]
[ "c", "file_descriptor", "linux", "python" ]
stackoverflow_0074489387_c_file_descriptor_linux_python.txt
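Building on the answer, a sketch of the Python side that also closes the parent's copies of the pipe descriptors (so ./prog sees EOF if it ever tries to read more than the four bytes) and waits for the child before the script exits:

```python
import os
import subprocess

stdin_read, stdin_write = os.pipe()
stderr_read, stderr_write = os.pipe()

os.write(stdin_write, b'\x00\x0a\x00\xff')
os.write(stderr_write, b'\x00\x0a\x02\xff')

proc = subprocess.Popen(["./prog"], stdin=stdin_read, stderr=stderr_read)

# Close the parent's copies of all four descriptors; the child received
# its own duplicates of the read ends when Popen spawned it.
for fd in (stdin_read, stdin_write, stderr_read, stderr_write):
    os.close(fd)

proc.wait()  # don't return to the shell prompt until ./prog has finished
```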
Q: How to access `ApplyResult` and `Event` types in the multiprocessing library
I've written a working wrapper around the python multiprocessing code so I can easily start, clean up, and catch errors in my processes. I've recently decided to go back and add proper type hints to this code, however I can't figure out how to use the types defined in multiprocessing correctly.
I have a function which accepts a list of ApplyResult objects and extracts those results, before returning a list of those results (if successful)
from typing import Any
from typing import TypeVar
import multiprocessing as mp

_T = TypeVar("_T")

def wait_for_pool_results(
    results: list[mp.pool.ApplyResult[_T]],
    terminate_process_event: mp.Event,
    result_timeout: int,
) -> list[_T]:
    do_stuff()

When running this code I get the following error:
    results: list[mp.pool.ApplyResult[_T]],
AttributeError: module 'multiprocessing' has no attribute 'pool'

Looking through the code, this is the location of the ApplyResult definition, and it's not available via mp.ApplyResult either. I could change this type hint to an Any to get around the issue (I currently do). How do I access the ApplyResult type from python's multiprocessing library?
Furthermore, although I can assign the mp.Event type, mypy complains that Mypy: Function "multiprocessing.Event" is not valid as a type. How do I correctly access this type too?

A: To resolve such issues with standard library, usually typeshed repo is useful enough. In mp __init__.py Event is defined as some attribute of context. Going to mp context.py, we find out that Event is defined as synchronize.Event, and in mp synchronize.py we finally find the class definition.
The issue with mp.pool is of different kind: it has to be imported as from multiprocessing.pool import ApplyResult (or by aliasing import of multiprocessing.pool - but not as attribute of mp), because it is a nested module. See this SO question for reference.
So, the following shows proper typing in this context:
from typing import Any, TypeVar

from multiprocessing.pool import ApplyResult
from multiprocessing.synchronize import Event

_T = TypeVar("_T")

def wait_for_pool_results(
    results: list[ApplyResult[_T]],
    terminate_process_event: Event,
    result_timeout: int,
) -> list[_T]:
    ...

(playground)
Also, regarding your runtime error: if some annotations are accepted by mypy, but cause runtime issues, it often means you need from __future__ import annotations as first line of the file. It enables postponed evaluation of annotations, so any valid python is accepted in annotations.
UPD: mypy allows mp.pool attribute, because it cannot reliably track whether this import happened before. For runtime, consider the following scenario:
>>> import multiprocessing as mp
>>> mp.pool
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'multiprocessing' has no attribute 'pool'. Did you mean: 'Pool'?
>>> from multiprocessing import pool
>>> mp.pool
<module 'multiprocessing.pool' from '...'>

So, mypy cannot reliably know whether some kind of import (including importlib.import_module or __import__ calls) of submodule happened before, so it assumes that it was. Attribute errors of such kind are easy to resolve when they arise, so this decision makes sense from usability point of view.
How to access `ApplyResult` and `Event` types in the multiprocessing library
I've written a working wrapper around the python multiprocessing code so I can easily start, clean up, and catch errors in my processes. I've recently decided to go back and add proper type hints to this code, however I can't figure out how to use the types defined in multiprocessing correctly.
I have a function which accepts a list of ApplyResult objects and extracts those results, before returning a list of those results (if successful)
from typing import Any
from typing import TypeVar
import multiprocessing as mp

_T = TypeVar("_T")

def wait_for_pool_results(
    results: list[mp.pool.ApplyResult[_T]],
    terminate_process_event: mp.Event,
    result_timeout: int,
) -> list[_T]:
    do_stuff()

When running this code I get the following error:
    results: list[mp.pool.ApplyResult[_T]],
AttributeError: module 'multiprocessing' has no attribute 'pool'

Looking through the code, this is the location of the ApplyResult definition, and it's not available via mp.ApplyResult either. I could change this type hint to an Any to get around the issue (I currently do). How do I access the ApplyResult type from python's multiprocessing library?
Furthermore, although I can assign the mp.Event type, mypy complains that Mypy: Function "multiprocessing.Event" is not valid as a type. How do I correctly access this type too?
[ "To resolve such issues with standard library, usually typeshed repo is useful enough. In mp __init__.py Event is defined as some attribute of context. Going to mp context.py, we find out that Event is defined as synchronize.Event, and in mp synchronize.py we finally find the class definition.\nThe issue with mp.pool is of different kind: it has to be imported as from multiprocessing.pool import ApplyResult (or by aliasing import of multiprocessing.pool - but not as attribute of mp), because it is a nested module. See this SO question for reference.\nSo, the following shows proper typing in this context:\nfrom typing import Any, TypeVar\n\nfrom multiprocessing.pool import ApplyResult\nfrom multiprocessing.synchronize import Event\n\n_T = TypeVar(\"_T\")\n\ndef wait_for_pool_results(\n results: list[ApplyResult[_T]],\n terminate_process_event: Event,\n result_timeout: int,\n) -> list[_T]:\n ...\n\n(playground)\nAlso, regarding your runtime error: if some annotations are accepted by mypy, but cause runtime issues, it often means you need from __future__ import annotations as first line of the file. It enables postponed evaluation of annotations, so any valid python is accepted in annotations.\nUPD:\nmypy allows mp.pool attribute, because it cannot reliably track whether this import happened before. For runtime, consider the following scenario:\n>>> import multiprocessing as mp\n\n>>> mp.pool\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: module 'multiprocessing' has no attribute 'pool'. Did you mean: 'Pool'?\n\n>>> from multiprocessing import pool\n\n>>> mp.pool\n<module 'multiprocessing.pool' from '...'>\n\n\nSo, mypy cannot reliably know whether some kind of import (including importlib.import_module or __import__ calls) of submodule happened before, so it assumes that it was. Atrribute errors of such kind are easy to resolve when they arise, so this decision makes sense from usability point of view.\n" ]
[ 1 ]
[]
[]
[ "multiprocessing", "mypy", "python", "python_typing" ]
stackoverflow_0074488948_multiprocessing_mypy_python_python_typing.txt
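A small usage sketch of the annotated function's inputs: with the imports from the answer, mypy infers ApplyResult[int] for apply_async(pow, ...), so wait_for_pool_results would narrow to list[int] here. The getting of results is inlined since the body of the original function was elided.

```python
from multiprocessing import Pool

if __name__ == "__main__":
    with Pool(4) as pool:
        results = [pool.apply_async(pow, (i, 2)) for i in range(5)]
        squares = [r.get(timeout=5) for r in results]
    print(squares)  # [0, 1, 4, 9, 16]
```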
Q: I have a numpy array with the shape of 480x600, numpy complex numbers; is there a way to append it to an empty array which has more of these inside?
Ok, so in this loop in a function of a class
for oo in range(norient):
    ...
    for ss in range(nscale):
        filt=logGabor[ss]*spread

This filt numpy array contains numpy complex numbers. So this filt numpy array has a shape of 480x600 and the loop would run 12 times, so I would like to have a numpy array with 12 values which contains the other arrays of 480x600.
In the init of the class I started a self.espacial=np.empty(shape=(0,12),dtype=complex) and at the end of the for loop I tried to append to it. I read that numpy append doesn't work very well in a loop, and it gave me:
self.espacial=np.append(self.espacial,filt)
ValueError: all the input arrays must have same number of dimensions

A: Numpy arrays aren't very good if the size constantly changes, instead collect into a list and convert to an array at the end:
special = []
for oo in range(norient):
    …
    for ss in range(nscale):
        filt=logGabor[ss]*spread
        special.append(filt)

special = np.array(special)
I have a numpy array with the shape of 480x600, numpy complex numbers; is there a way to append it to an empty array which has more of these inside?
Ok, so in this loop in a function of a class
for oo in range(norient):
    ...
    for ss in range(nscale):
        filt=logGabor[ss]*spread

This filt numpy array contains numpy complex numbers. So this filt numpy array has a shape of 480x600 and the loop would run 12 times, so I would like to have a numpy array with 12 values which contains the other arrays of 480x600.
In the init of the class I started a self.espacial=np.empty(shape=(0,12),dtype=complex) and at the end of the for loop I tried to append to it. I read that numpy append doesn't work very well in a loop, and it gave me:
self.espacial=np.append(self.espacial,filt)
ValueError: all the input arrays must have same number of dimensions
[ "Numpy arrays aren't very good if the size constantly changes, instead collect into a list and convert to an array at the end:\nspecial = []\nfor oo in range(norient):\n …\n for ss in range(nscale):\n filt=logGabor[ss]*spread\n special.append(filt)\n\nspecial = np.array(special)\n\n" ]
[ 1 ]
[]
[]
[ "append", "arrays", "complex_numbers", "numpy", "python" ]
stackoverflow_0074489552_append_arrays_complex_numbers_numpy_python.txt
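A quick check of the list-then-convert approach from the answer, using stand-in data shaped like the arrays in the question:

```python
import numpy as np

frames = []
for _ in range(12):
    filt = np.zeros((480, 600), dtype=complex)  # stand-in for logGabor[ss] * spread
    frames.append(filt)

stacked = np.array(frames)  # np.stack(frames) is equivalent here
print(stacked.shape, stacked.dtype)  # (12, 480, 600) complex128
```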
Q: Skulpt vs Trinket.io Python Version
I'm confused. On http://skulpt.org/ it says, under "what's new?":

Python 3 Grammar. The master branch is now building and running using the grammar for Python 3.7.3. There are still lots of things to implement under the hood, but we have made a huge leap forward in Python 3 compatibility. We will still support Python 2 as an option going forward for projects that rely on it.

It also says on the site that the way to embed Skulpt on a site is using Trinket.io. However, on Trinket.io, if you create a new Trinket, the only free option is "Python". It doesn't specify the version, but there is a locked option for "Python 3."
What is the deal please? Is it that Trinket.io offers a paid version of Skulpt that supports Python 3? Or are the default "Python" trinkets "kinda sorta" Python 3, as described on the Skulpt homepage? Or what?
The bottom line is, I want to embed some Python 3 on my site, but I'm not ready to pay for a subscription. Is this possible please?

A: It looks like Trinket.io doesn't use Skulpt for Python 3 anymore. When you run a Python 3 code on Trinket.io it says "Connecting to server" which wouldn't be necessary with Skulpt.
Skulpt vs Trinket.io Python Version
I'm confused. On http://skulpt.org/ it says, under "what's new?":

Python 3 Grammar. The master branch is now building and running using the grammar for Python 3.7.3. There are still lots of things to implement under the hood, but we have made a huge leap forward in Python 3 compatibility. We will still support Python 2 as an option going forward for projects that rely on it.

It also says on the site that the way to embed Skulpt on a site is using Trinket.io. However, on Trinket.io, if you create a new Trinket, the only free option is "Python". It doesn't specify the version, but there is a locked option for "Python 3."
What is the deal please? Is it that Trinket.io offers a paid version of Skulpt that supports Python 3? Or are the default "Python" trinkets "kinda sorta" Python 3, as described on the Skulpt homepage? Or what?
The bottom line is, I want to embed some Python 3 on my site, but I'm not ready to pay for a subscription. Is this possible please?
[ "It looks like Trinket.io doesn't use Skulpt for Python 3 anymore. When you run a Python 3 code on Trinket.io it says \"Connecting to server\" which wouldn't be necessary with Skulpt.\n" ]
[ 0 ]
[]
[]
[ "embed", "python", "python_3.x", "skulpt" ]
stackoverflow_0074478932_embed_python_python_3.x_skulpt.txt
Q: Cartopy not able to Identify GEOS for PROJ install on Windows
I am trying to install Cartopy on Windows. I have installed all the dependencies from their website, however when I go to run pip install Cartopy I get:
  Complete output (5 lines):
  setup.py:117: UserWarning: Unable to determine GEOS version. Ensure you have 3.7.2 or later installed, or installation may fail.
    warnings.warn(
  setup.py:166: UserWarning: Unable to determine Proj version. Ensure you have 8.0.0 or later installed, or installation may fail.
    warnings.warn(
  Proj version 0.0.0 is installed, but cartopy requires at least version 8.0.0

I have run and successfully completed
pip install proj
pip install geos

A: Installing Cartopy on Windows using pip is not trivial. Nevertheless, here is a cartopy installation overview using the method that worked for me, specifically for Windows and without using conda.

1. Start by uninstalling proj, geos, and shapely if they are already installed, otherwise skip to step 2. This will facilitate linking them in later steps.
pip uninstall shapely
pip uninstall proj
pip uninstall geos

2. Install proj and geos from OSGeo4W. You cannot use pip to install these because pip points to other projects of the same name. Instead, use the OSGeo4W installer: https://trac.osgeo.org/osgeo4w/ Run as admin and use all the default options, including default installation directories (Advanced Install -> Install from Internet -> All Users -> Next -> Direct Connection -> download.osgeo.org). Then search proj, expand Libs and click the top two "skip"s (proj and proj-data) once each to toggle to the latest release. Now search geos, expand Libs again, and toggle the first "skip" (geos) once to the latest version. Then click next, allow the installer to load dependencies, and click next. The installation took ~5 minutes for me. You now have proj and geos installed.

3. Install shapely from the .whl. You cannot use the method listed in the cartopy install instructions; it fails to link shapely to geos and you will get an error when importing cartopy. Instead, head to https://www.lfd.uci.edu/~gohlke/pythonlibs/#shapely and download the version that suits your python installation (e.g. if you run 64-bit python 3.10, download Shapely‑1.8.1.post1‑cp310‑cp310‑win_amd64.whl). Now you can run pip install \{path}\{to}\{whl}\{Shapely_file.whl}

4. Install cartopy from the .whl. You can download one that suits you here: https://www.lfd.uci.edu/~gohlke/pythonlibs/#cartopy Pick the one that suits your system (e.g. if you run 64-bit python 3.10, download Cartopy‑0.20.2‑cp310‑cp310‑win_amd64.whl). Now you can run pip install \{path}\{to}\{whl}\{Cartopy_file.whl}

That's it! It took me a long while and sifting through at least a couple dozen "just use conda" threads to figure this out.
Select relevant discussions: https://github.com/SciTools/cartopy/issues/1471
https://towardsdatascience.com/install-shapely-on-windows-72b6581bb46c

A: The answer provided by jlave worked great in installing Cartopy.
In order to successfully import Cartopy after its installation, pyproj needed to be installed from the same site as shapely.
If pyproj already exists delete it.
Then download the version of pyproj that fits your system from https://www.lfd.uci.edu/~gohlke/pythonlibs/#pyproj
If this step is not included I am getting the following error:
ImportError: DLL load failed while importing trace: The specified module could not be found.
For cartopy.crs and cartopy.trace
Cartopy not able to Identify GEOS for PROJ install on Windows
I am trying to install Cartopy on Windows. I have installed all the dependencies from their website, however when I go to run pip install Cartopy I get:
  Complete output (5 lines):
  setup.py:117: UserWarning: Unable to determine GEOS version. Ensure you have 3.7.2 or later installed, or installation may fail.
    warnings.warn(
  setup.py:166: UserWarning: Unable to determine Proj version. Ensure you have 8.0.0 or later installed, or installation may fail.
    warnings.warn(
  Proj version 0.0.0 is installed, but cartopy requires at least version 8.0.0

I have run and successfully completed
pip install proj
pip install geos
[ "Installing Cartopy on Windows using pip is not trivial. Nevertheless, here is a cartopy installation overview using the method that worked for me, specifically for Windows and without using conda.\n\nStart by uninstalling proj, geos, and shapely if they are already installed, otherwise skip to step 2. This will facilitate linking them in later steps. pip uninstall shapely pip uninstall proj pip uninstall geos\n\nInstall proj and geos from OSGeo4W. You cannot use pip to install these because pip points to other projects of the same name. Instead, use the OSGeo4W installer: https://trac.osgeo.org/osgeo4w/ Run as admin and use all the default options, including default installation directories (Advanced Install -> Install from Internet -> All Users -> Next -> Direct Connection -> download.osgeo.org). Then search proj, expand Libs and click the top two \"skip\"s (proj and proj-data) once each to toggle to the latest release. Now search geos, expand Libs again, and toggle the first \"skip\" (geos) once to the latest version. Then click next, allow the installer to load dependencies, and click next. The installation took ~5 minutes for me. You now have proj and geos installed.\n\nInstall shapely from the .whl. You cannot use the method listed in the cartopy install instructions; it fails to link shapely to geos and you will get an error when importing cartopy. Instead, head to https://www.lfd.uci.edu/~gohlke/pythonlibs/#shapely and download the version that suits your python installation (e.g. if you run 64-bit python 3.10, download Shapely‑1.8.1.post1‑cp310‑cp310‑win_amd64.whl) Now you can run pip install \\{path}\\{to}\\{whl}\\{Shapely_file.whl}\n\nInstall cartopy from the .whl. You can download one that suits you here: https://www.lfd.uci.edu/~gohlke/pythonlibs/#cartopy Pick the one that suits your system (e.g. if you run 64-bit python 3.10, download Cartopy‑0.20.2‑cp310‑cp310‑win_amd64.whl) Now you can run pip install \\{path}\\{to}\\{whl}\\{Cartopy_file.whl}\n\n\nThat's it! It took me a long while and sifting through at least a couple dozen \"just use conda\" threads to figure this out.\nSelect relevant discussions: https://github.com/SciTools/cartopy/issues/1471\nhttps://towardsdatascience.com/install-shapely-on-windows-72b6581bb46c\n", "The answer provided by jlave worked great in installing Cartopy.\nIn order to successfully import Cartopy after its installation, pyproj needed to be installed from the same site as shapely.\n\nIf pyproj already exists delete it.\nThen download the version of pyproj that fits your system from https://www.lfd.uci.edu/~gohlke/pythonlibs/#pyproj\n\nIf this step is not included I am getting the following error:\nImportError: DLL load failed while importing trace: The specified module could not be found. \n\nFor cartopy.crs and cartopy.trace\n" ]
[ 14, 0 ]
[ "Do yourself a favour and use conda (or even better mamba) to manage your package-dependencies!\n1 line and it will work out of the box in Windows, MacOS and Linux.\nconda install -c conda-forge cartopy\n\nManaging dependencies yourself is tedious and error-prone, especially when it comes to c or c++ dependencies (which is the case for geo-libraries such as pyproj or gdal)\n... it's also what cartopy recommends in their docs!\n" ]
[ -4 ]
[ "cartopy", "geos", "pip", "proj", "python" ]
stackoverflow_0070177062_cartopy_geos_pip_proj_python.txt
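Once the wheels are installed, a minimal smoke test confirms that cartopy can find its GEOS/PROJ libraries (matplotlib is assumed to be installed):

```python
import cartopy.crs as ccrs
import matplotlib.pyplot as plt

ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()  # downloads Natural Earth coastline data on first use
plt.show()
```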
Q: LibreOffice, Python, get-pip, pip imports ok but then? module is in LibreOffice sitelib folder
Related questions:
How update Libre Office Python on windows?
Pyuno on Python 3.6 installation issue

I have downloaded get-pip.py to my LibreOffice program folder, and used it to install pip. Using pip in that folder, I have installed pymodbus. pip list shows that pymodbus is installed for that version of python, in that folder. And pymodbus is there, in the site-packages folder.
But when I try to run a script ("from pymodbus.client.sync import ModbusTcpClient") in APSO, I get this error:
<class 'ImportError'>: No module named 'pymodbus.client.sync' (or 'pymodbus.client.sync.ModbusTcpClient' is unknown)
  File "C:\Program Files (x86)\misc\LibreOffice\program\pythonscript.py", line 1057, in getScript
    mod = self.provCtx.getModuleByUrl( fileUri )
  File "C:\Program Files (x86)\misc\LibreOffice\program\pythonscript.py", line 494, in getModuleByUrl
    exec(code, entry.module.__dict__)
  File "C:\Program Files (x86)\misc\LibreOffice\share\Scripts\python\MyTestScript.py", line 8, in <module>
    from pymodbus.client.sync import ModbusTcpClient
  File "C:\Program Files (x86)\misc\LibreOffice\program\uno.py", line 425, in _uno_import
    raise uno_import_exc
  File "C:\Program Files (x86)\misc\LibreOffice\program\uno.py", line 346, in _uno_import
    return _builtin_import(name, *optargs, **kwargs)

pip list tells me that the only known site packages are distlib, filelock, pip, platformirs, pymodbus, setuptools, virtualenv, wheel.
This is Win7/64, LibreOffice 7.4 (python-core-3.8.14).
Not particularly related questions:
pip installed module but python gives Import error
Too many different Python versions on my system and causing problems

https://extensions.libreoffice.org/en/extensions/show/apso-alternative-script-organizer-for-python is installed and can be used: https://superuser.com/questions/1297120/use-a-python-function-as-formula-in-libreoffice-calc-cells
There is no other python installed (but there has been). I don't know why a package which is in sitelibs should show that error: I don't know if I've done anything wrong or different. Does it make any sense to anyone else?
I've used that same package and same statement in other installations of python 3.8: This is the first time I've done anything with LibreOffice.

A: It turns out that, due to breaking changes in version 3.0 of pymodbus, the documentation at https://pymodbus-n.readthedocs.io/en/latest/readme.html#summary (Docs » PyModbus - A Python Modbus Stack, Summary) is not actually correct.
And, of course, my reference implementation using Anaconda somehow got out of sync: conda-forge claims that its pyModbus version is 3.2, but somehow it managed to give me 2.3 instead.
So this is not a LibreOffice question at all, and should be closed.
LibreOffice, Python, get-pip, pip imports ok but then? module is in LibreOffice sitelib folder
Related questions:
How update Libre Office Python on windows?
Pyuno on Python 3.6 installation issue

I have downloaded get-pip.py to my LibreOffice program folder, and used it to install pip. Using pip in that folder, I have installed pymodbus. pip list shows that pymodbus is installed for that version of python, in that folder. And pymodbus is there, in the site-packages folder.
But when I try to run a script ("from pymodbus.client.sync import ModbusTcpClient") in APSO, I get this error:
<class 'ImportError'>: No module named 'pymodbus.client.sync' (or 'pymodbus.client.sync.ModbusTcpClient' is unknown)
  File "C:\Program Files (x86)\misc\LibreOffice\program\pythonscript.py", line 1057, in getScript
    mod = self.provCtx.getModuleByUrl( fileUri )
  File "C:\Program Files (x86)\misc\LibreOffice\program\pythonscript.py", line 494, in getModuleByUrl
    exec(code, entry.module.__dict__)
  File "C:\Program Files (x86)\misc\LibreOffice\share\Scripts\python\MyTestScript.py", line 8, in <module>
    from pymodbus.client.sync import ModbusTcpClient
  File "C:\Program Files (x86)\misc\LibreOffice\program\uno.py", line 425, in _uno_import
    raise uno_import_exc
  File "C:\Program Files (x86)\misc\LibreOffice\program\uno.py", line 346, in _uno_import
    return _builtin_import(name, *optargs, **kwargs)

pip list tells me that the only known site packages are distlib, filelock, pip, platformirs, pymodbus, setuptools, virtualenv, wheel.
This is Win7/64, LibreOffice 7.4 (python-core-3.8.14).
Not particularly related questions:
pip installed module but python gives Import error
Too many different Python versions on my system and causing problems

https://extensions.libreoffice.org/en/extensions/show/apso-alternative-script-organizer-for-python is installed and can be used: https://superuser.com/questions/1297120/use-a-python-function-as-formula-in-libreoffice-calc-cells
There is no other python installed (but there has been). I don't know why a package which is in sitelibs should show that error: I don't know if I've done anything wrong or different. Does it make any sense to anyone else?
I've used that same package and same statement in other installations of python 3.8: This is the first time I've done anything with LibreOffice.
[ "It turns out that, due to breaking changes in version 3.0 of pymodbus, the documentation at https://pymodbus-n.readthedocs.io/en/latest/readme.html#summary (Docs Β» PyModbus - A Python Modbus Stack, Summary) is not actually correct.\nAnd, of course, my reference implementation using Anaconda somehow got out of sync: conda-forge claims that its pyModbus version is 3.2, but somehow it managed to give me 2.3 instead.\nSo this is not a LibreOffice question at all, and should be closed.\n" ]
[ 0 ]
[]
[]
[ "pip", "python" ]
stackoverflow_0074484853_pip_python.txt
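For anyone hitting the same ImportError: the breaking change mentioned in the answer moved the client classes in pymodbus 3.x, so the import path differs by major version. The device address below is a placeholder:

```python
# pymodbus >= 3.0: the synchronous client moved out of the .sync module
from pymodbus.client import ModbusTcpClient

# pymodbus 2.x used the import from the question:
# from pymodbus.client.sync import ModbusTcpClient

client = ModbusTcpClient("127.0.0.1", port=502)  # placeholder address
```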
Q: Automated Messages in discord.py (discord bot)
I'm programming a discord bot which should start some commands, including a timer and two surveys, every Tuesday and Thursday at 11:30 am. Unfortunately the documentation is outdated and older articles on Stack Overflow do not work anymore.
How do I do that in Python, or is this impossible? The single commands are already programmed and work without problems.

A: Actually, this option can't be added with discord.py.
You should use the time or datetime modules to manage scripts at the correct times.
Check this article!

A: I got my solution. I worked with apscheduler and now I can time it for days and times.
This can be closed.
Automated Messages in discord.py (discord bot)
I'm programming a discord bot which should start some commands, including a timer and two surveys, every Tuesday and Thursday at 11:30 am. Unfortunately the documentation is outdated and older articles on Stack Overflow do not work anymore.
How do I do that in Python, or is this impossible? The single commands are already programmed and work without problems.
[ "Actually , This Option Can't Be Added With Discord.py .\nYou Should Use Time Or Datetime Modules . \nTo Manage Scripts At The Correct Times .\nCheck This Article !\n", "I got my soultion. I worked with apscheduler and now I can time it for days and times.\nThis can be closed.\n" ]
[ 0, 0 ]
[]
[]
[ "automation", "bots", "discord", "discord.py", "python" ]
stackoverflow_0074430312_automation_bots_discord_discord.py_python.txt
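A minimal sketch of the apscheduler approach the second answer describes, pairing AsyncIOScheduler with a discord.py client. The channel id and token are placeholders, and the guard in on_ready is there because discord.py can fire on_ready again after reconnects:

```python
import discord
from apscheduler.schedulers.asyncio import AsyncIOScheduler
from apscheduler.triggers.cron import CronTrigger

CHANNEL_ID = 123456789  # placeholder channel id
client = discord.Client(intents=discord.Intents.default())
scheduler = AsyncIOScheduler()


async def run_surveys():
    channel = client.get_channel(CHANNEL_ID)
    await channel.send("Time for the timer and the two surveys!")


@client.event
async def on_ready():
    if not scheduler.running:
        # Every Tuesday and Thursday at 11:30
        scheduler.add_job(run_surveys,
                          CronTrigger(day_of_week="tue,thu", hour=11, minute=30))
        scheduler.start()


client.run("YOUR_BOT_TOKEN")  # placeholder token
```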
Q: Get tables from AWS Glue using boto3
I need to harvest tables and column names from the AWS Glue crawler metadata catalogue. I used boto3, but I constantly get 100 tables even though there are more. Setting up NextToken doesn't help. Please help if possible.
The desired result is a list as follows:
lst = [table_one.col_one, table_one.col_two, table_two.col_one....table_n.col_n]

def harvest_aws_crawler():
    glue = boto3.client('glue', region_name='')
    response = glue.get_tables(DatabaseName='', NextToken = '')

    #response syntax:
    #https://boto3.amazonaws.com/v1/documentation/api/1.9.42/reference/services/glue.html#Glue.Client.get_tables
    crawler_list_tables = []
    for tables in response['TableList']:
        while (response.get('NextToken') is not None):
            crawler_list_tables.append(tables['Name'])
            break
    print(len(crawler_list_tables))

harvest_aws_crawler()

UPDATED code, still need to have tablename+columnname:
def harvest_aws_crawler():
    glue = boto3.client('glue', region_name='')
    next_token = ""

    #response syntax:
    #https://boto3.amazonaws.com/v1/documentation/api/1.9.42/reference/services/glue.html#Glue.Client.get_tables
    response = glue.get_tables(DatabaseName='', NextToken = next_token)

    tables_from_crawler = []
    while True:
        table_list = response['TableList']
        for table_dict in table_list:
            table_name = table_dict['Name']
            #append table_name+column_name
            for columns in table_name['StorageDescriptor']['Columns']:
                tables_from_crawler.append(table_name + '.' + columns['Name'])
            #tables_from_crawler.append(table_name)
        next_token = response.get('NextToken')
        if next_token is None:
            break
    print(tables_from_crawler)

harvest_aws_crawler()

A: You can try the below approach by using the paginator option:
def get_tables_for_database(database):
    starting_token = None
    next_page = True
    tables = []
    while next_page:
        paginator = glue_client.get_paginator(operation_name="get_tables")
        response_iterator = paginator.paginate(
            DatabaseName=database,
            PaginationConfig={"PageSize": 100, "StartingToken": starting_token},
        )
        for elem in response_iterator:
            tables += [
                {
                    "name": table["Name"],
                }
                for table in elem["TableList"]
            ]
            try:
                starting_token = elem["NextToken"]
            except:
                next_page = False
    return tables

and then do invoke the method to list out the tables for a given database:
for table in get_tables_for_database(database):
    print(f"Table: {table['name']}")

If you want to list the tables for every database out there in Glue, you may have to do an additional for loop in order to retrieve the databases first, and then extract the tables using the above snippet as your inner loop for each database.

A: Adding sub-loop did the trick to get table+column result.
#harvest aws crawler metadata
next_token = ""
client = boto3.client('glue',region_name='us-east-1')
crawler_tables = []

while True:
    response = client.get_tables(DatabaseName = '', NextToken = next_token)
    for tables in response['TableList']:
        for columns in tables['StorageDescriptor']['Columns']:
            crawler_tables.append(tables['Name'] + '.' + columns['Name'])
    next_token = response.get('NextToken')
    if next_token is None:
        break
print(crawler_tables)

A: You should use MaxResults
response = glue.get_tables(DatabaseName='', NextToken = '', MaxResults = number_that_greater_than_your_actual_tables)

A: tables = list(dynamodb_resource.tables.all()) worked for me. And if I need only names in my script, additionally I use table_name = tables_names[x].name
Get tables from AWS Glue using boto3
I need to harvest tables and column names from the AWS Glue crawler metadata catalogue. I used boto3, but I constantly get 100 tables even though there are more. Setting up NextToken doesn't help. Please help if possible.
The desired result is a list as follows:
lst = [table_one.col_one, table_one.col_two, table_two.col_one....table_n.col_n]

def harvest_aws_crawler():
    glue = boto3.client('glue', region_name='')
    response = glue.get_tables(DatabaseName='', NextToken = '')

    #response syntax:
    #https://boto3.amazonaws.com/v1/documentation/api/1.9.42/reference/services/glue.html#Glue.Client.get_tables
    crawler_list_tables = []
    for tables in response['TableList']:
        while (response.get('NextToken') is not None):
            crawler_list_tables.append(tables['Name'])
            break
    print(len(crawler_list_tables))

harvest_aws_crawler()

UPDATED code, still need to have tablename+columnname:
def harvest_aws_crawler():
    glue = boto3.client('glue', region_name='')
    next_token = ""

    #response syntax:
    #https://boto3.amazonaws.com/v1/documentation/api/1.9.42/reference/services/glue.html#Glue.Client.get_tables
    response = glue.get_tables(DatabaseName='', NextToken = next_token)

    tables_from_crawler = []
    while True:
        table_list = response['TableList']
        for table_dict in table_list:
            table_name = table_dict['Name']
            #append table_name+column_name
            for columns in table_name['StorageDescriptor']['Columns']:
                tables_from_crawler.append(table_name + '.' + columns['Name'])
            #tables_from_crawler.append(table_name)
        next_token = response.get('NextToken')
        if next_token is None:
            break
    print(tables_from_crawler)

harvest_aws_crawler()
[ "You can try the below approach by using the paginator option:\ndef get_tables_for_database(database):\n starting_token = None\n next_page = True\n tables = []\n while next_page:\n paginator = glue_client.get_paginator(operation_name=\"get_tables\")\n response_iterator = paginator.paginate(\n DatabaseName=database,\n PaginationConfig={\"PageSize\": 100, \"StartingToken\": starting_token},\n )\n for elem in response_iterator:\n tables += [\n {\n \"name\": table[\"Name\"],\n }\n for table in elem[\"TableList\"]\n ]\n try:\n starting_token = elem[\"NextToken\"]\n except:\n next_page = False\n return tables\n\nand then do invoke the method to list out the tables for a given database:\nfor table in get_tables_for_database(database):\n print(f\"Table: {table['name']}\")\n\nIf you want to list the tables for every database out there in Glue, you may have to do an additional for loop in order to retrieve the databases first, and then extract the tables using the above snippet as your inner loop for each database.\n", "Adding sub-loop did the trick to get table+column result.\n#harvest aws crawler metadata\nnext_token = \"\"\nclient = boto3.client('glue',region_name='us-east-1')\ncrawler_tables = []\n\nwhile True:\n response = client.get_tables(DatabaseName = '', NextToken = next_token)\n for tables in response['TableList']:\n for columns in tables['StorageDescriptor']['Columns']:\n crawler_tables.append(tables['Name'] + '.' + columns['Name'])\n next_token = response.get('NextToken')\n if next_token is None:\n break\nprint(crawler_tables)\n\n", "You should use MaxResults\nresponse = glue.get_tables(DatabaseName='', NextToken = '', MaxResults = number_that_greater_than_your_actual_tables)\n\n", "tables = list(dynamodb_resource.tables.all()) worked for me. And if I need only names in my script, additionally I use table_name = tables_names[x].name \n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "amazon_web_services", "aws_glue", "boto3", "pyspark", "python" ]
stackoverflow_0066545190_amazon_web_services_aws_glue_boto3_pyspark_python.txt
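If you need the same listing across every database in the catalog, the first answer's helper can be driven by a second paginator over get_databases; glue_client is assumed to be the same boto3 Glue client used above:

```python
def get_all_databases():
    databases = []
    paginator = glue_client.get_paginator("get_databases")
    for page in paginator.paginate():
        databases += [db["Name"] for db in page["DatabaseList"]]
    return databases


for database in get_all_databases():
    for table in get_tables_for_database(database):
        print(f"{database}.{table['name']}")
```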
Q: Is there a workaround to prevent Gmail API for python from asking for a new token each time I run my python script?
I have a python script that sends emails with attachments using Gmail's API. Each time (mostly after a day) I run the script, I get an error that the token is invalid. The only solution I have identified so far is to download the json file each time I run the script, but I was expecting this to be done only once as I intend to convert the script to a desktop application.

A: Google sends you an authToken and a RefreshToken, which need to be stored to refresh your token when it is no longer valid.
Check this: https://developers.google.com/identity/protocols/oauth2

A: There are two types of tokens: access tokens and refresh tokens.
Access tokens are only good for an hour. Refresh tokens are long lived and should work until the access has been removed. If your application is still in the testing phase, then the token will only work for seven days. There is one other thing: if you are creating your tokens from the Google OAuth2 playground, I believe they are only good for three hours, give or take.
The best solution for all of the above is to ensure that your app is first set to production, and second that you are properly storing your token after you have created it.
In the sample below the token is stored in token.json for later use.
def Authorize(credentials_file_path, token_file_path):
    """Shows basic usage of authorization"""
    try:
        credentials = None
        # The file token.json stores the user's access and refresh tokens, and is
        # created automatically when the authorization flow completes for the first
        # time.
        if os.path.exists(token_file_path):
            try:
                credentials = Credentials.from_authorized_user_file(token_file_path, SCOPES)
                credentials.refresh(Request())
            except google.auth.exceptions.RefreshError as error:
                # if refresh token fails, reset creds to none.
                credentials = None
                print(f'An refresh authorization error occurred: {error}')
        # If there are no (valid) credentials available, let the user log in.
        if not credentials or not credentials.valid:
            if credentials and credentials.expired and credentials.refresh_token:
                credentials.refresh(Request())
            else:
                flow = InstalledAppFlow.from_client_secrets_file(
                    credentials_file_path, SCOPES)
                credentials = flow.run_local_server(port=0)
            # Save the credentials for the next run
            with open(token_file_path, 'w') as token:
                token.write(credentials.to_json())
    except HttpError as error:
        # Todo handle error
        print(f'An authorization error occurred: {error}')

    return credentials

if __name__ == '__main__':
    creds = Authorize('C:\\YouTube\\dev\\credentials.json', "token.json")
Is there a workaround to prevent Gmail API for python from asking for a new token each time I run my python script?
I have a python script that sends emails with attachments using Gmail's API. Each time (mostly after a day) I run the script, I get an error that the token is invalid. The only solution I have identified so far is to download the json file each time I run the script, but I was expecting this to be done only once as I intend to convert the script to a desktop application.
[ "Google sends you an authToken and a RefreshToken, who need to be stored to refresh your token when he is no longer valid.\nCheck that :\nhttps://developers.google.com/identity/protocols/oauth2\n", "There are two types of tokens access tokens and refresh tokens.\nAccess tokens are only good for an hour. Refresh tokens are long lived and should work until the access has been removed. Or if your application is still in the testing phase then the token will only work for seven days. There is one other thing, if you are creating your tokens from google oauth2 playground I bleave they are only good for three hours give or take.\nThe best solution for all of the above is to ensure that your app is first off set to prodctuion, and second that you are properly storing your token after you have created it.\nIn the sample below the token is stored in the token.json for later use.\ndef Authorize(credentials_file_path, token_file_path):\n \"\"\"Shows basic usage of authorization\"\"\"\n try:\n credentials = None\n # The file token.json stores the user's access and refresh tokens, and is\n # created automatically when the authorization flow completes for the first\n # time.\n if os.path.exists(token_file_path):\n try:\n credentials = Credentials.from_authorized_user_file(token_file_path, SCOPES)\n credentials.refresh(Request())\n except google.auth.exceptions.RefreshError as error:\n # if refresh token fails, reset creds to none.\n credentials = None\n print(f'An refresh authorization error occurred: {error}')\n # If there are no (valid) credentials available, let the user log in.\n if not credentials or not credentials.valid:\n if credentials and credentials.expired and credentials.refresh_token:\n credentials.refresh(Request())\n else:\n flow = InstalledAppFlow.from_client_secrets_file(\n credentials_file_path, SCOPES)\n credentials = flow.run_local_server(port=0)\n # Save the credentials for the next run\n with open(token_file_path, 'w') as token:\n token.write(credentials.to_json())\n except HttpError as error:\n # Todo handle error\n print(f'An authorization error occurred: {error}')\n\n return credentials\n\nif __name__ == '__main__':\n creds = Authorize('C:\\\\YouTube\\\\dev\\\\credentials.json', \"token.json\")\n\n" ]
[ 0, 0 ]
[]
[]
[ "api", "gmail", "python" ]
stackoverflow_0074488359_api_gmail_python.txt
Q: Client get message from server using asyncio in python

We are trying to use asyncio to run a straightforward client/server. The server is an echo server with two possible commands sent by the client, "quit" and "timer". The timer command starts a timer that will print a message in the console every second (at the server and client), and the quit command closes the connection.
The actual problem is the following: When we run the server and the client, and we start the timer, the result of the timer is not sent to the client. It blocks the server and the client. I believe that the problem is on the client's side. However, I was not able to detect it.
Server
import asyncio
import time

HOST = "127.0.0.1"
PORT = 9999

class Timer(object):
    '''Simple timer class that can be started and stopped.'''

    def __init__(self, writer: asyncio.StreamWriter, name = None, interval = 1) -> None:
        self.name = name
        self.interval = interval
        self.writer = writer

    async def _tick(self) -> None:
        while True:
            await asyncio.sleep(self.interval)
            delta = time.time() - self._init_time
            self.writer.write(f"Timer {delta} ticked\n".encode())
            self.writer.drain()
            print("Delta time: ", delta)

    async def start(self) -> None:
        self._init_time = time.time()
        self.task = asyncio.create_task(self._tick())

    async def stop(self) -> None:
        self.task.cancel()
        print("Delta time: ", time.time() - self._init_time)

async def msg_handler(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    '''Handle the echo protocol.'''
    # timer task that the client can start:
    timer_task = False
    try:
        while True:
            data = await reader.read(1024)  # Read 256 bytes from the reader. Size of the message
            msg = data.decode()  # Decode the message
            addr, port = writer.get_extra_info("peername")  # Get the address of the client
            print(f"Received {msg!r} from {addr}:{port!r}")
            send_message = "Message received: " + msg
            writer.write(send_message.encode())  # Echo the data back to the client
            await writer.drain()  # This will wait until everything is clear to move to the next thing.
            if data == b"quit" and timer_task is True:
                # cancel the timer_task (if any)
                if timer_task:
                    timer_task.cancel()
                    await timer_task
                writer.close()  # Close the connection
                await writer.wait_closed()  # Wait for the connection to close
            elif data == b"quit" and timer_task is False:
                writer.close()  # Close the connection
                await writer.wait_closed()  # Wait for the connection to close
            elif data == b"start" and timer_task is False:
                print("Starting timer")
                t = Timer(writer)
                timer_task = True
                await t.start()
            elif data == b"stop" and timer_task is True:
                print("Stopping timer")
                await t.stop()
                timer_task = False
    except ConnectionResetError:
        print("Client disconnected")

async def run_server() -> None:
    # Our awaitable callable.
    # This callable is ran when the server recieves some data
    server = await asyncio.start_server(msg_handler, HOST, PORT)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    loop = asyncio.new_event_loop()  # new_event_loop() is for python 3.10. For older versions, use get_event_loop()
    loop.run_until_complete(run_server())

Client
import asyncio

HOST = '127.0.0.1'
PORT = 9999

async def run_client() -> None:
    # It's a coroutine. It will wait until the connection is established
    reader, writer = await asyncio.open_connection(HOST, PORT)
    while True:
        message = input('Enter a message: ')
        writer.write(message.encode())
        await writer.drain()
        data = await reader.read(1024)
        if not data:
            raise Exception('Socket not communicating with the client')
        print(f"Received {data.decode()!r}")
        if (message == 'quit'):
            writer.write(b"quit")
            writer.close()
            await writer.wait_closed()
            exit(2)
            # break  # Don't know if this is necessary

if __name__ == '__main__':
    loop = asyncio.new_event_loop()
    loop.run_until_complete(run_client())

A: The client blocks on the input() function. This question is similar to server stop receiving msg after 1 msg receive

A: Finally, I found a possible solution, by separating the thread.
import asyncio
import websockets
import warnings
warnings.filterwarnings("ignore")

async def send_msg(websocket):
    while True:
        imp = await asyncio.get_event_loop().run_in_executor(None, lambda: input("Enter something: "))
        print("MESSAGE: ", imp)
        await websocket.send(imp)
        #return imp

async def recv_msg(websocket):
    while True:
        msg = await websocket.recv()
        print(f":> {msg}")


async def echo_loop():
    uri = f"ws://localhost:8765"
    async with websockets.connect(uri, ssl=None) as websocket:
        while True:
            await asyncio.gather(recv_msg(websocket),send_msg(websocket))


if __name__ == "__main__":
    asyncio.get_event_loop().run_until_complete(echo_loop())
    asyncio.get_event_loop().run_forever()

A: It seems that there is no clear solution. In particular, there have been many changes in python since the early releases of asyncio, so many possible solutions are outdated.
I change the code to use WebSockets. However, the problem persists: input blocks the code, and none of the solutions above have solved my problem.
Below is the new version of the code (and the error remains):
Server
import asyncio
import websockets
import time

class Timer(object):
    '''Simple timer class that can be started and stopped.'''

    def __init__(self, websocket, name=None, interval=1) -> None:
        self.websocket = websocket
        self.name = name
        self.interval = interval

    async def _tick(self) -> None:
        while True:
            await asyncio.sleep(self.interval)
            await self.websocket.send("tick")
            print("Delta time: ", time.time() - self._init_time)

    async def start(self) -> None:
        self._init_time = time.time()
        self.task = asyncio.create_task(self._tick())

    async def stop(self) -> None:
        self.task.cancel()
        print("Delta time: ", time.time() - self._init_time)

async def handler(websocket):
    print("[WS-SERVER] client connected")
    while True:
        try:
            msg = await websocket.recv()
            print(f"<: {msg}")
            await websocket.send("Message received. {}".format(msg))
            if(msg == "start"):
                timer = Timer(websocket)
                await timer.start()

        except websockets.ConnectionClosed:
            print("[WS-SERVER] client disconnected")
            break

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        print("[WS-SERVER] ready")
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())

Client
import asyncio
import websockets


'''async function that recieves and prints messages from the server'''
async def recieve_message(websocket):
    msg1 = await websocket.recv()
    print(f"<: {msg1}")

async def send_message(websocket):
    msg = input("Put your message here: ")
    await websocket.send(msg)
    print(":> Sent message: ", msg)

async def handler():
    uri = "ws://localhost:8765"
    async with websockets.connect(uri) as websocket:

        while True:
            '''run input() in a separate thread'''

            recv_msg, send_msg = await asyncio.gather(
                recieve_message(websocket),
                send_message(websocket),
                return_exceptions=True)

            if(send_msg == "test"):
                print("Supertest")


async def main():
    await handler()
    await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(handler())
    print("[WS-CLIENT] bye")
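For reference, the same fix — moving the blocking input() call onto a worker thread via run_in_executor and reading server data in a separate task — also works with the original StreamReader/StreamWriter client, without switching to WebSockets. This is a minimal sketch, not the poster's code; HOST and PORT are assumed to match the server above:

import asyncio

HOST = '127.0.0.1'
PORT = 9999

async def run_client() -> None:
    reader, writer = await asyncio.open_connection(HOST, PORT)

    async def print_incoming():
        # Runs concurrently with the input loop, so timer ticks arrive immediately.
        while data := await reader.read(1024):
            print(f"Received {data.decode()!r}")

    recv_task = asyncio.create_task(print_incoming())
    loop = asyncio.get_running_loop()
    while True:
        # input() would block the event loop; run it in a worker thread instead.
        message = await loop.run_in_executor(None, input, 'Enter a message: ')
        writer.write(message.encode())
        await writer.drain()
        if message == 'quit':
            recv_task.cancel()
            writer.close()
            await writer.wait_closed()
            break

if __name__ == '__main__':
    asyncio.run(run_client())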
Client get message from server using asyncio in python
We are trying to use asyncio to run a straightforward client/server. The server is an echo server with two possible commands sent by the client, "quit" and "timer". The timer command starts a timer that will print a message in the console every second (at the server and client), and the quit command closes the connection.
The actual problem is the following: When we run the server and the client, and we start the timer, the result of the timer is not sent to the client. It blocks the server and the client. I believe that the problem is on the client's side. However, I was not able to detect it.
Server
import asyncio
import time

HOST = "127.0.0.1"
PORT = 9999

class Timer(object):
    '''Simple timer class that can be started and stopped.'''

    def __init__(self, writer: asyncio.StreamWriter, name = None, interval = 1) -> None:
        self.name = name
        self.interval = interval
        self.writer = writer

    async def _tick(self) -> None:
        while True:
            await asyncio.sleep(self.interval)
            delta = time.time() - self._init_time
            self.writer.write(f"Timer {delta} ticked\n".encode())
            self.writer.drain()
            print("Delta time: ", delta)

    async def start(self) -> None:
        self._init_time = time.time()
        self.task = asyncio.create_task(self._tick())

    async def stop(self) -> None:
        self.task.cancel()
        print("Delta time: ", time.time() - self._init_time)

async def msg_handler(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    '''Handle the echo protocol.'''
    # timer task that the client can start:
    timer_task = False
    try:
        while True:
            data = await reader.read(1024)  # Read 256 bytes from the reader. Size of the message
            msg = data.decode()  # Decode the message
            addr, port = writer.get_extra_info("peername")  # Get the address of the client
            print(f"Received {msg!r} from {addr}:{port!r}")
            send_message = "Message received: " + msg
            writer.write(send_message.encode())  # Echo the data back to the client
            await writer.drain()  # This will wait until everything is clear to move to the next thing.
            if data == b"quit" and timer_task is True:
                # cancel the timer_task (if any)
                if timer_task:
                    timer_task.cancel()
                    await timer_task
                writer.close()  # Close the connection
                await writer.wait_closed()  # Wait for the connection to close
            elif data == b"quit" and timer_task is False:
                writer.close()  # Close the connection
                await writer.wait_closed()  # Wait for the connection to close
            elif data == b"start" and timer_task is False:
                print("Starting timer")
                t = Timer(writer)
                timer_task = True
                await t.start()
            elif data == b"stop" and timer_task is True:
                print("Stopping timer")
                await t.stop()
                timer_task = False
    except ConnectionResetError:
        print("Client disconnected")

async def run_server() -> None:
    # Our awaitable callable.
    # This callable is ran when the server recieves some data
    server = await asyncio.start_server(msg_handler, HOST, PORT)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    loop = asyncio.new_event_loop()  # new_event_loop() is for python 3.10. For older versions, use get_event_loop()
    loop.run_until_complete(run_server())

Client
import asyncio

HOST = '127.0.0.1'
PORT = 9999

async def run_client() -> None:
    # It's a coroutine. It will wait until the connection is established
    reader, writer = await asyncio.open_connection(HOST, PORT)
    while True:
        message = input('Enter a message: ')
        writer.write(message.encode())
        await writer.drain()
        data = await reader.read(1024)
        if not data:
            raise Exception('Socket not communicating with the client')
        print(f"Received {data.decode()!r}")
        if (message == 'quit'):
            writer.write(b"quit")
            writer.close()
            await writer.wait_closed()
            exit(2)
            # break  # Don't know if this is necessary

if __name__ == '__main__':
    loop = asyncio.new_event_loop()
    loop.run_until_complete(run_client())
[ "The client blocks on the input() function. This question is similar to server stop receiving msg after 1 msg receive\n", "Finally, I found a possible solution, by separating the thread.\nimport asyncio\nimport websockets\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nasync def send_msg(websocket):\n while True:\n imp = await asyncio.get_event_loop().run_in_executor(None, lambda: input(\"Enter something: \"))\n print(\"MESSAGE: \", imp)\n await websocket.send(imp)\n #return imp\n\nasync def recv_msg(websocket):\n while True:\n msg = await websocket.recv()\n print(f\":> {msg}\")\n\n\nasync def echo_loop():\n uri = f\"ws://localhost:8765\"\n async with websockets.connect(uri, ssl=None) as websocket:\n while True:\n await asyncio.gather(recv_msg(websocket),send_msg(websocket))\n\n\nif __name__ == \"__main__\":\n asyncio.get_event_loop().run_until_complete(echo_loop())\n asyncio.get_event_loop().run_forever()\n\n", "It seems that there is no clear solution. In particular, there have been many changes in python since the early releases of asyncio, so many possible solutions are outdated.\nI change the code to use WebSockets. However, the problem persists: input blocks the code, and none of the solutions above have solved my problem.\nBelow is the new version of the code (and the error remains):\nServer\nimport asyncio\nimport websockets\nimport time\n\nclass Timer(object):\n '''Simple timer class that can be started and stopped.'''\n\n def __init__(self, websocket, name=None, interval=1) -> None:\n self.websocket = websocket\n self.name = name\n self.interval = interval\n\n async def _tick(self) -> None:\n while True:\n await asyncio.sleep(self.interval)\n await self.websocket.send(\"tick\")\n print(\"Delta time: \", time.time() - self._init_time)\n\n async def start(self) -> None:\n self._init_time = time.time()\n self.task = asyncio.create_task(self._tick())\n\n async def stop(self) -> None:\n self.task.cancel()\n print(\"Delta time: \", time.time() - self._init_time)\n\nasync def handler(websocket):\n print(\"[WS-SERVER] client connected\")\n while True:\n try:\n msg = await websocket.recv()\n print(f\"<: {msg}\")\n await websocket.send(\"Message received. {}\".format(msg))\n if(msg == \"start\"):\n timer = Timer(websocket)\n await timer.start()\n\n except websockets.ConnectionClosed:\n print(\"[WS-SERVER] client disconnected\")\n break\n\nasync def main():\n async with websockets.serve(handler, \"localhost\", 8765):\n print(\"[WS-SERVER] ready\")\n await asyncio.Future() # run forever\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n\nClient\nimport asyncio\nimport websockets\n\n\n'''async function that recieves and prints messages from the server'''\nasync def recieve_message(websocket):\n msg1 = await websocket.recv()\n print(f\"<: {msg1}\")\n\nasync def send_message(websocket):\n msg = input(\"Put your message here: \")\n await websocket.send(msg)\n print(\":> Sent message: \", msg)\n\nasync def handler():\n uri = \"ws://localhost:8765\"\n async with websockets.connect(uri) as websocket:\n\n while True:\n '''run input() in a separate thread'''\n\n recv_msg, send_msg = await asyncio.gather(\n recieve_message(websocket),\n send_message(websocket),\n return_exceptions=True)\n\n if(send_msg == \"test\"):\n print(\"Supertest\")\n\n\nasync def main():\n await handler()\n await asyncio.Future() # run forever\n\nif __name__ == \"__main__\":\n asyncio.run(handler())\n print(\"[WS-CLIENT] bye\")\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "python", "python_asyncio" ]
stackoverflow_0074468560_python_python_asyncio.txt
Q: TypeError: list indices must be integers or slices, not str - dealing with dataframes

I'm having the error below with a project and looked for some explanation (like this page) and I get the cause of the error. But I couldn't figure out what might be the problem in this case.
Traceback (most recent call last):
  File "c:\Users\luisa.oliveira\Programs\VScode\dashboard-ecg-atualizado\app\views\dashboard.py", line 688, in n_protocolo_callback
    g1, quantidades = graficos(
  File "c:\Users\luisa.oliveira\Programs\VScode\dashboard-ecg-atualizado\app\views\dashboard.py", line 51, in graficos
    desejados = ids[ids["ID"] == numero]["Protocolo"]
TypeError: list indices must be integers or slices, not str

some background to line 688:
def n_protocolo_callback(data, laudos, idade, sexo, operacao, pagina, quais_graficos):
    idade = faixas_idades.get(idade)
    idade_inicial, idade_final = None, None
    sexo = sexos.get(sexo)
    if idade:
        idade_inicial, idade_final = idade
    ids = laudos
    dados.loc[:, "Data"] = pd.to_datetime(dados.loc[:, "Data"])
    # cria primeiro gráfico
    g1, quantidades = graficos(
        dados, ids, data, laudos, [idade_inicial, idade_final], sexo
    )

beginning of func: graficos()
def graficos(dados, ids, data, laudos, idade, sexo):
    data_inicial, data_final = data
    idade_inicial, idade_final = idade
    figura = []
    n_laudos = [numeros_diagnosticos.get(laudo) for laudo in laudos]
    mostrar = []
    for i, (nome, numero) in enumerate(zip(laudos, n_laudos)):
        desejados = ids[ids["ID"] == numero]["Protocolo"]

Edit: "ids" is a csv with 2 columns (ID and Protocolo) that's read by pd.read_csv. "numero" is an integer and "nome" a string
I think the problem has to do with indexing ids with "ID", but I'm not sure if that's right or how to solve it

A: Global variables can be accessed inside of functions as long as they are on the right side of an assignment:
# Global variable
x = 3

def fun():
    # assign value of global variable x to local variable y
    y = x

If, however you have a variable with the same name of a global variable on the left side of an assignment, you'll create a local variable inside your function:
# Global variable
x = 3

def fun():
    # Initialize local variable
    x = 5
    print(x)

print(x) # prints 3
fun() # prints 5
print(x) # prints 3

fun() assigns a different value to x but it doesn't affect the global variable outside.
If you define a function, all parameters are of local scope.
When you pass those arguments to a function, Python checks in the current scope for all variable names and only goes to the next scope if it doesn't find all variables.
# Global variabl
x = 3

def outer():
    def inner(x):
        print(x)
    inner(x)

outer() # Prints 3

Try to avoid using global variables as they obfuscate your code.
I removed the parts not affecting your outcome for this explanation:
import pandas as pd

# global variables
ids = pd.DataFrame({"ID":[1, 2, 3, 4, 5, 6], "Protocolo":[444, 555, 666, 777, 888, 999,]})

# Presumably some list as this caused your TypeError
laudos = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]


def n_protocolo_callback(laudos):
    # Initialize a local variable ids with values from laudos
    ids = laudos
    # Calling graficos with local variable of ids
    g1, quantidades = graficos(ids, laudos)


# Define local variable of ids in the scope of function graficos
def graficos(ids, laudos):
    n_laudos = [numeros_diagnosticos.get(laudo) for laudo in laudos]

    for i, (nome, numero) in enumerate(zip(laudos, n_laudos)):
        # Accessing local variable ids
        desejados = ids[ids["ID"] == numero]["Protocolo"]

# Calling n_protocolo_callback will cause a TypeError
# n_protocolo_callback(laudos)

The error is caused as you re-assigned ids with the values of laudos inside n_protocolo_callback and then passed the local variable to graficos.
When you use global variables you don't need to add them as parameter to your function.

The TypeError is caused as a list can't be indexed with a string.
As the error message reads you can index either using integer or slices, i.e:
l = [555, 666, 777, 888, 999]

# Accessing first element using integer
print(l[0]) # prints 555

# Accessing first 3 elements using slice:
print(l[0:3]) # prints [555, 666, 777]
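Applying that explanation to the traceback above: the line ids = laudos inside n_protocolo_callback replaces the DataFrame with the list of laudos, so graficos() receives a list and ids["ID"] fails. A hedged sketch of the fix, with a placeholder CSV path standing in for however the real app loads the file:

import pandas as pd

ids = pd.read_csv("ids.csv")  # hypothetical path; columns: ID, Protocolo

def n_protocolo_callback(data, laudos, idade, sexo, operacao, pagina, quais_graficos):
    # keep the DataFrame; do NOT re-bind ids to the list of laudos
    # ids = laudos   # <- this line caused the TypeError
    g1, quantidades = graficos(dados, ids, data, laudos, [idade_inicial, idade_final], sexo)
    return g1, quantidades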
TypeError: list indices must be integers or slices, not str - dealing with dataframes
I'm having the error below with a project and looked for some explanation (like this page) and I get the cause of the error. But I couldn't figure out what might be the problem in this case.
Traceback (most recent call last):
  File "c:\Users\luisa.oliveira\Programs\VScode\dashboard-ecg-atualizado\app\views\dashboard.py", line 688, in n_protocolo_callback
    g1, quantidades = graficos(
  File "c:\Users\luisa.oliveira\Programs\VScode\dashboard-ecg-atualizado\app\views\dashboard.py", line 51, in graficos
    desejados = ids[ids["ID"] == numero]["Protocolo"]
TypeError: list indices must be integers or slices, not str

some background to line 688:
def n_protocolo_callback(data, laudos, idade, sexo, operacao, pagina, quais_graficos):
    idade = faixas_idades.get(idade)
    idade_inicial, idade_final = None, None
    sexo = sexos.get(sexo)
    if idade:
        idade_inicial, idade_final = idade
    ids = laudos
    dados.loc[:, "Data"] = pd.to_datetime(dados.loc[:, "Data"])
    # cria primeiro gráfico
    g1, quantidades = graficos(
        dados, ids, data, laudos, [idade_inicial, idade_final], sexo
    )

beginning of func: graficos()
def graficos(dados, ids, data, laudos, idade, sexo):
    data_inicial, data_final = data
    idade_inicial, idade_final = idade
    figura = []
    n_laudos = [numeros_diagnosticos.get(laudo) for laudo in laudos]
    mostrar = []
    for i, (nome, numero) in enumerate(zip(laudos, n_laudos)):
        desejados = ids[ids["ID"] == numero]["Protocolo"]

Edit: "ids" is a csv with 2 columns (ID and Protocolo) that's read by pd.read_csv. "numero" is an integer and "nome" a string
I think the problem has to do with indexing ids with "ID", but I'm not sure if that's right or how to solve it
[ "Global variables can be accessed inside of functions as long as they are on the right side of an assignment:\n# Global variable\nx = 3\n\ndef fun():\n # assign value of global variable x to local variable y\n y = x\n\nIf, however you have a variable with the same name of a global variable on the left side of an assignment, you'll create a local variable inside your function:\n# Global variable\nx = 3\n\ndef fun():\n # Initialize local variable\n x = 5\n print(x)\n\nprint(x) # prints 3\nfun() # prints 5\nprint(x) # prints 3\n\nfun() assigns a different value to x but it doesn't affect the global variable outside.\nIf you define a function, all parameters are of local scope.\nWhen you pass those arguments to a function, Python checks in the current scope for all variable names and only goes to the next scope if it doesn't find all variables.\n# Global variabl\nx = 3\n\ndef outer():\n def inner(x):\n print(x)\n inner(x)\n\nouter() # Prints 3\n\nTry to avoid using global variables as they obfuscate your code.\nI removed the parts not affecting your outcome for this explanation:\nimport pandas as pd\n\n# global variables\nids = pd.DataFrame({\"ID\":[1, 2, 3, 4, 5, 6], \"Protocolo\":[444, 555, 666, 777, 888, 999,]})\n\n# Presumably some list as this caused your TypeError\nlaudos = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\n\n\ndef n_protocolo_callback(laudos):\n # Initialize a local variable ids with values from laudos\n ids = laudos\n # Calling graficos with local variable of ids\n g1, quantidades = graficos(ids, laudos)\n\n\n# Define local variable of ids in the scope of function graficos\ndef graficos(ids, laudos):\n n_laudos = [numeros_diagnosticos.get(laudo) for laudo in laudos]\n\n for i, (nome, numero) in enumerate(zip(laudos, n_laudos)):\n # Accessing local variable ids\n desejados = ids[ids[\"ID\"] == numero][\"Protocolo\"]\n\n# Calling n_protocolo_callback will cause a TypeError\n# n_protocolo_callback(laudos)\n\nThe error is caused as you re-assigned ids with the values of laudos inside n_protocolo_callback and then passed the local variable to graficos.\nWhen you use global variables you don't need to add them as parameter to your function.\n\nThe TypeError is caused as a list can't be indexed with a string.\nAs the error message reads you can index either using integer or slices, i.e:\nl = [555, 666, 777, 888, 999]\n\n# Accessing first element using integer\nprint(l[0]) # prints 555\n\n# Accessing first 3 elements using slice:\nprint(l[0:3]) # prints [555, 666, 777]\n\n" ]
[ 0 ]
[]
[]
[ "callback", "list", "pandas", "plotly_dash", "python" ]
stackoverflow_0074478000_callback_list_pandas_plotly_dash_python.txt
Q: Set two colors for a point of a matplotlib-scatter plot

So, realising that this may not be possible. What I want to do looks something like this:
point_x = [1]
point_y = [1]

col1 = ['blue']
col2 = ['red']

plt.scatter(point_x,point_y, c=col1,marker='o')
plt.scatter(point_x,point_y, c=col2,marker=donut?)

This would represent one point, where a portion of the (let's say) sphere is color1, and a portion of (probably) a donut around the center of the sphere is color2. Has anyone tried this?

A: maybe specifiying the point size s would help
from matplotlib import pyplot as plt

point_x = 1
point_y = 1

col1 = ['blue']
col2 = ['red']

plt.scatter(point_x, point_y, c=col1, marker='o', s=1000)
plt.scatter(point_x, point_y, c=col2, marker='o', s=500)
plt.show()

output
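For reference, a single scatter call can also produce the two-color effect, since scatter accepts separate face and edge colors and a thick edge acts as the "donut". A minimal sketch; the size and line width are arbitrary choices:

from matplotlib import pyplot as plt

# blue disc with a thick red ring drawn by the marker edge
plt.scatter([1], [1], s=1000, facecolors='blue', edgecolors='red', linewidths=8)
plt.show()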
Set two colors for a point of a matplotlib-scatter plot
So, realising that this may not be possible. What I want to do looks something like this:
point_x = [1]
point_y = [1]

col1 = ['blue']
col2 = ['red']

plt.scatter(point_x,point_y, c=col1,marker='o')
plt.scatter(point_x,point_y, c=col2,marker=donut?)

This would represent one point, where a portion of the (let's say) sphere is color1, and a portion of (probably) a donut around the center of the sphere is color2. Has anyone tried this?
[ "maybe specifiying the point size s would help\nfrom matplotlib import pyplot as plt\n\npoint_x = 1\npoint_y = 1\n\ncol1 = ['blue']\ncol2 = ['red']\n\nplt.scatter(point_x, point_y, c=col1, marker='o', s=1000)\nplt.scatter(point_x, point_y, c=col2, marker='o', s=500)\nplt.show()\n\noutput\n\n" ]
[ 1 ]
[]
[]
[ "matplotlib", "python", "scatter_plot" ]
stackoverflow_0074489736_matplotlib_python_scatter_plot.txt
Q: best way to subtract mean monthly values for each grid in Python xarray

A toy dataset from here:
import numpy as np
import pandas as pd
import seaborn as sns
import xarray as xr

np.random.seed(123)
xr.set_options(display_style="html")

times = pd.date_range("2000-01-01", "2001-12-31", name="time")
annual_cycle = np.sin(2 * np.pi * (times.dayofyear.values / 365.25 - 0.28))

base = 10 + 15 * annual_cycle.reshape(-1, 1)
tmin_values = base + 3 * np.random.randn(annual_cycle.size, 3)
tmax_values = base + 10 + 3 * np.random.randn(annual_cycle.size, 3)

ds = xr.Dataset(
    {
        "tmin": (("time", "location"), tmin_values),
        "tmax": (("time", "location"), tmax_values),
    },
    {"time": times, "location": ["IA", "IN", "IL"]},
)

I know that here I can find how to subtract mean monthly values from a variable in xarray.DataSet() as below:
climatology = ds.groupby("time.month").mean("time")
anomalies = ds.groupby("time.month") - climatology
anomalies.mean("location").to_dataframe()[["tmin", "tmax"]].plot()

Then, can I do the subtraction for each location? I tried to do it for location-month groups, but the xarray.DataSet.groupby() doesn't allow to pass multiple groups. Then, I tried to make the location-month using xarray.DataSet.stack(), but it only allows to pass dimension; I could extract month values using time.month but they were restored as a new variable, not a dimension.
I can use for or xarray.DataSet.apply() for all locations, but it is too slow (I have about 65000 locations).
Expected results or processes are something like:
for each location:
    climatology = ds.groupby("time.month").mean("time")
    anomalies = ds.groupby("time.month") - climatology

A solution within only xarray would be the best, but if it is possible and quite fast using pd.DataFrame() or others, then those solutions are also welcome.
Edit
Here is my current solution using `pd.DataFrame()'
# convert to pd.dataframe
df = ds.to_dataframe()

# get mean monthly values
months = df.index.get_level_values('time').month
df_meanMonths = df.groupby([pd.Grouper(level='location'), months]).mean()

# rename and reindex
df_meanMonths.rename(columns={'tmin': 'tminMM', 'tmax': 'tmaxMM'}, inplace=True)
df_meanMonths.index.set_names('month', level='time', inplace=True)

# merge
df['month'] = df.index.get_level_values('time').month
vars_join = ['tminMM', 'tmaxMM']
join_right = df_meanMonths[vars_join]

# results
df.reset_index().set_index(['location', 'month']).merge(join_right, how='left', left_index=True, right_on=['location', 'month'])

A: I think that you might be looking for is this:
anomalies = xr.apply_ufunc(
    lambda x, mean: x - mean, 
    ds.tmax.groupby('time.month'),
    ds.tmax.groupby('time.month').mean()
).drop('month')

for just the tmax variable (a DataArray) or
anomalies = xr.apply_ufunc(
    lambda x, means: x - means, 
    ds.groupby('time.month'),
    ds.groupby('time.month').mean()
).drop('month')

for all variables in the Dataset
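Worth noting: because the climatology computed with mean("time") keeps the location dimension, the plain groupby subtraction already broadcasts per location — each location gets its own monthly mean subtracted, with no explicit loop. A quick sketch to check this on the toy dataset above (assuming ds as defined there):

climatology = ds.groupby("time.month").mean("time")   # dims: (month, location)
anomalies = ds.groupby("time.month") - climatology    # subtracts each location's own monthly mean

# per-location anomaly mean should be ~0 if the subtraction was per location
print(anomalies.tmin.sel(location="IA").mean().values)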
best way to subtract mean monthly values for each grid in Python xarray
A toy dataset from here:
import numpy as np
import pandas as pd
import seaborn as sns
import xarray as xr

np.random.seed(123)
xr.set_options(display_style="html")

times = pd.date_range("2000-01-01", "2001-12-31", name="time")
annual_cycle = np.sin(2 * np.pi * (times.dayofyear.values / 365.25 - 0.28))

base = 10 + 15 * annual_cycle.reshape(-1, 1)
tmin_values = base + 3 * np.random.randn(annual_cycle.size, 3)
tmax_values = base + 10 + 3 * np.random.randn(annual_cycle.size, 3)

ds = xr.Dataset(
    {
        "tmin": (("time", "location"), tmin_values),
        "tmax": (("time", "location"), tmax_values),
    },
    {"time": times, "location": ["IA", "IN", "IL"]},
)

I know that here I can find how to subtract mean monthly values from a variable in xarray.DataSet() as below:
climatology = ds.groupby("time.month").mean("time")
anomalies = ds.groupby("time.month") - climatology
anomalies.mean("location").to_dataframe()[["tmin", "tmax"]].plot()

Then, can I do the subtraction for each location? I tried to do it for location-month groups, but the xarray.DataSet.groupby() doesn't allow to pass multiple groups. Then, I tried to make the location-month using xarray.DataSet.stack(), but it only allows to pass dimension; I could extract month values using time.month but they were restored as a new variable, not a dimension.
I can use for or xarray.DataSet.apply() for all locations, but it is too slow (I have about 65000 locations).
Expected results or processes are something like:
for each location:
    climatology = ds.groupby("time.month").mean("time")
    anomalies = ds.groupby("time.month") - climatology

A solution within only xarray would be the best, but if it is possible and quite fast using pd.DataFrame() or others, then those solutions are also welcome.
Edit
Here is my current solution using `pd.DataFrame()'
# convert to pd.dataframe
df = ds.to_dataframe()

# get mean monthly values
months = df.index.get_level_values('time').month
df_meanMonths = df.groupby([pd.Grouper(level='location'), months]).mean()

# rename and reindex
df_meanMonths.rename(columns={'tmin': 'tminMM', 'tmax': 'tmaxMM'}, inplace=True)
df_meanMonths.index.set_names('month', level='time', inplace=True)

# merge
df['month'] = df.index.get_level_values('time').month
vars_join = ['tminMM', 'tmaxMM']
join_right = df_meanMonths[vars_join]

# results
df.reset_index().set_index(['location', 'month']).merge(join_right, how='left', left_index=True, right_on=['location', 'month'])
[ "I think that you might be looking for is this:\nanomalies = xr.apply_ufunc(\n lambda x, mean: x - mean, \n ds.tmax.groupby('time.month'),\n ds.tmax.groupby('time.month').mean()\n).drop('month')\n\nfor just the tmax variable (a DataArray) or\nanomalies = xr.apply_ufunc(\n lambda x, means: x - means, \n ds.groupby('time.month'),\n ds.groupby('time.month').mean()\n).drop('month')\n\nfor all variables in the Dataset\n" ]
[ 1 ]
[]
[]
[ "python", "python_xarray" ]
stackoverflow_0066903278_python_python_xarray.txt
Q: subprocess problem with PyDub: Python 3.63 v Python 3.10

Until recently I've been using python 3.63. When I need to use Pydub's audio_segment I do it like this to avoid a flash of the console when the app is frozen in a pyinstaller exe:
subprocess.STARTUPINFO.dwFlags |= subprocess.STARTF_USESHOWWINDOW
audio = AudioSegment.from_file('path_to_file')

Since moving to Python 3.10 this creates the error:
AttributeError: type object 'STARTUPINFO' has no attribute 'dwFlags'

I have tried adding options like:
creationflags=subprocess.CREATE_NO_WINDOW

and
startupinfo = subprocess.STARTUPINFO()
startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
startupinfo.wShowWindow = subprocess.SW_HIDE
p = subprocess.Popen(conversion_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, startupinfo=startupinfo)

to pydub itself, but the flash of console is always there. I've Googled for hours and got nowhere...

A: I finally got to the bottom of this.
A module of pydub called utils.py contains a couple of subprocess calls. I changed the calls from:
command = [prober] + command_args
output = Popen(command, stdout=PIPE).communicate([0].decode("utf-8")

To:
command = [prober] + command_args
startupinfo = subprocess.STARTUPINFO() 
startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
startupinfo.wShowWindow = subprocess.SW_HIDE
res = subprocess.Popen(command, startupinfo=startupinfo, 
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)

I did the same in the audio_segment.py module.
Plus of course changed: from subprocess import popen, PIPE to import subprocess
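If editing the installed pydub sources is undesirable (the change is lost on every upgrade), one hedged alternative is to patch subprocess.Popen once at program start so every child process pydub launches is created with a hidden window. This is only a sketch under the assumption of a Windows build, not part of pydub's API; note pydub imports Popen at module load, so the patch must run before pydub is imported:

import subprocess
import sys

if sys.platform == "win32":
    _OrigPopen = subprocess.Popen

    class _HiddenPopen(_OrigPopen):
        def __init__(self, *args, **kwargs):
            # Reuse a caller-supplied startupinfo if present, otherwise make one.
            si = kwargs.get("startupinfo") or subprocess.STARTUPINFO()
            si.dwFlags |= subprocess.STARTF_USESHOWWINDOW
            si.wShowWindow = subprocess.SW_HIDE
            kwargs["startupinfo"] = si
            super().__init__(*args, **kwargs)

    subprocess.Popen = _HiddenPopen  # patch BEFORE `import pydub`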
subprocess problem with PyDub: Python 3.63 v Python 3.10
Until recently I've been using python 3.63. When I need to use Pydub's audio_segment I do it like this to avoid a flash of the console when the app is frozen in a pyinstaller exe:
subprocess.STARTUPINFO.dwFlags |= subprocess.STARTF_USESHOWWINDOW
audio = AudioSegment.from_file('path_to_file')

Since moving to Python 3.10 this creates the error:
AttributeError: type object 'STARTUPINFO' has no attribute 'dwFlags'

I have tried adding options like:
creationflags=subprocess.CREATE_NO_WINDOW

and
startupinfo = subprocess.STARTUPINFO()
startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
startupinfo.wShowWindow = subprocess.SW_HIDE
p = subprocess.Popen(conversion_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, startupinfo=startupinfo)

to pydub itself, but the flash of console is always there. I've Googled for hours and got nowhere...
[ "I finally got to the bottom of this.\nA module of pydub called utils.py contains a couple of subprocess calls. I changed the calls from:\ncommand = [prober] + command_args\noutput = Popen(command, stdout=PIPE).communicate([0].decode(\"utf-8\")\n\nTo:\ncommand = [prober] + command_args\nstartupinfo = subprocess.STARTUPINFO() \nstartupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW\nstartupinfo.wShowWindow = subprocess.SW_HIDE\nres = subprocess.Popen(command, startupinfo=startupinfo, \n stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\nI did the same in the audio_segment.py module.\nPlus of course changed: from subprocess import popen, PIPE to import subprocess\n" ]
[ 1 ]
[]
[]
[ "pydub", "python", "subprocess" ]
stackoverflow_0074448497_pydub_python_subprocess.txt
Q: How to get ALL (or multiple) pair's historical klines from Binance API in ONE request?

I have a trading bot that trades multiple pairs (30-40). It uses the previous 5m candle for the price input. Therefore, I get 5m history for ALL pairs one by one. Currently, the full cycle takes about 10 minutes, so the 5m candles get updated once in 10m, which is no good. Any ideas on how to speed things up?

A: I think the best option for you will be websocket connection. You cannot recieve kline data once per eg. 5 minutes, but you can recieve every change in candle like you see it in graph. Binance API provide only this, but in compound with websocket connection it will by realy fast, not 10 minutes.
After recieve data you only must to specify when candle was closed, you can do it from timestamps that are in json data ('t' and 'T'). [documentation here]
You must install websockets library.
pip install websockets

And here is some sample code how it can work.
import asyncio
import websockets


async def candle_stick_data():
    url = "wss://stream.binance.com:9443/ws/" #steam address
    first_pair = 'bnbbtc@kline_1m' #first pair
    async with websockets.connect(url+first_pair) as sock:
        pairs = '{"method": "SUBSCRIBE", "params": ["xrpbtc@kline_1m","ethbtc@kline_1m" ], "id": 1}' #other pairs

        await sock.send(pairs)
        print(f"> {pairs}")
        while True:
            resp = await sock.recv()
            print(f"< {resp}")

asyncio.get_event_loop().run_until_complete(candle_stick_data())

Output:
< {"e":"kline","E":1599828802835,"s":"XRPBTC","k":{"t":1599828780000,"T":1599828839999,"s":"XRPBTC","i":"1m","f":76140140,"L":76140145,"o":"0.00002346","c":"0.00002346","h":"0.00002346","l":"0.00002345","v":"700.00000000","n":6,"x":false,"q":"0.01641578","V":"78.00000000","Q":"0.00182988","B":"0"}}
< {"e":"kline","E":1599828804297,"s":"BNBBTC","k":{"t":1599828780000,"T":1599828839999,"s":"BNBBTC","i":"1m","f":87599856,"L":87599935,"o":"0.00229400","c":"0.00229610","h":"0.00229710","l":"0.00229400","v":"417.88000000","n":80,"x":false,"q":"0.95933156","V":"406.63000000","Q":"0.93351653","B":"0"}}
< {"e":"kline","E":1599828804853,"s":"ETHBTC","k":{"t":1599828780000,"T":1599828839999,"s":"ETHBTC","i":"1m","f":193235180,"L":193235214,"o":"0.03551300","c":"0.03551700","h":"0.03551800","l":"0.03551300","v":"21.52300000","n":35,"x":false,"q":"0.76437246","V":"11.53400000","Q":"0.40962829","B":"0"}}
< {"e":"kline","E":1599828806303,"s":"BNBBTC","k":{"t":1599828780000,"T":1599828839999,"s":"BNBBTC","i":"1m","f":87599856,"L":87599938,"o":"0.00229400","c":"0.00229620","h":"0.00229710","l":"0.00229400","v":"420.34000000","n":83,"x":false,"q":"0.96497998","V":"406.63000000","Q":"0.93351653","B":"0"}}

A: Just to follow up on that answer. You can see the candle closing as the websocket return data for every tick has a boolean property for if the candle is closed or not i.e. on a 5min timeframe if the candle closed on the 5min mark

A: Another follow up to the rozumir's answer. If you use "async for" loop and "try-except" you can guarantee a reconnect after disconnect. Since binance disconnects websocket connections after 24 hours, this can be very handful.
Sample code for websockets 10.4 is:
import asyncio
import websockets

async def candle_stick_data():

    url = "wss://stream.binance.com:9443/ws/" #steam address
    first_pair = 'btcusdt@kline_15m' #first pair

    async for sock in websockets.connect(url+first_pair):

        try:
            pairs = '{"method": "SUBSCRIBE", "params:["ethusdt@kline_15m","algousdt@kline_15m","nearusdt@kline_15m","atomusdt@kline_15m"], "id": 1}' #other pairs

            await sock.send(pairs)
            print(f"> {pairs}")
            while True:

                resp = await sock.recv()
                print(f"< {resp}")

        except websockets.ConnectionClosed:
            continue

asyncio.run(candle_stick_data())
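Since the stream pushes an update on every trade, a consumer that wants exactly one value per closed candle can filter on the "x" flag visible in the payloads above instead of comparing the 't'/'T' timestamps by hand. A small sketch of handling resp inside the receive loop (field names taken from the sample output):

import json

msg = json.loads(resp)   # resp comes from `await sock.recv()`
k = msg.get("k", {})
if k.get("x"):           # True only on the tick where the candle closes
    print(msg["s"], k["i"], "close:", k["c"])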
How to get ALL (or multiple) pair's historical klines from Binance API in ONE request?
I have a trading bot that trades multiple pairs (30-40). It uses the previous 5m candle for the price input. Therefore, I get 5m history for ALL pairs one by one. Currently, the full cycle takes about 10 minutes, so the 5m candles get updated once in 10m, which is no good. Any ideas on how to speed things up?
[ "I think the best option for you will be websocket connection. You cannot recieve kline data once per eg. 5 minutes, but you can recieve every change in candle like you see it in graph. Binance API provide only this, but in compound with websocket connection it will by realy fast, not 10 minutes.\nAfter recieve data you only must to specify when candle was closed, you can do it from timestamps that are in json data ('t' and 'T'). [documentation here]\nYou must install websockets library.\npip install websockets\n\nAnd here is some sample code how it can work.\nimport asyncio\nimport websockets\n\n\nasync def candle_stick_data():\n url = \"wss://stream.binance.com:9443/ws/\" #steam address\n first_pair = 'bnbbtc@kline_1m' #first pair\n async with websockets.connect(url+first_pair) as sock:\n pairs = '{\"method\": \"SUBSCRIBE\", \"params\": [\"xrpbtc@kline_1m\",\"ethbtc@kline_1m\" ], \"id\": 1}' #other pairs\n\n await sock.send(pairs)\n print(f\"> {pairs}\")\n while True:\n resp = await sock.recv()\n print(f\"< {resp}\")\n\nasyncio.get_event_loop().run_until_complete(candle_stick_data())\n\nOutput:\n< {\"e\":\"kline\",\"E\":1599828802835,\"s\":\"XRPBTC\",\"k\":{\"t\":1599828780000,\"T\":1599828839999,\"s\":\"XRPBTC\",\"i\":\"1m\",\"f\":76140140,\"L\":76140145,\"o\":\"0.00002346\",\"c\":\"0.00002346\",\"h\":\"0.00002346\",\"l\":\"0.00002345\",\"v\":\"700.00000000\",\"n\":6,\"x\":false,\"q\":\"0.01641578\",\"V\":\"78.00000000\",\"Q\":\"0.00182988\",\"B\":\"0\"}}\n< {\"e\":\"kline\",\"E\":1599828804297,\"s\":\"BNBBTC\",\"k\":{\"t\":1599828780000,\"T\":1599828839999,\"s\":\"BNBBTC\",\"i\":\"1m\",\"f\":87599856,\"L\":87599935,\"o\":\"0.00229400\",\"c\":\"0.00229610\",\"h\":\"0.00229710\",\"l\":\"0.00229400\",\"v\":\"417.88000000\",\"n\":80,\"x\":false,\"q\":\"0.95933156\",\"V\":\"406.63000000\",\"Q\":\"0.93351653\",\"B\":\"0\"}}\n< {\"e\":\"kline\",\"E\":1599828804853,\"s\":\"ETHBTC\",\"k\":{\"t\":1599828780000,\"T\":1599828839999,\"s\":\"ETHBTC\",\"i\":\"1m\",\"f\":193235180,\"L\":193235214,\"o\":\"0.03551300\",\"c\":\"0.03551700\",\"h\":\"0.03551800\",\"l\":\"0.03551300\",\"v\":\"21.52300000\",\"n\":35,\"x\":false,\"q\":\"0.76437246\",\"V\":\"11.53400000\",\"Q\":\"0.40962829\",\"B\":\"0\"}}\n< {\"e\":\"kline\",\"E\":1599828806303,\"s\":\"BNBBTC\",\"k\":{\"t\":1599828780000,\"T\":1599828839999,\"s\":\"BNBBTC\",\"i\":\"1m\",\"f\":87599856,\"L\":87599938,\"o\":\"0.00229400\",\"c\":\"0.00229620\",\"h\":\"0.00229710\",\"l\":\"0.00229400\",\"v\":\"420.34000000\",\"n\":83,\"x\":false,\"q\":\"0.96497998\",\"V\":\"406.63000000\",\"Q\":\"0.93351653\",\"B\":\"0\"}}\n\n", "Just to follow up on that answer. You can see the candle closing as the websocket return data for every tick has a boolean property for if the candle is closed or not i.e. on a 5min timeframe if the candle closed on the 5min mark\n", "Another follow up to the rozumir's answer. If you use \"async for\" loop and \"try-except\" you can guarantee a reconnect after disconnect. Since binance disconnects websocket connections after 24 hours, this can be very handful. 
Sample code for websockets 10.4 is:\nimport asyncio\nimport websockets\n\nasync def candle_stick_data():\n\n url = \"wss://stream.binance.com:9443/ws/\" #steam address\n first_pair = 'btcusdt@kline_15m' #first pair\n\n async for sock in websockets.connect(url+first_pair):\n\n try:\n pairs = '{\"method\": \"SUBSCRIBE\", \"params:[\"ethusdt@kline_15m\",\"algousdt@kline_15m\",\"nearusdt@kline_15m\",\"atomusdt@kline_15m\"], \"id\": 1}' #other pairs\n \n await sock.send(pairs)\n print(f\"> {pairs}\")\n while True:\n \n resp = await sock.recv() \n print(f\"< {resp}\")\n \n except websockets.ConnectionClosed:\n continue\n\nasyncio.run(candle_stick_data())\n\n" ]
[ 10, 1, 0 ]
[]
[]
[ "algorithmic_trading", "api", "binance", "python", "trading" ]
stackoverflow_0063515267_algorithmic_trading_api_binance_python_trading.txt
Q: Error while targeting a Julia function into multiprocessing.Process of Python

I am trying to parallelize code in Python by using multiprocessing.Process which targets a Julia function. The function works fine when I call it directly, i.e. when I execute:
if __name__ == "__main__":
    import julia
    julia.Julia(compiled_modules=False)
    julia.Pkg_jl.func_jl(*args)

However, I have an error when I define the same function as a target in a Process function. This is the code:
from multiprocessing import Process
import julia
julia.Julia(compiled_modules=False)

class JuliaProcess(object):
    ...
    def _wrapper(self, *args):
        ret = julia.Pkg_jl.func_jl(args)
        self.queue.put(ret) # this is for save the result of the function
    def run(self, *args):
        p = Process(target=self._wrapper, args=args)
        self.processes.append(p) # this is for save the process job
        p.start()
    ...

if __name__ == "__main__":
    ...
    Jlproc = JuliaProcess()
    Jlproc.run(some_args)

The error is when the Process starts, with the following output:
fatal: error thrown and no exception handler available.
ReadOnlyMemoryError()
unknown function (ip: 0x7f9df81cb8f0)
...

If I try to compile the julia modules in the _wrapper function, i.e.:
from multiprocessing import Process
import julia

class JuliaProcess(object):
    ...
    def _wrapper(self, *args):
        julia.Julia(compiled_modules=False)
        ret = julia.Pkg_jl.func_jl(args)
        self.queue.put(ret) # this is for save the result of the function
    def run(self, *args):
        p = Process(target=self._wrapper, args=args)
        self.processes.append(p) # this is for save the process job
        p.start()
    ...

if __name__ == "__main__":
    ...
    Jlproc = JuliaProcess()
    Jlproc.run(some_args)

I have the following error:
raise JuliaError(u'Exception \'{}\' occurred while calling julia code:\n{}'
julia.core.JuliaError: Exception 'ReadOnlyMemoryError' occurred while calling julia code:
const PyCall = Base.require(Base.PkgId(Base.UUID("438e738f-606a-5dbb-bf0a-cddfbfd45ab0"), "PyCall"))
...

Does anyone know what is happening, and whether it is possible to parallelize Julia functions from Python as I suggest?

A: I finally solved the error.
The syntaxis is not the problem, but the instance on which Julia packages are precompiled.
In the first code, the error is in the call [Jl]:
julia.Julia(compiled_modules=False)

just before Julia is imported.
The second code works fine since the expression [Jl] is precompiled in the target process.
Below, I share an example that works fine if you have Julia and PyCall duly installed.
#!/usr/bin/env python3
# coding=utf-8

from multiprocessing import Process, Queue
import julia

class JuliaProcess(object):
    def __init__(self):
        self.processes = []
        self.queue = Queue()

    def _wrapper(self, *args):
        julia.Julia(compiled_modules=False)
        from julia import LinearAlgebra as LA
        ret = LA.dot(args[0],args[1])
        self.queue.put(ret) # this is for save the result of the function

    def run(self, *args):
        p = Process(target=self._wrapper, args=args)
        self.processes.append(p) # this is for save the process job
        p.start()

    def wait(self):
        self.rets = []

        for p in self.processes:
            ret = self.queue.get()
            self.rets.append(ret)

        for p in self.processes:
            p.join()

if __name__ == "__main__":
    jp = JuliaProcess()
    jp.run([1,5,6],[1,3,2])
    jp.wait()
    print(jp.rets)
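A related detail that may be worth checking in this situation: on Linux, multiprocessing defaults to fork(), which copies the already-initialized Julia runtime into the child process — a plausible contributor to crashes like ReadOnlyMemoryError. The 'spawn' start method gives each worker a fresh interpreter, so Julia is only ever initialized inside the child. This is a hedged sketch, not part of the original post:

import multiprocessing as mp

if __name__ == "__main__":
    # Start children as fresh interpreters instead of fork()ed copies,
    # so the embedded Julia runtime is initialized only in the worker.
    mp.set_start_method("spawn")
    # ... then create JuliaProcess and call run()/wait() as above ...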
Error while targeting a Julia function into multiprocessing.Process of Python
I am trying to parallelize code in Python by using multiprocessing.Process which targets a Julia function. The function works fine when I call it directly, i.e. when I execute:
if __name__ == "__main__":
    import julia
    julia.Julia(compiled_modules=False)
    julia.Pkg_jl.func_jl(*args)

However, I have an error when I define the same function as a target in a Process function. This is the code:
from multiprocessing import Process
import julia
julia.Julia(compiled_modules=False)

class JuliaProcess(object):
    ...
    def _wrapper(self, *args):
        ret = julia.Pkg_jl.func_jl(args)
        self.queue.put(ret) # this is for save the result of the function
    def run(self, *args):
        p = Process(target=self._wrapper, args=args)
        self.processes.append(p) # this is for save the process job
        p.start()
    ...

if __name__ == "__main__":
    ...
    Jlproc = JuliaProcess()
    Jlproc.run(some_args)

The error is when the Process starts, with the following output:
fatal: error thrown and no exception handler available.
ReadOnlyMemoryError()
unknown function (ip: 0x7f9df81cb8f0)
...

If I try to compile the julia modules in the _wrapper function, i.e.:
from multiprocessing import Process
import julia

class JuliaProcess(object):
    ...
    def _wrapper(self, *args):
        julia.Julia(compiled_modules=False)
        ret = julia.Pkg_jl.func_jl(args)
        self.queue.put(ret) # this is for save the result of the function
    def run(self, *args):
        p = Process(target=self._wrapper, args=args)
        self.processes.append(p) # this is for save the process job
        p.start()
    ...

if __name__ == "__main__":
    ...
    Jlproc = JuliaProcess()
    Jlproc.run(some_args)

I have the following error:
raise JuliaError(u'Exception \'{}\' occurred while calling julia code:\n{}'
julia.core.JuliaError: Exception 'ReadOnlyMemoryError' occurred while calling julia code:
const PyCall = Base.require(Base.PkgId(Base.UUID("438e738f-606a-5dbb-bf0a-cddfbfd45ab0"), "PyCall"))
...

Does anyone know what is happening, and whether it is possible to parallelize Julia functions from Python as I suggest?
[ "I finally solved the error.\nThe syntaxis is not the problem, but the instance on which Julia packages are precompiled.\nIn the first code, the error is in the call [Jl]:\njulia.Julia(compiled_modules=False)\n\njust before Julia is imported.\nThe second code works fine since the expression [Jl] is precompiled in the target process.\nBelow, I share an example that works fine if you have Julia and PyCall duly installed.\n#!/usr/bin/env python3\n# coding=utf-8\n\nfrom multiprocessing import Process, Queue\nimport julia\n\nclass JuliaProcess(object):\n def __init__(self):\n self.processes = []\n self.queue = Queue()\n\n def _wrapper(self, *args):\n julia.Julia(compiled_modules=False)\n from julia import LinearAlgebra as LA\n ret = LA.dot(args[0],args[1])\n self.queue.put(ret) # this is for save the result of the function\n\n def run(self, *args):\n p = Process(target=self._wrapper, args=args)\n self.processes.append(p) # this is for save the process job\n p.start()\n\n def wait(self):\n self.rets = []\n \n for p in self.processes:\n ret = self.queue.get()\n self.rets.append(ret)\n\n for p in self.processes:\n p.join()\n\nif __name__ == \"__main__\":\n jp = JuliaProcess()\n jp.run([1,5,6],[1,3,2])\n jp.wait()\n print(jp.rets)\n\n" ]
[ 0 ]
[]
[]
[ "julia", "multiprocessing", "pycall", "python" ]
stackoverflow_0074438358_julia_multiprocessing_pycall_python.txt
Q: How to use pytest-custom_exit_code plugin

Need help! I have a job on GitLab CI that runs tests and reruns failed ones. If there are no failed tests, the job fails with exit code 5, which means there are no tests to run.
I found out that there is a plugin "pytest-custom_exit_code", but I don't know how to use it correctly.
Do I just need to add the command 'pytest --suppress-no-test-exit-code' to my runner.sh? It looks like this now:
#!/bin/sh
/usr/local/bin/pytest -m test
/usr/local/bin/pytest -m --last-failed --last-failed-no-failures none test

A: Assumption here is that plugin is installed first using
pip install pytest-custom_exit_code

command like option pytest --suppress-no-test-exit-code should work after that.
If configuration file like .pytest.ini is used , following lines should be added in it
[pytest]
addopts = --suppress-no-test-exit-code
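If installing the plugin in the CI image is not an option, the same effect can be had by remapping the exit code yourself, since pytest's documented exit code 5 means "no tests were collected". A hedged sketch of a small Python wrapper the job could run instead of calling pytest directly (the argument list is an assumption mirroring the rerun step above):

import sys
import pytest

rc = pytest.main(["--last-failed", "--last-failed-no-failures", "none", "test"])
# Exit code 5 = "no tests collected"; treat that as success for the rerun job.
sys.exit(0 if rc == 5 else int(rc))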
How to use pytest-custom_exit_code plugin
Need help! I have a job on GitLab CI that runs tests and reruns failed ones. If there are no failed tests, the job fails with exit code 5, which means there are no tests to run.
I found out that there is a plugin "pytest-custom_exit_code", but I don't know how to use it correctly.
Do I just need to add the command 'pytest --suppress-no-test-exit-code' to my runner.sh? It looks like this now:
#!/bin/sh
/usr/local/bin/pytest -m test
/usr/local/bin/pytest -m --last-failed --last-failed-no-failures none test
[ "Assumption here is that plugin is installed first using\npip install pytest-custom_exit_code\n\ncommand like option pytest --suppress-no-test-exit-code should work after that.\nIf configuration file like .pytest.ini is used , following lines should be added in it\n[pytest]\naddopts = --suppress-no-test-exit-code\n\n" ]
[ 0 ]
[]
[]
[ "pytest", "python" ]
stackoverflow_0073091711_pytest_python.txt
Q: Seaborn Boxplot with jittered outliers

I want a Boxplot with jittered outliers. But only the outliers, not the non-outliers.
Searching the web you often find a workaround combining sns.boxplot() and sns.swarmplot().
The problem with that figure is that the outliers are drawn twice. I don't need the red ones, I only need the jittered (green) ones. Also the non-outliers are drawn. I don't need them either.
I also have a feature request at upstream open about it. But on my current research there is no Seaborn-inbuilt solution for that.
This is an MWE reproducing the boxplot shown.
#!/usr/bin/env python3
import random
import pandas
import seaborn as sns
import matplotlib.pyplot as plt

sns.set_theme()

random.seed(0)

df = pandas.DataFrame({
    'Vals': random.choices(range(200), k=200)})
df_outliers = pandas.DataFrame({
    'Vals': random.choices(range(400, 700), k=20)})

df = pandas.concat([df, df_outliers], axis=0)

flierprops = {
    'marker': 'o',
    'markeredgecolor': 'red',
    'markerfacecolor': 'none'
}

# Usual boxplot
ax = sns.boxplot(y='Vals', data=df, flierprops=flierprops)

# Add jitter with the swarmplot function
ax = sns.swarmplot(y='Vals', data=df, linewidth=.75, color='none', edgecolor='green')

plt.show()

A: Here is an approach to have jittered outliers. The jitter is similar to sns.stripplot(), not to sns.swarmplot() which uses a rather elaborate spreading algorithm. Basically, all the "line" objects of the subplot are checked whether they have a marker. The x-positions of the "lines" with a marker are moved a bit to create jitter. You might want to vary the amount of jitter, e.g. when you are working with hue.
import random
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

sns.set_theme()

random.seed(0)

df = pd.DataFrame({
    'Vals': random.choices(range(200), k=200)})
df_outliers = pd.DataFrame({
    'Vals': random.choices(range(400, 700), k=20)})

df = pd.concat([df, df_outliers], axis=0)

flierprops = {
    'marker': 'o',
    'markeredgecolor': 'red',
    'markerfacecolor': 'none'
}

# Usual boxplot
ax = sns.boxplot(y='Vals', data=df, flierprops=flierprops)

for l in ax.lines:
    if l.get_marker() != '':
        xs = l.get_xdata()
        xs += np.random.uniform(-0.2, 0.2, len(xs))
        l.set_xdata(xs)

plt.tight_layout()
plt.show()

An alternative approach could be to filter out the outliers, and then call sns.swarmplot() or sns.stripplot() only with those points. As seaborn doesn't return the values calculated to position the whiskers, you might need to calculate those again via scipy, taking into account seaborn's filtering on x and on hue.
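Following up on the alternative mentioned at the end of the answer, here is one hedged sketch for the single-column case (reusing df, sns and plt from the MWE above): compute the 1.5×IQR whisker rule manually, hide seaborn's own fliers, and strip-plot only the outliers so nothing is drawn twice:

q1, q3 = df['Vals'].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df['Vals'] < q1 - 1.5 * iqr) | (df['Vals'] > q3 + 1.5 * iqr)]

ax = sns.boxplot(y='Vals', data=df, showfliers=False)  # box without its own fliers
sns.stripplot(y='Vals', data=outliers, color='green', jitter=0.2, ax=ax)
plt.show()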
Seaborn Boxplot with jittered outliers
I want a Boxplot with jittered outliers. But only the outliers, not the non-outliers.
Searching the web you often find a workaround combining sns.boxplot() and sns.swarmplot().
The problem with that figure is that the outliers are drawn twice. I don't need the red ones, I only need the jittered (green) ones. Also the non-outliers are drawn. I don't need them either.
I also have a feature request at upstream open about it. But on my current research there is no Seaborn-inbuilt solution for that.
This is an MWE reproducing the boxplot shown.
#!/usr/bin/env python3
import random
import pandas
import seaborn as sns
import matplotlib.pyplot as plt

sns.set_theme()

random.seed(0)

df = pandas.DataFrame({
    'Vals': random.choices(range(200), k=200)})
df_outliers = pandas.DataFrame({
    'Vals': random.choices(range(400, 700), k=20)})

df = pandas.concat([df, df_outliers], axis=0)

flierprops = {
    'marker': 'o',
    'markeredgecolor': 'red',
    'markerfacecolor': 'none'
}

# Usual boxplot
ax = sns.boxplot(y='Vals', data=df, flierprops=flierprops)

# Add jitter with the swarmplot function
ax = sns.swarmplot(y='Vals', data=df, linewidth=.75, color='none', edgecolor='green')

plt.show()
[ "Here is an approach to have jittered outliers. The jitter is similar to sns.stripplot(), not to sns.swarmplot() which uses a rather elaborate spreading algorithm. Basically, all the \"line\" objects of the subplot are checked whether they have a marker. The x-positions of the \"lines\" with a marker are moved a bit to create jitter. You might want to vary the amount of jitter, e.g. when you are working with hue.\nimport random\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nsns.set_theme()\n\nrandom.seed(0)\n\ndf = pd.DataFrame({\n 'Vals': random.choices(range(200), k=200)})\ndf_outliers = pd.DataFrame({\n 'Vals': random.choices(range(400, 700), k=20)})\n\ndf = pd.concat([df, df_outliers], axis=0)\n\nflierprops = {\n 'marker': 'o',\n 'markeredgecolor': 'red',\n 'markerfacecolor': 'none'\n}\n\n# Usual boxplot\nax = sns.boxplot(y='Vals', data=df, flierprops=flierprops)\n\nfor l in ax.lines:\n if l.get_marker() != '':\n xs = l.get_xdata()\n xs += np.random.uniform(-0.2, 0.2, len(xs))\n l.set_xdata(xs)\n\nplt.tight_layout()\nplt.show()\n\n\nAn alternative approach could be to filter out the outliers, and then call sns.swarmplot() or sns.stripplot() only with those points. As seaborn doesn't return the values calculated to position the whiskers, you might need to calculate those again via scipy, taking into account seaborn's filtering on x and on hue.\n" ]
[ 3 ]
[]
[]
[ "python", "seaborn" ]
stackoverflow_0074488328_python_seaborn.txt
Q: Remove white space plot matplotlib

I'm trying to get something like this:

with this code
x = np.arange(l, r, s)
y = np.arange(b, t, s)
X, Y = np.meshgrid(x, y)
Z = f(X,Y)
plt.axis('equal')
plt.pcolormesh(X, Y, Z)
plt.savefig("image.png",dpi=300)

But I get this:

How could I remove the white regions? I really appreciate any kind of help.

A: i would use the pyplot subplots to define the figures size and therefor aspect like this
import numpy as np
from matplotlib import pyplot as plt

def f(x,y):
    return x + y

x = np.arange(1, 10, .1)
y = np.arange(1, 10, .1)
X, Y = np.meshgrid(x, y)
Z = f(X,Y)


f, ax = plt.subplots(figsize=(4, 4))
plt.pcolormesh(X, Y, Z)
plt.show()

output

as pointed out by @JohanC, this also works and might be a better solution in some cases. it does not require the 'subplots' function which returns a figure and a subplot.
plt.pcolormesh(X, Y, Z)
plt.axis('scaled')
plt.show()

output
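Another hedged variant that avoids hard-coding a square figure size is to let the Axes itself enforce the aspect ratio, shrinking the axes box to the data instead of padding it with white space (reusing X, Y, Z from above):

plt.pcolormesh(X, Y, Z)
plt.gca().set_aspect('equal', adjustable='box')  # equal aspect; resize the box, not the data limits
plt.show()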
Remove white space plot matplotlib
I'm trying to get something like this:

with this code
x = np.arange(l, r, s)
y = np.arange(b, t, s)
X, Y = np.meshgrid(x, y)
Z = f(X,Y)
plt.axis('equal')
plt.pcolormesh(X, Y, Z)
plt.savefig("image.png",dpi=300)

But I get this:

How could I remove the white regions? I really appreciate any kind of help.
[ "i would use the pyplot subplots to define the figures size and therefor aspect like this\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\ndef f(x,y):\n return x + y\n\nx = np.arange(1, 10, .1)\ny = np.arange(1, 10, .1)\nX, Y = np.meshgrid(x, y)\nZ = f(X,Y)\n\n\nf, ax = plt.subplots(figsize=(4, 4))\nplt.pcolormesh(X, Y, Z)\nplt.show()\n\noutput\n\nas pointed out by @JohanC, this also works and might be a better solution in some cases. it does not require the 'subplots' function which returns a figure and a subplot.\nplt.pcolormesh(X, Y, Z)\nplt.axis('scaled')\nplt.show()\n\noutput\n\n" ]
[ 1 ]
[ "Anwered here You can remove the margins at the edges of the plot.\nplt.margins(x=0)\n\n" ]
[ -1 ]
[ "matplotlib", "plot", "python" ]
stackoverflow_0074489519_matplotlib_plot_python.txt
Q: for loop append in a list, but the input is a data frame

I have a bit of python code below. Just an example to show the problem: I would like to select some lines in a data frame based on some values. Somehow this needs to be in a for loop, and I used .append() to add each selection of rows into a final file. But the result is not the same as what I expected. I learned by reading quite some posts that we should not append to a data frame in a loop. So I don't know how I could do this now. Could somebody help please? Thanks a lot!
import pandas as pd

df = pd.DataFrame({'a': [4, 5, 6, 7], 'b': [10, 20, 30, 40], 'c': [100, 50, -30, -50]})
df['diff'] = (df['b'] - df['c']).abs()
print(df)
df1 = df[df['diff'] == 90]
df2 = df[df['diff'] == 60]

list = [df1, df2]

def try_1(list):
    output = []
    for item in list:
        output.append(item)
    return output

print(try_1(list))

output from the code
   a   b    c  diff
0  4  10  100    90
1  5  20   50    30
2  6  30  -30    60
3  7  40  -50    90
[   a   b    c  diff
0  4  10  100    90
3  7  40  -50    90,    a   b   c  diff
2  6  30 -30    60]

but the expected output of print(try_1(list))
   a   b    c  diff
   4  10  100    90
   7  40  -50    90
   6  30  -30    60

Also, I need to write this final one into a file. I tried .write(), and it complained not a string. How could I solve this please? Thanks!

A: Your code just recreates the same list you had before, you can just use pd.concat instead, to write it to a frame you have to convert it to a str first:
import pandas as pd

df = pd.DataFrame({'a': [4, 5, 6, 7], 'b': [10, 20, 30, 40], 'c': [100, 50, -30, -50]})
df['diff'] = (df['b'] - df['c']).abs()
# print(df)
df1 = df[df['diff'] == 90]
df2 = df[df['diff'] == 60]

my_list = [df1, df2]

all_frames = pd.concat(my_list)
with open("file", "w") as f:
    f.write(str(all_frames))


If you need to append in a for loop and occasionaly write you could do it like this:
import pandas as pd

df = pd.DataFrame({'a': [4, 5, 6, 7], 'b': [10, 20, 30, 40], 'c': [100, 50, -30, -50]})
df['diff'] = (df['b'] - df['c']).abs()
# print(df)
df1 = df[df['diff'] == 90]
df2 = df[df['diff'] == 60]

my_list = [df1, df2]
for i in range(20):
    my_list.append(df2)
    if i % 5 == 0: # whenever we want to write
        all_frames = pd.concat(my_list)
        my_list = [all_frames]
        with open("file", "w") as f:
            f.write(str(all_frames))
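On the file-writing part of the question: writing str(frame) works, but pandas also has a direct writer that avoids converting the frame to a display string, and it round-trips cleanly. A minimal sketch (the file name is a placeholder):

all_frames = pd.concat(my_list, ignore_index=True)  # drop the old row labels, as in the expected output
all_frames.to_csv("result.csv", index=False)        # write as CSV instead of str(...)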
for loop append in a list, but the input is a data frame
I have a bit of python code below. Just an example to show the problem: I would like to select some lines in a data frame based on some values. Somehow this needs to be in a for loop, and I used .append() to add each selection of rows into a final file. But the result is not the same as what I expected. I learned by reading quite some posts that we should not append to a data frame in a loop. So I don't know how I could do this now. Could somebody help please? Thanks a lot!
import pandas as pd

df = pd.DataFrame({'a': [4, 5, 6, 7], 'b': [10, 20, 30, 40], 'c': [100, 50, -30, -50]})
df['diff'] = (df['b'] - df['c']).abs()
print(df)
df1 = df[df['diff'] == 90]
df2 = df[df['diff'] == 60]

list = [df1, df2]

def try_1(list):
    output = []
    for item in list:
        output.append(item)
    return output

print(try_1(list))

output from the code
   a   b    c  diff
0  4  10  100    90
1  5  20   50    30
2  6  30  -30    60
3  7  40  -50    90
[   a   b    c  diff
0  4  10  100    90
3  7  40  -50    90,    a   b   c  diff
2  6  30 -30    60]

but the expected output of print(try_1(list))
   a   b    c  diff
   4  10  100    90
   7  40  -50    90
   6  30  -30    60

Also, I need to write this final one into a file. I tried .write(), and it complained not a string. How could I solve this please? Thanks!
[ "Your code just recreates the same list you had before, you can just use pd.concat instead, to write it to a frame you have to convert it to a str first:\nimport pandas as pd\n\ndf = pd.DataFrame({'a': [4, 5, 6, 7], 'b': [10, 20, 30, 40], 'c': [100, 50, -30, -50]})\ndf['diff'] = (df['b'] - df['c']).abs()\n# print(df)\ndf1 = df[df['diff'] == 90]\ndf2 = df[df['diff'] == 60]\n\nmy_list = [df1, df2]\n\nall_frames = pd.concat(my_list)\nwith open(\"file\", \"w\") as f:\n f.write(str(all_frames))\n\n\nIf you need to append in a for loop and occasionaly write you could do it like this:\nimport pandas as pd\n\ndf = pd.DataFrame({'a': [4, 5, 6, 7], 'b': [10, 20, 30, 40], 'c': [100, 50, -30, -50]})\ndf['diff'] = (df['b'] - df['c']).abs()\n# print(df)\ndf1 = df[df['diff'] == 90]\ndf2 = df[df['diff'] == 60]\n\nmy_list = [df1, df2]\nfor i in range(20):\n my_list.append(df2)\n if i % 5 == 0: # whenever we want to write\n all_frames = pd.concat(my_list)\n my_list = [all_frames]\n with open(\"file\", \"w\") as f:\n f.write(str(all_frames))\n\n" ]
[ 0 ]
[]
[]
[ "loops", "pandas", "python" ]
stackoverflow_0074490019_loops_pandas_python.txt
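A note on the answer above: writing str(all_frames) saves the printed table, which pandas cannot easily read back. If the goal is a file that can be reloaded, DataFrame.to_csv is the usual route. A minimal sketch under that assumption (the frames and filter values mirror the question; the file name 'result.csv' is an invented placeholder):

import pandas as pd

df = pd.DataFrame({'a': [4, 5, 6, 7], 'b': [10, 20, 30, 40], 'c': [100, 50, -30, -50]})
df['diff'] = (df['b'] - df['c']).abs()

# collect the row selections first, then concatenate once at the end
parts = [df[df['diff'] == v] for v in (90, 60)]
result = pd.concat(parts, ignore_index=True)

result.to_csv('result.csv', index=False)  # round-trips with pd.read_csv('result.csv')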
Q: UpdateOrAdd() changes to Pandas DataFrame Hi, I'm wondering what the fastest, easiest way is to AddOrUpdate data in a Pandas DataFrame import pandas as pd # Original DataFrame pd.DataFrame([ {'A':'a1','B':'b1','C':'c1'}, {'A':'a3','B':'b2','C':'c2'}, {'A':'a3','B':'b3','C':'c3'}, ]) Original DataFrame : A B C 0 a1 b1 c1 1 a3 b2 c2 2 a3 b3 c3 # A List of changes changes = [ {'id':0, 'A':'aNEW','C':'cNEW'}, {'id':2, 'B':'bNEW'}, {'id':3, 'A':'aNEW','C':'cNEW'}, ] # HOW TO ? df.UpdateOrAdd(changes) Resulting DataFrame : A B C 0 aNEW b1 cNEW 1 a3 b2 c2 2 a3 bNEW c3 3 aNEW None cNEW AddOrUpdate a Pandas DataFrame with a list of changes A: You can craft a DataFrame from the list of changes, then align the indices with reindex and combine_first: df2 = pd.DataFrame(changes).set_index('id') out = (df2.reindex(df.index.union(df2.index)) .combine_first(df) ) Output: A B C 0 aNEW b1 cNEW 1 a3 b2 c2 2 a3 bNEW c3 3 aNEW NaN cNEW As a method If you really want, you can add this as a DataFrame method using monkey patching: def AddOrUpdate(self, other): if not isinstance(other, pd.DataFrame): other = pd.DataFrame(other) other = other.set_index('id') return (other.reindex(self.index.union(other.index)) .combine_first(self) ) pd.DataFrame.AddOrUpdate = AddOrUpdate out = df.AddOrUpdate(changes) A: If your DataFrame index consists of integers starting at 0 with continuous values, you can just use .loc and add a new row, creating a new index at the next row based on the row count: df.loc[df.shape[0]] = ['aNEW', None, 'cNEW'] #df A B C 0 a1 b1 c1 1 a3 b2 c2 2 a3 b3 c3 3 aNEW None cNEW You can pass a dictionary too; you don't need to include a key/value pair for None if you don't care whether it is None or NaN: df.loc[df.shape[0]] = {'A': 'aNew+', 'C': 'cNew+'} #df A B C 0 a1 b1 c1 1 a3 b2 c2 2 a3 b3 c3 3 aNew+ NaN cNew+
UpdateOrAdd() changes to Pandas DataFrame
Hi, I'm wondering what the fastest, easiest way is to AddOrUpdate data in a Pandas DataFrame import pandas as pd # Original DataFrame pd.DataFrame([ {'A':'a1','B':'b1','C':'c1'}, {'A':'a3','B':'b2','C':'c2'}, {'A':'a3','B':'b3','C':'c3'}, ]) Original DataFrame : A B C 0 a1 b1 c1 1 a3 b2 c2 2 a3 b3 c3 # A List of changes changes = [ {'id':0, 'A':'aNEW','C':'cNEW'}, {'id':2, 'B':'bNEW'}, {'id':3, 'A':'aNEW','C':'cNEW'}, ] # HOW TO ? df.UpdateOrAdd(changes) Resulting DataFrame : A B C 0 aNEW b1 cNEW 1 a3 b2 c2 2 a3 bNEW c3 3 aNEW None cNEW AddOrUpdate a Pandas DataFrame with a list of changes
[ "You can use craft a DataFrame from the dictionary, then align the indices with reindex and combine_first:\ndf2 = pd.DataFrame(changes).set_index('id')\n\nout = (df2.reindex(df.index.union(df2.index))\n .combine_first(df)\n )\n\nOutput:\n A B C\n0 aNEW b1 cNEW\n1 a3 b2 c2\n2 a3 bNEW c3\n3 aNEW NaN cNEW\n\nAs a method\nIf you really want, you can add this as a DataFrame method using monkey patching:\ndef AddOrUpdate(self, other):\n if not isinstance(other, pd.DataFrame):\n other = pd.DataFrame(other)\n other = other.set_index('id')\n return (other.reindex(self.index.union(other.index))\n .combine_first(df)\n )\n\npd.DataFrame.AddOrUpdate = AddOrUpdate\n\nout = df.AddOrUpdate(changes)\n\n", "If you have DataFrame index as integers starting with 0 and having continous values, you can just use .loc and add a new row creating new index at next row based on row count:\ndf.loc[df.shape[0]] = ['aNEW', None, 'cNEW']\n\n#df\nA B C\n0 a1 b1 c1\n1 a3 b2 c2\n2 a3 b3 c3\n3 aNEW None cNEW\n\nYou can pass dictionary too, you don't need to include key/value pair for None if you don't care whether it is None or NaN:\ndf.loc[df.shape[0]] = {'A': 'aNew+', 'C': 'cNew+'}\n\n#df\nA B C\n0 a1 b1 c1\n1 a3 b2 c2\n2 a3 b3 c3\n3 aNew+ NaN cNew+\n\n" ]
[ 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074490151_pandas_python.txt
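The combine_first answer above builds a new frame; pandas' own DataFrame.update covers the "update" half directly, since it overwrites matching cells in place and skips NaN. A small sketch of that alternative, using the same data as the question (the variable names upd and new_rows are invented):

import pandas as pd

df = pd.DataFrame([{'A': 'a1', 'B': 'b1', 'C': 'c1'},
                   {'A': 'a3', 'B': 'b2', 'C': 'c2'},
                   {'A': 'a3', 'B': 'b3', 'C': 'c3'}])
changes = [{'id': 0, 'A': 'aNEW', 'C': 'cNEW'},
           {'id': 2, 'B': 'bNEW'},
           {'id': 3, 'A': 'aNEW', 'C': 'cNEW'}]

upd = pd.DataFrame(changes).set_index('id')

df.update(upd)                                      # overwrite existing rows in place (NaN cells are skipped)
new_rows = upd.loc[upd.index.difference(df.index)]  # ids that are not in df yet
df = pd.concat([df, new_rows])                      # append them, keeping their ids as index

print(df)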
Q: How to prevent pandas dataframe columns from moving to new line in colab? I'm working in colab and created a dataframe using this code: def daily_sma(): symbol = 'BTCUSDT' num_bars = 70 timeframe = '1d' bars = exchange.fetch_ohlcv(symbol, timeframe = timeframe, limit = num_bars) df_d = pd.DataFrame(bars, columns = ['timestamp', 'open', 'high', 'low', 'close', 'volume']) df_d['timestamp'] = pd.to_datetime(df_d['timestamp'], unit = 'ms') df_d['sma20_d'] = df_d.close.rolling(20).mean() df_d = df_d.dropna() return df_d But when I print it, it does something like: timestamp open high low close volume \ 19 2022-09-29 19412.82 19645.52 18843.01 19591.51 406424.93256 20 2022-09-30 19590.54 20185.00 19155.36 19422.61 444322.95340 21 2022-10-01 19422.61 19484.00 19159.42 19310.95 165625.13959 22 2022-10-02 19312.24 19395.91 18920.35 19056.80 206812.47032 23 2022-10-03 19057.74 19719.10 18959.68 19629.08 293585.75212 sma20_d 19 19795.5145 20 19684.2280 21 19558.4320 22 19391.4850 23 19364.2605 One column is below the rest of the table even though there is enough space in the console. How can I place this column next to the volume column so it can be more readable? I have tried to decrease the width between columns but it makes no difference. pd.options.display.width = 0 A: You can print with to_string: print(df.to_string()) # to set a larger, yet not unlimited width # print(df.to_string(line_width=200)) If you want a permanent change, defined in terms of number of characters: pd.set_option('display.width', 200) print(df) Example: df = pd.DataFrame(columns=[f'column_{x}' for x in range(10)], index=range(3)) print(df) print('\n\n# with to_string()') print(df.to_string()) print('\n\n# with option') pd.set_option('display.width', 200) print(df) Output: column_0 column_1 column_2 column_3 column_4 column_5 column_6 column_7 \ 0 NaN NaN NaN NaN NaN NaN NaN NaN 1 NaN NaN NaN NaN NaN NaN NaN NaN 2 NaN NaN NaN NaN NaN NaN NaN NaN column_8 column_9 0 NaN NaN 1 NaN NaN 2 NaN NaN # with to_string() column_0 column_1 column_2 column_3 column_4 column_5 column_6 column_7 column_8 column_9 0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN # with option column_0 column_1 column_2 column_3 column_4 column_5 column_6 column_7 column_8 column_9 0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
How to prevent pandas dataframe columns from moving to new line in colab?
I'm working in colab and created a dataframe using this code: def daily_sma(): symbol = 'BTCUSDT' num_bars = 70 timeframe = '1d' bars = exchange.fetch_ohlcv(symbol, timeframe = timeframe, limit = num_bars) df_d = pd.DataFrame(bars, columns = ['timestamp', 'open', 'high', 'low', 'close', 'volume']) df_d['timestamp'] = pd.to_datetime(df_d['timestamp'], unit = 'ms') df_d['sma20_d'] = df_d.close.rolling(20).mean() df_d = df_d.dropna() return df_d But when I print it, it does something like: timestamp open high low close volume \ 19 2022-09-29 19412.82 19645.52 18843.01 19591.51 406424.93256 20 2022-09-30 19590.54 20185.00 19155.36 19422.61 444322.95340 21 2022-10-01 19422.61 19484.00 19159.42 19310.95 165625.13959 22 2022-10-02 19312.24 19395.91 18920.35 19056.80 206812.47032 23 2022-10-03 19057.74 19719.10 18959.68 19629.08 293585.75212 sma20_d 19 19795.5145 20 19684.2280 21 19558.4320 22 19391.4850 23 19364.2605 One column is below the rest of the table even though there is enough space in the console. How can I place this column next to the volume column so it can be more readable? I have tried to decrease the width between columns but it makes no difference. pd.options.display.width = 0
[ "You can print with to_string:\nprint(df.to_string())\n\n# to set a larger, yet not unlimited width\n# print(df.to_string(line_width=200))\n\nIf you want a permanent change, defined in terms of number of characters:\npd.set_option('display.width', 200)\nprint(df)\n\nExample:\ndf = pd.DataFrame(columns=[f'column_{x}' for x in range(10)], index=range(3))\n\nprint(df)\n\nprint('\\n\\n# with to_string()')\nprint(df.to_string())\n\nprint('\\n\\n# with option')\npd.set_option('display.width', 200)\nprint(df)\n\nOutput:\n column_0 column_1 column_2 column_3 column_4 column_5 column_6 column_7 \\\n0 NaN NaN NaN NaN NaN NaN NaN NaN \n1 NaN NaN NaN NaN NaN NaN NaN NaN \n2 NaN NaN NaN NaN NaN NaN NaN NaN \n\n column_8 column_9 \n0 NaN NaN \n1 NaN NaN \n2 NaN NaN \n\n\n# with to_string()\n column_0 column_1 column_2 column_3 column_4 column_5 column_6 column_7 column_8 column_9\n0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN\n1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN\n2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN\n\n\n# with option\n column_0 column_1 column_2 column_3 column_4 column_5 column_6 column_7 column_8 column_9\n0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN\n1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN\n2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074490203_dataframe_pandas_python.txt
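If the wider output is only wanted for a few prints, pandas also offers a context manager so the global setting is not changed permanently. A small sketch (the 200-character width is an arbitrary choice):

import pandas as pd

df = pd.DataFrame(columns=[f'column_{x}' for x in range(10)], index=range(3))

# settings apply only inside the with-block, then revert automatically
with pd.option_context('display.width', 200, 'display.max_columns', None):
    print(df)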
Q: Calculating height and width of a bounding box in YOLOv5 Currently I am working with YOLOv5; I have done training and validation on a custom dataset and the results are quite impressive. Now I want to calculate the height and width of the object (bounding box) and present it on screen just like the confidence score. In YOLOv5 there's an option to save the coordinates of a bounding box in a text file. I have done that but find it difficult to show those in the detection output on screen. This might be because of my limited capabilities in Python. If your knowledge allows, kindly take a look and help me. Thank you. import argparse import os import platform import sys from pathlib import Path import torch FILE = Path(__file__).resolve() ROOT = FILE.parents[0] # YOLOv5 root directory if str(ROOT) not in sys.path: sys.path.append(str(ROOT)) # add ROOT to PATH ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative from models.common import DetectMultiBackend from utils.dataloaders import IMG_FORMATS, VID_FORMATS, LoadImages, LoadScreenshots, LoadStreams from utils.general import (LOGGER, Profile, check_file, check_img_size, check_imshow, check_requirements, colorstr, cv2, increment_path, non_max_suppression, print_args, scale_boxes, strip_optimizer, xyxy2xywh) from utils.plots import Annotator, colors, save_one_box from utils.torch_utils import select_device, smart_inference_mode @smart_inference_mode() def run( weights=ROOT / 'yolov5s.pt', # model path or triton URL source=ROOT / 'data/images', # file/dir/URL/glob/screen/0(webcam) data=ROOT / 'data/coco128.yaml', # dataset.yaml path imgsz=(640, 640), # inference size (height, width) conf_thres=0.25, # confidence threshold iou_thres=0.45, # NMS IOU threshold max_det=1000, # maximum detections per image device='', # cuda device, i.e. 
0 or 0,1,2,3 or cpu view_img=False, # show results save_txt=False, # save results to *.txt save_conf=False, # save confidences in --save-txt labels save_crop=False, # save cropped prediction boxes nosave=False, # do not save images/videos classes=None, # filter by class: --class 0, or --class 0 2 3 agnostic_nms=False, # class-agnostic NMS augment=False, # augmented inference visualize=False, # visualize features update=False, # update all models project=ROOT / 'runs/detect', # save results to project/name name='exp', # save results to project/name exist_ok=False, # existing project/name ok, do not increment line_thickness=3, # bounding box thickness (pixels) hide_labels=False, # hide labels hide_conf=False, # hide confidences half=False, # use FP16 half-precision inference dnn=False, # use OpenCV DNN for ONNX inference vid_stride=1, # video frame-rate stride ): source = str(source) save_img = not nosave and not source.endswith('.txt') # save inference images is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS) is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://')) webcam = source.isnumeric() or source.endswith('.txt') or (is_url and not is_file) screenshot = source.lower().startswith('screen') if is_url and is_file: source = check_file(source) # download # Directories save_dir = increment_path(Path(project) / name, exist_ok=exist_ok) # increment run (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir # Load model device = select_device(device) model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half) stride, names, pt = model.stride, model.names, model.pt imgsz = check_img_size(imgsz, s=stride) # check image size # Dataloader bs = 1 # batch_size if webcam: view_img = check_imshow(warn=True) dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride) bs = len(dataset) elif screenshot: dataset = LoadScreenshots(source, img_size=imgsz, stride=stride, auto=pt) else: dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride) vid_path, vid_writer = [None] * bs, [None] * bs # Run inference model.warmup(imgsz=(1 if pt or model.triton else bs, 3, *imgsz)) # warmup seen, windows, dt = 0, [], (Profile(), Profile(), Profile()) for path, im, im0s, vid_cap, s in dataset: with dt[0]: im = torch.from_numpy(im).to(model.device) im = im.half() if model.fp16 else im.float() # uint8 to fp16/32 im /= 255 # 0 - 255 to 0.0 - 1.0 if len(im.shape) == 3: im = im[None] # expand for batch dim # Inference with dt[1]: visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False pred = model(im, augment=augment, visualize=visualize) # NMS with dt[2]: pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det) # Second-stage classifier (optional) # pred = utils.general.apply_classifier(pred, classifier_model, im, im0s) # Process predictions for i, det in enumerate(pred): # per image seen += 1 if webcam: # batch_size >= 1 p, im0, frame = path[i], im0s[i].copy(), dataset.count s += f'{i}: ' else: p, im0, frame = path, im0s.copy(), getattr(dataset, 'frame', 0) p = Path(p) # to Path save_path = str(save_dir / p.name) # im.jpg txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') # im.txt s += '%gx%g ' % im.shape[2:] # print string gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh imc = im0.copy() if save_crop else im0 # for 
save_crop annotator = Annotator(im0, line_width=line_thickness, example=str(names)) if len(det): # Rescale boxes from img_size to im0 size det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], im0.shape).round() # Print results for c in det[:, 5].unique(): n = (det[:, 5] == c).sum() # detections per class s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string # Write results for *xyxy, conf, cls in reversed(det): if save_txt: # Write to file xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format with open(f'{txt_path}.txt', 'a') as f: f.write(('%g ' * len(line)).rstrip() % line + '\n') if save_img or save_crop or view_img: # Add bbox to image c = int(cls) # integer class label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}') annotator.box_label(xyxy, label, color=colors(c, True)) if save_crop: save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True) # Stream results im0 = annotator.result() if view_img: if platform.system() == 'Linux' and p not in windows: windows.append(p) cv2.namedWindow(str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO) # allow window resize (Linux) cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0]) cv2.imshow(str(p), im0) cv2.waitKey(1) # 1 millisecond # Save results (image with detections) if save_img: if dataset.mode == 'image': cv2.imwrite(save_path, im0) else: # 'video' or 'stream' if vid_path[i] != save_path: # new video vid_path[i] = save_path if isinstance(vid_writer[i], cv2.VideoWriter): vid_writer[i].release() # release previous video writer if vid_cap: # video fps = vid_cap.get(cv2.CAP_PROP_FPS) w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH)) h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) else: # stream fps, w, h = 30, im0.shape[1], im0.shape[0] save_path = str(Path(save_path).with_suffix('.mp4')) # force *.mp4 suffix on results videos vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h)) vid_writer[i].write(im0) # Print time (inference-only) LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1E3:.1f}ms") # Print results t = tuple(x.t / seen * 1E3 for x in dt) # speeds per image LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}' % t) if save_txt or save_img: s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else '' LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}") if update: strip_optimizer(weights[0]) # update model (to fix SourceChangeWarning) def parse_opt(): parser = argparse.ArgumentParser() parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model path or triton URL') parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob/screen/0(webcam)') parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='(optional) dataset.yaml path') parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w') parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold') parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold') parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image') parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') parser.add_argument('--view-img', action='store_true', help='show results') parser.add_argument('--save-txt', action='store_true', help='save results to *.txt') parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels') parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes') parser.add_argument('--nosave', action='store_true', help='do not save images/videos') parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3') parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS') parser.add_argument('--augment', action='store_true', help='augmented inference') parser.add_argument('--visualize', action='store_true', help='visualize features') parser.add_argument('--update', action='store_true', help='update all models') parser.add_argument('--project', default=ROOT / 'runs/detect', help='save results to project/name') parser.add_argument('--name', default='exp', help='save results to project/name') parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)') parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels') parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences') parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference') parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference') parser.add_argument('--vid-stride', type=int, default=1, help='video frame-rate stride') opt = parser.parse_args() opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand print_args(vars(opt)) return opt def main(opt): check_requirements(exclude=('tensorboard', 'thop')) run(**vars(opt)) if __name__ == "__main__": opt = parse_opt() main(opt) A: You have to first understand how the bounding boxes are encoded by the YOLOv5 framework. There are several ways coordinates could be stored. First, bounding box coordinates are usually expressed in the image coordinate system. The most common one has its origin in the top-left image corner and the axes (X, Y) are oriented to the right and to the bottom respectively: (0,0) x β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–ΊX β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ y─┼───────────o β”‚ β–Ό Y A bounding box can be expressed in this system via multiple coordinates: width ◄──────────► (0,0) xmin xmid xmax β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β–ΊX β”‚ β”‚ β”‚ β”‚ β–² yminβ”œβ”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€ β”‚ β”‚ β”‚ β”‚ height β”‚ ymidβ”œβ”€β”€β”€β”€β”€β”€ β”Ό β”‚ β”‚ β”‚ β”‚ box β”‚ β–Ό ymaxβ”œβ”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚ β–Ό Y Where (xmid, ymid) is the bounding box center, (width, height) its size, (xmin, ymin) its top-left corner and (xmax, ymax) its bottom-right corner. Only four of those are sufficient to describe a bounding box entirely. Two of the most common ones are: (xmin, ymin, xmax, ymax) (xmid, ymid, width, height) You can deduce the other coordinates from those four, for instance: width = xmax - xmin xmid = (xmin + xmax) / 2 ymax = ymid + height / 2 etc... 
Additionally, bounding box coordinates can either be expressed in pixels (absolute coordinates) or relative to the image size (a real number in [0, 1]). If the image size is (img_w, img_h), then you can translate from absolute to relative like this: x_rel = x_abs / img_w y_rel = y_abs / img_h and from relative to absolute like this: x_abs = x_rel * img_w y_abs = y_rel * img_h Given all that, you should be able to compute the width and height of the bounding boxes easily. You just need to know in which format YOLOv5 coordinates are stored. To my knowledge, YOLOv5 stores them as (xmid, ymid, width, height) in relative format. I developed a Python package to convert bounding box annotations from/into several widely used formats such as YOLO, COCO and CVAT. If that suits your needs, you can install it with: pip install globox and read YOLOv5 annotations like this: from globox import * from pathlib import Path annotations = AnnotationSet.from_yolo( folder=Path("/path/to/yolo/txt/files/"), image_folder=Path("/path/to/yolo/images/"), conf_last=True, # Only for YOLOv5 and YOLOv7 ) then you have access to the coordinates of every bounding box: for box in annotations.all_boxes: print(box.xmin, box.ymin, box.width, box.height) I'll let you inspect the API for a complete overview of what is possible.
Calculating height and width of a bounding box in YOLOv5
Currently I am working with YOLOv5; I have done training and validation on a custom dataset and the results are quite impressive. Now I want to calculate the height and width of the object (bounding box) and present it on screen just like the confidence score. In YOLOv5 there's an option to save the coordinates of a bounding box in a text file. I have done that but find it difficult to show those in the detection output on screen. This might be because of my limited capabilities in Python. If your knowledge allows, kindly take a look and help me. Thank you. import argparse import os import platform import sys from pathlib import Path import torch FILE = Path(__file__).resolve() ROOT = FILE.parents[0] # YOLOv5 root directory if str(ROOT) not in sys.path: sys.path.append(str(ROOT)) # add ROOT to PATH ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative from models.common import DetectMultiBackend from utils.dataloaders import IMG_FORMATS, VID_FORMATS, LoadImages, LoadScreenshots, LoadStreams from utils.general import (LOGGER, Profile, check_file, check_img_size, check_imshow, check_requirements, colorstr, cv2, increment_path, non_max_suppression, print_args, scale_boxes, strip_optimizer, xyxy2xywh) from utils.plots import Annotator, colors, save_one_box from utils.torch_utils import select_device, smart_inference_mode @smart_inference_mode() def run( weights=ROOT / 'yolov5s.pt', # model path or triton URL source=ROOT / 'data/images', # file/dir/URL/glob/screen/0(webcam) data=ROOT / 'data/coco128.yaml', # dataset.yaml path imgsz=(640, 640), # inference size (height, width) conf_thres=0.25, # confidence threshold iou_thres=0.45, # NMS IOU threshold max_det=1000, # maximum detections per image device='', # cuda device, i.e. 0 or 0,1,2,3 or cpu view_img=False, # show results save_txt=False, # save results to *.txt save_conf=False, # save confidences in --save-txt labels save_crop=False, # save cropped prediction boxes nosave=False, # do not save images/videos classes=None, # filter by class: --class 0, or --class 0 2 3 agnostic_nms=False, # class-agnostic NMS augment=False, # augmented inference visualize=False, # visualize features update=False, # update all models project=ROOT / 'runs/detect', # save results to project/name name='exp', # save results to project/name exist_ok=False, # existing project/name ok, do not increment line_thickness=3, # bounding box thickness (pixels) hide_labels=False, # hide labels hide_conf=False, # hide confidences half=False, # use FP16 half-precision inference dnn=False, # use OpenCV DNN for ONNX inference vid_stride=1, # video frame-rate stride ): source = str(source) save_img = not nosave and not source.endswith('.txt') # save inference images is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS) is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://')) webcam = source.isnumeric() or source.endswith('.txt') or (is_url and not is_file) screenshot = source.lower().startswith('screen') if is_url and is_file: source = check_file(source) # download # Directories save_dir = increment_path(Path(project) / name, exist_ok=exist_ok) # increment run (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir # Load model device = select_device(device) model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half) stride, names, pt = model.stride, model.names, model.pt imgsz = check_img_size(imgsz, s=stride) # check image size # Dataloader bs = 1 # batch_size 
if webcam: view_img = check_imshow(warn=True) dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride) bs = len(dataset) elif screenshot: dataset = LoadScreenshots(source, img_size=imgsz, stride=stride, auto=pt) else: dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride) vid_path, vid_writer = [None] * bs, [None] * bs # Run inference model.warmup(imgsz=(1 if pt or model.triton else bs, 3, *imgsz)) # warmup seen, windows, dt = 0, [], (Profile(), Profile(), Profile()) for path, im, im0s, vid_cap, s in dataset: with dt[0]: im = torch.from_numpy(im).to(model.device) im = im.half() if model.fp16 else im.float() # uint8 to fp16/32 im /= 255 # 0 - 255 to 0.0 - 1.0 if len(im.shape) == 3: im = im[None] # expand for batch dim # Inference with dt[1]: visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False pred = model(im, augment=augment, visualize=visualize) # NMS with dt[2]: pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det) # Second-stage classifier (optional) # pred = utils.general.apply_classifier(pred, classifier_model, im, im0s) # Process predictions for i, det in enumerate(pred): # per image seen += 1 if webcam: # batch_size >= 1 p, im0, frame = path[i], im0s[i].copy(), dataset.count s += f'{i}: ' else: p, im0, frame = path, im0s.copy(), getattr(dataset, 'frame', 0) p = Path(p) # to Path save_path = str(save_dir / p.name) # im.jpg txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') # im.txt s += '%gx%g ' % im.shape[2:] # print string gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh imc = im0.copy() if save_crop else im0 # for save_crop annotator = Annotator(im0, line_width=line_thickness, example=str(names)) if len(det): # Rescale boxes from img_size to im0 size det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], im0.shape).round() # Print results for c in det[:, 5].unique(): n = (det[:, 5] == c).sum() # detections per class s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string # Write results for *xyxy, conf, cls in reversed(det): if save_txt: # Write to file xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format with open(f'{txt_path}.txt', 'a') as f: f.write(('%g ' * len(line)).rstrip() % line + '\n') if save_img or save_crop or view_img: # Add bbox to image c = int(cls) # integer class label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}') annotator.box_label(xyxy, label, color=colors(c, True)) if save_crop: save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True) # Stream results im0 = annotator.result() if view_img: if platform.system() == 'Linux' and p not in windows: windows.append(p) cv2.namedWindow(str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO) # allow window resize (Linux) cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0]) cv2.imshow(str(p), im0) cv2.waitKey(1) # 1 millisecond # Save results (image with detections) if save_img: if dataset.mode == 'image': cv2.imwrite(save_path, im0) else: # 'video' or 'stream' if vid_path[i] != save_path: # new video vid_path[i] = save_path if isinstance(vid_writer[i], cv2.VideoWriter): vid_writer[i].release() # release previous video writer if vid_cap: # video fps = vid_cap.get(cv2.CAP_PROP_FPS) w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH)) h = 
int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) else: # stream fps, w, h = 30, im0.shape[1], im0.shape[0] save_path = str(Path(save_path).with_suffix('.mp4')) # force *.mp4 suffix on results videos vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h)) vid_writer[i].write(im0) # Print time (inference-only) LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1E3:.1f}ms") # Print results t = tuple(x.t / seen * 1E3 for x in dt) # speeds per image LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}' % t) if save_txt or save_img: s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else '' LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}") if update: strip_optimizer(weights[0]) # update model (to fix SourceChangeWarning) def parse_opt(): parser = argparse.ArgumentParser() parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model path or triton URL') parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob/screen/0(webcam)') parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='(optional) dataset.yaml path') parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w') parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold') parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold') parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image') parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') parser.add_argument('--view-img', action='store_true', help='show results') parser.add_argument('--save-txt', action='store_true', help='save results to *.txt') parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels') parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes') parser.add_argument('--nosave', action='store_true', help='do not save images/videos') parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3') parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS') parser.add_argument('--augment', action='store_true', help='augmented inference') parser.add_argument('--visualize', action='store_true', help='visualize features') parser.add_argument('--update', action='store_true', help='update all models') parser.add_argument('--project', default=ROOT / 'runs/detect', help='save results to project/name') parser.add_argument('--name', default='exp', help='save results to project/name') parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)') parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels') parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences') parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference') parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference') parser.add_argument('--vid-stride', type=int, default=1, help='video frame-rate stride') opt = parser.parse_args() opt.imgsz *= 2 if 
len(opt.imgsz) == 1 else 1 # expand print_args(vars(opt)) return opt def main(opt): check_requirements(exclude=('tensorboard', 'thop')) run(**vars(opt)) if __name__ == "__main__": opt = parse_opt() main(opt)
[ "You have to first understand how the bounding boxes are encoded by the YOLOv7 framework. There are several ways coordinates could be stored.\nFirst, bounding box coordinates are usually expressed in the image coordinate system. The most common one has its origin in the top-left image corner and the axes (X, Y) are oriented to the right and to the bottom respectively:\n(0,0) x\n β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–ΊX\n β”‚ β”‚\n β”‚ β”‚\n β”‚ β”‚\n β”‚ β”‚\n y─┼───────────o\n β”‚\n β–Ό\n Y\n\nA bounding box can be expressed in this system via multiple coordinates:\n width\n ◄──────────►\n (0,0) xmin xmid xmax\n β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β–ΊX\n β”‚ β”‚ β”‚ β”‚\n β–² yminβ”œβ”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€\n β”‚ β”‚ β”‚ β”‚\nheight β”‚ ymidβ”œβ”€β”€β”€β”€β”€β”€ β”Ό β”‚\n β”‚ β”‚ β”‚ box β”‚\n β–Ό ymaxβ”œβ”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜\n β”‚\n β–Ό\n Y\n\nWhere (xmid, ymid) is the bounding box center, (width, height) its size, (xmin, ymin) its top-left corner and and (xmax, ymax) its bottom-right corner.\nOnly four of those are sufficient to describe a bounding box entirely. Two of the most common ones are:\n\n(xmin, ymin, xmax, ymax)\n(xmid, ymid, width, height)\n\nYou can deduce the others coordinates from those four, for instance:\n\nwidth = xmax - xmin\nxmid = (xmin + xmax) / 2\nymax = ymid + width / 2\netc...\n\nAdditionally, bounding box coordinates can either be expressed in pixels (absolute coordinates) or relative to the image size (a real number in [0, 1]). If the image size is (img_w, img_h), then you can translate from absolute to relative like this:\n\nx_rel = x_abs / img_w\ny_rel = y_abs / img_h\n\nand from relative to absolute like this:\n\nx_abs = x_rel * img_w\ny_abs = y_rel * img_h\n\nGiven all that, you should be able to compute the width and height the bounding boxes easily. You just need to know in which format YOLOv7 coordinates are stored. To my knowledge, YOLOv5 stores them as (xmid, ymid, width, height) in relative format.\n\nI developed a Python package to convert bounding box annotations from/into several widely used formats such as YOLO, COCO and CVAT. If that suits your need, you can install it with:\npip install globox\n\nand read YOLOv7 annotations like this:\nfrom globox import *\nfrom pathlib import Path\n\nannotations = AnnotationSet.from_yolo(\n folder=Path(\"/path/to/yolo/txt/files/\"),\n image_folder=Path(\"/path/to/yolo/images/\"),\n conf_last=True, # Only for YOLOv5 and YOLOv7\n)\n\nthen you have access to the coordinates of every bounding boxes:\nfor box in annotations.all_boxes:\n print(box.xmin, box.ymin, box.width, box.height)\n\nI let you inspect the API for a complete overview of what is possible.\n" ]
[ 1 ]
[]
[]
[ "object_detection", "python", "yolo" ]
stackoverflow_0074489223_object_detection_python_yolo.txt
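To tie the answer back to the question, i.e. showing width and height on screen next to the confidence score, only the conversion described above is needed. A minimal sketch in plain Python (the function name yolo_to_pixels is invented):

def yolo_to_pixels(xmid, ymid, w, h, img_w, img_h):
    """Convert one YOLO-style relative (xmid, ymid, w, h) box to absolute pixel values."""
    width_px = w * img_w
    height_px = h * img_h
    xmin = xmid * img_w - width_px / 2
    ymin = ymid * img_h - height_px / 2
    return xmin, ymin, width_px, height_px

# example: a 640x480 image with a centred half-size box
print(yolo_to_pixels(0.5, 0.5, 0.5, 0.5, 640, 480))  # (160.0, 120.0, 320.0, 240.0)

In the detect loop quoted in the question, the absolute corner coordinates are already available as xyxy, so the pixel width and height are simply xyxy[2] - xyxy[0] and xyxy[3] - xyxy[1]; appending them to the label string passed to annotator.box_label should display them on screen just like the confidence score.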
Q: Get the YouTube video title by its URL using Python I want to get the title of a YouTube video by URL. I have been searching for several days but did not get a result. A: By using the requests and Beautiful Soup libraries you can achieve that: import requests from bs4 import BeautifulSoup r = requests.get("https://www.youtube.com/watch?v=9sg-A-eS6Ig&list=RDBAkqJT_sMKQ&index=5") soup = BeautifulSoup(r.text, "html.parser") title = soup.find("title").text print(title)
Get the YouTube video title by its URL using Python
I want to get the title of a YouTube video by URL. I have been searching for several days but did not get a result.
[ "By using requests and Beautiful Soup libraries you can achieve that:\nimport requests\nfrom bs4 import BeautifulSoup\n\nr = requests.get(\"https://www.youtube.com/watch?v=9sg-A-eS6Ig&list=RDBAkqJT_sMKQ&index=5\")\nsoup = BeautifulSoup(r.text)\n\nlink = soup.find_all(name=\"title\")[0]\ntitle = str(link)\ntitle = title.replace(\"<title>\",\"\")\ntitle = title.replace(\"</title>\",\"\")\n\nprint(title)\n\n" ]
[ 1 ]
[]
[]
[ "extract", "python", "search", "url", "youtube" ]
stackoverflow_0074490036_extract_python_search_url_youtube.txt
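Scraping the <title> tag works but depends on the page markup, and the tag usually carries a trailing " - YouTube" suffix. YouTube also exposes an oEmbed endpoint that returns the title as JSON without an API key. A short sketch using the same video URL as the answer:

import requests

video_url = "https://www.youtube.com/watch?v=9sg-A-eS6Ig"
r = requests.get("https://www.youtube.com/oembed",
                 params={"url": video_url, "format": "json"})
print(r.json()["title"])  # the oEmbed payload includes the plain video title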